\section{Introduction} The recent advancement in deep learning techniques has enabled end-to-end text-to-speech (TTS) systems to achieve perceptual quality comparable to human speech~\cite{oord2016wavenet, shen2018natural, ren2019fastspeech, kons2019high}. Usually, such TTS systems are trained for a large number of epochs, from which a final model is selected for production. One approach for selection is to choose the model with the least training-objective loss on an unseen validation split, for example, the mean squared error (MSE) computed on the mel spectrogram representation~\cite{shen2018natural}. However, the computation of this loss requires both text sequences and corresponding audio recordings, which may not always be available. More importantly, the TTS model chosen with the least training-objective loss does not always correspond to the best perceptual quality~\cite{Wang2017, theis2015note}. One approach to choosing a TTS model with the best perceptual quality is to manually rate a few synthesized recordings from different TTS models and choose the best model. These TTS models for manual listening tests are sampled randomly from the region where the training-objective loss on the training split has saturated. The mean opinion score (MOS) is a popular choice for such perceptual quality ratings and is most often used to evaluate either the naturalness or the intelligibility of the synthesized speech. Naturalness refers to how close the synthesized speech is to real-life speech, and intelligibility refers to the ease of understanding the spoken content. However, performing such a subjective analysis of multiple recordings across models is time-consuming, expensive, and not scalable. To overcome the scalability issue of MOS, several objective metrics have been proposed to replicate the MOS results. Specifically, these metrics replace the MOS of speech intelligibility using modules from automatic speech recognition (ASR)~\cite{vich2008automatic, cervnak2009diagnostic, wang2012objective, ullmann2015withReference, ullmann2015withoutReference}. Cer\v{n}ak et al.~\cite{cervnak2009diagnostic} employed subjective analysis and showed that the word error rate (WER), computed between the input text to the TTS and the text of the synthesized speech decoded using an ASR system, correlates with speech intelligibility. In~\cite{ullmann2015withReference}, a phonetic acoustic model was employed to estimate the phone posterior probability sequences of both the reference and synthesized recordings. The distance between them was computed using a dynamic time warping algorithm, and was observed to correlate with subjective speech intelligibility. However, this metric requires the reference audio recording, which may not always be available. In~\cite{lo2019mosnet}, a deep learning-based model was trained to directly estimate the MOS of the synthesized audio. This model was trained on a dataset of synthesized speech and the corresponding MOS. However, the collection of such datasets for different languages and for the objectives of naturalness and intelligibility is not trivial. All of the existing works have only studied the correlation between speech intelligibility and their proposed objective metrics for their final chosen TTS model. However, to the best of the authors' knowledge, there have been no studies on the selection of the final TTS model itself that has the best perceptual quality, such as speech intelligibility.
In this paper, we propose to use the ASR-based phone error rate (PER) as the objective metric for choosing the TTS model with the best speech intelligibility. The PER is computed between the input text to the TTS system and the decoded text of the TTS-synthesized speech obtained using an ASR system. We propose to employ the PER for model selection during TTS system training. Using subjective preference tests, we have validated that the PER-chosen model has better speech intelligibility than a model chosen using the least training-objective loss on the validation split. A similar character error rate metric is employed in the popular TTS framework ESPnet~\cite{hayashi2019espnettts}; however, it is only used to automatically detect alignment failures in the synthesized speech, and not as an objective metric to evaluate speech intelligibility. Finally, we show that the choice of the TTS model depends on the genre of the target domain text. We study this by comparing the model selection using PER for two separate validation splits: the first with text similar to the training split, and the second with text from a different genre (news) with 57.1\% unseen unique words. The proposed model selection criterion chose different epochs for the two validation splits, each of which corresponded to the best speech intelligibility for the respective split according to the subjective analysis. In this paper, we only study model selection for the neural TTS frontend of Tacotron 2~\cite{shen2018natural}, while keeping the vocoder fixed across our experiments. The proposed method itself is generic and can be employed for other popular TTS frontends. \begin{figure}[!htp] \centerline{\includegraphics[width=\linewidth]{images/diag25.png}} \caption{Block diagram of TTS model selection using PER.} \label{pereval} \end{figure} \section{Proposed method} The block diagram of the proposed model selection method is shown in Figure~\ref{pereval}. The TTS module consists of a frontend and a vocoder. Given an input text, the frontend maps it to a sequence of mel spectrogram features corresponding to a spoken version of the input text. These features are further mapped to an audio waveform using a vocoder. In this paper, we only study the selection of the frontend model and use a fixed vocoder across experiments; hereafter, model selection refers only to frontend model selection. We propose to employ the PER as the objective metric for speech intelligibility and to choose the model from the training epoch with the least PER on the validation split. To compute the PER, we first decode the phone sequence from the vocoder output using an ASR system. The PER is then computed, at the phone level, between the decoded phone sequence and the corresponding input text to the TTS module as \begin{equation} PER = \frac{S + D + I}{N}, \end{equation} where \textit{S}, \textit{D} and \textit{I} represent the number of substitutions, deletions and insertions, respectively, between the decoded and the reference phone sequences, and \textit{N} represents the number of phones in the reference sequence. Finally, we compare the speech intelligibility of the best PER model with that of the best training-objective loss model using subjective evaluation (described in Section~\ref{sec:exp}). We propose to use the PER instead of the popular WER because it provides more insight into the TTS model itself. For example, from the analysis of the PER results, the observation that a particular phone is often deleted sheds light on poor modeling of that phone, potentially due to a lack of sufficient training examples. Similarly, the observation that a phone is often substituted with another phone can be the result of a confusing pronunciation by the speaker in the training data. Such insights cannot be obtained directly using the WER as an objective metric.
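For concreteness, the PER of the equation above can be computed with a standard Levenshtein alignment between the reference and decoded phone sequences. The following is a minimal sketch of such a computation; the function name and the toy phone sequences are illustrative and are not part of the systems described in this paper.

\begin{verbatim}
def phone_error_rate(reference, decoded):
    """PER = (S + D + I) / N via Levenshtein alignment of two
    phone sequences (lists of phone labels)."""
    n, m = len(reference), len(decoded)
    # dp[i][j]: edit distance between reference[:i] and decoded[:j]
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        dp[i][0] = i              # i deletions
    for j in range(m + 1):
        dp[0][j] = j              # j insertions
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = dp[i - 1][j - 1] + (reference[i - 1] != decoded[j - 1])
            dp[i][j] = min(sub,                 # substitution or match
                           dp[i - 1][j] + 1,    # deletion
                           dp[i][j - 1] + 1)    # insertion
    return dp[n][m] / n

# One substitution in a four-phone reference gives PER = 0.25.
print(phone_error_rate(["k", "a", "m", "a"], ["k", "aa", "m", "a"]))
\end{verbatim}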
The TTS and ASR systems are both trained using identical training and validation splits from the same dataset. The details of the individual TTS and ASR systems are discussed below. \subsection{TTS system} \label{ttssystem} A neural TTS system generally comprises a frontend and a vocoder, as shown in Figure~\ref{pereval}. As the frontend, we use the Tacotron 2 (v3) recipe of ESPnet~\cite{hayashi2019espnettts}, which is an autoregressive sequence-to-sequence model with a location-sensitive and guided attention mechanism. The frontend is trained with the phone sequence of the input text and the 80-band mel spectrogram features of the corresponding audio, computed with a 1024-point discrete Fourier transform and a 256-sample hop length. The frontend is trained for 800 epochs with a batch size of 56 on 4 GPUs. During each epoch of training, we compute the training-objective loss on the validation split. The training-objective loss in the Tacotron 2 (v3) recipe is the sum of the MSE, $l_1$, binary cross-entropy (BCE), and attention losses, where the MSE and $l_1$ losses are computed between the predicted and reference mel spectrograms, the BCE is computed for stop-token prediction, and the attention loss measures the diagonality of the alignment between the encoder and decoder outputs. Finally, the model from the epoch with the least training-objective loss on the validation split is chosen as the best validation loss model. The mel spectrogram output of the frontend is mapped to a waveform using the Parallel WaveGAN (PWG) vocoder~\cite{yamamoto2020parallel}. The PWG vocoder is a non-autoregressive variant of the WaveNet~\cite{oord2016wavenet} vocoder with a significantly faster inference time. We use the publicly available implementation of PWG\footnote{https://github.com/kan-bayashi/ParallelWaveGAN}. The ASR system discussed in the next section is then used to decode the phone sequence from the PWG-synthesized audio. Since the PER results are only meaningful after the initial convergence of the frontend model, we start computing the PER after epoch 50 and compute it every 10 epochs thereafter. Finally, we choose the model with the least PER on the validation split as the best PER model.
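As an illustration of the frontend's input features described above, the following sketch computes an 80-band mel spectrogram with a 1024-point DFT and a 256-sample hop using librosa. The actual recipe relies on ESPnet's own feature extraction, so this is only an assumed approximation of it, and the file name is a placeholder.

\begin{verbatim}
import librosa
import numpy as np

# Load audio at the 16 kHz rate used in the experiments of this paper.
y, sr = librosa.load("sample.wav", sr=16000)

# 80-band mel spectrogram, 1024-point DFT, 256-sample hop length.
mel = librosa.feature.melspectrogram(y=y, sr=sr, n_fft=1024,
                                     hop_length=256, n_mels=80)
log_mel = np.log(np.maximum(mel, 1e-10))  # log compression
print(log_mel.shape)                       # (80, number_of_frames)
\end{verbatim}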
\subsection{ASR system} \label{asrsystem} The ASR system is built using the Kaldi toolkit~\cite{povey2011kaldi}. We initially train the acoustic model using the Gaussian mixture model-hidden Markov model (GMM-HMM) recipe\footnote{wsj/s5/run.sh} and obtain alignments from the triphone model. Thereafter, these alignments are used to train our ASR system using the deep neural network-hidden Markov model (DNN-HMM) recipe\footnote{wsj/s5/local/chain/tuning/run\_tdnn\_1g.sh}. The default augmentations of speed and volume perturbation, which are part of the DNN-HMM recipe, are used during training. During decoding, the ASR system uses phone-based language models (LMs) built using the SRILM toolkit~\cite{stolcke2002srilm}. \begin{figure}[!tp] \centering \centerline{\includegraphics[width=\linewidth]{images/per_diff_grams.png}} \caption{PER across training epochs for synthesized (solid lines) and original (dashed lines) recordings using different language models (LMs) in the ASR system.} \label{per_all} \end{figure} \section{Evaluation} \subsection{Dataset} We use a subset of the IndicTTS~\cite{babycbblr2016} dataset for the Hindi language and a male speaker. This dataset consists of about 9 hours of studio-quality recordings by a professional voice-over artist. The original recordings are at a 48 kHz sampling rate; however, all the studies in this paper are performed at 16 kHz. The genre of the spoken text is fiction and children's stories. We map the text to phonemic text with Indian common label set phones~\cite{ramani2013common} using a grapheme-to-phoneme model~\cite{nlp:tsd16conf, arunThesis}. Among the 6032 sentences in the dataset, we randomly choose 5732 sentences (8.76 hours) as the \textit{training split} and the remaining 300 sentences (0.44 hours) as the \textit{validation split}. The text in the training split has 13,536 unique words. Both the ASR and TTS modules are trained on these training and validation splits using phonetic transcripts with 60 unique phones. The TTS module is additionally trained with five punctuation marks and a word boundary. Similarly, the ASR system includes an additional silence phone. All our objective and subjective studies of synthesis quality are carried out on the \textit{validation split}, which we also refer to as the \textit{in-domain test data}. This data has 1887 unique words, of which 21\% are unseen, and it contains only 55 of the 60 phones in the training split. To study the synthesis quality on text of a different genre, we manually curated \textit{out-of-domain test data} with no corresponding audio, as follows. In contrast to the training split, which mainly consists of story-genre text, we scraped news-genre text from online resources, from which we chose sentences with lengths in the range of 3--6 words. This test data consists of 1239 unique words, of which 57.1\% are unseen, and it contains only 50 of the 60 phones in the training split. \subsection{Experiments} \label{sec:exp} To verify the role of the LM in the proposed model selection approach, we first study the choice of LM for the ASR system. We generate 1-, 2-, 3- and 4-gram LMs on the training split using SRILM and study the model selection on the in-domain test data. The choice of PER as a speech intelligibility metric is evaluated with subjective listening tests. The best models for the in-domain test data, based on the validation loss and the PER, are used to synthesize the text of the in-domain test data. A pair-wise comparison test is conducted between the recordings of the two best models with both native (L1) and non-native (L2) speakers. During the test, the participants were given 30 pairs of synthesized recordings randomly chosen from the 300 recordings of the in-domain test data. The participants were asked to rate the speech intelligibility by choosing either of the two recordings, or both if they were equally intelligible. There were 20 participants in total, with an equal distribution of L1 and L2 speakers. Finally, we study model selection for text of a genre different from the training data. The best PER model is selected based on the out-of-domain test data, whose genre is different from that of the training split.
The speech intelligibility of the best out-of-domain test data model is then compared, using subjective pair-wise evaluations, with both the best validation loss model and the best in-domain test data model. The samples used for the subjective evaluations in our paper are available at \url{https://www.zapr.in/interspeech2020/samples}. \begin{figure}[!tp] \centering \includegraphics[width=\linewidth]{images/per_mse.png} \caption{The training and validation loss curves, and the in-domain and out-of-domain PERs, across epochs. The best validation loss model MDL-LOSS is chosen at the epoch with the least validation loss. Similarly, the best PER models for the in-domain (MDL-PER-ID) and out-of-domain (MDL-PER-OD) test data are chosen at the corresponding epochs with the least PER.} \label{mse_per} \end{figure} \section{Results and discussion} \subsection{Language model selection} The PER values for synthesized recordings of the in-domain test data obtained using different LMs are shown as solid lines in Figure~\ref{per_all}. The dashed lines of the same color represent the corresponding PER scores for the original recordings of the in-domain test data. From the figure, it is clear that irrespective of the chosen LM, the PER curve trends are similar and the epochs with minimum PER are identical across the curves. Hence, hereafter we only use the 1-gram LM for our studies. The choice of a 1-gram LM additionally removes the dependency of the ASR output on neighboring phones. This enables a more reliable understanding of individual phone modeling in the TTS frontend based on the PER results. \begin{table}[] \begin{adjustbox}{max width=8cm} \centering \begin{tabular}{|c|c|c|c|c|} \hline & & \multicolumn{2}{c|}{\textbf{Correctly detected}} & \\ \cline{3-4} \multirow{-2}{*}{\textbf{Phones}} & \multirow{-2}{*}{\textbf{\begin{tabular}[c]{@{}c@{}}Total\\ Occurrences\end{tabular}}} & \textbf{MDL-LOSS} & \textbf{MDL-PER-ID} & \multirow{-2}{*}{\textbf{\begin{tabular}[c]{@{}c@{}}Difference\\ count\end{tabular}}} \\ \hline\hline \textbf{/a/} & 1784 & 1395 & \textbf{1460} & 65 \\ \hline \textbf{/ee/} & 1407 & 1203 & \textbf{1261} & 58 \\ \hline \textbf{/i/} & 717 & 464 & \textbf{498} & 34 \\ \hline \textbf{/k/} & 1217 & 1061 & \textbf{1095} & 34 \\ \hline \textbf{/h/} & 676 & 497 & \textbf{528} & 31 \\ \hline\hline \textbf{/z/} & 55 & \textbf{42} & 40 & -2 \\ \hline \textbf{/uu/} & 131 & \textbf{97} & 94 & -3 \\ \hline \textbf{/gh/} & 33 & \textbf{31} & 27 & -4 \\ \hline \textbf{/sh/} & 123 & \textbf{99} & 95 & -4 \\ \hline \textbf{/q/} & 633 & \textbf{145} & 137 & -8 \\ \hline \end{tabular} \end{adjustbox} \vspace{10pt} \caption{The number of phone instances decoded correctly by the ASR system for recordings synthesized using the MDL-LOSS and MDL-PER-ID models. The table presents only the five phones with the largest improvement and the five with the largest reduction in detection counts between the two models.} \label{phoneme_detect} \vspace{-10pt} \end{table} \subsection{In-domain analysis} The training-objective losses on the training split and the validation (in-domain test data) split, and the corresponding PERs, are visualized across training epochs in Figure~\ref{mse_per}. The best model selected based on the validation loss curve is at epoch 207, hereafter referred to as MDL-LOSS. However, according to the proposed PER metric, the model with the best speech intelligibility is at epoch 710, hereafter referred to as MDL-PER-ID. Figure~\ref{per_all} gives a different perspective on the same choice of epoch 710, where the PER for synthesized recordings (solid lines) converges to the PER of the original in-domain test data recordings (dashed lines). Further, although both the PER and the validation loss are computed on the same in-domain test data, according to the validation loss curve in Figure~\ref{mse_per}, the model started over-fitting after epoch 207. However, according to the PER curve, no such over-fitting is observed, and the model continues to improve in speech intelligibility until the end of training (800 epochs). To study the improvement in speech intelligibility between the two models, MDL-LOSS and MDL-PER-ID, we analyzed the phone detection statistics. The ASR system was used to decode the phone sequences for all the recordings synthesized with the in-domain text using both models, and the correctly detected phones according to the PER alignment were studied. The original test data consist of 55 unique phones with a total of 18251 occurrences. Of these, MDL-PER-ID correctly identified 13983 instances, compared to 13612 instances for MDL-LOSS. This improvement by MDL-PER-ID was a result of improved detection of 29 of the 55 unique phones, which we correlate with better speech intelligibility in synthesis. The detection of 15 other phones remained unchanged, and the performance reduced marginally for the rest. Table~\ref{phoneme_detect} shows the top five phones whose detection improved and the bottom five phones whose detection reduced. We see that the overall improvement in detection is much larger than the reduction for the phones whose detection decreased. The results of the subjective analysis for the in-domain recordings synthesized by MDL-LOSS and MDL-PER-ID are shown in Table~\ref{pctest}. Overall, 42.8\% of participants chose the proposed MDL-PER-ID recordings, in comparison to 28.1\% who preferred the MDL-LOSS recordings. Both the L1 and L2 speakers preferred the proposed MDL-PER-ID recordings, and this preference was stronger for L2 speakers. These results show that the PER can be used as an objective metric for speech intelligibility.
Correspondingly, the TTS model chosen with the best PER on the validation data corresponds to synthesis with better speech intelligibility, compared to a model chosen based on the training-objective loss. \subsection{Out-of-domain analysis} The PER curve for the out-of-domain test data is shown in Figure~\ref{mse_per}. In comparison to the best in-domain test data model MDL-PER-ID, which was chosen at epoch 710, the model with the best speech intelligibility for the out-of-domain test data, MDL-PER-OD, is obtained at epoch 550. This shows that the best PER model depends on the genre of the target domain text. In the phone detection studies, among the 9763 total phone occurrences, MDL-PER-OD correctly detected 7468, compared to 7345 for MDL-LOSS. This was a result of improved detection of 29 of the 50 unique phones in comparison to MDL-LOSS. The detection of 9 other phones remained unchanged, and the rest reduced marginally. We performed two separate pair-wise subjective studies on MDL-PER-OD. The first (\emph{Exp.\,1}) compared the best validation loss model MDL-LOSS with MDL-PER-OD; the results are presented in Table~\ref{pctest}. Similar to the results of the in-domain analysis, both L1 and L2 participants continued to prefer the proposed PER-based MDL-PER-OD recordings. Further, in comparison to the MDL-LOSS recordings, the preference for MDL-PER-OD recordings in the out-of-domain analysis was much stronger than the preference for MDL-PER-ID recordings in the in-domain analysis. We conducted a second experiment (\emph{Exp.\,2}) on the out-of-domain text to check whether the participants would continue to prefer MDL-PER-OD recordings over MDL-PER-ID recordings. From Table~\ref{pctest}, we observe that both L1 and L2 participants continued to prefer MDL-PER-OD recordings for out-of-domain text. However, the preference of L1 speakers for MDL-PER-OD recordings was not as strong as that of the L2 speakers. Further, nearly 50\% of the participants felt both models were equally intelligible. In general, these results show that when the genre of the target domain text differs from the training data, PER-based model selection on text from the target genre is better than using a generic validation-loss-based model.
\begin{table}[] \centering \begin{adjustbox}{max width=8cm} \begin{tabular}{|c|c|c|c|c|} \hline & \textbf{Speakers} & \textbf{MDL-LOSS} & \textbf{\begin{tabular}[c]{@{}c@{}}MDL-\\ PER-ID\end{tabular}} & \textbf{Both} \\ \cline{2-5} & L1 & 27.5 & 40.0 & 32.5 \\ \cline{2-5} & L2 & 28.8 & 45.6 & 25.6 \\ \cline{2-5} \multirow{-4}{*}{\begin{tabular}[c]{@{}c@{}}In-domain\\ Analysis\end{tabular}} & \textbf{Overall} & 28.1 & 42.8 & 29.1 \\ \hline \hline & \textbf{Speakers} & \textbf{MDL-LOSS} & \textbf{\begin{tabular}[c]{@{}c@{}}MDL-\\ PER-OD\end{tabular}} & \textbf{Both} \\ \cline{2-5} & L1 & 30.0 & 44.4 & 25.6 \\ \cline{2-5} & L2 & 15.0 & 53.3 & 31.7 \\ \cline{2-5} \multirow{-4}{*}{\begin{tabular}[c]{@{}c@{}}Out-of-domain\\ Analysis\\ \emph{Exp.\,1}\end{tabular}} & \textbf{Overall} & 22.5 & 48.9 & 28.6 \\ \hline \hline & \textbf{Speakers} & \textbf{\begin{tabular}[c]{@{}c@{}}MDL-\\ PER-ID\end{tabular}} & \textbf{\begin{tabular}[c]{@{}c@{}}MDL-\\ PER-OD\end{tabular}} & \textbf{Both} \\ \cline{2-5} & L1 & 33.3 & 34.0 & 32.7 \\ \cline{2-5} & L2 & 12.7 & 20.0 & 67.3 \\ \cline{2-5} \multirow{-4}{*}{\begin{tabular}[c]{@{}c@{}}Out-of-domain\\ Analysis\\ \emph{Exp.\,2}\end{tabular}} & \textbf{Overall} & 23.0 & 27.0 & 50.0 \\ \hline \end{tabular} \end{adjustbox} \vspace{10pt} \caption{Pair-wise comparison results for the subjective evaluation of speech intelligibility (values in percent).} \label{pctest} \vspace{-10pt} \end{table} \section{Conclusion} In this work, we proposed to employ the PER as an objective metric to select the TTS model with the best speech intelligibility. Our preference test studies show that the TTS model with the best PER on the validation split has better speech intelligibility than the model chosen based on the minimum training-objective loss on the validation split. Finally, using the PER metric and subjective preference tests on unseen out-of-domain text, we show that the choice of the best TTS model depends on the text genre of the target application. All experiments were conducted on a Hindi language dataset; the proposed method itself is language independent. \bibliographystyle{IEEEtran}
\section{Introduction} A probabilistic model over a discrete state space is classified as energy-based if it can be written in the form \begin{equation} \label{energy_based_model} \begin{aligned} p(\mathbf{s};\boldsymbol{\alpha}) &= \frac{e^{-E(\mathbf{s};\boldsymbol{\alpha})}}{Z(\boldsymbol{\alpha})}, \\ Z(\boldsymbol{\alpha}) &= \sum_{\mathbf{s}}e^{-E(\mathbf{s};\boldsymbol{\alpha})}, \end{aligned} \end{equation} where the \emph{energy} $E(\mathbf{s};\boldsymbol{\alpha})$ is a computationally tractable function of the system's configuration~$\mathbf{s} = (s_1, s_2, \ldots, s_N)$, $\boldsymbol{\alpha}$ is a set of parameters to be learned from data, and $Z(\boldsymbol{\alpha})$ is a normalization constant also known as the \emph{partition function}. Many methods for learning energy-based models exist (e.g. \cite{Tieleman2008,SohlDickstein2011}), which makes them useful in a wide variety of fields, most recently as data analysis tools in neuroscience \cite{Tkacik2014, Koster2014}. A popular way of parametrizing the energy function is by decomposing it into a sum of \emph{potentials} representing interactions among different groups of variables, i.e. \begin{equation} \label{gibbs} \begin{aligned} E(\mathbf{s};\boldsymbol{\alpha}) =& \sum_{i=1}^N \Phi_i(s_i;\boldsymbol{\alpha}_i) + \sum_{i,j=1}^N \Phi_{ij}(s_i, s_j;\boldsymbol{\alpha}_{ij})\ + \\ &+ \sum_{i,j,k=1}^N \Phi_{ijk}(s_i, s_j, s_k;\boldsymbol{\alpha}_{ijk}) + \ldots . \end{aligned} \end{equation} The resulting models, also termed Gibbs random fields \cite{Kindermann1980}, are easy to interpret but, even for moderate~$N$, learning the potential functions from data is intractable unless we can a priori set most of them to zero, or we know how the parameters of multiple potentials relate to each other. A common assumption is to consider single- and two-variable potentials only, but many tasks require an efficient parametrization of higher-order interactions. A powerful way of modeling higher-order dependencies is to assume that they are mediated through hidden variables coupled to the observed system. Many hidden variable models are, however, notoriously hard to learn (e.g. Boltzmann machines \cite{Hinton1986}), and their distributions over observed variables cannot be represented with tractable energy functions. An important exception is the restricted Boltzmann machine (RBM) \cite{Smolensky1986}, which is simple to learn even when the dimension of the data is large, and which has proven effective in many applications \cite{Hinton2009, Salakhutdinov2007, Koster2014}. This paper considers a new alternative for modeling higher-order interactions. We generalize any model of the form \eqref{energy_based_model} to \begin{equation} \label{Venergy_based_model} \begin{aligned} p(\mathbf{s};\boldsymbol{\alpha},V) &= \frac{e^{-V(E(\mathbf{s};\boldsymbol{\alpha}))}}{Z(\boldsymbol{\alpha},V)}, \\ Z(\boldsymbol{\alpha}, V) &= \sum_{\mathbf{s}}e^{-V(E(\mathbf{s};\boldsymbol{\alpha}))}, \end{aligned} \end{equation} where $V$ is an arbitrary strictly increasing and twice differentiable function which needs to be learned from data together with the parameters $\boldsymbol{\alpha}$. While this defines a new energy-based model with the energy function $E'(\mathbf{s};\boldsymbol{\alpha},V) = V(E(\mathbf{s};\boldsymbol{\alpha}))$, we will keep referring to $E(\mathbf{s};\boldsymbol{\alpha})$ as the energy function. This terminology reflects our interpretation that $E(\mathbf{s};\boldsymbol{\alpha})$ should parametrize local interactions between small groups of variables, e.g. low-order terms in Eq. \eqref{gibbs}, while the function $V$ globally couples the whole system. We will formalize this intuition in Section \ref{why_it_works}. Since setting $V(E) = E$ recovers \eqref{energy_based_model}, we will refer to $V$ simply as \emph{the nonlinearity}. Generalized energy-based models have been previously studied in the physics literature on nonextensive statistical mechanics \cite{Hanel20112} but, to our knowledge, they have never been considered as data-driven generative models. If $\mathbf{s}$ is a continuous rather than a discrete vector, then models of the form \eqref{Venergy_based_model} are related to elliptically symmetric distributions \cite{Siwei2009}.
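To make the generalized model \eqref{Venergy_based_model} concrete, the following minimal sketch evaluates it exactly for a small binary system by brute-force enumeration of all states; the random couplings and the particular nonlinearity are illustrative assumptions, not the quantities inferred later in this paper.

\begin{verbatim}
import itertools
import numpy as np
from scipy.special import logsumexp

# Brute-force evaluation of p(s) = exp(-V(E(s)))/Z for a small binary
# system; feasible only because all 2^N states fit in memory.
N = 10
rng = np.random.default_rng(0)
J = rng.normal(scale=0.1, size=(N, N))   # illustrative pairwise couplings

def V(E):
    # A strictly increasing, twice differentiable example nonlinearity.
    return E + np.tanh(E)

states = np.array(list(itertools.product([0, 1], repeat=N)))
energies = np.einsum("si,ij,sj->s", states, J, states)  # E(s) = s^T J s
log_weights = -V(energies)
log_Z = logsumexp(log_weights)           # partition function, computed stably
p = np.exp(log_weights - log_Z)
print(p.sum())                           # sums to 1 by construction
\end{verbatim}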
\section{Non-parametric model of the nonlinearity} \label{nonparametric} We wish to make as few prior assumptions about the shape of the nonlinearity $V$ as possible. We restrict ourselves to the class of strictly monotone, twice differentiable functions for which $V''/V'$ is square-integrable. It is proved in \cite{Ramsay1998} that any such function can be represented in terms of a square-integrable function $W$ and two constants $\gamma_1$ and $\gamma_2$ as \begin{equation} \label{V_E} V(E) = \gamma_1 + \gamma_2 \int_{E_0}^{E} \exp \left( \int_{E_0}^{E'} W(E'') \diff E'' \right) \diff E', \end{equation} where $E_0$ is arbitrary and sets the constants to $\gamma_1 = V(E_0)$, $\gamma_2 = V'(E_0)$. Eq. \eqref{V_E} is a solution to the differential equation $V'' = W V'$, and so $W$ is a measure of the local curvature of $V$. In particular, $V$ is a linear function on any interval on which $W = 0$. The advantage of writing the nonlinearity in the form \eqref{V_E} is that we can parametrize it by expanding $W$ in an arbitrary basis without imposing any constraints on the coefficients of the basis vectors. This will allow us to use unconstrained optimization techniques during learning. We will use piecewise constant functions to parametrize $W$. Let $[E_0, E_1]$ be an interval containing the range of energies $E(\mathbf{s};\boldsymbol{\alpha})$ which we expect to encounter during learning. We divide the interval $[E_0, E_1]$ into $Q$ non-overlapping bins of the same width with indicator functions $I_i$, i.e. $I_i(E) = 1$ if $E$ is in the $i$th bin, otherwise $I_i(E) = 0$, and we set $W(E) \equiv W(E;\boldsymbol{\beta}) = \sum_{i=1}^Q \beta_i I_i(E)$. The integrals in Eq.~\eqref{V_E} can be carried out analytically for this choice of $W$, yielding an exact expression for $V$ as a function of $\boldsymbol{\gamma}$ and $\boldsymbol{\beta}$, as well as for its gradient with respect to these parameters (see Appendix A). The range $[E_0, E_1]$ and the number of bins $Q$ are metaparameters which potentially depend on the number of samples in the training set, and which should be chosen by cross-validation.
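The representation \eqref{V_E} is also easy to evaluate in practice. The sketch below does so by simple numerical quadrature for a piecewise-constant $W$; in a real implementation the analytic expressions of Appendix A would replace the quadrature, so this is only an illustration under that caveat.

\begin{verbatim}
import numpy as np

def nonlinearity_V(E_grid, beta, E0, E1, gamma1=0.0, gamma2=1.0):
    """Evaluate V(E) of the representation above for piecewise-constant
    W with coefficients beta on Q equal-width bins over [E0, E1],
    using quadrature instead of the analytic form of Appendix A."""
    beta = np.asarray(beta)
    Q = len(beta)

    def W(E):
        idx = np.clip(((E - E0) / (E1 - E0) * Q).astype(int), 0, Q - 1)
        return beta[idx]

    E_fine = np.linspace(E0, E1, 2001)
    dE = np.diff(E_fine)
    # Inner integral of W from E0, so that V'' = W V' holds.
    inner = np.concatenate([[0.0], np.cumsum(W(E_fine[:-1]) * dE)])
    # Outer integral of exp(inner) from E0.
    outer = np.concatenate([[0.0], np.cumsum(np.exp(inner[:-1]) * dE)])
    return gamma1 + gamma2 * np.interp(E_grid, E_fine, outer)
\end{verbatim}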
\section{Maximum likelihood learning} \label{learning} Any off-the-shelf method for learning energy-based models is suitable for learning the parameters of the nonlinearity and the energy function simultaneously. The nature of these two sets of parameters is, however, expected to be very different, and an algorithm that takes this fact into account explicitly would likely be useful. As a step in this direction, we present an approximation to the likelihood function of the models \eqref{Venergy_based_model} which can be used to efficiently learn the nonlinearity when the parameters of the energy function are fixed. \subsection{The nonlinearity enforces a match between the model and the data probability distributions of the energy} Let $\rho(E'; \boldsymbol{\alpha}) = \sum_{\mathbf{s}} \delta_{E', E(\mathbf{s};\boldsymbol{\alpha})}$ count the number of states which map to the same energy $E'$. The probability distribution of $E(\mathbf{s};\boldsymbol{\alpha})$ when $\mathbf{s}$ is distributed according to \eqref{Venergy_based_model} is \begin{equation} \label{p_E_model} \begin{aligned} p(E'; \boldsymbol{\alpha}, V) &= \sum_{\mathbf{s}} p(\mathbf{s};\boldsymbol{\alpha},V)\delta_{E', E(\mathbf{s};\boldsymbol{\alpha})} \\ &= \frac{\rho(E'; \boldsymbol{\alpha})e^{-V(E')}}{Z(\boldsymbol{\alpha},V)}. \end{aligned} \end{equation} Given data $\{\mathbf{s}^{(i)}\}_{i=1}^M$, let $\hat{p}(E'; \boldsymbol{\alpha}) = \frac{1}{M} \sum_{i=1}^M \delta_{E', E(\mathbf{s}^{(i)};\boldsymbol{\alpha})}$ be the data distribution of the energy, and let $\Omega_{\boldsymbol{\alpha}}$ be the image of $E(\mathbf{s};\boldsymbol{\alpha})$. The average log-likelihood of the data can be rewritten as \begin{equation} \label{likelihood} \begin{aligned} L(\boldsymbol{\alpha}, V) &= -\log Z(\boldsymbol{\alpha}, V) - \sum_{E' \in \Omega_{\boldsymbol{\alpha}}} \hat{p}(E'; \boldsymbol{\alpha})V(E') \\ &= -\sum_{E' \in \Omega_{\boldsymbol{\alpha}}} \hat{p}(E'; \boldsymbol{\alpha}) \log \rho(E'; \boldsymbol{\alpha}) \\ &\qquad + \sum_{E' \in \Omega_{\boldsymbol{\alpha}}} \hat{p}(E'; \boldsymbol{\alpha}) \log p(E'; \boldsymbol{\alpha}, V), \end{aligned} \end{equation} where the first line is a standard expression for energy-based models, and the second line follows by substituting the logarithm of \eqref{p_E_model}. Eq. \eqref{likelihood} has a simple interpretation. The last term, which is the only one depending on $V$, is the average log-likelihood of the samples $\{E(\mathbf{s}^{(i)};\boldsymbol{\alpha})\}_{i=1}^M$ under the model $p(E; \boldsymbol{\alpha}, V)$, and so, for any $\boldsymbol{\alpha}$, the purpose of the nonlinearity is to reproduce the data probability distribution of the energy. Our restriction that $V$ is a twice differentiable increasing function can be seen as a way of regularizing learning. If $V$ were arbitrary, then, for any $\boldsymbol{\alpha}$, an upper bound on the likelihood \eqref{likelihood} would be attained when $\hat{p}(E; \boldsymbol{\alpha}) = p(E; \boldsymbol{\alpha}, V)$. According to \eqref{p_E_model}, this can be satisfied with any function $V$ (potentially infinite at some points) such that for all $E \in \Omega_{\boldsymbol{\alpha}}$ \begin{equation} \label{eq_exact} V(E) = \log \rho(E; \boldsymbol{\alpha}) - \log \hat{p}(E; \boldsymbol{\alpha}) + \text{const.}\end{equation} Energy functions often assign distinct energies to distinct states, in which case the choice \eqref{eq_exact} leads to a model which exactly reproduces the empirical distribution of the data, and hence overfits. \subsection{Integral approximation of the partition function, and of the likelihood} \label{sec_approx} The partition function of \eqref{Venergy_based_model} can be rewritten as \begin{equation} Z(\boldsymbol{\alpha}, V) = \sum_{E' \in \Omega_{\boldsymbol{\alpha}}} \rho(E'; \boldsymbol{\alpha}) e^{-V(E')}. \end{equation}
The size of $\Omega_{\boldsymbol{\alpha}}$ is generically exponential in the dimension of $\mathbf{s}$, making the above sum intractable for large systems. To make it tractable, we will use a standard trick from statistical mechanics, and approximate the sum over distinct energies by smoothing $\rho(E; \boldsymbol{\alpha})$. Let $[E_{\text{min}}, E_{\text{max}}]$ be an interval which contains the set $\Omega_{\boldsymbol{\alpha}}$, and which is divided into $K$ non-overlapping bins~$I_i \subset [E_{\text{min}}, E_{\text{max}}]$ of width $\Delta = (E_{\text{max}}-E_{\text{min}})/K$. Let $E_i$ be the energy in the middle of the $i$th bin. The partition function can be approximated as \begin{equation} \label{Z_approx} \begin{aligned} Z(\boldsymbol{\alpha}, V) &= \sum_{i=1}^K \sum_{E' \in I_i} \rho(E'; \boldsymbol{\alpha}) e^{-V(E')} \\ &= \sum_{i=1}^K e^{-V(E_i)} \sum_{E' \in I_i} \rho(E'; \boldsymbol{\alpha}) + o(\Delta) \\ &\equiv \sum_{i=1}^K e^{-V(E_i)} \bar{\rho}(E_i; \boldsymbol{\alpha}) \Delta + o(\Delta), \end{aligned} \end{equation} where $\bar{\rho}(E_i; \boldsymbol{\alpha}) = (1/\Delta) \sum_{E' \in I_i} \rho(E'; \boldsymbol{\alpha})$ is the \emph{density of states}, and $o(\Delta) \to 0$ as $\Delta \to 0$ since $V$ is differentiable. The approximation $o(\Delta) \approx 0$ is useful for two reasons. First, in most problems, this approximation becomes very good already when the number of bins $K$ is still much smaller than the size of $\Omega_{\boldsymbol{\alpha}}$. Second, an efficient Monte Carlo method, the Wang and Landau algorithm \cite{Wang2001, Belardinelli2007}, exists for estimating the density of states for any fixed value of $\boldsymbol{\alpha}$. The approximation \eqref{Z_approx} yields the following approximation of the average log-likelihood \eqref{likelihood}: \begin{equation} \label{L_approx} \begin{aligned} L(\boldsymbol{\alpha}, V) \approx & -\log \left( \sum_{i=1}^K e^{-V(E_i)} \bar{\rho}(E_i; \boldsymbol{\alpha}) \Delta \right) \\ &- \frac{1}{M}\sum_{i=1}^M V(E(\mathbf{s}^{(i)};\boldsymbol{\alpha})), \end{aligned} \end{equation} which can be maximized with respect to the parameters of $V$ using any gradient-based optimization technique (after specifying the metaparameters $E_{\text{min}}$, $E_{\text{max}}$, and $K$). Many learning algorithms represent information about the model $p(\mathbf{s};\boldsymbol{\alpha},V)$ as a finite list of samples from the model. This representation is necessarily bad at capturing the low probability regions of $p(E; \boldsymbol{\alpha}, V)$. According to \eqref{likelihood}, this means that any such algorithm is expected to be inefficient for learning $V$ in these regions. On the other hand, the density of states $\bar{\rho}$ estimated with the Wang and Landau algorithm carries information about the low probability regions of $p(E; \boldsymbol{\alpha}, V)$ with the same precision as about the high probability regions, and so algorithms based on maximizing \eqref{L_approx} should be efficient at learning the nonlinearity. The approximation \eqref{L_approx} cannot be used to learn the parameters $\boldsymbol{\alpha}$ of the energy function, and so the above algorithm has to be supplemented with other techniques.
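In code, the approximation \eqref{L_approx} reduces to a single stable log-sum-exp once the density of states has been estimated. A minimal sketch, assuming a callable nonlinearity and Wang-Landau estimates of the log density of states at the bin centers (all names are ours):

\begin{verbatim}
import numpy as np
from scipy.special import logsumexp

def approx_log_likelihood(E_samples, V, log_rho, E_centers, Delta):
    """Approximate average log-likelihood of the expression above.

    E_samples : energies E(s_i; alpha) of the training data
    V         : callable nonlinearity
    log_rho   : log density of states at the K bin centers,
                e.g. estimated with the Wang-Landau algorithm
    E_centers : bin-center energies E_i
    Delta     : bin width
    """
    log_Z = logsumexp(-V(E_centers) + log_rho + np.log(Delta))
    return -log_Z - np.mean(V(E_samples))
\end{verbatim}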
\section{Experiments: modeling neural activity of retinal ganglion cells} \label{neurons} A major question in neuroscience is how populations of neurons, rather than individual cells, respond to inputs from the environment. It is well documented that single neuron responses are correlated with each other, but the precise structure of the underlying redundancy and its functional role in information processing are only beginning to be unraveled \cite{Tkacik2014}. The response of a population of $N$ neurons during a short time window can be represented as a binary vector $\mathbf{s} \in \{0,1\}^N$ by assigning a $1$ to every neuron which elicited a spike. We pool all population responses recorded during an experiment, and we ask what probabilistic model would generate these samples. This question was first asked in a seminal paper \cite{Schneidman2006}, which showed that, for small networks of retinal neurons ($N < 20$), the fully visible Boltzmann machine, or a \emph{pairwise model}, \begin{equation} \label{pairwise} p(\mathbf{s};\boldsymbol{J}) = \frac{1}{Z(\boldsymbol{J})}\exp \left(-\sum_{i,j=1}^N J_{ij} s_i s_j\right), \end{equation} is a good description of the data. Later it was realized that for large networks ($40 < N < 120$) pairwise models cannot accurately capture probability distributions of data statistics which average across the whole population, such as the total population activity~$K(\mathbf{s}) = \sum_{i=1}^N s_i$. This issue was solved by the introduction of the so-called \emph{K-pairwise models} \cite{Tkacik2014}, \begin{equation} \begin{aligned} p(\mathbf{s};\boldsymbol{J}, \boldsymbol{\phi}) = \frac{1}{Z(\boldsymbol{J}, \boldsymbol{\phi})}& \exp \left(-\sum_{i,j=1}^N J_{ij} s_i s_j \right. \\ &- \left. \sum_{k=0}^N \phi_k \delta_{k,K(\mathbf{s})}\right). \end{aligned} \end{equation} Here we look at the performance of two additional models: a \emph{semiparametric pairwise model}, which is the generalization \eqref{Venergy_based_model} of the pairwise model, and a restricted Boltzmann machine with $N/2$ hidden units, which is known to be an excellent model of higher-order dependencies. The philosophy of our comparison is not to find a state-of-the-art model, but rather to contrast several different models of comparable complexity (measured as the number of parameters) in order to demonstrate that the addition of a simple nonlinearity to an energy-based model can result in a significantly better fit to data.
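For reference, the two baseline energy functions compared here are straightforward to write down; a short sketch for binary response vectors, with illustrative function names:

\begin{verbatim}
import numpy as np

def pairwise_energy(s, J):
    """E(s) = sum_ij J_ij s_i s_j, the pairwise model above."""
    return s @ J @ s

def k_pairwise_energy(s, J, phi):
    """K-pairwise energy: adds a potential phi_k on the total
    population activity K(s) = sum_i s_i; phi has length N+1."""
    return s @ J @ s + phi[int(s.sum())]
\end{verbatim}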
\subsection{Data} We analyze a simultaneous recording from $160$ neurons in a salamander retina presented with $297$ repetitions of a $19$ second natural movie. The data was collected as part of \cite{Tkacik2014}, and it contains a total of $\sim 2.8 \times 10^5$ population responses. Our goal is to compare the performance, measured as the out-of-sample log-likelihood per sample per neuron, of the above models across several subnetworks of different sizes. To this end, we use our data to construct $48$ smaller datasets as follows. We randomly select $40$ neurons from the total of $160$ as the first dataset. Then we augment this dataset with $20$ additional neurons to yield the second dataset, and we keep repeating this process until we have a dataset of $140$ neurons. This whole process is repeated $8$ times, resulting in $8$ datasets for each of the $6$ different network sizes. Only the datasets with $40$ and $60$ neurons do not have significant overlaps, which is a limitation set by the currently available experiments. For each dataset, we set aside the responses corresponding to $60$ randomly selected repetitions of the movie (out of $297$), and use these as test data. \subsection{Results} \begin{figure} \centering \includegraphics[width=\linewidth]{./fig1.pdf} \caption{Out-of-sample log-likelihood improvement per sample per neuron averaged over subnetworks. Error bars denote variation over subnetworks (one standard deviation), not the accuracy of the likelihood estimates. The baseline is the pairwise model.} \label{fig1} \end{figure} Our results are summarized in Figure \ref{fig1}. We see that both semiparametric pairwise models and RBMs with $N/2$ hidden units substantially outperform the previously considered K-pairwise models. In particular, the improvement increases as we go to larger networks. RBMs are consistently better than semiparametric pairwise models, but, interestingly, this gap does not seem to scale with the network size. The inferred nonlinearity $V$ for semiparametric pairwise models does not vary much across subnetworks of the same size. This is not surprising for large networks, since there is a substantial overlap between the datasets, but it is nontrivial for smaller networks. The average inferred nonlinearities are shown in Figure \ref{fig2}A. Semiparametric pairwise models have one redundant parameter, since the transformation $\boldsymbol{J} \to \kappa \boldsymbol{J}, V(E) \to V(E/\kappa)$ does not change their probability distribution. Therefore, to make the comparison of $V$ across different datasets fair, we fix $\kappa$ by requiring $\sum_{i,j} J_{ij}^2 = 1$. Furthermore, the nonlinearities in Figure \ref{fig2}A are normalized by $N$ because we expect the probabilities in a system of dimension $N$ to scale as $e^{-N}$. The inferred nonlinearities are approximately concave, and their curvature increases with the network size. \begin{figure} \centering \includegraphics[width=\linewidth]{./fig2_arxiv.pdf} \caption{A. Inferred nonlinearities of the semiparametric pairwise model. The shift along the y-axis is arbitrary, and its only purpose is to increase readability. Black lines are averages over subnetworks. Shaded regions denote variation across subnetworks (one standard deviation). Curves are ordered in increasing order (bottom is $N=40$, top is $N=140$). B. Inferred probability densities of the latent variable for one sequence of subnetworks.} \label{fig2} \end{figure} \subsection{Training the models} Training was done by a variation of Persistent Contrastive Divergence \cite{Tieleman2008}, which performs an approximate gradient ascent on the log-likelihood of any energy-based model \eqref{energy_based_model}. Given an initial guess of the parameters $\boldsymbol{\alpha}_0$ and a list of $M_s$ samples drawn from $p(\mathbf{s};\boldsymbol{\alpha}_0)$, the algorithm can be summarized as \begin{program} \FOR t:=1 \TO L \boldsymbol{\alpha}_t = \boldsymbol{\alpha}_{t-1} + \eta (\mathbf{E}[\nabla_{\boldsymbol{\alpha}}E(\mathbf{s};\boldsymbol{\alpha}_{t-1})]_{\text{samples}_{t-1}} \qquad \qquad \qquad \quad - \mathbf{E}[\nabla_{\boldsymbol{\alpha}}E(\mathbf{s};\boldsymbol{\alpha}_{t-1})]_{\text{data}}) \text{samples}_t = \text{GIBBS}^n(\text{samples}_{t-1},\boldsymbol{\alpha}_t) \end{program} where $L$ is the number of iterations, $\eta$ is the learning rate, $\mathbf{E}[\cdot]_{\text{list}}$ denotes an average over the list of states, and $\text{GIBBS}^n$ represents $n$ applications (i.e. $n$ neurons are switched) of the Gibbs sampling transition operator.
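The update rule above is compact enough to state directly in code. A minimal sketch of one Persistent Contrastive Divergence iteration, where \texttt{grad\_E} and \texttt{gibbs\_sweep} are assumed callables supplied by the model (our names, not from any particular library):

\begin{verbatim}
import numpy as np

def pcd_step(alpha, samples, data, grad_E, gibbs_sweep, eta, n):
    """One Persistent Contrastive Divergence update.

    grad_E(s, alpha): gradient of the energy w.r.t. alpha at state s
    gibbs_sweep(samples, alpha, n): apply n Gibbs transitions to each
    persistent chain and return the updated list of samples
    """
    model_term = np.mean([grad_E(s, alpha) for s in samples], axis=0)
    data_term = np.mean([grad_E(s, alpha) for s in data], axis=0)
    alpha = alpha + eta * (model_term - data_term)   # gradient ascent
    samples = gibbs_sweep(samples, alpha, n)         # persist the chains
    return alpha, samples
\end{verbatim}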
In the case of semiparametric pairwise models, Persistent Contrastive Divergence was used as part of an alternating maximization algorithm in which we learned the nonlinearity by maximizing the approximate likelihood \eqref{L_approx} while keeping the couplings $\boldsymbol{J}$ fixed, and then learned the couplings using Persistent Contrastive Divergence with the nonlinearity fixed. Details of the learning algorithms for all models are described in Appendix B. The most interesting metaparameter is the number of bins $Q$ necessary to model the nonlinearity $V$. We settled on $Q=12$, but we observed that decreasing it to $Q=6$ would not significantly change the training likelihood. However, $Q=12$ yielded more consistent nonlinearities across different subnetworks. \subsection{Estimating likelihood and controlling for overfitting} Estimating the likelihood of our data is easy because the state $\mathbf{s}_0 = (0,\ldots,0)$ occurs with high probability ($\sim 0.2$), and all the inferred models retain this property. Therefore, for any energy-based model, we estimated $p(\mathbf{s}_0;\boldsymbol{\alpha})$ by drawing $3\times 10^6$ samples using Gibbs sampling, calculated the partition function as $Z(\boldsymbol{\alpha}) = \exp(-E(\mathbf{s}_0;\boldsymbol{\alpha}))/p(\mathbf{s}_0;\boldsymbol{\alpha})$, and used it to calculate the likelihood. The models do not have any explicit regularization. We tried adding a smoothed version of L1 regularization on the coupling matrices, but we did not see any improvement in generalization using cross-validation on one of the training datasets. A certain amount of regularization is due to the sampling noise in the estimates of the likelihood gradients, which helps us avoid overfitting. \section{Why does it work?} \label{why_it_works} Perhaps surprisingly, the addition of a simple nonlinearity to the pairwise energy function $E(\mathbf{s};\boldsymbol{J}) = \sum_{i,j=1}^N J_{ij} s_i s_j$ significantly improves the fit to data. Here we give heuristic arguments that this should be expected whenever the underlying system is globally coupled. \subsection{Many large complex systems are globally coupled} Let $p_N(\mathbf{s})$ be a sequence of positive probabilistic models ($\mathbf{s}$ is of dimension $N$), and suppose that $p_N(\mathbf{s})$ can be (asymptotically) factorized into statistically independent subsystems whose number is proportional to $N$. Then $(1/N)\log p_N(\mathbf{s})$ is an average of independent random variables, and we expect its standard deviation $\sigma(\log p_N(\mathbf{s}))/N$ to vanish in the $N \to \infty$ limit. Alternatively, if $\sigma(\log p_N(\mathbf{s}))/N \nrightarrow 0$, then the system cannot be decomposed into independent subsystems, and there must be some mechanism globally coupling the system together. It has been argued that many natural systems \cite{Mora2011}, including luminance in natural images \cite{Stephens2013}, amino acid sequences of proteins \cite{Mora2010}, and neural activity such as the one studied in Sec. \ref{neurons} \cite{Tkacik2015, Mora2015}, belong to the class of models whose log-probabilities per dimension have large variance even though their dimensionality is big. Therefore, models of such systems should reflect the prior expectation that there is a mechanism which couples the whole system together. In our case, this mechanism is the nonlinearity $V$.
\subsection{Mapping the nonlinearity to a latent variable} Recent work attributes the strong coupling observed in many systems to the presence of latent variables \cite{Schwab2014, Aitchison2014}. We can rewrite the model \eqref{Venergy_based_model} in terms of a latent variable considered in \cite{Schwab2014} if we assume that $\exp (-V(E))$ is a totally monotone function, i.e. that it is continuous for $E \geq 0$ (we assume, without loss of generality, that $E(\mathbf{s};\boldsymbol{\alpha}) \geq 0$), infinitely differentiable for $E > 0$, and that $(-1)^n d^n \exp (-V(E))/dE^n \geq 0$ for $n \geq 0$. Bernstein's theorem \cite{Widder1946} then asserts that we can rewrite the model \eqref{Venergy_based_model} as \begin{equation} \label{eq_latent} \begin{aligned} \frac{e^{-V(E(\mathbf{s};\boldsymbol{\alpha}))}}{Z(\boldsymbol{\alpha},V)} &= \int_{0}^{\infty} q(h)\frac{e^{-hE(\mathbf{s}; \boldsymbol{\alpha})}}{Z(h;\boldsymbol{\alpha})} \diff h, \\ Z(h;\boldsymbol{\alpha}) &= \sum_{\mathbf{s}} e^{-hE(\mathbf{s};\boldsymbol{\alpha})}, \end{aligned} \end{equation} where $q(h)$ is a probability density (possibly containing delta functions). Suppose that the energy function has the form \eqref{gibbs}. Then we can interpret \eqref{eq_latent} as a latent variable $h$ being coupled to every group of interacting variables, and hence inducing a coupling across the whole system whose strength depends on the size of the fluctuations of $h$. While the class of positive, twice differentiable, and decreasing functions that we consider is more general than the class of totally monotone functions, we can find the maximum likelihood densities $q(h)$ which correspond to the pairwise energy functions inferred in Section \ref{neurons} using the semiparametric pairwise model. We model $q(h)$ as a histogram, and maximize the likelihood under the model \eqref{eq_latent} by estimating $Z(h;\boldsymbol{J})$ using the approximation \eqref{Z_approx}. The maximum likelihood densities are shown in Figure \ref{fig2}B for one particular sequence of networks of increasing size. The units of the latent variable are arbitrary, and are set by the scale of $\boldsymbol{J}$, which we normalize so that $\sum_{i,j} J_{ij}^2 = 1$. The bimodal structure of the latent variables is observed across all datasets. We do not observe a significant decrease in likelihood by replacing the nonlinearity $V$ with the integral form \eqref{eq_latent}. Therefore, at least for the data in Sec. \ref{neurons}, the nonlinearity can be interpreted as a latent variable globally coupling the system. \subsection{Asymptotic form of the nonlinearity} Suppose that the true system $\hat{p}_N(\mathbf{s})$ which we want to model with \eqref{Venergy_based_model} satisfies $\sigma(\log \hat{p}_N(\mathbf{s}))/N \nrightarrow 0$. Then we also expect that energy functions $E_N(\mathbf{s};\boldsymbol{\alpha})$ which accurately model the system satisfy $\sigma(E_N(\mathbf{s};\boldsymbol{\alpha}))/N \nrightarrow 0$.
In the limit $N \to \infty$, the approximation of the likelihood \eqref{L_approx} becomes exact when $V$ is differentiable ($\Delta$ has to be appropriately scaled with $N$), and the arguments which led to \eqref{eq_exact} can be reformulated in this limit to yield \begin{equation} \label{eq_limit} V_N(E) = \log \bar{\rho}_N(E; \boldsymbol{\alpha}) - \log \hat{\bar{p}}_N(E; \boldsymbol{\alpha}) + \text{const.}, \end{equation} where $\bar{\rho}_N(E; \boldsymbol{\alpha})$ is the density of states, and $\hat{\bar{p}}_N(E; \boldsymbol{\alpha})$ is now the probability density of $E_N(\mathbf{s};\boldsymbol{\alpha})$ under the true model. In statistical mechanics, the first term $\log \bar{\rho}_N(E; \boldsymbol{\alpha})$ is termed the \emph{microcanonical entropy}, and is expected to scale linearly with $N$. On the other hand, because $\sigma(E_N(\mathbf{s};\boldsymbol{\alpha}))/N \nrightarrow 0$, we expect the second term to scale at most as $\log N$. Thus we make the prediction that if the underlying system cannot be decomposed into independent subsystems, then the maximum likelihood nonlinearity satisfies $V(E) \approx \log \bar{\rho}(E; \boldsymbol{\alpha})$ up to an arbitrary constant. In Figure \ref{fig3}A we show a scatter plot of the inferred nonlinearity for one sequence of subnetworks in Sec. \ref{neurons} versus the microcanonical entropy estimated using the Wang and Landau algorithm. While the convergence is slow, the plot suggests that these functions approach each other as the network size increases. To demonstrate this prediction on yet another system, we used the approach in \cite{Siwei2009} to fit the semiparametric pairwise model with $\mathbf{s} \in \mathbb{R}^{L^2}$ to $L \times L$ patches of pixel log-luminances in natural scenes from the database \cite{Tkacik2011}. This model has an analytically tractable density of states, which makes the inference simple. Figure \ref{fig3}B shows the relationship between the inferred nonlinearity and the microcanonical entropy for collections of patches of increasing size, confirming our prediction that the nonlinearity should be given by the microcanonical entropy. \begin{figure} \centering \includegraphics[width=\linewidth]{./fig3_arxiv.pdf} \caption{A. Plot of the inferred nonlinearity versus the microcanonical entropy for the semiparametric pairwise model. B. The same for the elliptically symmetric model of the luminance distribution in natural images.} \label{fig3} \end{figure} \section{Conclusions} We presented a tractable extension of any energy-based model which can be interpreted as augmenting the original model with a latent variable. As demonstrated on the retinal activity data, this extension can yield a substantially better fit to data even though the number of additional parameters is negligible compared to the number of parameters of the original model. In light of our results, we hypothesize that combining a nonlinearity with the energy function of a restricted Boltzmann machine might yield a model of retinal activity which is not only accurate, but also simple as measured by the number of parameters. Simplicity is an important factor in neuroscience because of experimental limitations on the number of samples. We plan to pursue this hypothesis in future work. Our models are expected to be useful whenever the underlying system cannot be decomposed into independent components.
This phenomenon has been observed in many natural systems, and the origins of this global coupling, and especially its analogy to physical systems at critical points, have been hotly debated. Our models effectively incorporate the prior expectations of a global coupling in a simple nonlinearity, making them superior to models based on Gibbs random fields which might need a large number of parameters to capture the same dependency structure. \section*{Acknowledgments} We thank David Schwab, Elad Schneidman, and William Bialek for helpful discussions. This work was supported in part by HFSP Program Grant RGP0065/2012 and Austrian Science Fund (FWF) grant P25651.
\section{Introduction} Random geometric graphs have been a very active area of research for some decades. The most basic model of these graphs is obtained by choosing a random set of vertices in $\mathds{R}^d$ and connecting any two vertices by an edge whenever their distance does not exceed some fixed parameter $\rho>0$. In the seminal work \cite{G_1961} by Gilbert, these graphs were suggested as a model for random communication networks. As a consequence of the increasing relevance of real-world networks like social networks or wireless networks, variations of Gilbert's model have gained considerable attention in recent years, see e.g. \cite{LP_2013_1, LP_2013_2, CYG_2014, MP_2010, DF_2011}. For a detailed historical overview on the topic, we refer the reader to Penrose's fundamental monograph \cite{P_2003} on random geometric graphs. \smallskip For a given random geometric graph $\mathfrak{G}$ and a fixed connected graph $H$ on $k$ vertices, the corresponding \emph{subgraph count} $N^H$ is the random variable that counts the number of occurrences of $H$ as a subgraph of $\mathfrak{G}$. Note that only non-induced subgraphs are considered throughout the present work. The resulting class of random variables has been studied by many authors, see \cite[Chapter 3]{P_2003} for a historical overview on related results. \smallskip The purpose of the present paper is to establish \emph{concentration inequalities} for $N^H$, i.e. upper bounds on the probabilities ${\mathds P}(N^H\geq M+r)$ and ${\mathds P}(N^H\leq M-r)$, when the vertices of $\mathfrak{G}$ are given by the points of a Poisson point process. Our concentration results for subgraph counts, gathered in the following theorem, provide estimates for the cases where $M$ is either the expectation or a median of $N^H$. \begin{thm} \label{main} Let $\eta$ be a Poisson point process in $\mathds{R}^d$ with non-atomic intensity measure $\mu$. Let $H$ be a connected graph on $k\geq 2$ vertices. Consider the corresponding subgraph count $N^H=N$ in a random geometric graph associated to $\eta$. Assume that $\mu$ is such that almost surely $N < \infty$. Then all moments of $N$ exist and for all $r\geq 0$, \begin{align*} {\mathds P}(N\geq \mathds{E} N + r) &\leq \exp\left(-\frac{((\mathds{E} N + r)^{1/(2k)}-(\mathds{E} N)^{1/(2k)})^2}{2k^2c_d}\right),\\ {\mathds P}(N\leq \mathds{E} N - r) &\leq \exp\left(-\frac{r^2}{2k\mathds{V} N}\right),\\ {\mathds P}(N> \mathds{M} N+r) &\leq 2 \exp\left(-\frac{r^2}{4k^2c_d(r+\mathds{M} N)^{2-1/k}}\right),\\ {\mathds P}(N< \mathds{M} N-r) &\leq 2 \exp\left(-\frac{r^2}{4k^2c_d(\mathds{M} N)^{2-1/k}}\right), \end{align*} where $c_d>0$ is a constant that depends on $H$ and $d$ only, $\mathds{E} N$ and $\mathds{V} N$ denote the expectation and the variance of $N$, and $\mathds{M} N$ is the smallest median of $N$. \end{thm} One might think that the study of subgraph counts needs to be restricted to finite graphs to ensure that all occurring variables are finite. We stress that this is \emph{not} the case. There are Poisson processes in $\mathds{R}^d$ such that the associated random geometric graph has a.s. infinitely many vertices but still a.s. a finite number of edges. Similarly, one can also have a.s. infinitely many edges, but still a finite number of triangles. This phenomenon seems to be quite unexplored so far. In the context of concentration properties, a natural question is whether concentration inequalities also hold in these situations. 
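For illustration, the quantities appearing in Theorem \ref{main} can be explored by simulation. The following minimal Python sketch is our own illustration and not part of the results; the homogeneous intensity, the unit square window and all parameter values are arbitrary choices. It samples a Poisson number of uniform points, builds the random disk graph and records the triangle count $N^H$ for $H=K_3$, so that empirical mean, median and variance can be compared with the tail bounds above.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def triangle_count(points, rho):
    # triangle count N^H (H = K_3, k = 3) of the random disk graph
    if len(points) < 3:
        return 0
    dists = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    A = (dists <= rho).astype(int)
    np.fill_diagonal(A, 0)
    # each triangle contributes 6 to the trace of A^3
    return int(np.trace(A @ A @ A)) // 6

# Monte Carlo estimates of E N, M N and V N
lam, rho, trials = 50.0, 0.12, 500
samples = np.array([
    triangle_count(rng.random((rng.poisson(lam), 2)), rho)
    for _ in range(trials)])
print("mean:", samples.mean(), "median:", np.median(samples),
      "variance:", samples.var())
\end{verbatim}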
We emphasize that Theorem~\ref{main} only requires $N < \infty$ almost surely and hence covers such cases, as opposed to previous results from \cite{RST_2013, LR_2015} where only finite intensity measures are considered. \smallskip The asymptotic exponents in the upper tail bounds for both the expectation and the median are equal to $1/k$. This is actually best possible as will be pointed out later. Note also that the asymptotic exponents of the estimates in previous results from \cite{RST_2013, LR_2015} are $1/k$ for both the upper and the lower tail, which is compatible with our upper tail bounds but worse than our lower tail inequalities. \smallskip The essential tools for proving our concentration inequalities are new results on Poisson U-statistics. Given a (simple) Poisson point process $\eta$ on $\mathds{R}^d$, denote by $\eta_{\neq}^k$ the set of all $k$-tuples of distinct elements of $\eta$. A \emph{Poisson U-statistic} of order $k$ is a random variable that can be written as $$\sum_{ {\bf x}\in\eta_{\neq}^k} f( {\bf x}), $$ where $f:(\mathds{R}^d)^k\to\mathds{R}$ is a symmetric measurable map. The methods that we use in the present work to prove concentration of Poisson U-statistics around the mean are based on tools recently developed by Bachmann and Peccati in \cite{BP_2015}. In that work, concentration inequalities for the edge count in random geometric graphs were established. The approach to achieve these estimates will be generalized in the present article to arbitrary subgraph counts. \smallskip In addition to this, we will present a refinement of the method suggested in \cite{RST_2013, LR_2015} by Lachi{\`e}ze-Rey, Reitzner, Schulte and Th{\"a}le which gives concentration of Poisson U-statistics around the median. We improve this method further; apart from giving clean and easy-to-use tail bounds, it has the particular advantage that the underlying Poisson process may have an infinite intensity measure. \smallskip As an application of the concentration inequalities in Theorem \ref{main} we are going to establish strong laws of large numbers for subgraph counts. In \cite[Chapter 3]{P_2003} one can find strong laws for subgraph counts associated to i.i.d. points in $\mathds{R}^d$. To the best of our knowledge, there are no comparable strong laws so far for the Poisson process case and a particular feature of our results is that they even apply to certain Poisson processes with non-finite intensity measure. \smallskip In recent years, the study of random geometric simplicial complexes built on random point sets in $\mathds{R}^d$ has attracted considerable attention, see e.g. \cite{K_2011, DFR_2014, KM_2013, YSA_2014} and in particular the survey \cite{BK_2014} by Bobrowski and Kahle on this topic. Motivations to study these complexes arise particularly from topological data analysis (for a survey on this see \cite{C_2009}), but also from other applications of geometric complexes like sensor networks (see e.g. \cite{SG_2007}). One of the models for random simplicial complexes is the so-called Vietoris-Rips complex. We stress that the number of $(k-1)$-dimensional simplices in this complex is exactly given by the number of complete subgraphs on $k$ vertices in the corresponding random geometric graph. So the results in the present paper are directly related to the study of random Vietoris-Rips complexes and may be useful in future research on these models. 
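Returning to Poisson U-statistics, the definition can also be read operationally. The following brute-force sketch (again our own illustration; all function names and parameter values are ours) evaluates $\sum_{ {\bf x}\in\eta_{\neq}^k} f( {\bf x})$ for a symmetric kernel by summing over unordered $k$-subsets and multiplying by $k!$; the edge count of a random disk graph serves as an example.

\begin{verbatim}
import numpy as np
from itertools import combinations
from math import factorial

def u_statistic(points, k, kernel):
    # sum of kernel(x_1, ..., x_k) over ordered k-tuples of distinct
    # points; by symmetry of the kernel this equals k! times the sum
    # over unordered k-subsets
    total = 0.0
    for idx in combinations(range(len(points)), k):
        total += kernel(*(points[i] for i in idx))
    return factorial(k) * total

# example: edge count, order k = 2, symmetric kernel f = 1/2 on edges
rho = 0.1
edge_kernel = lambda x, y: 0.5 * float(np.linalg.norm(x - y) <= rho)

rng = np.random.default_rng(1)
pts = rng.random((rng.poisson(80), 2))  # Poisson number of vertices
print("edge count:", u_statistic(pts, 2, edge_kernel))
\end{verbatim}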
\smallskip Related to our investigations are the findings by Eichelsbacher, Rai{\v{c}} and Schreiber in \cite{ERS_2015} where deviation inequalities for stabilizing functionals of finite intensity measure Poisson processes were derived. These results are in principle also applicable to U-statistics (and hence to subgraph counts), as was pointed out in \cite{RST_2013}, although the resulting asymptotic exponents tend to be non-optimal. Moreover, since the constants in the tail bounds from \cite{ERS_2015} depend on the intensity and are not given explicitly, it remains an interesting open question whether these estimates can be extended to settings with non-finite intensity measures. \smallskip The paper is organized as follows. In Section 2 we will introduce the general assumptions and definitions that form the framework for the entire work. As anticipated, we will prove the announced concentration inequalities not only for subgraph counts, but for local Poisson U-statistics that satisfy certain additional assumptions. The tail estimates for subgraph counts will then follow directly from the more general results. In Section 3 we will present these general concentration inequalities. The announced applications to subgraph counts in random geometric graphs are presented in Section 4. Here, after pointing out how the general results apply to the setting of subgraph counts, we will analyse the asymptotic behaviour of expectation, median and variance, which is needed for the subsequent presentation of the strong laws of large numbers. The proofs of all statements are gathered in Section 5. \section{Framework} \label{s:framework} For the remainder of the article we denote by ${\bf N}$ the space of locally finite point configurations in $\mathds{R}^d$. Elements in ${\bf N}$ can be regarded as locally finite subsets of $\mathds{R}^d$, but also as locally finite simple point measures on $\mathds{R}^d$. In this spirit, for $\xi\in{\bf N}$, we will use set-related notation such as $\xi\cap A$ as well as measure-related notation such as $\xi(A)$. Moreover, we denote by $\mathcal{N}$ the $\sigma$-algebra over ${\bf N}$ that is generated by the maps \begin{align*} {\bf N}\to\mathds{N}\cup\{\infty\}, \xi \mapsto \xi(A), \end{align*} where $A$ ranges over all Borel subsets of $\mathds{R}^d$. Throughout, we will consider a (non-trivial) Poisson point process $\eta $ on $\mathds{R}^d$. Then $\eta $ is a random element in ${\bf N}$ and its intensity measure $\mu$ is the measure on $\mathds{R}^d$ defined by $\mu(A) = \mathds{E}\eta (A)$ for any Borel set $A\subseteq\mathds{R}^d$. It will be assumed that $\mu$ is locally finite and does not have atoms. A \emph{(Poisson) U-statistic of order $k$} is a functional $F:{\bf N}\to\mathds{R}\cup\{\pm\infty\}$ that can be written as \begin{align*} F(\xi) = \sum_{ {\bf x}\in\xi_{\neq}^k} f( {\bf x}), \end{align*} where $f:(\mathds{R}^d)^k \to \mathds{R}$ is a symmetric measurable map called the \emph{kernel} of $F$, and for any $\xi\in{\bf N}$, \begin{align*} \xi_{\neq}^k = \{(x_1,\ldots,x_k)\in\xi^k : x_i\neq x_j \ \text{whenever} \ i\neq j\}. \end{align*} The map $F$ and the corresponding random variable $F(\eta )$ will be identified in what follows if there is no risk of ambiguity. Note that in this work we will only consider U-statistics $F$ with non-negative kernels. Also, it will be assumed throughout that $\eta$ guarantees almost surely $F(\eta)<\infty$ unless stated otherwise. 
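As a small computational aside (a sketch of our own, not part of the framework), a Poisson point process with a prescribed Lebesgue density $m$ can be sampled on a bounded window by thinning a homogeneous process of intensity $m_{\max}\geq \sup m$; the density below is the one underlying Figure \ref{RGGSim} in Section \ref{s:SCRGG}.

\begin{verbatim}
import numpy as np

def sample_poisson(m, m_max, half_width=5.0, d=2, seed=0):
    # thinning: sample a homogeneous Poisson process of intensity
    # m_max on [-half_width, half_width]^d and keep each point x
    # independently with probability m(x) / m_max
    rng = np.random.default_rng(seed)
    vol = (2.0 * half_width) ** d
    n = rng.poisson(m_max * vol)
    pts = rng.uniform(-half_width, half_width, size=(n, d))
    keep = rng.random(n) < m(pts) / m_max
    return pts[keep]

# density m(x) = 18 (|x| + 1)^(-1), bounded above by m_max = 18
m = lambda x: 18.0 / (np.linalg.norm(x, axis=-1) + 1.0)
eta = sample_poisson(m, m_max=18.0)
print(len(eta), "points in the window")
\end{verbatim}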
Motivated by the application to random geometric graphs, we will define further properties that a U-statistic $F$ with kernel $f\geq 0$ might satisfy. We will refer to these properties in the remainder of the article. \begin{itemize} \item [(K1)] There is a constant $\rho_F>0$ such that \begin{align*} f(x_1,\ldots,x_k) > 0 \ \text{whenever} \ \diam(x_1,\ldots,x_k) \leq \rho_F. \end{align*} \item [(K2)] There is a constant $\Theta_F\geq 1$ such that \begin{align*} f(x_1,\ldots,x_k)=0 \ \text{whenever} \ \diam(x_1,\ldots,x_k) > \Theta_F\rho_F. \end{align*} \item [(K3)] There are constants $M_F \geq m_F > 0$ such that \begin{align*} M_F\geq f(x_1,\ldots,x_k) \geq m_F \ \text{whenever} \ f(x_1,\ldots,x_k)>0. \end{align*} \end{itemize} U-statistics that satisfy property (K2) are also referred to as \emph{local} U-statistics. Property (K3) is satisfied in particular by U-statistics that count occurrences of certain subconfigurations. \medskip \section{General Results} The concentration inequalities for U-statistics we are about to present are based on methods developed in \cite{RST_2013, LR_2015} and \cite{BP_2015}. We begin by citing the results that we use from these articles. To do so, we first need to introduce a further notion. The \emph{local version} of a U-statistic $F$ is defined for any $\xi\in{\bf N}$ and $x\in\xi$ by \begin{align*} F(x,\xi) = \sum_{{\bf y}\in (\xi\setminus x)^{k-1}_{\neq}} f(x,{\bf y}), \end{align*} where $\xi\setminus x$ is shorthand for $\xi\setminus\{x\}$. Note that \begin{align*} F(\xi) = \sum_{x\in\xi}F(x,\xi). \end{align*} The upcoming result uses a notion introduced in \cite{BP_2015}: A U-statistic $F$ is called \emph{well-behaved} if there is a measurable set $B\subseteq {\bf N}$ with ${\mathds P}(\eta\in B)=1$, such that \begin{enumerate} \item $F(\xi)<\infty$ for all $\xi\in B$, \item $\xi\cup\{x\}\in B$ whenever $\xi\in B$ and $x\in\mathds{R}^d$, \item $\xi\setminus \{x\}\in B$ whenever $\xi\in B$ and $x\in\xi$. \end{enumerate} Roughly speaking, the motivation for this notion is that for a well-behaved \linebreak U-statistic $F$ one has almost surely $F(\eta)<\infty$ as well as $F(\eta\cup\{x\})<\infty$ for $x\in\mathds{R}^d$ and $F(\eta\setminus\{x\})<\infty$ for $x\in\eta$. This ensures that the local versions $F(x,\eta)$ for $x\in\eta$ and also $F(x, \eta \cup\{x\})$ for $x\in\mathds{R}^d$ are almost surely finite, thus preventing technical problems arising from non-finiteness of these quantities. The following theorem combines \cite[Corollary 5.2]{BP_2015} and \cite[Corollary 5.3]{BP_2015} with the fact that by \linebreak \cite[Lemma 3.4]{Ka_2002} the exponential decay of the upper tail implies existence of all moments of $F$. \begin{thm} \label{generalUpperThm} Let $F$ be a well-behaved U-statistic of order $k$ with kernel $f\geq 0$. Assume that for some $\alpha\in[0,2)$ and $c>0$ we have almost surely \begin{align} \label{condUstat} \sum_{x\in\eta }F(x,\eta )^2 \leq c F^\alpha. \end{align} Then all moments of $F$ exist and for all $r\geq 0$, \begin{align*} {\mathds P}(F\geq \mathds{E} F + r) \leq \exp\left(-\frac{((\mathds{E} F + r)^{1-\alpha/2}-(\mathds{E} F)^{1-\alpha/2})^2}{2k^2c}\right) \end{align*} and \begin{align*} {\mathds P}(F\leq \mathds{E} F - r) \leq \exp\left(-\frac{r^2}{2k\mathds{V} F}\right). \end{align*} \end{thm} Note that condition (K2), as defined in Section \ref{s:framework}, clearly ensures that all U-statistics considered in the present paper are well-behaved in the sense described above. 
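Condition \eqref{condUstat} can be probed numerically for a concrete counting U-statistic. The sketch below (our own illustration with arbitrary parameters) computes the local versions $F(x,\xi)$ of the triangle count on a fixed configuration and compares $\sum_{x\in\xi}F(x,\xi)^2$ with $F(\xi)^{(2k-1)/k}$; Theorem \ref{lem2} below shows that the ratio of these two quantities is bounded by an explicit constant.

\begin{verbatim}
import numpy as np

def local_triangle_versions(points, rho):
    # F(x, xi) for the triangle count: f = 1/3! summed over ordered
    # pairs (y, z) of further points such that x, y, z form a triangle
    dists = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    A = (dists <= rho).astype(float)
    np.fill_diagonal(A, 0.0)
    # row sums of (A @ A) * A count ordered pairs closing a triangle
    return ((A @ A) * A).sum(axis=1) / 6.0

rng = np.random.default_rng(2)
pts, rho, k = rng.random((80, 2)), 0.15, 3
Fx = local_triangle_versions(pts, rho)
F = Fx.sum()  # F(xi) equals the sum over x of F(x, xi)
print("sum F(x)^2 =", (Fx ** 2).sum(),
      "  F^((2k-1)/k) =", F ** ((2 * k - 1) / k))
\end{verbatim}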
In addition to the latter result, a variation of the approach presented in \cite{RST_2013, LR_2015} gives that for finite intensity measure Poisson processes, the condition (\ref{condUstat}) also implies a concentration inequality around the median instead of the expectation of the considered U-statistic. Moreover, it is possible to extend these tail estimates to U-statistics built over non-finite intensity measure processes, resulting in the forthcoming theorem. To state this result, we first need to introduce some further notation. For any real random variable $Z$, we denote by $\mathds{M} Z$ the smallest median of $Z$, i.e. \begin{align} \label{MedDef} \mathds{M} Z = \inf\{x\in\mathds{R}: {\mathds P}(Z\leq x)\geq 1/2\}. \end{align} Note that $\mathds{M} Z$ is exactly the value that the quantile function of $Z$ takes at $1/2$. We are now prepared to state the announced result. \begin{thm} \label{thm2} Let $F$ be a U-statistic of order $k$ with kernel $f\geq 0$. Assume that $F$ is almost surely finite and satisfies (\ref{condUstat}) for some $\alpha\in[0,2)$ and $c>0$. Then for all $r\geq 0$, \begin{align*} {\mathds P}(F> \mathds{M} F+r) &\leq 2 \exp\left(-\frac{r^2}{4k^2c(r+\mathds{M} F)^{\alpha}}\right),\\ {\mathds P}(F< \mathds{M} F-r) &\leq 2 \exp\left(-\frac{r^2}{4k^2c(\mathds{M} F)^{\alpha}}\right). \end{align*} \end{thm} In light of the above results, a natural approach towards concentration inequalities for U-statistics is to establish condition (\ref{condUstat}). For U-statistics satisfying the conditions (K1) to (K3) we obtain the following. \begin{thm} \label{lem2} Let $F$ be a U-statistic of order $k$ with kernel $f\geq 0$ that satisfies (K1) to (K3). Then for any $\xi\in{\bf N}$ we have \begin{align} \label{condUStat2} \sum_{x\in \xi} F(x,\xi)^2 \leq c_dF(\xi)^{\tfrac{2k-1}{k}}, \end{align} where \[ c_d = \left(2\left\lceil \Theta_F\sqrt{d}\right\rceil+1\right)^{2d(k-1)}\cdot M_F^2 \left(\frac{k^k}{m_Fk!}\right)^{\tfrac{2k-1}{k}}. \] \end{thm} The above statement guarantees that Theorem \ref{generalUpperThm} and Theorem \ref{thm2} apply to any almost surely finite U-statistic $F$ that satisfies (K1) to (K3). Thus, the following tail estimates are established. \pagebreak \begin{cor} \label{concUstat} Let $F$ be a U-statistic of order $k$ with kernel $f\geq 0$. Assume that $F$ is almost surely finite and satisfies (K1) to (K3). Then all moments of $F$ exist and for all $r\geq 0$, \begin{align*} {\mathds P}(F\geq \mathds{E} F + r) &\leq \exp\left(-\frac{((\mathds{E} F + r)^{1/(2k)}-(\mathds{E} F)^{1/(2k)})^2}{2k^2c_d}\right),\\ {\mathds P}(F\leq \mathds{E} F - r) &\leq \exp\left(-\frac{r^2}{2k\mathds{V} F}\right),\\ {\mathds P}(F> \mathds{M} F+r) &\leq 2 \exp\left(-\frac{r^2}{4k^2c_d(r+\mathds{M} F)^{2-1/k}}\right),\\ {\mathds P}(F< \mathds{M} F-r) &\leq 2 \exp\left(-\frac{r^2}{4k^2c_d(\mathds{M} F)^{2-1/k}}\right), \end{align*} where $c_d$ is defined as in Theorem \ref{lem2}. \end{cor} We conclude this section with a brief discussion of the optimality of the upper tail concentration inequalities that were established above. Consider a U-statistic $F$ such that the assumptions of Corollary \ref{concUstat} hold. Assume that \begin{align}\label{tailCond} {\mathds P}(F\geq M + r) \leq \exp(-I(r)), \end{align} where $M$ is either the mean or a median of $F$ and $I$ is a function that satisfies \begin{align}\label{ICond} \liminf_{r\to\infty} I(r)/r^a > 0 \end{align} for some $a>0$. 
The upper tail estimates from Corollary \ref{concUstat} yield such functions $I$ for the exponent $a=1/k$. The next result states that this is optimal. \begin{prop}\label{optProp} Let $F$ be a U-statistic of order $k$ with kernel $f\geq 0$ such that $F$ satisfies (K1) to (K3). Let $I$ and $a>0$ be such that (\ref{tailCond}) and (\ref{ICond}) are verified. Then $a\leq 1/k$. \end{prop} \section{Subgraph Counts in Random Geometric Graphs}\label{s:SCRGG} In the following, we investigate a model for random geometric graphs which was studied in particular in \cite{LP_2013_1} and \cite{LP_2013_2}. Let $S\subseteq \mathds{R}^d$ be such that $S=-S$. To any countable subset $\xi\subset \mathds{R}^d$ we assign the \emph{geometric graph} $\mathfrak{G}_S(\xi)$ with vertex set $\xi$ and an edge between two distinct vertices $x$ and $y$ whenever $x-y\in S$. For $ {\bf x}=(x_1,\ldots, x_k)\in(\mathds{R}^d)^k$ we will occasionally write $\mathfrak{G}_S( {\bf x})$ instead of $\mathfrak{G}_S(\{x_1,\ldots,x_k\})$. Now, assuming that the vertices are chosen at random according to some Poisson point process $\eta $, we obtain the \emph{random geometric graph} $\mathfrak{G}_S(\eta )$. Denote the closed ball centered at $x\in\mathds{R}^d$ with radius $\rho\in\mathds{R}_+$ by $B(x,\rho)$. Throughout, we will assume that $B(0,\rho) \subseteq S \subseteq B(0,\theta\rho)$ for some $\rho>0$ and $\theta\geq 1$. Note that if we take $\theta = 1$, then $S = B(0,\rho)$ and we end up with the classical model of random geometric graphs for the euclidean norm, often referred to as the \emph{random disk graph}. Also, the classical geometric graphs based on any other norm in $\mathds{R}^d$ are covered by the model introduced above. These classical models are extensively described in \cite{P_2003} for the case when the underlying point process $\eta$ has finite intensity measure. Before proceeding with the discussion, we present a picture that illustrates how the graphs considered in the following might look (in a window around the origin). \medskip \begin{figure}[h] \includegraphics[scale=0.24]{RGG} \caption{Random unit disk graph, intensity measure $\mu=18(\lVert x \rVert+1)^{-1} dx$} \label{RGGSim} \end{figure} A class of random variables frequently studied in the literature on random geometric graphs is that of subgraph counts. For any connected graph $H$ on $k$ vertices the corresponding \emph{subgraph count} is the U-statistic \[ {N^H}(\eta ) = \sum_{ {\bf x}\in \eta ^k_{\neq}} f_H( {\bf x}), \] where the kernel $f_H$ is given by \[ f_H( {\bf x}) = \frac{|\{\text{subgraphs} \ H' \ \text{of} \ \mathfrak{G}_S( {\bf x}): H'\cong H\}|}{k!}. \] Here, a subgraph $H'$ of $\mathfrak{G}_S( {\bf x})$ is a graph on a vertex set which is a subset of $\{x_1,\ldots,x_k\}$ such that any edge of $H'$ is also an edge of $\mathfrak{G}_S( {\bf x})$. Note that we write $H'\cong H$ if $H'$ and $H$ are isomorphic graphs. So, the random variable ${N^H}(\eta )$ counts the occurrences of $H$ as a subgraph of $\mathfrak{G}_S(\eta )$. We will write $N = N^H$ if $H$ is clear from the context. Note that we do not consider induced subgraphs here. Also note that since we are particularly interested in graphs built on non-finite intensity measure Poisson processes, it is possible that $N$ takes the value $\infty$. 
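The combinatorial quantities entering the constants of the next subsection can be computed by brute force for small $H$. The sketch below (our own illustration; function names are ours) counts $a_H$, the number of subgraphs of the complete graph on $k$ vertices isomorphic to $H$, and evaluates the constant $c_d$ of Theorem \ref{lem2}, here for the triangle count in the planar random disk graph, where $\Theta_F=\diam(H)\theta=1$ and $M_F=m_F=1/3!$.

\begin{verbatim}
from itertools import combinations, permutations
from math import ceil, factorial, sqrt

def a_H(H_edges, k):
    # number of subgraphs of K_k isomorphic to the connected graph H
    # (given by its edges on the vertex set {0, ..., k-1}), found by
    # brute force over edge subsets and vertex permutations
    H = {frozenset(e) for e in H_edges}
    count = 0
    for E in combinations(list(combinations(range(k), 2)), len(H)):
        if any({frozenset((p[u], p[v])) for u, v in E} == H
               for p in permutations(range(k))):
            count += 1
    return count

def c_d(k, d, Theta, M, m):
    # the constant c_d of the tail bounds:
    # (2 ceil(Theta sqrt(d)) + 1)^(2d(k-1)) M^2 (k^k / (m k!))^((2k-1)/k)
    q = 2 * ceil(Theta * sqrt(d)) + 1
    return q ** (2 * d * (k - 1)) * M ** 2 * (
        k ** k / (m * factorial(k))) ** ((2 * k - 1) / k)

print(a_H([(0, 1), (1, 2)], k=3))              # path on 3 vertices: 3
print(c_d(k=3, d=2, Theta=1.0, M=1/6, m=1/6))  # triangles in the plane
\end{verbatim}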
\subsection{Concentration Inequalities for Subgraph Counts} From the general concentration result Corollary \ref{concUstat} we obtain the deviation inequalities for subgraph counts stated in Theorem \ref{main}. To see this, let $H$ be a connected graph on $k$ vertices and consider the corresponding subgraph count $N^H$. Denote by $a_H$ the number of subgraphs isomorphic to $H$ in a complete graph on $k$ vertices. Moreover, let $\diam(H)$ be the diameter of $H$, i.e. the length of the longest shortest path in $H$. Then $N=N^H$ satisfies the conditions (K1) to (K3) with \begin{align*} \rho_N = \rho, \ \ \Theta_N = \diam(H)\theta,\ \ M_N = \frac{a_H}{k!},\ \ m_N = \frac{1}{k!}. \end{align*} Hence, Theorem \ref{main} follows from Corollary \ref{concUstat} where \begin{align*} c_d = \left(2\left\lceil\diam(H)\theta\sqrt{d}\right\rceil+1\right)^{2d(k-1)}\cdot \left(\frac{a_H}{k!}\right)^2 k^{2k-1}. \end{align*} \subsection{Asymptotic Behaviour of Subgraph Counts}\label{ABSC} The concentration inequalities presented in our main Theorem \ref{main} depend on expectation, median or variance of the considered subgraph count. It is therefore of interest how these quantities behave asymptotically in settings where the parameters of the model are varied. In what follows, we will study these asymptotic behaviours for sequences of subgraph counts with the particular goal of establishing strong laws of large numbers. For the remainder of this section, we consider a sequence of random geometric graphs $(\mathfrak{G}_{\rho_tS}(\eta _t))_{t\in\mathds{N}}$. Here, $(\eta _t)_{t\in \mathds{N}}$ is a sequence of Poisson point processes in $\mathds{R}^d$ where each $\eta _t$ has intensity measure $\mu_t = t \mu$ and $\mu$ is given by a Lebesgue density $m$. Moreover, $(\rho_t)_{t\in\mathds{N}}$ is a sequence of positive real numbers such that $\rho_t\to 0$ as $t\to\infty$. In correspondence with the conventions at the beginning of Section \ref{s:SCRGG}, the set $S$ is assumed to satisfy $B(0,1)\subseteq S \subseteq B(0,\theta)$ for some $\theta\geq 1$. Any connected graph $H$ on $k$ vertices now yields a corresponding sequence $N^H_t = N_t$ of subgraph counts in the random graphs $\mathfrak{G}_{\rho_t S}(\eta _t)$. We are particularly interested in settings where the underlying Poisson point process has non-finite intensity measure. In this situation it is not automatically guaranteed that the considered subgraph counts are almost surely finite. Note (again) that by Theorem \ref{main}, almost sure finiteness of a subgraph count is equivalent to existence of all its moments. The upcoming result gives a criterion for almost sure finiteness of the considered random variables. \begin{prop}\label{IntegrableProp} Assume that the Lebesgue density $m$ is bounded. Assume in addition that there exists a constant $c\geq 1$ such that \begin{align} \label{desityCond} m(x) \leq c m(y) \ \ \text{whenever} \ \ \lVert x-y \rVert \leq \diam(H)\theta\sup_{t\in\mathds{N}}\rho_t. \end{align} Then, for any $t\in\mathds{N}$, the random variable $N_t$ is almost surely finite if and only if \begin{align} \label{integrableCond} \int_{\mathds{R}^d} m(x)^k dx < \infty. \end{align} \end{prop} \begin{rem*} The reason for considering densities that satisfy condition (\ref{desityCond}) is that this allows us to use a dominated convergence argument in the proof of the upcoming Theorem \ref{SGCExp}. 
A class of densities that satisfy condition (\ref{desityCond}) is given by \begin{align*} m(x) = A (\lVert x \rVert + 1)^{-\gamma}, \end{align*} where $A,\gamma>0$. Indeed, for any $x,y\in \mathds{R}^d$, \begin{align*} \frac{m(x)}{m(y)} = \left(\frac{\lVert y \rVert + 1}{\lVert x \rVert + 1}\right)^\gamma \leq \left(\frac{\lVert x \rVert + \lVert x-y \rVert + 1}{\lVert x \rVert + 1}\right)^\gamma \leq (1+\lVert x-y \rVert)^\gamma. \end{align*} An example of how the resulting random graphs for these densities might look is given in Figure \ref{RGGSim} above. Observe also that condition (\ref{integrableCond}) for a density $m$ given by $m(x)=A(\lVert x \rVert + 1)^{-\gamma}$ is equivalent to $k > d / \gamma$. In particular, the graph in Figure \ref{RGGSim} has almost surely infinitely many edges, but any subgraph count of order at least $3$ (for example the number of triangles) is almost surely finite. \end{rem*} Our goal is to establish strong laws for suitably rescaled versions of the subgraph counts $N_t$. To do so, it is crucial to know the asymptotic behaviour of the expectation $\mathds{E} N_t$ and the variance $\mathds{V} N_t$ as $t\to\infty$. In the case when the intensity measure of the underlying Poisson point process is finite, these asymptotics are well known (see e.g. \cite{P_2003}). Since our focus is on random geometric graphs built on non-finite intensity measure processes, the first step towards establishing strong laws is to study expectation and variance asymptotics. We also have concentration inequalities with respect to the median, so the asymptotic behaviour of the median may also be of interest in applications. The upcoming result addresses these issues. Note that we will write $a_n\sim b_n$ if the sequences $(a_n)_{n\in\mathds{N}}$ and $(b_n)_{n\in\mathds{N}}$ are \emph{asymptotically equivalent}, meaning that $\lim_{n\to\infty} a_n/b_n = 1$. \begin{thm} \label{SGCExp} Assume that the intensity measure $\mu$ is given by a continuous and bounded Lebesgue density $m:\mathds{R}^d\to\mathds{R}_+$ that satisfies (\ref{desityCond}) and (\ref{integrableCond}). Then the following holds: \begin{enumerate} \item There exists a constant $a>0$ such that \begin{align*} \mathds{E} N_t \sim at^k\rho_t^{d(k-1)}. \end{align*} \item If $\lim_{t\to\infty}\mathds{E} N_t = \infty$, then \begin{align*} \mathds{M} N_t \sim \mathds{E} N_t. \end{align*} \item There exist constants $A^{(n)} > 0$ for $1\leq n \leq k$ such that \begin{align*} \mathds{V} N_t \sim t^k\rho_t^{d(k-1)}\sum_{n=1}^k (t\rho_t^d)^{k-n} A^{(n)}. \end{align*} \end{enumerate} \end{thm} The concentration inequalities together with asymptotic results for expectation and variance can be used to obtain the upcoming strong laws for subgraph counts. Note that for the next result to hold, we only need that expectation and variance behave asymptotically as stated in Theorem \ref{SGCExp}. According to \cite[Proposition 3.1, Proposition 3.7]{P_2003} this asymptotic behaviour also holds whenever the intensity measure of $\eta$ is finite and has a bounded Lebesgue density, so the following theorem also applies in these situations. The strong law presented below complements the results \cite[Theorem 3.17, Theorem 3.18]{P_2003} that deal with random geometric graphs built over i.i.d. points. \begin{thm}\label{SLLN} Assume that the statements (i) and (iii) of Theorem \ref{SGCExp} hold. \linebreak Assume in addition that for some $\gamma>0$, \begin{align} \label{regime} \liminf_{t\to\infty} t^{k-\gamma} \rho_t^{d(k-1)} > 0. 
\end{align} Let $a>0$ denote the limit of the sequence $\mathds{E} N_t/(t^k\rho_t^{d(k-1)})$. Then \begin{align*} \frac{N_t}{t^k \rho_t^{d(k-1)}} \overset{a.s.}{\longrightarrow} a \ \ \text{as} \ \ t\to\infty. \end{align*} \end{thm} \section{Proofs} \subsection*{Proof of Theorem \ref{lem2}} To get prepared for the proof of Theorem \ref{lem2}, which is crucial to establish the desired deviation and concentration inequalities, we first prove the following two lemmas. \begin{lem} \label{lem3} Let $x_1,\ldots,x_k,y_1,\ldots, y_k\in\mathds{R}_{\geq 0}$. Then \[ \prod_{i=1}^k x_i + \prod_{i=1}^k y_i \leq \prod_{i=1}^k \max(x_i,y_i) + \prod_{i=1}^k \min(x_i,y_i). \] \end{lem} \begin{proof} Let $I$ and $J$ be the sets of indices $i$ satisfying $x_i<y_i$ and $x_i\geq y_i$, respectively. Then we have \[ \prod_{i\in I} x_i < \prod_{i\in I} y_i \ \ \text{ and } \ \ \prod_{i\in J} x_i \geq \prod_{i\in J} y_i. \] It follows that \begin{align*} \prod_{i=1}^k x_i + \prod_{i=1}^k y_i &= \prod_{i\in I} x_i \prod_{i\in J} x_i + \prod_{i\in I} y_i \prod_{i\in J} y_i\\ &= \prod_{i\in I} x_i \prod_{i\in J} x_i + \left(\prod_{i\in I}y_i - \prod_{i\in I} x_i \right) \prod_{i\in J} y_i + \prod_{i\in I} x_i\prod_{i\in J} y_i\\ &\leq \prod_{i\in I} x_i \prod_{i\in J} x_i + \left(\prod_{i\in I}y_i - \prod_{i\in I} x_i \right) \prod_{i\in J} x_i + \prod_{i\in I} x_i\prod_{i\in J} y_i\\ &= \prod_{i\in I}y_i\prod_{i\in J} x_i + \prod_{i\in I} x_i\prod_{i\in J} y_i = \prod_{i=1}^k \max(x_i,y_i) + \prod_{i=1}^k \min(x_i,y_i). \end{align*} \end{proof} Using the lemma above, we can prove the following. \begin{lem} \label{lem1} Let $n_1,\ldots,n_N\in \mathds{R}_{\geq 0}$, $k\geq 2$ and let $\pi_1,\ldots,\pi_k$ be permutations of $\{1,\ldots, N\}$. Then \[ \sum_{i=1}^N \prod_{j=1}^k n_{\pi_j(i)} \leq \sum_{i=1}^N n_i^k. \] \end{lem} \begin{proof} Without loss of generality we can assume \begin{align} \label{wlog1} n_1\leq n_2\leq\ldots\leq n_N. \end{align} The proof is by induction on $N$. For $N=1$ there is nothing to prove, and for $N=2$ the result follows from Lemma \ref{lem3}. So let $N>2$ and assume the result holds for $N-1$. Let \[ t = |\{i: \pi_j(i)=N \text{ for some } j\}|. \] We prove by induction on $t$ that the following holds: there exist permutations $\tilde{\pi}_1,\ldots,\tilde{\pi}_k$ of $\{1,\ldots,N-1\}$ such that \[ \sum_{i=1}^{N} \prod_{j=1}^k n_{\pi_j(i)} \leq \sum_{i=1}^{N-1} \prod_{j=1}^k n_{\tilde{\pi}_j(i)} + n_N^k. \] The result then follows by applying the induction hypothesis (with respect to $N$) to the right-hand side of this inequality. For $t=1$ the permutations $\tilde{\pi}_i$ obviously exist since in this case there is an index $l$ such that $\pi_j(l) = N$ for all $j$. Consider the case $t>1$. Then there exist indices $l_1\neq l_2$ such that $\pi_{j_1}(l_1)=\pi_{j_2}(l_2) = N$ for some $j_1,j_2$. We define permutations $\bar{\pi}_1,\ldots,\bar{\pi}_k$ of $\{1,\ldots,N\}$ by $\bar{\pi}_j = \pi_j$ if $\pi_j(l_1) > \pi_j(l_2)$ and by \[ \bar{\pi}_j(i) = \begin{cases} \pi_j(i) & \text{if } i\neq l_1,l_2\\ \pi_j(l_2) & \text{if } i= l_1\\ \pi_j(l_1) & \text{if } i= l_2\\ \end{cases} \] if $\pi_j(l_1) < \pi_j(l_2)$. 
Then, using Lemma \ref{lem3} together with \eqref{wlog1} we obtain \begin{align*} &\sum_{i=1}^{N} \prod_{j=1}^k n_{\pi_j(i)} = \sum_{i=1,i\neq l_1,l_2}^{N} \prod_{j=1}^k n_{\pi_j(i)} + \prod_{j=1}^k n_{\pi_j(l_1)} + \prod_{j=1}^k n_{\pi_j(l_2)}\\ &\leq \sum_{i=1,i\neq l_1,l_2}^{N} \prod_{j=1}^k n_{\pi_j(i)} + \prod_{j=1}^k \max(n_{\pi_j(l_1)}, n_{\pi_j(l_2)}) + \prod_{j=1}^k \min(n_{\pi_j(l_1)}, n_{\pi_j(l_2)})\\ &= \sum_{i=1}^{N} \prod_{j=1}^k n_{\bar{\pi}_j(i)}. \end{align*} Note that $\bar{\pi}_j(l_2)\neq N$ for all $j$ and hence $|\{i:\bar{\pi}_j(i)=N \text{ for some } j\}| = t-1$. Applying the induction hypothesis (with respect to $t$) yields the existence of the desired permutations $\tilde{\pi}_j$. This concludes the proof. \end{proof} \begin{proof}[Proof of Theorem \ref{lem2}] First note that the result holds trivially in the case when $F(\xi) = \infty$. Moreover, if $F(\xi)<\infty$, then property (K3) guarantees that $F(x,\xi)=0$ for all but finitely many $x\in\xi$. Hence, we can assume without loss of generality that $\xi$ is finite and that $F(x,\xi)>0$ for all $x\in\xi$. We consider a tiling of $\mathds{R}^d$ into cubes of diagonal $\rho_F$. Since we assumed that $\xi$ is finite, we can choose cubes $C_1,\ldots,C_N$ among them such that $\xi\subseteq \cup C_i$. Then for any distinct $x_1,\ldots,x_k\in\xi$ contained in the same cube we have $f(x_1,\ldots,x_k)\geq m_F$. Moreover, translating the tiling a bit if necessary, we can assume without loss of generality that each point $x\in\xi$ is contained in the interior of exactly one of the cubes. Observe that condition (K2) implies the following: Let $i\in\{1,\ldots, N\}$. Then there are $(2\lceil\sqrt{d} \Theta_F\rceil + 1)^d =: q$ cubes $C^1_i,\ldots,C^q_i$ such that for any $x\in C_i$ we have $\{y_1,\ldots,y_{k-1}\}\subset \cup_{j=1}^qC_i^j$ whenever $f(x,y_1,\ldots,y_{k-1}) > 0$. Note also that the $C_i^j$ can be chosen such that for any fixed $j$ it holds that $\{C_1,\ldots, C_N\} = \{C_1^j,\ldots,C_N^j\}$. Let $[q] := \{1,\ldots,q\}$ and for any $x\in\xi\cap C_i$ and $(j_1,\ldots,j_{k-1})\in[q]^{k-1}$ let \[ C_i(x,j_1,\ldots,j_{k-1}) := \left[(C_i^{j_1}\cap(\xi\setminus x))\times \ldots \times (C_i^{j_{k-1}}\cap(\xi\setminus x))\right]_{\neq}. \] Then we have \begin{align*} & \sum_{x\in\xi}F(x,\xi)^2 = \sum_{i=1}^N \sum_{x\in \xi\cap C_i}F(x,\xi)^2 \\ &= \sum_{i=1}^N \sum_{x\in \xi\cap C_i}\left(\sum_{(j_1,\ldots,j_{k-1})\in[q]^{k-1}}\sum_{{\bf y}\in C_i(x,j_1,\ldots,j_{k-1})} f(x,{\bf y})\right)^2, \end{align*} where ${\bf y} = (y_1,\ldots,y_{k-1})$. Now let $n_i = |\xi\cap C_i|$ and $n_i^{(j)} = |\xi\cap C_i^j|$. Then by condition (K3) the last expression in the above display does not exceed \[ \sum_{i=1}^N \sum_{x\in \xi\cap C_i}\left(\sum_{(j_1,\ldots,j_{k-1})\in[q]^{k-1}} M_F\prod_{l=1}^{k-1} n_i^{(j_l)}\right)^2. \] Moreover, we have \[ \left(\sum_{(j_1,\ldots,j_{k-1})\in[q]^{k-1}} M_F\prod_{l=1}^{k-1} n_i^{(j_l)}\right)^2 \leq q^{k-1}M_F^2 \sum_{(j_1,\ldots,j_{k-1})\in[q]^{k-1}}\prod_{l=1}^{k-1} (n_i^{(j_l)})^2. \] Thus, it follows from the above considerations that \[ \sum_{x\in\xi}F(x,\xi)^2 \leq q^{k-1} M_F^2\sum_{i=1}^N \sum_{x\in \xi\cap C_i}\sum_{(j_1,\ldots,j_{k-1})\in[q]^{k-1}}\prod_{l=1}^{k-1} (n_i^{(j_l)})^2. 
\] Rearranging the triple sum, we can write the right-hand side as \begin{align*} & q^{k-1} M_F^2\sum_{(j_1,\ldots,j_{k-1})\in[q]^{k-1}}\sum_{i=1}^N \sum_{x\in \xi\cap C_i}\prod_{l=1}^{k-1} (n_i^{(j_l)})^2\\ = & \ q^{k-1} M_F^2\sum_{(j_1,\ldots, j_{k-1})\in[q]^{k-1}}\sum_{i=1}^N n_i\prod_{l=1}^{k-1} (n_i^{(j_l)})^2. \end{align*} By Lemma \ref{lem1} this expression does not exceed \begin{align*} q^{k-1} M_F^2\sum_{(j_1,\ldots,j_{k-1})\in[q]^{k-1}}\sum_{i=1}^N n_i^{2k-1} &= q^{2(k-1)} M_F^2\sum_{i=1}^N n_i^{2k-1}\\ &\leq q^{2(k-1)} M_F^2\left(\sum_{i=1}^N n_i^k\right)^{\tfrac{2k-1}{k}}. \end{align*} Here, the latter inequality holds by monotonicity of the $p$-norm. It remains to prove \[ \sum_{i=1}^N n_i^k \leq \frac{k^{k-1}}{m_F(k-1)!}\sum_{x\in\xi} F(x,\xi). \] We will see that for any $x\in\xi\cap C_i$ we have \begin{align} \label{ineq1} n_i^{k-1}\frac{(k-1)!}{k^{k-1}}\leq \frac{F(x,\xi)}{m_F}. \end{align} This then gives, as desired, that \begin{align*} \frac{k^{k-1}}{(k-1)!}\sum_{x\in\xi} F(x,\xi) &= m_F\sum_{i=1}^N\sum_{x\in\xi\cap C_i} \frac{k^{k-1}F(x,\xi)}{m_F(k-1)!}\\ &\geq m_F\sum_{i=1}^N\sum_{x\in\xi\cap C_i} n_i^{k-1} = m_F\sum_{i=1}^N n_i^k. \end{align*} To prove \eqref{ineq1}, let $x\in\xi\cap C_i$. Consider the set \[ A := \{(y_1,\ldots,y_{k-1})\in(\xi\setminus x) ^{k-1}_{\neq} \ : \ f(x,y_1,\ldots,y_{k-1}) > 0\}. \] Then, by definition of $F(x,\xi)$ and by condition (K3) we have \[ F(x,\xi) = \sum_{(y_1,\ldots,y_{k-1})\in A} f(x,y_1,\ldots,y_{k-1})\geq \sum_{(y_1,\ldots,y_{k-1})\in A} m_F. \] Thus \[ \frac{F(x,\xi)}{m_F} \geq |A|. \] Since by condition (K1) we have $((\xi\setminus x)\cap C_i)^{k-1}_{\neq} \subseteq A$, it follows that \[ \prod_{t=1}^{k-1} (n_i - t) = \prod_{t=0}^{k-2} (n_i - 1 - t) = |((\xi\setminus x)\cap C_i)^{k-1}_{\neq}| \leq \frac{F(x,\xi)}{m_F}. \] Moreover, it is straightforward to check that \[ \frac{\prod_{t=1}^{k-1} (n_i - t)}{n_i^{k-1}} \geq \frac{(k-1)!}{k^{k-1}} \] whenever $n_i>k$. Hence, it follows that for $n_i>k$ we have \[ n_i^{k-1} \frac{(k-1)!}{k^{k-1}} \leq \prod_{t=1}^{k-1} (n_i - t)\leq \frac{F(x,\xi)}{m_F}. \] Therefore, the inequality in (\ref{ineq1}) holds for $n_i>k$. To conclude that the inequality also holds for $n_i\leq k$, recall that we assumed without loss of generality that $F(y,\xi)>0$ for all $y\in\xi$, thus $F(x,\xi)>0$. Hence, it follows from condition (K3) together with symmetry of $f$ that in fact $F(x,\xi)\geq m_F(k-1)!$. Thus, for $n_i\leq k$, we obtain \[ n_i^{k-1} \frac{(k-1)!}{k^{k-1}} \leq k^{k-1} \frac{(k-1)!}{k^{k-1}} = (k-1)! \leq \frac{F(x,\xi)}{m_F}. \] This concludes the proof. \end{proof} \subsection*{Proof of Theorem \ref{thm2}} The approach that is described in the following to obtain the deviation inequalities presented in Theorem \ref{thm2} is a refinement of the method suggested in \cite{RST_2013, LR_2015}. We will use the convex distance for Poisson point processes which was introduced in \cite{R_2013}. Let ${\bf N}_{\rm fin}$ be the space of finite point configurations in $\mathds{R}^d$, equipped with the $\sigma$-algebra $\mathcal{N}_{\rm fin}$ that is obtained by restricting $\mathcal{N}$ to ${\bf N}_{\rm fin}$. Then for any $\xi\in {\bf N}_{\rm fin}$ and $A\in \mathcal{N}_{\rm fin}$ this distance is given by \[ d_T(\xi, A) = \max_{u \in S(\xi)}\min_{\delta\in A} \sum_{x\in\xi\setminus\delta}u(x), \] where \[ S(\xi) = \{u: \mathds{R}^d\to\mathds{R}_{\geq 0} \text{ measurable with} \ \sum_{x\in\xi}u(x)^2\leq 1\}. 
\] To obtain the deviation inequalities for a U-statistic $F$, we will first relate $F$ in a reasonable way to $d_T$. Then we will use the inequality \begin{align} \label{concentration} {\mathds P}(\eta \in A){\mathds P}(d_T(\eta , A)\geq s)\leq \exp\left(-\frac{s^2}{4}\right) \ \ \ \ \text{for} \ A\in \mathcal{N}_{\rm fin}, \ s\geq 0, \end{align} which was proved in \cite{R_2013}. For the upcoming proof of Theorem \ref{thm2} we also need the following relation, stated in \cite{LR_2015}. We carry out the corresponding straightforward computations for the sake of completeness. \begin{lem} \label{lem4} Let $F$ be a U-statistic of order $k$ with kernel $f\geq 0$. Then for any $\xi, \delta\in {\bf N}_{\rm fin}$ we have \[ F(\xi)\leq k\sum_{x\in\xi\setminus\delta}F(x,\xi) + F(\delta). \] \end{lem} \begin{proof} We have \begin{align*} F(\xi) &= \sum_{ {\bf x}\in\xi_{\neq}^k} {{\mathds 1}} (\exists x_i \notin \delta) f( {\bf x}) + \sum_{ {\bf x}\in (\xi \cap \delta)_{\neq}^k} f( {\bf x}) \\ &\leq \sum_{i=1}^k \sum_{ {\bf x}\in\xi_{\neq}^k} {{\mathds 1}} (x_i \notin \delta) f( {\bf x}) + \sum_{ {\bf x}\in \delta_{\neq}^k} f( {\bf x}) \\ & = k \sum_{ {\bf x}\in\xi_{\neq}^k} {{\mathds 1}} (x_1 \notin \delta) f( {\bf x}) + \sum_{ {\bf x}\in \delta_{\neq}^k} f( {\bf x}) \\ &= k\sum_{x\in\xi\setminus\delta}F(x,\xi) + F(\delta) \end{align*} where the third line holds by symmetry of $f$. \end{proof} Before we prove Theorem \ref{thm2} in its full generality, we need to establish the corresponding result for finite intensity measure processes. The proof of the next statement is a variation of the method that was suggested in \cite[Sections 5.1 and 5.2]{RST_2013} and \cite[Section 3]{LR_2015}. \begin{prop}\label{propMedian} Assume that the intensity measure of $\eta$ is finite. Let $F$ be a U-statistic of order $k$ with kernel $f \geq 0$ and let $\mathfrak m$ be a median of $F$. Assume that $F$ satisfies (\ref{condUstat}) for some $\alpha\in[0,2)$ and $c>0$. Then for all $r\geq 0$ one has \begin{align*} {\mathds P}(F\geq \mathfrak m+r) &\leq 2 \exp\left(-\frac{r^2}{4k^2c(r+\mathfrak m)^{\alpha}}\right),\\ {\mathds P}(F\leq \mathfrak m-r) &\leq 2 \exp\left(-\frac{r^2}{4k^2c \mathfrak m^{\alpha}}\right). \end{align*} \end{prop} \begin{proof} Let $\xi\in{\bf N}_{\rm fin}$ and $A\in \mathcal{N}_{\rm fin}$. Define the map $u_\xi:\mathds{R}^d\to\mathds{R}_{\geq 0}$ by \begin{align*} u_\xi(x) = \frac{F(x,\xi)}{\sqrt{\sum_{y\in\xi}F(y,\xi)^2}} \ \ \text{if} \ \ x\in\xi \ \ \text{and} \ \ u_\xi(x)=0 \ \ \text{if} \ \ x\notin \xi. \end{align*} Then we have $u_\xi\in S(\xi)$, thus \[ d_T(\xi,A)\geq \min_{\delta\in A} \frac{ \sum_{x\in\xi\setminus\delta}F(x,\xi)}{\sqrt{\sum_{y\in\xi}F(y,\xi)^2}}. \] Moreover, by Lemma \ref{lem4}, for any $\delta\in A$ we have \[ F(\xi)\leq k\sum_{x\in\xi\setminus\delta}F(x,\xi) + F(\delta). \] Thus, since by (\ref{condUstat}) we have $\sum_{y\in\xi}F(y,\xi)^2 \leq c F(\xi)^\alpha$ for ${\mathds P}_\eta$-a.e. $\xi\in{\bf N}_{\rm fin}$, we obtain \[ d_T(\xi,A)\geq \min_{\delta\in A} \frac{ F(\xi)-F(\delta)}{k\sqrt{\sum_{y\in\xi}F(y,\xi)^2}} \geq \min_{\delta\in A} \frac{ F(\xi) - F(\delta)}{k\sqrt{c}F(\xi)^{\alpha/2}}. \] Now, to prove the first inequality, let \[ A = \{\delta\in{\bf N}_{\rm fin} \ : \ F(\delta)\leq\mathfrak m\}. \] Then, since the map $s\mapsto s / (s+\mathfrak m)^{\alpha/2}$ is increasing, we have \[ d_T(\xi,A) \geq\frac{F(\xi) - \mathfrak m}{k\sqrt{c}F(\xi)^{\alpha/2}}\geq \frac{r}{k\sqrt{c}(r+\mathfrak m)^{\alpha/2}} \] for ${\mathds P}_\eta$-a.e. 
$\xi\in{\bf N}_{\rm fin}$ that satisfies $F(\xi)\geq \mathfrak m + r$. This observation together with (\ref{concentration}) and the fact that ${\mathds P}(\eta\in A)\geq \tfrac{1}{2}$ yields \begin{align*} {\mathds P}(F(\eta )\geq \mathfrak m + r) &\leq {\mathds P}\left(d_T(\eta,A) \geq \frac{r}{k\sqrt{c}(r+\mathfrak m)^{\alpha/2}}\right)\\ &\leq 2 \exp\left(-\frac{r^2}{4k^2c(r+\mathfrak m)^{\alpha}}\right). \end{align*} To prove the second inequality, let $r\geq 0$ and \[ A = \{\delta\in{\bf N}_{\rm fin} \ : \ F(\delta)\leq\mathfrak m - r\}. \] If ${\mathds P}(F(\eta)\leq \mathfrak m - r) = {\mathds P}(\eta\in A) = 0$, the desired inequality holds trivially, so assume that ${\mathds P}(F(\eta)\leq \mathfrak m - r)>0$ and note that this implies $r\leq \mathfrak m$. Then, since the map $s\mapsto (s-(\mathfrak m-r))/s^{\alpha/2}$ is increasing, we have \[ d_T(\xi,A) \geq\frac{F(\xi) - (\mathfrak m-r)}{k\sqrt{c}F(\xi)^{\alpha/2}}\geq \frac{\mathfrak m - (\mathfrak m-r)}{k\sqrt{c}\mathfrak m^{\alpha/2}} = \frac{r}{k\sqrt{c}\mathfrak m^{\alpha/2}} \] for ${\mathds P}_\eta$-a.e. $\xi\in{\bf N}_{\rm fin}$ that satisfies $F(\xi)\geq \mathfrak m$. Thus, it follows from (\ref{concentration}) that \begin{align*} \frac{1}{2} &\leq {\mathds P}(F(\eta )\geq \mathfrak m) \leq {\mathds P}\left(d_T(\eta ,A) \geq \frac{r}{k\sqrt{c}\mathfrak m^{\alpha/2}}\right)\\ &\leq \frac{1}{{\mathds P}(F(\eta )\leq \mathfrak m -r)} \exp\left(-\frac{r^2}{4k^2c\mathfrak m^{\alpha}}\right). \end{align*} \end{proof} As a final preparation for the proof of Theorem \ref{thm2}, we establish the following lemma. Recall that $\mathds{M} X$ is the smallest median of a random variable $X$, as defined in \eqref{MedDef}. \begin{lem}\label{medianLemma} Let $X$ and $X_n, n\in\mathds{N}$ be random variables such that a.s. $X_{n+1}\geq X_n$ for all $n\in\mathds{N}$ and $X_n\overset{a.s.}{\to} X$. Then there exists a non-decreasing sequence $(\mathfrak m_n)_{n\in\mathds{N}}$ where $\mathfrak m_n$ is a median of $X_n$ such that $\lim_{n\to\infty} \mathfrak m_n = \mathds{M} X$. \end{lem} \begin{proof} For a random variable $Z$, let \begin{align*} \hat{\mathds{M}} Z = \sup\{x\in\mathds{R} : {\mathds P}(Z\geq x) \geq 1/2\}<\infty. \end{align*} Note that $(\hat{\mathds{M}} X_n)_{n\in\mathds{N}}$ is a non-decreasing sequence and that $\hat{\mathds{M}} X_n \leq \hat{\mathds{M}} X$ for all $n\in\mathds{N}$, hence $(\hat{\mathds{M}} X_n)_{n\in\mathds{N}}$ is convergent. We claim that \begin{align}\label{medianClaim} \lim_{n\to\infty} \hat{\mathds{M}} X_n \geq \mathds{M} X. \end{align} To see this, let $x\in\mathds{R}$ be such that ${\mathds P}(X \leq x)<1/2$. Since almost sure convergence of the $X_n$ implies convergence in distribution, we have by the Portmanteau theorem (see e.g. \cite[Theorem 4.25]{Ka_2002}) that \begin{align*} \limsup_{n\to\infty} {\mathds P}(X_n\leq x) \leq {\mathds P}(X\leq x)<1/2. \end{align*} Hence, for sufficiently large $n$, one has ${\mathds P}(X_n\leq x)<1/2$. This implies that for sufficiently large $n$, we have ${\mathds P}(X_n\geq x) \geq 1/2$ and thus $\hat{\mathds{M}} X_n \geq x$. From these considerations it follows that \begin{align*} \lim_{n\to\infty} \hat{\mathds{M}} X_n &\geq \sup\{x\in\mathds{R}: {\mathds P}(X\leq x)<\tfrac 12\} \\ &= \inf\{x\in\mathds{R}: {\mathds P}(X\leq x)\geq \tfrac 12\} = \mathds{M} X. \end{align*} Hence (\ref{medianClaim}) is established. 
Now, for any $n\in\mathds{N}$, either $\mathds{M} X_n = \hat{\mathds{M}} X_n$ is the unique median of $X_n$ or all elements in the interval $[\mathds{M} X_n, \hat{\mathds{M}} X_n)$ are medians of $X_n$, where $\mathds{M} X_n$ and $\hat{\mathds{M}} X_n$ are non-decreasing in $n$. Taking (\ref{medianClaim}) into account as well as the fact that $\mathds{M} X_n \leq \mathds{M} X$ for all $n\in\mathds{N}$, the result follows. \end{proof} \begin{proof}[Proof of Theorem \ref{thm2}] For any $n\in\mathds{N}$ let $\eta_n = \eta\cap B(0,n)$. Then $\eta_n$ is a Poisson point process with finite intensity measure that is given by $\mu_n(A) = \mu(A\cap B(0,n))$ for any Borel set $A\subseteq \mathds{R}^d$. We define for any $n\in\mathds{N}$ the random variable $F_n$ in terms of the functional $F:{\bf N}\to\mathds{R}\cup\{\infty\}$ by $F_n = F(\eta_n)$. One easily observes that, since $F$ is a U-statistic with non-negative kernel, we have almost surely $F_{n+1}\geq F_n$ for all $n\in\mathds{N}$ and also $F_n\overset{a.s.}{\to} F$. According to Lemma \ref{medianLemma} we can choose a non-decreasing sequence $\mathfrak m_n, n\in\mathds{N}$ such that $\mathfrak m_n$ is a median of $F_n$ satisfying $\lim_{n\to\infty} \mathfrak m_n = \mathds{M} F$. Let $r\geq 0$. By virtue of Proposition \ref{propMedian}, for any $n\in\mathds{N}$ we have \begin{align*} {\mathds P}(F_n - \mathfrak m_n \geq r) \leq 2 \exp\left(-\frac{r^2}{4k^2c(r+\mathfrak m_n)^{\alpha}}\right) \leq 2 \exp\left(-\frac{r^2}{4k^2c(r+\mathds{M} F)^{\alpha}}\right). \end{align*} The sequence of random variables $F_n - \mathfrak m_n$ converges almost surely (and thus in distribution) to $F - \mathds{M} F$. Therefore, by the Portmanteau theorem (see e.g. \linebreak \cite[Theorem 4.25]{Ka_2002}) we have \begin{align*} {\mathds P}(F - \mathds{M} F > r) \leq \liminf_{n\to\infty} {\mathds P}(F_n - \mathfrak m_n > r) \leq 2 \exp\left(-\frac{r^2}{4k^2c(r+\mathds{M} F)^{\alpha}}\right). \end{align*} The inequality for the lower tail follows analogously. \end{proof} \subsection*{Proof of Proposition \ref{optProp}} The proof presented below generalizes ideas from \linebreak \cite[Section 6.1]{BP_2015}. Recall that $F$ is a U-statistic of order $k$ satisfying (K1) to (K3) and that $I$ and $a>0$ are chosen such that \begin{align*} \tag{\ref{tailCond}}{\mathds P}(F\geq M + r) \leq \exp(-I(r)), \end{align*} where $M$ is either the mean or a median of $F$, and \begin{align*} \tag{\ref{ICond}}\liminf_{r\to\infty} I(r)/r^a > 0. \end{align*} \begin{proof}[Proof of Proposition \ref{optProp}] Choose some $x\in\mathds{R}^d$ such that $q = \mu(B(x,\rho_F/2))>0$. Then $Z:=\eta(B(x,\rho_F/2))$ is a Poisson random variable with mean $q$. Since the diameter of $B(x,\rho_F/2)$ equals $\rho_F$, the assumptions (K1) and (K3) yield that for any ${\bf x} \in (\eta \cap B(x,\rho_F/2))_{\neq}^k$ one has $f({\bf x})\geq m_F>0$. It follows that \begin{align*} F\geq m_Fk!\binom{Z}{k}. \end{align*} Hence, there exists a constant $A>0$ such that for sufficiently large $r$, \begin{align*} A Z^k \geq r \ \ \text{implies} \ \ F\geq r. \end{align*} Thus, for sufficiently large $r$, \begin{align}\label{upperTailLowerBound} {\mathds P}(F\geq M + r) \geq {\mathds P}(A Z^k \geq M + r) = {\mathds P}(Z \geq \tau(r)),\end{align} where $\tau(r) = A^{-1/k} (r+M)^{1/k}$. The tail asymptotic of a Poisson random variable is well known and one has ${\mathds P}(Z\geq s) \sim \exp(-s \log(s/q)-q)$ as $s\to\infty$, see e.g. \cite{G_1987}. Here the symbol $\sim$ denotes the asymptotic equivalence relation. 
Combining this with (\ref{upperTailLowerBound}), we obtain \begin{align*} \liminf_{r\to\infty} \frac{{\mathds P}(F\geq M + r)}{\exp(-\tau(r)\log(\tau(r)/q) - q)} \geq 1. \end{align*} Hence, taking (\ref{tailCond}) into account, there exists a constant $B>0$ such that for sufficiently large $r$, \begin{align*} \tau(r)\log(\tau(r)/q) + q \geq I(r) -B. \end{align*} By virtue of (\ref{ICond}), dividing this inequality by $r^a$ and taking the limit yields \begin{align} \label{positiveLiminf} \liminf_{r\to\infty} \frac{\tau(r)\log(\tau(r)/q)}{r^a} > 0. \end{align} Moreover, writing $C = A^{-1/k}>0$, we have \begin{align*} \frac{\tau(r)\log(\tau(r)/q)}{r^a} \sim C r^{1/k-a} \log(C(r+M)^{1/k} /q). \end{align*} The result follows since $a>1/k$ would imply that the right-hand side in the above display converges to $0$, contradicting (\ref{positiveLiminf}). \end{proof} \subsection*{Proof of Proposition \ref{IntegrableProp} and Theorem \ref{SGCExp}} The upcoming reasoning is partially inspired by the proof of \cite[Proposition 3.1]{P_2003}. First of all, we need to introduce the quantities $\lVert f_n \rVert_n^2$ that are crucial ingredients for the stochastic analysis of Poisson processes using Malliavin calculus, see e.g. \cite{L_2014} for details. According to \linebreak \cite[Lemma 3.5]{RS_2013}, for a square-integrable U-statistic with kernel $f$ these quantities can be explicitly written as \begin{align} \label{normFn} \lVert f_n\rVert_n^2 = \binom{k}{n}^2 \int_{(\mathds{R}^d)^n} \left( \int_{(\mathds{R}^d)^{k-n}} f({\bf y}_n,{\bf x}_{k-n})d\mu^{k-n}({\bf x}_{k-n}) \right)^2 d\mu^n({\bf y}_n), \end{align} where we use the abbreviations ${\bf y}_n = (y_1,\ldots,y_n)$ and ${\bf x}_{k-n} = (x_{n+1},\ldots,x_k)$. Also according to \cite[Lemma 3.5]{RS_2013}, the variance of a square-integrable \linebreak U-statistic $F$ is given by \begin{align}\label{VarFormula} \mathds{V} F = \sum_{n=1}^k n! \lVert f_n \rVert_n^2. \end{align} This formula will be crucial for analysing the variance asymptotics of subgraph counts. For the remainder of the section, let the assumptions and notations of Section \ref{ABSC} prevail. In particular, recall that $m$ denotes the Lebesgue density of the measure $\mu$ and note that we will use the notation \begin{align*} m^{\otimes l}(\textbf{z}) = m^{\otimes l} (z_1,\ldots,z_l) = \prod_{i=1}^l m(z_i) \end{align*} for any $l\in\mathds{N}$ and $\textbf{z} = (z_1,\ldots,z_l) \in (\mathds{R}^d)^l$. The following two lemmas are used in the proofs of Proposition \ref{IntegrableProp} and Theorem \ref{SGCExp}. \begin{lem} \label{formulasExpNormFn} Assume that the subgraph count $N_t$ is square-integrable. Then the corresponding quantities $\lVert f_n \rVert_n^2$ given by formula (\ref{normFn}) can be written as \begin{align} \label{normFnFormula} c_n(t) \int_{\mathds{R}^d\times B(0,\Theta)^{n-1}} I_t^y({\bf y}_n) \left(\int_{B(0,\Theta)^{k-n}} I^x_t( {\bf x}_{k-n}) \ J({\bf y}_n, {\bf x}_{k-n}) \ d {\bf x}_{k-n} \right)^2 d{\bf y}_n, \end{align} where \begin{align*} c_n(t)& = \frac{t^{2k-n}\rho_t^{d(2k-n-1)}}{(k!)^2} \binom{k}{n}^2,\\ \Theta &= \diam(H)\theta,\\ I^y_t({\bf y}_n) &= m^{\otimes n}(y_1,\rho_ty_2 + y_1,\ldots, \rho_t y_n + y_1),\\ I^x_t( {\bf x}_{k-n}) &= m^{\otimes k-n}(\rho_tx_{n+1} + y_1,\ldots, \rho_tx_{k} + y_1),\\ J({\bf y}_n, {\bf x}_{k-n}) &= |\{\text{subgraphs} \ H' \ \text{of} \ G_S(0,y_2 ,\ldots, y_n,x_{n+1}, \ldots, x_{k}): H'\cong H\}|. 
\end{align*} \end{lem} \begin{proof} Recall that the kernel of the subgraph count $N_t$ is given by \begin{align*} f_t( {\bf z}) = \frac{|\{\text{subgraphs} \ H' \ \text{of} \ G_{\rho_t S}( {\bf z}): H'\cong H\}|}{k!}, \ \ {\bf z}\in(\mathds{R}^d)^k. \end{align*} Since $N_t$ is assumed to be square-integrable, formula (\ref{normFn}) holds. The integral on the right-hand side of (\ref{normFn}) can be written as \begin{align*} t^{2k-n} \int_{(\mathds{R}^d)^n} m^{\otimes n}({\bf y}_n)\left(\int_{(\mathds{R}^d)^{k-n}} m^{\otimes k-n}( {\bf x}_{k-n}) f_t({\bf y}_n, {\bf x}_{k-n}) \ d {\bf x}_{k-n}\right)^2 \ d{\bf y}_{n}. \end{align*} The change of variables \[ {\bf x}_{k-n} \mapsto (\rho_t x_{n+1} + y_1, \ldots, \rho_t x_{k} + y_1) \] now gives that the above expression equals \begin{align*} \frac{t^{2k-n}\rho_t^{2d(k-n)}}{(k!)^2} \int_{(\mathds{R}^d)^n} m^{\otimes n}({\bf y}_n) \left(\int_{(\mathds{R}^d)^{k-n}} I^x_t( {\bf x}_{k-n}) \ J^x_t({\bf y}_n, {\bf x}_{k-n}) \ d {\bf x}_{k-n} \right)^2 d{\bf y}_n, \end{align*} where \begin{align*} &J^x_t({\bf y}_n, {\bf x}_{k-n}) = |\{\text{subgraphs } H' \text{ of } G_{\rho_tS}({\bf y}_n, \rho_t x_{n+1} + y_1,\ldots, \rho_t x_{k} + y_1) : H'\cong H\}|. \end{align*} We perform a further change of variables for the outer integral, namely \begin{align*} {\bf y}_{n} \mapsto (y_1,\rho_t y_2 + y_1, \ldots, \rho_t y_n + y_1). \end{align*} This yields that the quantity in question equals \begin{align*} \frac{t^{2k-n}\rho_t^{d(2k-n-1)}}{(k!)^2} \int_{(\mathds{R}^d)^n} I_t^y({\bf y}_n) \left(\int_{(\mathds{R}^d)^{k-n}} I^x_t( {\bf x}_{k-n}) \ J({\bf y}_n, {\bf x}_{k-n}) \ d {\bf x}_{k-n} \right)^2 d{\bf y}_n, \end{align*} where one should notice that \begin{align*} &J^x_t(y_1,\rho_t y_2 + y_1,\ldots, \rho_t y_n + y_1, {\bf x}_{k-n})\\ & =|\{\text{subgraphs} \ H' \ \text{of} \ G_S(0,y_2 ,\ldots, y_n,x_{n+1}, \ldots, x_{k}): H'\cong H\}|\\ & = J({\bf y}_n,{\bf x}_{k-n}). \end{align*} Also, we see that $J({\bf y}_n, {\bf x}_{k-n}) = 0$ whenever $\lVert x_i\rVert > \diam(H)\theta = \Theta$ for some \linebreak $n+1\leq i\leq k$. Hence, the inner integral is actually an integral over $B(0,\Theta)^{k-n}$. Similarly, the outer integral is an integral over $\mathds{R}^d \times B(0,\Theta)^{n-1}$. \end{proof} In the next statement we use the abbreviation ${\bf x} = (x_1,\ldots,x_k)$. Note that here we do \emph{not} assume that the considered subgraph count $N_t$ is necessarily square-integrable. \begin{lem} \label{ExpFormula} The expectation of $N_t$ can be written as \begin{align} \label{ExpFormulaEq} \mathds{E} N_t = \frac{t^k\rho_t^{d(k-1)}}{k!} \int_{\mathds{R}^d\times B(0,\Theta)^{k-1}} I_t( {\bf x}) \ J( {\bf x}) \ d {\bf x}, \end{align} where \begin{align}\label{ItDef} I_t( {\bf x}) &= m^{\otimes k}(x_1,\rho_t x_2 + x_1, \ldots, \rho_t x_{k} + x_1),\\ \nonumber J( {\bf x}) &= |\{\text{subgraphs} \ H' \ \text{of} \ G_{S}(0,x_2,\ldots,x_k): H'\cong H\}|. \end{align} In (\ref{ExpFormulaEq}), the left-hand side is finite if and only if the right-hand side is finite. \end{lem} \begin{proof} Using the Slivnyak-Mecke formula (see e.g. \cite[Corollary 3.2.3]{SW_2008}) we can write \[ \mathds{E} {N_t} = \frac{t^k}{k!} \int_{(\mathds{R}^d)^k} m^{\otimes k}( {\bf x}) \ |\{\text{subgraphs} \ H' \ \text{of} \ G_{\rho_t S}( {\bf x}): H'\cong H\}| \ d {\bf x}. \] Now, the change of variables \[ {\bf x} = (x_1,\ldots,x_k)\mapsto (x_1,\rho_t x_2 + x_1, \ldots, \rho_t x_{k} + x_1) \] followed by reasoning very similar to that in the proof of Lemma \ref{formulasExpNormFn} yields the result. 
\end{proof} \begin{proof}[Proof of Proposition \ref{IntegrableProp}] Let the notation of Lemma \ref{ExpFormula} prevail. The assumption (\ref{desityCond}) implies that for any $t\in\mathds{N}$ and ${\bf x}\in\mathds{R}^d\times B(0,\Theta)^{k-1}$, \begin{align*} c^{-k}m(x_1)^k \leq I_t({\bf x}) \leq c^km(x_1)^k. \end{align*} Using Lemma \ref{ExpFormula}, one obtains \begin{align*} c^{-k} A \int_{\mathds{R}^d}m(x)^k\ dx \leq \frac{k!}{t^k\rho_t^{d(k-1)}}\mathds{E} N_t \leq c^k A \int_{\mathds{R}^d} m(x)^k\ dx, \end{align*} where \begin{align*} A = \int_{B(0,\Theta)^{k-1}} |\{\text{subgraphs} \ H' \ \text{of} \ G_S(0,x_2 , \ldots, x_{k}): H'\cong H\}|\ d {\bf x} > 0. \end{align*} Hence, it follows that $\int m(x)^k dx<\infty$ is equivalent to integrability of $N_t$. According to Theorem \ref{main}, the subgraph count $N_t$ is integrable if and only if it is almost surely finite. The result follows. \end{proof} \begin{proof}[Proof of Theorem \ref{SGCExp}] [Proof of (i)] Let the notation of Lemma \ref{ExpFormula} prevail. Since by assumption the density $m$ is continuous, we have for $I_t$ given by (\ref{ItDef}) and for any ${\bf x}\in(\mathds{R}^d)^k$, \[ \lim_{t\to\infty}I_t( {\bf x}) = m(x_1)^k. \] Now, the condition (\ref{desityCond}) guarantees that for any ${\bf x}\in\mathds{R}^d\times B(0,\Theta)^{k-1}$, \begin{align*} I_t({\bf x}) \leq c^km(x_1)^k. \end{align*} Moreover, by virtue of assumption (\ref{integrableCond}) and boundedness of $J$, \begin{align*} K := \int_{\mathds{R}^d\times B(0,\Theta)^{k-1}} m(x_1)^k J({\bf x}) \ d{\bf x} < \infty. \end{align*} Hence, Lemma \ref{ExpFormula} together with the dominated convergence theorem yields that the sequence $\mathds{E} {N_t} / (t^k \rho_t^{d(k-1)})$ converges to $K/k!>0$. \medskip [Proof of (ii)] Since $\mathds{M} N_t$ is a median of $N_t$, in the case $\mathds{M} N_t > \mathds{E} N_t$ one has \begin{align*} \frac 12 &\leq {\mathds P}(N_t \geq \mathds{M} N_t) = {\mathds P}(N_t - \mathds{E} N_t \geq \mathds{M} N_t- \mathds{E} N_t)\\ &\leq {\mathds P}(|N_t - \mathds{E} N_t| \geq |\mathds{M} N_t- \mathds{E} N_t|) \leq \frac{\mathds{V} N_t}{(\mathds{M} N_t - \mathds{E} N_t)^2}, \end{align*} where for the last inequality we used Chebyshev's inequality. Similarly, if $\mathds{M} N_t < \mathds{E} N_t$, \begin{align*} \frac 12 &\leq {\mathds P}(N_t \leq \mathds{M} N_t) \leq {\mathds P}(|N_t - \mathds{E} N_t| \geq |\mathds{M} N_t- \mathds{E} N_t|) \leq \frac{\mathds{V} N_t}{(\mathds{M} N_t - \mathds{E} N_t)^2}. \end{align*} We see that in any case one has \begin{align*} \left| \frac{\mathds{M} N_t}{\mathds{E} N_t} - 1 \right| \leq \frac{\sqrt{2\mathds{V} N_t}}{\mathds{E} N_t}. \end{align*} Now, the statements (i) and (iii) of the theorem yield \begin{align}\label{asympVbyE} \frac{\sqrt{2\mathds{V} N_t}}{\mathds{E} N_t} \sim \sqrt{\frac{2}{a^2}\sum_{n=1}^k \frac{(t\rho_t^d)^{k-n}}{t^k\rho_t^{d(k-1)}} A^{(n)}} = \sqrt{\frac{2}{a^2}\sum_{n=1}^k \rho_t^{d(1-n/k)} (t^k\rho_t^{d(k-1)})^{-n/k} A^{(n)}}. \end{align} The assumption $\lim_{t\to\infty}\mathds{E} N_t = \infty$ together with statement (i) of the theorem and $\lim_{t\to\infty} \rho_t = 0$ implies that this tends to $0$ as $t\to\infty$. \medskip [Proof of (iii)] First note that by Proposition \ref{IntegrableProp} together with Theorem \ref{main} the assumption (\ref{integrableCond}) guarantees that all moments of the subgraph counts $N_t$ exist. In particular, the $N_t$ are square-integrable, so that formula (\ref{VarFormula}) applies. 
So, to analyse the asymptotic behaviour of $\mathds{V} N_t$, we need to analyse the asymptotics of the quantities $\lVert f_n \rVert_n^2$. This will be done by proving convergence of the integrals that appear in (\ref{normFnFormula}), \begin{align*} \int_{\mathds{R}^d\times B(0,\Theta)^{n-1}} I_t^y({\bf y}_n) \left(\int_{B(0,\Theta)^{k-n}} I^x_t( {\bf x}_{k-n}) \ J({\bf y}_n, {\bf x}_{k-n}) \ d {\bf x}_{k-n} \right)^2 d{\bf y}_n. \end{align*} We first remark that by continuity of the density $m$, for any $ {\bf x}_{k-n}\in(\mathds{R}^d)^{k-n}$, \[ \lim_{t\to\infty}I^x_t({\bf x}_{k-n}) = m(y_1)^{k-n}. \] Since $J$ and $m$ are bounded, the dominated convergence theorem applies and gives \begin{align*} &\lim_{t\to\infty} \int_{B(0,\Theta)^{k-n}} I^x_t( {\bf x}_{k-n}) \ J({\bf y}_n, {\bf x}_{k-n}) \ d {\bf x}_{k-n}\\ &= \int_{B(0,\Theta)^{k-n}} m(y_1)^{k-n} \ J({\bf y}_n, {\bf x}_{k-n}) \ d {\bf x}_{k-n}. \end{align*} Now, it follows from the assumption (\ref{desityCond}) that for any $ {\bf x}_{k-n}\in B(0,\Theta)^{k-n}$ and ${\bf y}_{n}\in \mathds{R}^d\times B(0,\Theta)^{n-1}$, \begin{align*} I^x_t( {\bf x}_{k-n}) \leq c^{k-n} m(y_1)^{k-n} \ \ \text{and} \ \ I^y_t({\bf y}_{n}) \leq c^{n} m(y_1)^{n}. \end{align*} Moreover, we have that \begin{align*} &\int_{\mathds{R}^d\times B(0,\Theta)^{n-1}}m(y_1)^{n} \left(\int_{B(0,\Theta)^{k-n}} m(y_1)^{k-n} \ J({\bf y}_n, {\bf x}_{k-n}) \ d {\bf x}_{k-n} \right)^2 \ d{\bf y}_n\\ &\leq (\Theta^{d}\kappa_d)^{2k-n-1} \lVert J\rVert_\infty^2 \int_{\mathds{R}^d}m(x)^{2k-n} \ d x < \infty, \end{align*} where $\kappa_d$ denotes the Lebesgue measure of the unit ball in $\mathds{R}^d$. Here the finiteness of the latter integral follows from assumption (\ref{integrableCond}) since $m$ is bounded and $2k-n\geq k$. Furthermore, by continuity of $m$, one has for any ${\bf y}_n\in \mathds{R}^d\times B(0,\Theta)^{n-1}$, \begin{align*} \lim_{t\to\infty} I^y_t({\bf y}_n) = m(y_1)^n. \end{align*} We conclude that the dominated convergence theorem applies and gives \begin{align*} &\lim_{t\to\infty} \int_{\mathds{R}^d\times B(0,\Theta)^{n-1}} I_t^y({\bf y}_n) \left(\int_{B(0,\Theta)^{k-n}} I^x_t( {\bf x}_{k-n}) \ J({\bf y}_n, {\bf x}_{k-n}) \ d {\bf x}_{k-n} \right)^2 d{\bf y}_n\\ &= \int_{\mathds{R}^d} m(x)^{2k-n} dx \int_{B(0,\Theta)^{n-1}} \left( \int_{B(0,\Theta)^{k-n}} J^{0}({\bf y}_{n-1}, {\bf x}_{k-n}) \ d {\bf x}_{k-n} \right)^2 d {\bf y}_{n-1}\\ &=: K^{(n)}, \end{align*} where \begin{align*} J^{0}({\bf y}_{n-1}, {\bf x}_{k-n}) = |\{\text{subgraphs} \ H' \ \text{of} \ G_S(0,{\bf y}_{n-1}, {\bf x}_{k-n}): H'\cong H\}|. \end{align*} Hence, by virtue of (\ref{VarFormula}) and (\ref{normFnFormula}) the variance can be written as \begin{align*} \mathds{V} N_t &= \sum_{n=1}^k t^{2k-n}\rho_t^{d(2k-n-1)}\frac{n!}{(k!)^2}\binom{k}{n}^2 K^{(n)}_t\\ &= t^k\rho_t^{d(k-1)}\sum_{n=1}^k (t\rho_t^d)^{k-n}\frac{n!}{(k!)^2} \binom{k}{n}^2 K^{(n)}_t, \end{align*} where $K^{(n)}_t$ is such that $\lim_{t\to\infty} K^{(n)}_t = K^{(n)}\in(0,\infty)$. The result follows. \end{proof} \subsection*{Proof of Theorem \ref{SLLN}} In the upcoming proof of the strong law for subgraph counts we use the following consequence of the Borel-Cantelli lemma (see e.g. \cite[Theorem 3.18]{Ka_2002}), well known from basic probability theory. \begin{lem} \label{aslem} Let $(X_n)_{n\in\mathds{N}}$ be a sequence of real random variables and let $(a_n)_{n\in\mathds{N}}$ be a sequence of real numbers converging to some $a\in\mathds{R}$.
Assume that for any $\epsilon>0$, \begin{align*} \sum_{n=1}^\infty {\mathds P}(|X_n - a_n|\geq\epsilon) < \infty. \end{align*} Then \begin{align*} X_n \overset{a.s.}{\longrightarrow} a \ \ \text{as} \ \ n\to\infty. \end{align*} \end{lem} \begin{proof}[Proof of Theorem \ref{SLLN}] It follows from Theorem \ref{main} that for any real number $\epsilon>0$, \begin{align*} {\mathds P}\left(\left|\frac{N_t}{t^k\rho_t^{d(k-1)}} - \frac{\mathds{E} N_t}{t^k\rho_t^{d(k-1)}} \right|\geq \epsilon\right) \leq p_u(t,\epsilon) + p_l(t,\epsilon), \end{align*} where \begin{align*} p_u(t,\epsilon) &:= \exp\left(-t\rho_t^{d(k-1)/k}\frac{\left(\left(\tfrac{\mathds{E} N_t}{t^k\rho_t^{d(k-1)}} + \epsilon\right)^{\tfrac{1}{2k}}-\left(\tfrac{\mathds{E} N_t}{t^k\rho_t^{d(k-1)}}\right)^{\tfrac{1}{2k}}\right)^2}{2k^2c_d}\right),\\ p_l(t,\epsilon) &:= \exp\left(-\frac{t^{2k}\rho_t^{2d(k-1)}\epsilon^2}{2k \mathds{V} N_t}\right). \end{align*} Now, by Theorem \ref{SGCExp} (i) the sequence $\mathds{E} N_t / (t^k \rho_t^{d(k-1)})$ converges to a positive real number. This implies that we can choose a constant $C>0$ such that for sufficiently large $t$, \begin{align*} p_u(t,\epsilon) \leq \exp\left(-t\rho_t^{d(k-1)/k} C\right). \end{align*} Moreover, assumption \eqref{regime} guarantees the existence of a constant $C'>0$ such that for sufficiently large $t$, \begin{align*} \exp\left(-t\rho_t^{d(k-1)/k} C\right) \leq \exp\left(-t^{\gamma/k}C'\right). \end{align*} We conclude that $\sum_{t\in\mathds{N}} p_u(t,\epsilon) < \infty$. Furthermore, similarly as in \eqref{asympVbyE}, it follows from Theorem \ref{SGCExp} (iii) that \begin{align*} \frac{\mathds{V} N_t}{t^{2k}\rho_t^{2d(k-1)}} \sim \sum_{n=1}^k \rho_t^{d(1-n/k)} (t^k\rho_t^{d(k-1)})^{-n/k} A^{(n)}. \end{align*} Invoking assumption \eqref{regime} yields existence of a constant $C''>0$ such that for sufficiently large $t$, \begin{align*} \sum_{n=1}^k \rho_t^{d(1-n/k)} (t^k\rho_t^{d(k-1)})^{-n/k} A^{(n)} \leq \sum_{n=1}^k \rho_t^{d(1-n/k)} (C'' t^\gamma)^{-n/k} A^{(n)}. \end{align*} Now, the sequence $\rho_t$ tends to $0$ and thus it is bounded above. Moreover, for any $t\in\mathds{N}$ one has $t^{-k\gamma/k}\leq t^{-(k-1)\gamma/k}\leq \ldots \leq t^{-\gamma /k}$. From this together with the last two displays we derive existence of a constant $C'''>0$ such that for large $t$, \begin{align*} \frac{\mathds{V} N_t}{t^{2k}\rho_t^{2d(k-1)}} \leq C''' t^{-\gamma/k}. \end{align*} Hence, for sufficiently large $t$, \begin{align*} p_l(t,\epsilon) \leq \exp\left(-\frac{t^{\gamma/k}\epsilon^2}{2kC'''}\right). \end{align*} It follows that $\sum_{t\in\mathds{N}} p_l(t,\epsilon)<\infty$. Applying Lemma \ref{aslem} yields the result. \end{proof} \bigskip \renewcommand{\bibname}{References} \defbibheading{bibliography}[\refname]{\section*{#1}} \printbibliography \end{document}
\section{Introduction}\label{sec:introduction} \IEEEPARstart{I}{n} recent years, skyline queries have received much attention in various applications such as multi-preference analysis and decision making. In such applications, a skyline set contains the most interesting or best objects, namely those that are not dominated by any other object. In database systems, queries specialized to search for the non-dominated data objects are called \emph{skyline queries} and their corresponding result sets are known as skyline sets. The data objects in a skyline set are known as skyline objects. Traditionally, the skyline query has been studied in a static environment, where both the data objects and the query are static. It has since been progressively extended to dynamic and distributed environments. When the user is moving or the query is issued from a changing environment, the problem becomes the skyline problem in dynamic environments. The skyline query is also used in spatial networks, where the considered data objects are highly distributed. Thus, how to process skyline queries in distributed environments has become an important issue. Conventional Location-Based Services (LBSs) focus on processing proximity-based queries, including range queries~\citep{SeokJin:2006:IndexRangeQueryBroadcast,Jianting:2004:MultiDimensionRangeQueryBroadcast} and nearest neighbor (NN) queries~\citep{Li2015,Liu:2008:kNNIndexTreeMultiDimension}. However, these queries are not sufficient for providing high-quality services to mobile users without considering both spatial and non-spatial information simultaneously. A typical scenario is finding a nearby hotel with a cheap price, in which the distance is a spatial attribute and the price is non-spatial. Clearly, in this case, a multi-criteria query is more appealing than a conventional spatial query that considers the distance only. Among various multi-criteria queries, the skyline query is considered one of the most classical ones and receives a great deal of attention in LBS research. However, the resulting skyline may contain many useless data objects since the resulting data objects (hotels) may be too far away from the query (user). Some other works~\citep{6081864,Rahul:2012:ARQ:2424321.2424406} address the range-skyline query to improve the QoS of LBSs by considering dynamic data objects and supporting the \emph{continuous range-skyline} query. The continuous range-skyline is a collection of range-skyline answers during a specific time interval that the query concerns. Such a query is applied in many LBSs whose environments are dynamic. For example, searching for taxis is an application of the continuous range-skyline query. Users can use such a service to obtain candidate taxis that are nearby, cheap, and highly ranked. Hence, this work focuses on processing the range-skyline query and its continuous case. Most of the existing approaches~\citep{6081864,Rahul:2012:ARQ:2424321.2424406,Dimitris:2005:SkylineComputation,Tian:2007:CMS:1254850.1254861} process the skyline or range-skyline in a centralized way under the assumption that data objects are stored in a centralized fashion. They provide algorithms that focus on how to index the spatial or non-spatial data and how to efficiently process queries with a large number of data objects.
Although some of them have discussed spatial queries and moving data objects in distributed and mobile environments like the Internet of Mobile Things (IoMT) or Mobile Wireless Sensor Networks (MWSNs), the computing model is still centralized. The collected data objects are stored in a distributed manner, but the data sensing (collection) phase is not discussed. They discussed the changes (or updates) of data and treated such cases as moving data objects. In these approaches, each mobile node needs to continuously obtain its own location information via GPS and send the information back to the server(s) for updating the database(s). However, the overhead of the data collection process was neither mentioned nor addressed. If the conventional methods are used in a scenario where the position of each data object changes frequently, the server(s) will receive a huge amount of location-update information in a short time. In this case, the system will be overloaded. Furthermore, a large number of update messages will occupy considerable communication bandwidth. In general, IoMT applications based on MWSNs are self-configuring and infrastructure-less. The IoMT consists of many mobile sensor nodes connected by wireless communication. In such environments, each node can move freely and independently in any direction, so the communication links between nodes change frequently. Each node can forward information unrelated to its own and act as a router. In comparison with the client-server environment, IoMT applications may have no centralized server to handle spatial queries. Accordingly, the information system for an infrastructure-less IoMT application must process queries in a distributed (or decentralized) way. Each mobile sensor node can cooperate and exchange data with the others and then derive the answers to spatial queries. Since most of the existing works consider multiple data sources but only a few consider fully distributed and dynamic computing environments, one of the main objectives of this work is to provide a fully distributed approach for processing \emph{Range-Skyline Queries} (RSQ) and \emph{Continuous Range-Skyline Queries} (CRSQ) in an infrastructure-less mobile environment, the IoMT. In this work, we propose a \emph{Distributed Range-Skyline Query process} (DRSQ process) for the IoMT, whose computing model is decentralized. We further extend DRSQ to the \emph{Distributed Continuous Range-Skyline Query process} (DCRSQ process) to support continuous range-skyline query processing. Each mobile node can filter out the irrelevant data objects, derive a preliminary candidate answer set for the received query, and then report the candidate set to the query node. For validating the DRSQ process, we perform simulation experiments with the following measurements: the \emph{response time} and the \emph{number of messages} (\emph{I/O operations}). Furthermore, to validate the DCRSQ process, we consider four measurements: the number of accessed objects (the overhead on the query node), the number of messages, precision, and recall. We also consider the effects of the number of sensor nodes, the number of queries, the transmission range of a node, and the query range in the DRSQ process. One additional effect, node speed, is considered in the DCRSQ process. In addition, we analyze the network cost of the proposed approach and compare it with the centralized approach.
As the results show, the proposed approach performs better in the simulations. We address distributed continuous range-skyline query (DCRSQ) processing over the IoMT and make the following contributions: \begin{itemize} \item We propose a distributed approach, DCRSQ, to make range-skyline query processing suitable for Internet of Mobile Things environments. \item To the best of our knowledge, this is the first study of this problem that simultaneously considers the computing process and information filtering in the data collection phase, so that the system performance is significantly improved in comparison with the conventional approach. \item By exploiting mobility information, DCRSQ enables each node to predict the time when its neighboring mobile data objects fall in the query range and to avoid periodically flooding messages for updating the information of neighbors. \item We give a formal analysis of the network cost of the proposed approach and conduct extensive simulations to evaluate the performance. The simulation results show that DCRSQ can save more than 15$\%$ network cost and achieve almost 90$\%$ accuracy in most of the scenarios. \end{itemize} The remainder of this paper is organized as follows. In Section~\ref{related_work}, we introduce the background and review related research. Section~\ref{preliminary} presents an overview of the problem and the notations used in this work. Section~\ref{DCRSQ} introduces the proposed solution and a breakdown of the data structures and algorithms. The analysis of the network cost is discussed in Section~\ref{analysis}. Simulation experiments are presented in Section~\ref{simulation}. Finally, we make concluding remarks in Section~\ref{conclusion}. \section{Related Work} \label{related_work} The IoMT has triggered a lot of emerging applications and services~\citep{7903653,7945539} in wireless communications, fog/edge computing, and (mobile) big sensor data processing. Ang et al.~\citep{7903653} identified some important research topics, like data analysis and processing, in smart city ecosystems based on the IoMT. They also investigated some use cases of the IoMT (smart cities), such as real-time urban monitoring~\citep{5594641} and spatial decision support systems for flood risk management~\citep{HORITA201584}. These use cases are spatio-temporal applications classified in~\citep{7945539}. In spatio-temporal IoMT applications, spatial query processing plays a key role in decision making. In our work, we focus on the range-skyline query for this kind of IoMT application. A range-skyline query is an extension of the skyline query with a distance threshold in spatial databases. B\"{o}rzs\"{o}nyi et al.~\citep{914855} introduced the skyline operator into database systems with the algorithms \emph{Block Nested Loop} (BNL) and \emph{Divide-and-Conquer} (D\&C). A great number of researchers have focused on skyline query processing since then. Papadias et al.~\citep{Dimitris:2005:SkylineComputation} proposed a \emph{Branch-and-Bound Skyline} (BBS) method based on the best-first nearest neighbor algorithm~\citep{Hjaltason:1999:DBS:320248.320255}. Cheema et al.~\citep{Cheema:2013:SZB:2452376.2452409} proposed a \emph{safe zone} based approach and combined it with Voronoi cells to provide a better BBS algorithm for processing skyline queries. Hose and Vlachou~\citep{Hose2012} provided a comprehensive analysis of previous skyline algorithms without indexing support, and proposed an improved hybrid method.
Lin et al.~\citep{6081864} also discussed both indexing and non-indexing algorithms, and then extended their work to process probabilistic RSQ. All these works discuss the issues under centralized data storage. \begin{table*}[t] \renewcommand{\arraystretch}{1.2} \caption{Comparisons of Existing Works for Skyline Query} \label{compared_methods} \centering \small \begin{tabular}{lccccccc} \hline & \multicolumn{7}{c}{\textbf{Methods}} \\ \cline{2-8} \textbf{Considered Issues} & BBS~\citep{Dimitris:2005:SkylineComputation} & LDSQ~\citep{4511446} & PadSkyline~\citep{10.1109/TKDE.2010.103} & EDDS~\citep{jsan5010002} & RSQ~\citep{6081864} & L-SQ~\citep{1617434} & \textbf{DRSQ \& DCRSQ}\\ \hline \hline \textbf{Distributed Databases} & \texttimes & \texttimes & \checkmark & \checkmark & \texttimes & \checkmark & \textbf{\checkmark} \\ \textbf{Distributed Computing} & \texttimes & \texttimes & \checkmark & \checkmark & \texttimes & \checkmark & \textbf{\checkmark} \\ \textbf{Moving Objects} & \texttimes & \texttimes & \texttimes & \texttimes & \checkmark & \texttimes & \textbf{\checkmark} \\ \textbf{Moving Queries} & \texttimes & \texttimes & \texttimes & \texttimes & \checkmark & \checkmark & \textbf{\checkmark} \\ \textbf{Non-Index based} & \texttimes & \texttimes & \texttimes & \texttimes & \checkmark & \texttimes & \textbf{\checkmark} \\ \hline \end{tabular} \vspace{-10pt} \end{table*} To make the applications scalable and improve the performance of skyline query processing, many parallel or distributed algorithms have been proposed. Wu et al.~\citep{Wu:2006:PSQ:2117976.2117990} first attempted progressive processing of skyline queries on a CAN-based P2P network~\citep{Ratnasamy:2001:SCN:383059.383072}. By using the query range to recursively partition the data region involved and encoding each involved sub-region dynamically, their method can progressively report skyline objects without accessing the data sites that contain no potential skyline objects, thus saving computation overhead. Chen et al.~\citep{10.1109/TKDE.2010.103} proposed a parallel approach to filter data objects efficiently from distributed databases for processing constrained (or range) skyline queries. Zheng et al.~\citep{4511446} introduced a variant notion of the valid scope for skyline queries, which can save the re-computation if the next query is still inside the valid scope. Although the above existing works considered distributed databases, they still used high-performance servers to process skyline queries with the data objects from multiple data sources. In contrast, we consider an infrastructure-less environment, the IoMT, in this work. Huang et al.~\citep{1617434} proposed techniques for skyline query processing in MANETs. Lightweight devices in MANETs are able to issue spatially constrained skyline queries that involve data stored on many mobile devices. Queries are forwarded through the whole MANET without routing information. However, they only considered mobile distributed data sites over MANETs and did not consider moving objects. Ahmed et al.~\citep{jsan5010002} proposed an approach, Enhanced Distributed Dynamic Skyline (EDDS), to handle skyline queries over the IoMT. EDDS uses disc tracks and sectors to map the data locations. Such a mapping improves the performance of searching the newly arriving data objects for computing and updating the skyline. Although EDDS considered dynamic data input from the distributed sensor nodes, it did not consider mobile sensor nodes (moving objects).
In fact, the conventional works can be categorized into the following models: (1) single data source with a centralized computing model, (2) multiple data sources with a centralized computing model, and (3) multiple data sources with a distributed/decentralized computing model. The third model is more popular in recent years. However, it is not easy to compare all the works of type (3) by simulation or experiments since the considered environments, assumptions, and requirements are quite different. To the best of our knowledge, most of the works of type (3) consider the distributed computing model with multiple ``powerful'' computing servers for the query processing. Only very few works consider query processing over a lightweight mobile environment whose computing resources are limited. However, these few works only consider static spatial data and do not support moving data objects. We therefore present a comparison summary of the existing skyline-query methods and our work in Table~\ref{compared_methods}. \section{Preliminaries} \label{preliminary} In this section we give some preliminaries of the problem, including the fundamental notations and definitions. We consider a data set $S$ of sensor nodes. Each mobile sensor node $s \in S$ is associated with a spatial attribute (i.e., location or distance) and several other non-spatial attributes (e.g., temperature, trust rank, and possibility). Note that we use the Euclidean distance for the spatial attribute, and the non-spatial dominance relation between the mobile objects is described in Definition~\ref{nsdom}. \begin{definition}[\textbf{Non-spatial Dominance}] \label{nsdom} Given two sensor nodes $s$ and $s'$, if $s'$ is no worse than $s$ in all non-spatial attributes, then we say $s'$ \textit{non-spatially dominates} $s$. We say that $s'$ is a non-spatial dominator object of $s$, and $s$ is a non-spatially dominated object of $s'$. Formally, it is denoted as $s'\triangleleft s$. The set of $s$'s non-spatial dominator objects is denoted as $Dom(s)$, i.e., $s$ is dominated by any object in $Dom(s)$ on non-spatial attributes. \end{definition} If both $s'\triangleleft s$ and $s\triangleleft s'$ hold, the non-spatial attributes of $s'$ and $s$ are equivalent. In this case, the system then checks the dominance relation between the spatial attributes of $s'$ and $s$. The complete dominance relation is described as follows. \begin{definition} \label{sdom} \textbf{(Dominance)}\\ Given a query node $q$ and two sensor nodes $s$ and $s'$, if 1) $s'$ non-spatially dominates $s$, and 2) $dist(q,s')\leq dist(q,s)$ (i.e., $s'$ also spatially dominates $s$), then we say $s'$ dominates $s$ w.r.t. the query node $q$. Formally, it is denoted as $s'\triangleleft_q s$. \end{definition} Note that if $s'\triangleleft_q s$ and $s\triangleleft_q s'$, both the spatial and non-spatial attributes of $s'$ and $s$ are equivalent with respect to the query node $q$. With the above definitions, the point-skyline query (PSQ) can be defined as \begin{definition} \label{psq} \textbf{(Point-Skyline Query (PSQ))}\\ Given a data set $S$, the \textit{skyline} of a query node $q$ is a subset of $S$ in which each object (sensor node) is not dominated by any other object in $S$ w.r.t. $q$. We call this subset the skyline set and denote it as $PSQ(S,q)=\{s|s \in S \wedge \forall s' \in S-\{s\}: s' \ntriangleleft_q s\}$.
\end{definition} In accordance with the above definitions, the range-skyline query can be defined in Definition~\ref{trsq}; such a definition comes from a global view of the system. \begin{definition} \label{trsq} \textbf{(Range-Skyline Query (RSQ))}\\ Given a data set $S$ with a range $R$, the range-skyline query with respect to $q$ returns the skyline set of the subset of objects (sensor nodes) that locate in $R$. Formally, it is denoted as $RSQ(R,S,q)$, and $RSQ(R,S,q)=\{s \in S\wedge s$ locates in $R| \forall s'\in S-\{s\}\wedge s'$ locates in $R: s' \ntriangleleft_q s\}$. \end{definition} Fig.~\ref{fig:rsq_ex} is an example of a range-skyline w.r.t. a query node $q$, where the mobile user wants to search for a nearby taxi with a high rank of service quality. The system first uses $q$'s range value, $R$, to prune the irrelevant moving objects that are out of the range. Then the system examines the dominance relations between the remaining data objects. We assume that the mobile objects with smaller weights have a higher priority in this example. Since $s_1$ is the nearest neighbor of $q$ and spatially dominates all the other sensor nodes, $s_1$ must be in the resulting range-skyline. Sensor node $s_4$ non-spatially dominates the other nodes in the range $R$, except $s_3$ and $s_5$. So the possible RSQ is $\{s_1,s_3,s_4,s_5\}$. However, $s_3$ is dominated by $s_5$ w.r.t. $q$. Thus, $RSQ(R,S,q)$ will be $\{s_1, s_4, s_5\}$. It means that the system returns taxis $s_1$, $s_4$, and $s_5$ to the mobile user. \begin{figure}[ht] \begin{center} \includegraphics[width=0.45\textwidth,keepaspectratio]{./RSQ-Example.eps} \end{center} \caption{An example of an $RSQ(R,S,q)$ where $S$ is the data set, $R$ is the range with the query node $q$ as the center, and the output is $\{s_1, s_4, s_5\}$} \label{fig:rsq_ex} \end{figure} A continuous range-skyline query (CRSQ) is an extension of RSQ. A CRSQ monitors the environmental information within a given range and continuously produces the skyline for a period of time. It means that the system monitors each continuous range-skyline query $q$ within a specific range $R$ for a time period $\varDelta t=[t_0,t_{end}]$. Since each sensor node can move in the considered environment, the answer of an RSQ may change during the monitoring time $\varDelta t$. We formally define the continuous range-skyline query as below. \begin{definition} \label{crsq} \textbf{(Continuous Range-Skyline Query (CRSQ))}\\ Suppose that the notations are defined as above. Given a query $q$ with a query range $R$ and a time period $\varDelta t=[t_0,t_{end}]$, the \emph{continuous range-skyline query} returns a collection of range-skyline sets $RSQ_{t_i}(R,S,q)$, where $t_i \in \varDelta t$ and $i$ is the number of updated results. Formally, it is denoted as $CRSQ(R,S,q,\varDelta t)=\{RSQ_{t_i}(R,S,q)|t_i \in \varDelta t, i \in N\}$. \end{definition} An example of a continuous range-skyline query $q$ is shown in Fig.~\ref{fig:crsq_ex}. In this example, the system monitors the range-skyline of $q$ for a time period $\varDelta t$ and the result may be a collection of answers that contains different RSQ answers at different times since the answer may change. The answer collection contains three RSQ results at times $t_0$, $t_1$, and $t_2$, and these results are respectively shown in Fig.~\ref{fig:crsq_ex:a}, Fig.~\ref{fig:crsq_ex:b}, and Fig.~\ref{fig:crsq_ex:c}.
The result of $q$ is $CRSQ(R,S,q,\varDelta t)=\{RSQ_{t_0}(R,S,q),RSQ_{t_1}(R,S,q),RSQ_{t_2}(R,S,q)\}$, where $RSQ_{t_0}(R,S,q)=\{s_1, s_4, s_5\}$, $RSQ_{t_1}(R,S,q)=\{s_1, s_8, s_{11}\}$, and $RSQ_{t_2}(R,S,q)=\{s_8\}$. The final output therefore will be $\{<\{s_1, s_4, s_5\},[t_0,t_1)>,<\{s_1, s_8, s_{11}\},[t_1,t_2)>,<\{s_8\},[t_2,t_{end}]>\}$. \begin{figure*}[h] \centering \subfigure[$RSQ_{t_0}(R,S,q)=\{s_1, s_4, s_5\}$]{ \label{fig:crsq_ex:a} \includegraphics[width=0.45 \textwidth]{./CRSQ-Example--a.eps}}\\ \subfigure[$RSQ_{t_1}(R,S,q)=\{s_1, s_8, s_{11}\}$]{ \label{fig:crsq_ex:b} \includegraphics[width=0.45 \textwidth]{./CRSQ-Example--b.eps}}\hspace{0.05 \textwidth} \subfigure[$RSQ_{t_2}(R,S,q)=\{s_8\}$]{ \label{fig:crsq_ex:c} \includegraphics[width=0.45 \textwidth]{./CRSQ-Example--c.eps}} \caption{An example of a $CRSQ(R,S,q,\varDelta t)$ and the output is $\{<\{s_1, s_4, s_5\},[t_0,t_1)>,<\{s_1, s_8, s_{11}\},[t_1,t_2)>,<\{s_8\},[t_2,t_{end}]>\}$ where $t_0$, $t_1$, and $t_2$ are different times during $\varDelta t=[t_0,t_{end}]$} \label{fig:crsq_ex} \vspace{-10pt} \end{figure*} \section{The Distributed Continuous Range-Skyline Query Process} \label{DCRSQ} This section describes in detail the proposed distributed range-skyline query process over the IoMT. The proposed approach includes two parts: the distributed range-skyline query process (DRSQ process) and the distributed continuous range-skyline query process (DCRSQ process). The fundamental distributed approach for processing snapshot RSQs is introduced in the DRSQ process. In the second part, the DCRSQ process extends the DRSQ process with the consideration of node mobility to support CRSQ. Thus, the system can predict the change of the RSQ result when monitoring a CRSQ during a period of time in IoMT environments. \subsection{Distributed Range-Skyline Query} In general, query processing in mobile and distributed environments like the IoMT, based on MWSNs or MANETs, contains three steps. The first step is the local process that computes the skyline set based on local data and filters the information received along with the query. The second step is query routing, by which the query message can be forwarded to some of the neighboring nodes in order to retrieve their partial results. Thus, a node decides whether a neighbor can contribute to the skyline set based on the available routing information. The last step is to merge the results, where the node receives the local result sets from the queried neighbors and merges all the partial results by checking for dominated objects. Then, each node sends the merged result to the query node hop by hop. \subsubsection{Description of DRSQ} We assume that each mobile sensor node can hold a small database to store the collected sensing data and the query information for the distributed query process. The collected sensing data set is called the \emph{local data set}, and all of the data are collected from the sensor node's one-hop neighbors and the node itself. Hence, each mobile sensor node can derive the result of the \emph{local range-skyline query process} (LRSQ process), which may be a subset of the range-skyline, and return the result to the query node for computing the final (global) range-skyline answer. The result of LRSQ is defined in Definition~\ref{define_lrsq}. \begin{definition} \label{define_lrsq} \textbf{(Local Range-Skyline Query)}\\ Suppose that the notations are defined as above and a query node $q$ with a query range $R$ is given.
After a mobile sensor node $s_j$ receives the query $q$, $s_j$ will return a subset of its local data set $S_{s_j}$, where $S_{s_j}\subseteq S$; each object $s$ in the returned subset is a neighbor of $s_j$ and is not dominated by any other object $s'$ in $S_{s_j}$ w.r.t. $q$. We refer to this result as a local range-skyline set and denote it as $LRSQ_{s_j}(R,S,q)=\{s$ locates in $R \wedge s \in S_{s_j} | \forall s' \in S_{s_j}-\{s\}\wedge s'$ locates in $R: s' \ntriangleleft_q s\}$. \end{definition} According to Definition~\ref{define_lrsq}, the query node $q$ will receive the results of $LRSQ_{s_j}(R,S,q)$, where $s_j$ is one of $q$'s one-hop neighbors ($1\leq j\leq k$) and $k$ is the maximum number of neighbors. Then the query node takes the union of these results as the candidate set, $RSQ_{candidate}=\bigcup^{k}_{j=1}{LRSQ_{s_j}(R,S,q)}$. After $RSQ_{candidate}$ is derived, the query node will use Definition~\ref{nsdom} and Definition~\ref{sdom} to examine the dominance relations of all the mobile objects in $RSQ_{candidate}$ again and then save the final result in the $RSQ_{distributed}$ set. Such a cooperative and distributed process is referred to as the \emph{distributed range-skyline query process} (DRSQ process). As a result, DRSQ can be defined as in Definition~\ref{define_drsq}. \begin{figure*}[t] \centering \subfigure[]{ \label{fig:DRSQ-Running-example:a} \includegraphics[width=0.31 \textwidth]{./DRSQ-Running-Example-a.eps}} \subfigure[]{ \label{fig:DRSQ-Running-example:b} \includegraphics[width=0.31 \textwidth]{./DRSQ-Running-Example-b.eps}} \subfigure[]{ \label{fig:DRSQ-Running-example:c} \includegraphics[width=0.31 \textwidth]{./DRSQ-Running-Example-c.eps}}\\ \vspace{-15pt} \subfigure[]{ \label{fig:DRSQ-Running-example:d} \includegraphics[width=0.31 \textwidth]{./DRSQ-Running-Example-d.eps}} \subfigure[]{ \label{fig:DRSQ-Running-example:e} \includegraphics[width=0.31 \textwidth]{./DRSQ-Running-Example-e.eps}} \subfigure[]{ \label{fig:DRSQ-Running-example:f} \includegraphics[width=0.31 \textwidth]{./DRSQ-Running-Example-f.eps}} \caption{A running example of the DRSQ process where the data set is $S=\{s_1,...,s_{11}\}$, $m_q$ is a query message, and the $TTL$ of $m_q$ is $2$: \subref{fig:DRSQ-Running-example:a} one-hop neighbors $s_1$ and $s_4$; \subref{fig:DRSQ-Running-example:b} two-hop neighbors $s_2$, $s_3$, $s_5$, and $s_{11}$; \subref{fig:DRSQ-Running-example:c} two-hop neighbors derive and return their LRSQ results to the one-hop neighbors; \subref{fig:DRSQ-Running-example:d} one-hop neighbors merge all the received information, prune the irrelevant information, and then return their own LRSQ results to $q$; \subref{fig:DRSQ-Running-example:e} $q$ unites all the received information and obtains a candidate set; \subref{fig:DRSQ-Running-example:f} $q$ derives the final result after checking the dominance relations between all the data objects in the candidate set. } \label{fig:DRSQ-Running-example} \vspace{-10pt} \end{figure*} \begin{definition} \label{define_drsq} \textbf{(Distributed Range-Skyline Query)}\\ Suppose the candidate set $RSQ_{candidate}$ of the query node $q$ has been computed. Then, the query node $q$ uses $RSQ_{candidate}$ to derive the skyline set of the data objects in $R$ and the result of the distributed range-skyline query can be denoted as $DRSQ(R,S,q)=\{s$ locates in $R\wedge s\in RSQ_{candidate}| \forall s' \in RSQ_{candidate}-\{s\}\wedge s'$ locates in $R: s' \ntriangleleft_q s\}$.
\end{definition} The above distributed process for LRSQ derivation has the benefit that many irrelevant data objects are pruned during the process. Thus, the computation overhead of the query node can be reduced. \subsubsection{Overview of DRSQ process} Before introducing the DRSQ processes on a query node and a sensor node with pseudo-codes in detail, we use a running example in Fig.~\ref{fig:DRSQ-Running-example} to depict an overview of the DRSQ process and explain it step by step. Note that the spatial and non-spatial attributes of each sensor node (data object) are shown in Fig.~\ref{fig:rsq_ex}. As shown in Fig.~\ref{fig:DRSQ-Running-example:a} and Fig.~\ref{fig:DRSQ-Running-example:b}, a query node spreads the query message $m_q$ ($TTL=2$) to its one-hop and two-hop neighbors. Each message $m_q$ contains the information of the query node, such as the query range $R$ and the location and speed of $q$. Fig.~\ref{fig:DRSQ-Running-example:c} then shows that the two-hop neighbors of $q$, namely $s_2, s_3, s_5,$ and $s_{11}$, use their own local information to derive their local range-skyline results and return these results to $q$'s one-hop neighbors, $s_1$ and $s_4$. After $s_1$ and $s_4$ receive the local range-skyline results from the two-hop neighbors of $q$, they merge the received results and do the dominance check with their own information. As Fig.~\ref{fig:DRSQ-Running-example:c} shows, the local skyline candidate sets of $s_1$ and $s_4$ are $LRSQ_{s_2}=\{s_1,s_2\}$ and $LRSQ_{s_3}\cup LRSQ_{s_5}\cup LRSQ_{s_{11}}=\{s_3,s_4,s_5\}$, respectively. Sensor nodes $s_1$ and $s_4$ then check the dominance relations between all the candidate data objects and obtain their local range-skyline results $LRSQ_{s_1}=\{s_1,s_2\}$ and $LRSQ_{s_4}=\{s_4,s_5\}$ (the latter because $s_5\triangleleft_q s_3$), respectively, as Fig.~\ref{fig:DRSQ-Running-example:d} shows. After aggregating the local range-skyline sets $LRSQ_{s_1}$ and $LRSQ_{s_4}$, $s_1$ and $s_4$ return them to the query node $q$, respectively. After receiving the local range-skyline sets from $s_1$ and $s_4$, query node $q$ merges these sets to derive a candidate set $RSQ_{candidate}=LRSQ_{s_1}\cup LRSQ_{s_4}=\{s_1,s_4,s_5\}$. Fig.~\ref{fig:DRSQ-Running-example:e} presents the step of obtaining a candidate set of $q$. Finally, in Fig.~\ref{fig:DRSQ-Running-example:f}, query node $q$ checks the dominance relations between all the data objects in the candidate set and derives the final range-skyline, $DRSQ(R,S,q)=\{s_1,s_4,s_5\}$. \subsubsection{The DRSQ Process} \label{sec_drsq_process} In this subsection, we introduce the DRSQ process with pseudo-codes. The whole DRSQ process includes two parts: the LRSQ and GRSQ processes. The pseudo-codes in Algorithm~\ref{alg:lrsq} and Algorithm~\ref{alg:crsq} respectively show the LRSQ process on a mobile sensor node and the GRSQ process on the query node, and present the ideas and frameworks of our proposed approaches. Note that each sensor node repeatedly runs the LRSQ process in Algorithm~\ref{alg:lrsq} for a query and thus continuously receives messages from the network. When a user (mobile device) issues a query $q$, the device floods query messages to its one-hop neighbors with a maximum hop count, \emph{Time-To-Live} (TTL). After flooding the query messages, the query node starts the GRSQ process (Algorithm~\ref{alg:crsq}) to collect the local skyline sets from its one-hop neighbors and then derives the final result, $RSQ_{distributed}$, for the query.
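To make the dominance checking described above concrete, the following Python sketch illustrates Definition~\ref{nsdom}, Definition~\ref{sdom}, and the local merge performed in the LRSQ process. It is an illustration of ours rather than the actual implementation: the class \texttt{Obj}, the function names, and the smaller-is-better convention for non-spatial attributes are all assumptions.
\begin{verbatim}
# A minimal sketch (ours) of Definitions 1-2 and the LRSQ merge.
from dataclasses import dataclass
from math import hypot

@dataclass(frozen=True)
class Obj:
    node_id: int
    x: float
    y: float
    attrs: tuple  # non-spatial attributes; smaller is assumed better

def ns_dominates(a, b):
    # Definition 1: a is no worse than b in every non-spatial attribute.
    return all(va <= vb for va, vb in zip(a.attrs, b.attrs))

def dominates(a, b, q):
    # Definition 2: non-spatial dominance plus spatial dominance w.r.t. q.
    return (ns_dominates(a, b) and
            hypot(a.x - q[0], a.y - q[1]) <= hypot(b.x - q[0], b.y - q[1]))

def lrsq_merge(own, received, q, R):
    # Mirror of the dominance-check loop of Algorithm 1: drop received
    # objects that the node's own object dominates, then add the own
    # object if it is in range and no surviving object dominates it.
    in_range = [o for o in received
                if hypot(o.x - q[0], o.y - q[1]) <= R]
    local = [o for o in in_range if not dominates(own, o, q)]
    if (hypot(own.x - q[0], own.y - q[1]) <= R and
            not any(dominates(o, own, q) for o in local)):
        local.append(own)
    return local
\end{verbatim}
On each relay node, \texttt{lrsq\_merge} would be invoked once per received reply message, which corresponds to the hop-by-hop pruning described above.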
When each one-hop neighbor of $q$ receives the query message, it will do the operations from Line~\ref{alg:lrsq:get_query} to Line~\ref{alg:lrsq:end_get_query} of the LRSQ process (Algorithm~\ref{alg:lrsq}). If the TTL value in the received query message is larger than 0, the mobile sensor node is an intermediate node in the routing path of the query and the sensor node will forward the query to its neighbors at Line~\ref{alg:lrsq:forward_query}. Otherwise, the mobile sensor node is an end node in the routing path of the query. The sensor node will stop forwarding the query message, add its own data object to the local skyline set $RSQ_{local}$ at Line~\ref{alg:lrsq:add_to_rsq_local}, and then start to return $RSQ_{local}$ to the query node (at Line~\ref{alg:lrsq:return_rsq_local_result}) through the reverse routing path of the query. Note that $RSQ_{local}$ is equal to the term $LRSQ_{s_j}(R,S,q)$, and we may use both interchangeably afterward in this paper. \begin{algorithm2e}[t] \footnotesize \SetAlgoLined \KwIn{received message $m$ and neighbor list $list_{neighbor}$} \KwOut{local range-skyline $RSQ_{local}$} $q\leftarrow$ new query object\tcc*[r]{create a temporary empty query object} $s\leftarrow$ new node\tcc*[r]{create a temporary empty source node} $RSQ_{local}\leftarrow \emptyset$\tcc*[r]{create a local range-skyline set} $o\leftarrow$ this.sense()\tcc*[r]{save self's environmental data in a temporary object} \uIf{$m.type==$ RSQ\_TYPE\label{alg:lrsq:get_query}}{ $q\leftarrow$ this.parse($m$, \textit{RSQ\_TYPE})\tcc*[r]{save the query information to $q$} $s.address\leftarrow m.source\_address$\tcc*[r]{record the previous node $s$} \uIf{$m.TTL > 0$}{ this.flood($m$, $m.TTL-1$)\label{alg:lrsq:forward_query}\tcc*[r]{forward message $m$ to all the neighbors} \Return \; } \ElseIf{$m.TTL == 0$}{ \If{$o$ locates in $q$'s range}{ add $o$ into $RSQ_{local}$\label{alg:lrsq:add_to_rsq_local}\; } } }\label{alg:lrsq:end_get_query} \ElseIf(\tcc*[f]{LRSQ}){$m.type==$ RSQ\_REPLY\_TYPE\label{alg:lrsq:rsq_local_compute}}{ $RSQ_{neighbor}\leftarrow \emptyset$\tcc*[r]{create a temporary range-skyline set} \tcc{get the neighbor's local range-skyline set} $RSQ_{neighbor}\leftarrow$ this.parse($m$, \textit{RSQ\_REPLY\_TYPE})\; \tcc{check dominance relations between the received objects and itself} int $isDominated = 0$\; \ForEach{data object $o'$ in $RSQ_{neighbor}$\label{alg:lrsq:dominance_check}}{ \uIf{$o \triangleleft_q o'$}{ \textbf{continue}\; } \ElseIf{$o' \triangleleft_q o$}{ $isDominated=1$\; } $RSQ_{local}\leftarrow RSQ_{local}\cup \{o'\}$\; }\label{alg:lrsq:end_dominance_check} \If{$isDominated==0$}{ $RSQ_{local}\leftarrow RSQ_{local}\cup \{o\}$\label{alg:lrsq:add_to_local_skyline}\; } }\label{alg:lrsq:end_rsq_local_compute} \tcc{return $m'$ to $q$ through the previous sensor node $s$ in $q$'s routing path} $m'\leftarrow$ this.createMessage($RSQ_{local}$, \textit{RSQ\_REPLY\_TYPE})\tcc*[r]{only in DRSQ approach} this.forward($m'$, $s$)\label{alg:lrsq:return_rsq_local_result}\tcc*[r]{only in DRSQ approach} \Return $RSQ_{local}$\; \caption{LRSQ process on a mobile sensor node} \label{alg:lrsq} \end{algorithm2e} \begin{algorithm2e}[t] \footnotesize \SetAlgoLined \KwIn{received message $m$ and neighbor list $list_{neighbor}$} \KwOut{distributed range-skyline $RSQ_{distributed}$} $RSQ_{distributed}\leftarrow \emptyset$\tcc*[r]{create a set to save the distributed range-skyline} $RSQ_{candidate}\leftarrow \emptyset$\tcc*[r]{create a set to save the candidate range-skyline}
int $i=0$\; \Repeat{$i==list_{neighbor}.length$}{ \label{alg:crsq:get_local_rsq} \If{$m.type==$ RSQ\_REPLY\_TYPE}{ \label{alg:crsq:get_local_rsq:check_msg_type} $RSQ_{neighbor}\leftarrow \emptyset$\tcc*[r]{create a temporary range-skyline set} \tcc{obtain the neighbor's local range-skyline set} $RSQ_{neighbor}\leftarrow$ this.parse($m$)\; $RSQ_{candidate}\leftarrow RSQ_{candidate} \cup RSQ_{neighbor}$\; $i++$\; } }\label{alg:crsq:end_get_local_rsq} \tcc{check dominance relations between the received candidates and itself} \ForEach{data object $o$ in $RSQ_{candidate}$\label{alg:crsq:dominance_check}}{ int $isDominated=0$\; \ForEach{data object $o'$ in $(RSQ_{candidate}-\{o\})$}{ \If{$o' \triangleleft_q o$}{ $isDominated=1$\; \textbf{break}\; } } \If{$isDominated==0$}{ $RSQ_{distributed}\leftarrow RSQ_{distributed}\cup \{o\}$\; } }\label{alg:crsq:end_dominance_check} \Return $RSQ_{distributed}$\; \caption{GRSQ process on the query node} \label{alg:crsq} \end{algorithm2e} When an intermediate sensor node receives a response message for the query, it performs the operations from Line~\ref{alg:lrsq:rsq_local_compute} to Line~\ref{alg:lrsq:end_rsq_local_compute} in Algorithm~\ref{alg:lrsq} to compute the latest local range-skyline $RSQ_{local}$. Since the received response message contains a local range-skyline set w.r.t. the neighboring node that sent this message, the intermediate mobile sensor node saves the received local range-skyline in a temporary set $RSQ_{neighbor}$. Note that the data objects in $RSQ_{neighbor}$ do not dominate each other, so the sensor node only needs to check the dominance relations between each data object $o'$ in $RSQ_{neighbor}$ and its own data object $o$. If a data object $o'$ in $RSQ_{neighbor}$ is not dominated by the data object $o$, the data object $o'$ is still a local range-skyline member. The operations of dominance-relation checking are presented from Line~\ref{alg:lrsq:dominance_check} to Line~\ref{alg:lrsq:end_dominance_check} of Algorithm~\ref{alg:lrsq}. The mobile sensor node executes the operation at Line~\ref{alg:lrsq:add_to_local_skyline} if its own data object $o$ is not dominated by any other data object in $RSQ_{neighbor}$. It indicates that the sensed data object $o$ of the intermediate sensor node becomes one of the local range-skyline members. After the dominance validation, the intermediate sensor node keeps forwarding the response message, including the latest local range-skyline, back to the query node. All the intermediate sensor nodes perform the above operations and update the local skyline set saved in the response message until the response message is received by the query node. Algorithm~\ref{alg:crsq} describes the operations executed by the query node $q$ after it floods the query messages. From Line~\ref{alg:crsq:get_local_rsq} to Line~\ref{alg:crsq:end_get_local_rsq}, the query node collects all the local range-skyline sets from its one-hop neighbors and merges them into the candidate range-skyline set $RSQ_{candidate}$. Such a union operation avoids recording the same data objects multiple times. Note that Line~\ref{alg:crsq:get_local_rsq:check_msg_type} checks whether the received message is a reply message or not. Such a check avoids routing loops of query messages.
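Continuing the illustrative sketch above (again ours, not the actual implementation), the final merge performed by the query node in Algorithm~\ref{alg:crsq} reduces to a union of the received candidate sets followed by one more round of dominance checking:
\begin{verbatim}
def grsq_merge(candidate_sets, q):
    # Sketch of Algorithm 2's merge, reusing the hypothetical
    # dominates() helper from the previous listing: take the union of
    # the received local range-skylines (removing duplicates) and keep
    # only the objects not dominated by any other candidate.
    candidates = set().union(*candidate_sets) if candidate_sets else set()
    return [o for o in candidates
            if not any(dominates(p, o, q) for p in candidates if p != o)]
\end{verbatim}
As in Algorithm~\ref{alg:crsq}, two candidates that mutually dominate each other (i.e., are equivalent w.r.t. $q$) would both be discarded by this filter.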
In the considered environment, the query node $q$ is a sensor node and may be an $i$-hop neighbor of a node $s$, so $q$ may receive the query message from $s$ if $TTL > 2i-1$, where the appropriate value of $TTL$ will be discussed in Section~\ref{analysis}. Such a scenario may only occur when the query range $R$ is much larger than the transmission range $r$. However, $RSQ_{candidate}$ is not the final result for the query because the data objects in the received local range-skyline sets may dominate each other. So the query node executes the operations from Line~\ref{alg:crsq:dominance_check} to Line~\ref{alg:crsq:end_dominance_check} to check the dominance relations between all the candidate points and then saves the non-dominated data objects in a set, $RSQ_{distributed}$. Finally, the query node (the user's device) returns the final range-skyline, $RSQ_{distributed}$, to the user. \subsection{Distributed Continuous Range-Skyline Query} This subsection is organized as follows. We first present some important notations and assumptions for the DCRSQ process. Second, we introduce the proposed DCRSQ approach with a running example extended from the CRSQ example in Fig.~\ref{fig:crsq_ex}. Last, the proposed DCRSQ process is explained in detail with some definitions. \subsubsection{System Assumptions} In order to make the DRSQ process able to support CRSQ in the DCRSQ process, some additional and modified assumptions of the system are needed; we describe them in detail before introducing the DCRSQ process. The system assumptions are given as follows: \begin{itemize} \item Each mobile node can always obtain the spatial (location and mobility) and non-spatial (sensed data) information of its one-hop neighbors and itself with its GPS equipment. We call such collected information the \emph{local information}. \item Each mobile node has a limited buffer to store the received CRSQ queries and each stored query will be continuously processed with local information until the query expires. \item Each node also collects the information of existing queries from its neighbors while generating its own local information. \item The size of a network packet (message) is fixed and one packet can only store one data object. \end{itemize} We list all the important notations used in this paper in Table~\ref{notations}. \begin{table}[h] \caption{Important Notations} \label{notations} \centering \footnotesize \begin{tabular}{c|p{0.75\columnwidth}} \hline \textbf{Notation} & \textbf{Description} \\ \hline \hline $\mathcal{A}$ & The sensing area\\ $S$ & The set of all the mobile nodes\\ $s$ & Sensor node (mobile object) \\ $q$ & Query node\\ $R$ & Query range \\ $r$ & Transmission range (Sensing range) \\ $N$ & Number of mobile sensor nodes\\ $N_R$ & Number of mobile sensor nodes in the query range\\ $N_r$ & Number of neighbors in the transmission range\\ $\varDelta t$ & A period of time for monitoring a CRSQ\\ $T$ & The time interval at which each sensor node periodically returns its information\\ $T_{safe}$ & A predicted time period during which the answer may change\\ $t_{M_{s}}$ & The monitoring time of node $s$ w.r.t. $q$\\ $m_q$ & Query message \\ $m_r$ & Reply message \\ $TTL$ & Time-To-Live (Hop count) of a message \\ $dist$ & Euclidean distance \\ \hline \end{tabular} \end{table} \subsubsection{Overview of DCRSQ} To support CRSQ, the DCRSQ process should take node mobility into account because the movement of nodes may cause frequent changes of the answer.
We thus adapt the idea of \emph{safe time}~\citep{conf/mdm/KimCT08} to account for the mobility of nodes. The safe time is derived for a node to predict when a neighbor node enters and leaves the query $q$'s range. If a neighboring object leaves the range of $q$, this object can no longer be a point of the range-skyline and thus it will not be processed on the sensor node. It means that the node can determine whether processing this neighboring object is necessary with the safe-time information. For deriving the precise safe time, we consider the simultaneous movement of both nodes and use Fig.~\ref{fig:safe_time_ex} as an illustration. Initially, $dist(q,s)$ is the distance from the query node $q$ to the sensor node $s$, where $dist(q,s)\leq R$. In the following, we present the equation to calculate the safe time $t_{\overline{qs}}$ when $dist(q,s)=R$. Suppose the initial location of object $q$ ($s$) is $(x_q,y_q)$ ($(x_s,y_s)$) with speed $(v_{x_q},v_{y_q})$ ($(v_{x_s},v_{y_s})$). Since $dist(q,s)=R$ at time $t_{\overline{qs}}$, we have \small{ \begin{eqnarray} R^2&=&[(x_q+v_{x_q}*t_{\overline{qs}})-(x_s+v_{x_s}*t_{\overline{qs}})]^2 \notag\\ &+&[(y_q+v_{y_q}*t_{\overline{qs}})-(y_s+v_{y_s}*t_{\overline{qs}})]^2. \label{eq1} \end{eqnarray} }\normalsize After transposing the terms, \eqref{eq1} becomes a quadratic equation in $t_{\overline{qs}}$. Then, we can simply solve this quadratic equation to get two candidate values of the safe time and select the minimum positive value as the safe time $t_{\overline{qs}}$ (a short computational sketch of this calculation is given later in this section). Note that the above example shows the case of a leaving node. The other scenario is that a node may enter the range of query $q$. If the node is out of $q$'s range and receives the query message in advance, the time periods of its entering and leaving can also be obtained in the same way. \begin{figure}[ht] \begin{center} \includegraphics[width=0.45\textwidth,keepaspectratio]{./safe_time.eps} \end{center} \vspace{-10pt} \caption{An example for deriving the safe time $t_{\overline{qs}}$ when $dist(q,s)=R$} \label{fig:safe_time_ex} \end{figure} For example, a query node $q$ issues a CRSQ with $\varDelta t=[3,10]$ at time $t_0$. Fig.~\ref{fig:safe_time_for_enter_leave_ex} shows the relative locations of $s$ and $q$ at each time step $t_i$, where $i\geq 0$. The query node $q$ can use~\eqref{eq1} to obtain the safe time of node $s$, $t_{\overline{qs}}=[t_{enter},t_{leave}]=[1,6]$. So the exact monitoring time of node $s$ w.r.t. $q$, $t_{M_{s}}$, is $[3,6]$, since $q$ only concerns the results during the time $\varDelta t=[3,10]$ and the node $s$ will leave the range of $q$ after time $t_6$. \begin{figure}[ht] \begin{center} \includegraphics[width=0.32\textwidth,keepaspectratio]{./safe_time_for_enter_leave.eps} \end{center} \vspace{-10pt} \caption{The monitoring time ($t_{M_{s}}=[3,6]$) of node $s$ w.r.t. $q$ where $\varDelta t=[3,10]$} \label{fig:safe_time_for_enter_leave_ex} \vspace{-10pt} \end{figure} Combining the prediction of the monitoring time with the LRSQ process, the continuous local range-skyline candidate sets can also be obtained. We refer to such a process as the \emph{continuous local range-skyline query process} (CLRSQ process). Consider the query $CRSQ(R,S,q,\varDelta t=[0,3])$ in Fig.~\ref{fig:crsq_ex}: the nodes $s_2, s_3, s_5,$ and $s_{11}$ in the CLRSQ process (modified from the LRSQ process in Fig.~\ref{fig:DRSQ-Running-example}) respectively return the information of their local range-skyline candidate sets during the time $\varDelta t=[0,3]$.
Node $s_2$ returns $CLRSQ_{s_2}=\{<\{s_1,s_2\},[t_0,t_1)>,<\{s_1,s_2\},[t_1,t_2)>,<\{s_2\},[t_2,t_3]>\}$ back to the intermediate node $s_1$. Nodes $s_3$, $s_5$, and $s_{11}$ respectively return $CLRSQ_{s_3}=\{<\{s_3,s_4\},[t_0,t_1)>,<\{s_3,s_{11}\},[t_1,t_2)>,<\{s_3,s_{11}\},[t_2,t_3]>\}$, $CLRSQ_{s_5}=\{<\{s_4,s_5\},[t_0,t_1)>,<\{s_4\},[t_1,t_2)>,<\{s_4\},[t_2,t_3]>\}$, and $CLRSQ_{s_{11}}=\{<\{s_3,s_4\},[t_0,t_1)>,<\{s_3,s_{11}\},[t_1,t_2)>,<\{s_3,s_{11}\},[t_2,t_3]>\}$ to the intermediate node $s_4$. In such a case, node $s_1$ uses the received $CLRSQ_{s_2}$ and its own local information to derive $CLRSQ_{s_1}=\{<\{s_1,s_2\},[t_0,t_1)>,<\{s_1,s_2\},[t_1,t_2)>,<\{s_2\},[t_2,t_3]>\}$; and node $s_4$ uses the received $CLRSQ_{s_3}$, $CLRSQ_{s_5}$, $CLRSQ_{s_{11}}$, and its own local information to calculate $CLRSQ_{s_4}=\{<\{s_4,s_5\},[t_0,t_1)>,<\{s_3,s_{11}\},[t_1,t_2)>,<\{s_3,s_{11}\},[t_2,t_3]>\}$. Finally, with the received $CLRSQ_{s_1}$ and $CLRSQ_{s_4}$, the query node $q$ can obtain a predicted final result of the CRSQ, $DCRSQ(R,S,q,\varDelta t)=\{<\{s_1,s_4,s_5\},[t_0,t_1)>,<\{s_1,s_3,s_{11}\},[t_1,t_2)>,<\{s_3,s_{11}\},[t_2,t_3]>\}$, at time $t_0$. Unfortunately, the predicted result may not be correct. In the above example, $DCRSQ(R,S,q,\varDelta t)$ is not equal to $CRSQ(R,S,q,\varDelta t)$ since the query node $q$ cannot obtain the information of node $s_8$ at time $t_0$. So the result needs to be updated continuously. In the DCRSQ process, three cases may happen if a node enters the range of $q$. First, if $s_8$ received a query message from $q$, $s_8$ would return its local range-skyline while entering the range of $q$. It means that at least one node locates in the range of $q$ at time $t_0$ and acts as the intermediate (relay) node between $s_8$ and $q$. In such a case, the intermediate node has the information of $s_8$ and uses it to derive the predicted CLRSQ result. Then the predicted DCRSQ result would be a correct answer, $DCRSQ(R,S,q,\varDelta t)=\{<\{s_1,s_4,s_5\},[t_0,t_1)>,<\{s_1,s_8,s_{11}\},[t_1,t_2)>,<\{s_8\},[t_2,t_3]>\}$. However, the mentioned example does not fall into this case since there is no intermediate node between $s_8$ and $q$. The second case is that $s_8$ does not receive any information of $q$ at time $t_0$. According to the assumptions of the DCRSQ process, $s_8$ will obtain the information of $q$ from its neighbors when $s_8$ enters the range of $q$ after time $t_1$. Hence, the RSQ results at times $t_1$ and $t_2$, $<\{s_1,s_3,s_{11}\},[t_1,t_2)>$ and $<\{s_3,s_{11}\},[t_2,t_3]>$, will be updated to $<\{s_1,s_8,s_{11}\},[t_1,t_2)>$ and $<\{s_8\},[t_2,t_3]>$, since $s_8$ dominates $s_3$ and $s_{11}$ w.r.t. $q$. The last case is that $s_8$ cannot successfully obtain the information of $q$ when it enters the range of $q$. Such a case will be recognized as an incorrect result and it only occurs when the mobile environment is too sparse. \subsubsection{Description of DCRSQ} According to Definition~\ref{crsq}, the system will process the CRSQ for a period of time $\varDelta t$ and derive the collection of possible answers. However, such a definition comes from the global and centralized view of the system. In the previous subsection, the distributed method for processing RSQ has been introduced with Definition~\ref{define_lrsq} and Definition~\ref{define_drsq}. In the DCRSQ process, we use a mechanism to make each node able to predict the change of the LRSQ answer under node mobility.
With the above definitions and examples, the formal descriptions of CLRSQ and DCRSQ can be given as Definition~\ref{define_clrsq} and Definition~\ref{define_dcrsq}, respectively. \begin{definition} \label{define_clrsq} \textbf{(Continuous Local Range-Skyline Query)}\\ Suppose that the notations are defined as above and a query $CRSQ(R,S,q,\varDelta t)$ is issued by the query node $q$. After a mobile sensor node $s_j$ receives the query message from $q$, $s_j$ will return a collection of local range-skyline sets $LRSQ_{s_j}(R,S,q,t_i)$, where $t_i \in \varDelta t$ and $i$ is the number of local answer changes. Formally, it is denoted as $CLRSQ_{s_j}(R,S,q,\varDelta t)=\{LRSQ_{s_j}(R,S,q,t_i)|t_i \in \varDelta t, i\in N\}$. \end{definition} According to Definition~\ref{define_clrsq}, the query node $q$ will receive the results of $CLRSQ_{s_j}(R,S,q,\varDelta t)$, where $s_j$ is a one-hop neighbor of $q$, $1\leq j\leq k$, and $k$ is the maximum number of neighbors. Then the query node $q$ will take the union of the received results, which are the local range-skyline sets for time $t_i$, as $RSQ_{candidate}(R,S,q,t_i)=\bigcup^{k}_{j=1}LRSQ_{s_j}(R,S,q,t_i)$. Thus, the candidate collection can be denoted as $CRSQ_{candidate}=\{<RSQ_{candidate}(R,S,q,t_0),[t_0,t_1)>,<RSQ_{candidate}(R,S,q,t_1),[t_1,t_2)>,\dots,<RSQ_{candidate}(R,S,q,t_i),[t_i,t_{end}]>\}$. After deriving $CRSQ_{candidate}$, $q$ will check the dominance relations of all the objects in each $RSQ_{candidate}(R,S,q,t_i)$ set again and then obtain the final result $DRSQ(R,S,q,t_i)$ at each time $t_i$. We call such a process the \emph{distributed continuous range-skyline query process} (DCRSQ process) and the definition is given in Definition~\ref{define_dcrsq}. \begin{definition} \label{define_dcrsq} \textbf{(Distributed Continuous Range-Skyline Query)}\\ Suppose the candidate collection $CRSQ_{candidate}$ of the query node $q$ has been computed. The query node $q$ uses $CRSQ_{candidate}$ to derive a collection of the range-skyline sets for different times $t_i$ and we use $DCRSQ(R,S,q,\varDelta t)$ to represent such a collection of continuous range-skyline sets, where $DCRSQ(R,S,q,\varDelta t)=\{DRSQ(R,S,q,t_i)|t_i \in \varDelta t, i\in N\}$. \end{definition} Note that the fundamental process of DCRSQ is similar to the DRSQ process mentioned in Section~\ref{sec_drsq_process}. The main difference is that each mobile node $s_j$ generates a continuous local range-skyline set $CLRSQ_{s_j}(R,S,q,\varDelta t)$ with the safe-time information of the candidate nodes. Thus the DCRSQ process can provide sufficient information to the query node $q$ for deriving, predicting, and updating the answer as time goes on. Algorithm~\ref{alg:dcrsq} gives a high-level description of the DCRSQ process. To implement Line 12 and Line 13 of Algorithm~\ref{alg:dcrsq}, we use the idea of a \emph{sliding window}, which is already a widely used design in many domains. Since it is out of the scope of this paper, we do not address it further. In addition, if $\varDelta t$ is the specific time of the query issuing, $[t_i,t_i]$, $t_i\geq 0$, the query becomes a snapshot RSQ and DCRSQ performs the same process as DRSQ.
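As mentioned in the safe-time discussion, the computation behind \eqref{eq1} amounts to solving a quadratic equation in the relative coordinates of $q$ and $s$. The following Python sketch (ours; the function name, argument layout, and return convention are assumptions) returns the interval $[t_{enter},t_{leave}]$ during which $dist(q,s)\leq R$, clipped to non-negative times:
\begin{verbatim}
# Sketch (ours) of the safe-time computation of Eq. (1).
from math import sqrt

def safe_time_interval(pq, vq, ps, vs, R):
    # Rearranging Eq. (1) gives a*t^2 + b*t + c = 0 with the relative
    # position p = pq - ps and relative velocity v = vq - vs.
    px, py = pq[0] - ps[0], pq[1] - ps[1]
    vx, vy = vq[0] - vs[0], vq[1] - vs[1]
    a = vx * vx + vy * vy
    b = 2.0 * (px * vx + py * vy)
    c = px * px + py * py - R * R
    if a == 0.0:                        # no relative motion
        return (0.0, float("inf")) if c <= 0.0 else None
    disc = b * b - 4.0 * a * c
    if disc < 0.0:                      # distance never reaches R
        return None
    t1 = (-b - sqrt(disc)) / (2.0 * a)  # entering time
    t2 = (-b + sqrt(disc)) / (2.0 * a)  # leaving time
    if t2 < 0.0:                        # the in-range window lies in the past
        return None
    return (max(t1, 0.0), t2)
\end{verbatim}
Intersecting the returned interval with the query window $\varDelta t$ yields the monitoring time; for instance, intersecting $[1,6]$ with $[3,10]$ reproduces $t_{M_{s}}=[3,6]$ from Fig.~\ref{fig:safe_time_for_enter_leave_ex}.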
\begin{algorithm2e}[t] \footnotesize \SetAlgoLined \KwIn{received message $m$ and neighbor list $list_{neighbor}$} $RSQ_{distributed}\leftarrow \emptyset$\tcc*[r]{create a set to save the distributed range-skyline} $RSQ_{local}\leftarrow \emptyset$\tcc*[r]{create a set to save the latest local range-skyline} $RSQ_{current\_local}\leftarrow \emptyset$\tcc*[r]{create a set to save the previous local range-skyline} $list_{safe\_time}\leftarrow \emptyset$\tcc*[r]{create a list to record the safe time of neighbors} \uIf{$this.nodeType==$ QUERY\_NODE}{ \Repeat{$m$.isExpired()}{ \tcc{call the Algorithm~\ref{alg:crsq} to update the final distributed range-skyline} $RSQ_{distributed}\leftarrow$GRSQ($m$, $list_{neighbor}$)\; }\label{alg:dcrsq:end_get_local_rsq} } \ElseIf{$this.nodeType==$ SENSOR\_NODE}{ \label{alg:dcrsq:get_local_rsq:check_msg_type} \If{$m.type==$ RSQ\_REPLY\_TYPE}{ \tcc{derive the safe time of each neighbor} $list_{safe\_time}\leftarrow$ UpdateSafeTime($list_{neighbor}$)\; \tcc{call the Algorithm~\ref{alg:lrsq} to update the local range-skyline} $RSQ_{local}\leftarrow$ LRSQ($m$, $list_{neighbor}$)\; \tcc{update the local range-skyline with the safe time values of neighbors} $RSQ_{local}\leftarrow$ SafeTimeCheck($RSQ_{local}$, $list_{safe\_time}$)\; \tcc{return the update message when the local range-skyline changes} \If{$RSQ_{current\_local}$ is not equal to $RSQ_{local}$}{ $s\leftarrow$new node\; $s.address\leftarrow m.source\_address$\; $m'\leftarrow$ this.createMessage($RSQ_{local}$, \textit{RSQ\_REPLY\_TYPE})\; this.forward($m'$, $s$)\; } } } \Return $RSQ_{distributed}$\; \caption{DCRSQ process on a mobile node (both query and sensor node)} \label{alg:dcrsq} \end{algorithm2e} \section{Cost Analysis and Discussion} \label{analysis} Suppose that $N$ mobile data objects are distributed independently and uniformly in the sensing area $\mathcal{A}$, and that each data object has $d$ attributes. If all the attributes of each object are uniformly distributed, the skyline search problem can be treated as the problem of finding the maxima~\citep{Bentley:1978:ANM:322092.322095} in an $N\times d$ matrix. Hence, the expected size of the skyline will be $n_{sky}=O((\ln N)^{d-1})$. In the considered environment, the query node does not need to process all the mobile sensor nodes (or data objects) for a range-skyline query, and thus the expected size of the range-skyline will be $n_{range-sky}=O((\ln N_{R})^{d-1})\leq O((\ln N)^{d-1})$, where $N_{R}$ is the number of mobile sensor nodes in the query range $R$ and $0\leq N_{R}\leq N$. Note that the value of $N_{R}$ is determined by the value of $N$, the sensing area $|\mathcal{A}|$, and the query range $R$: $N_{R}=\lfloor\frac{\pi R^2N}{|\mathcal{A}|}\rfloor$. With the above notations, the average number of data objects in the transmission range of a mobile sensor node is $N_r=\lfloor\frac{\pi r^2N}{|\mathcal{A}|}\rfloor$, where $r$ is the transmission range of a mobile sensor node. If $N_r \leq 1$, the density of mobile sensor nodes is too sparse and it is thus too hard to route messages; in such a case, neither the conventional centralized approach nor the proposed approach can perform well in CRSQ processing. Hence, we only discuss the case $N_r>1$ in this work. Note that we also do not discuss the case $N_{R}\leq1$, since then no mobile sensor node can serve the query. In the considered IoMT, the mobile sensor nodes in the query range have to return information to the query node in a hop-by-hop manner.
The probability distribution of each hop in a multi-hop wireless environment has been discussed in~\citep{4277081}, and we use it to obtain the probability $P_i$ that the $i$th-hop transmission succeeds. To obtain sufficient information for deriving the accurate result of an RSQ, the system must guarantee that more than $N_R$ neighboring nodes of the query node can receive the query message. We can then denote the expected network cost for spreading the query message as \begin{eqnarray}\label{eq2:expected_query_speard} E[q_{spread}] = \sum_{i=1}^{k}N_r^i\prod_{j=1}^{i}P_j, \end{eqnarray} where $N_r^i$ is the average number of $i$th-hop neighbors with respect to the query node $q$. Since all sensor nodes in the query range should be notified with the query messages from $q$, we can find a minimum value of $k\in \mathbb{N}$ such that $E[q_{spread}]\geq N_R$. Hence, the expected hop count $TTL_q$ can be derived by~\eqref{eq2:expected_query_speard} and $TTL_q=k$. The process of data collection in the centralized approach is straightforward: each mobile sensor node which receives the query message returns its own information to the query node. Since the $i$th-hop neighbors need to return $i$-hop response messages to the query node, the network cost of the reply messages for the $i$th-hop neighbors will be $N_r^i\times i\prod_{j=1}^{i}P_j$ without cooperative pruning. Hence, the expected network cost for returning messages in the centralized approach can be denoted as \begin{equation}\label{eq3:centralized_response_cost} E_{centralized}[q_{response}] = \sum_{i=1}^{k}N_r^i\times i\prod_{j=1}^{i}P_j, \end{equation} where $k=TTL_q$ is determined by \eqref{eq2:expected_query_speard} with the constraint $E[q_{spread}]\geq N_R$. In summary, the total network cost of the centralized approach for an RSQ, $q$, can be denoted as \begin{equation}\label{eq4:centralized_cost} E_{centralized}[q] = E[q_{spread}] + E_{centralized}[q_{response}]. \end{equation} In the proposed approach, the DRSQ process, each node derives the local range-skyline, and the expected size of the result is $O((\ln N_r)^{d-1})$. The reason is that the DRSQ process combines information filtering with data collection, thus eliminating a large number of irrelevant response messages. Hence, the network cost for replying with the information can be denoted as \begin{equation}\label{eq4:drsq_response_cost} E_{DRSQ}[q_{response}] = \sum_{i=1}^k N_r^i (\ln N_r^i)^{d-1}P_{k-i}, \end{equation} and $E_{DRSQ}[q_{response}]< E_{centralized}[q_{response}]$ in normal cases. So the total network cost of the DRSQ process can be estimated as \begin{equation}\label{eq5:drsq_cost} E_{DRSQ}[q] = E[q_{spread}] + E_{DRSQ}[q_{response}]. \end{equation} For monitoring a CRSQ query in the centralized approach, the query node has to spread the query message periodically during the time period $\varDelta t$, and each neighboring node also has to periodically return its own information. So the network cost of the centralized approach can be denoted as \begin{equation}\label{eq6:crsq_centralized_cost} E_{centralized}^{CRSQ}[q] = \dfrac{|\varDelta t|}{T}\times E_{centralized}[q], \end{equation} where $T$ is the time interval at which each mobile sensor node periodically reports its updated information to the query node; the default value of $T$ is 1 second.
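For illustration, the following Python sketch evaluates the cost formulas of Eqs.~\eqref{eq2:expected_query_speard}--\eqref{eq6:crsq_centralized_cost} exactly as printed. The constant per-hop success probability and the chosen parameter values are assumptions of this sketch only; the analysis above does not fix them.
\begin{verbatim}
import math

def per_hop_p(i):
    # Assumed constant per-hop delivery probability (hypothetical); the
    # actual distribution is taken from the reference cited in the text.
    return 0.9

def expected_costs(N=100, area=400 * 400, R=80, r=75, d=2, T=1.0, dt=10.0):
    N_R = math.floor(math.pi * R ** 2 * N / area)  # nodes in the query range
    N_r = math.floor(math.pi * r ** 2 * N / area)  # average one-hop neighbors
    assert N_r > 1, "environment too sparse to route messages"
    # E[q_spread]: accumulate i-hop neighbors until >= N_R; then TTL_q = k.
    spread, k, prod = 0.0, 0, 1.0
    while spread < N_R:
        k += 1
        prod *= per_hop_p(k)
        spread += (N_r ** k) * prod
    # Centralized replies: the i-th-hop neighbors answer over i hops each.
    resp_c = sum((N_r ** i) * i *
                 math.prod(per_hop_p(j) for j in range(1, i + 1))
                 for i in range(1, k + 1))
    # DRSQ replies after local pruning, evaluated as printed.
    resp_d = sum((N_r ** i) * math.log(N_r ** i) ** (d - 1) * per_hop_p(k - i)
                 for i in range(1, k + 1))
    return {"N_R": N_R, "N_r": N_r, "TTL_q": k,
            "E_centralized": spread + resp_c,
            "E_DRSQ": spread + resp_d,
            "E_centralized_CRSQ": dt / T * (spread + resp_c)}

print(expected_costs())
\end{verbatim}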
In the DCRSQ process, the query node does not have to periodically spread query messages, since each mobile sensor node can buffer the information of the query. So the network cost is mainly influenced by the frequency of the answer changes, and it can be derived by \begin{equation}\label{eq7:crsq_dcrsq_cost} E_{DCRSQ}^{CRSQ}[q] = E[q_{spread}] + \dfrac{|\varDelta t|}{\overline{T_{safe}}}\times E_{DRSQ}[q_{response}], \end{equation} where $\overline{T_{safe}}$ is the average safe-time after which the result needs to be updated. \section{Simulation Results} \label{simulation} All of the simulations are implemented as custom programs in C++ and executed on a Windows 7 system with an Intel i5-4460 3.20GHz CPU and 8GB of memory. In all the simulation scenarios, the mobile sensor nodes are distributed uniformly and the results are reported as the average of 200 executions. The mobility model is Random Waypoint (RWP) and the network routing protocol is AODV~\citep{749281}. Since none of the existing works provides a distributed RSQ process over IoMT environments, we use a centralized method~\citep{Dimitris:2005:SkylineComputation} as the compared centralized approach; it is executed on the query node for calculating the query results. In the centralized approach, the query node directly uses the flooding scheme to spread query messages and then collects information from the moving data objects. The proposed approach, the DCRSQ process, supports both (snapshot) range-skyline queries and continuous range-skyline queries. If $\varDelta t=[t_0,t_{end}]=0$, where $t_0=t_{end}$, the DCRSQ process performs the DRSQ process for deriving the results of the (snapshot) RSQ at time $t_0$. We thus organize this simulation section into two scenarios. In the first scenario, we discuss the performance of the DRSQ process in terms of \emph{response time} and \emph{number of messages}. The response time is the period of time from issuing an RSQ to obtaining the result during the DRSQ process. The number of messages represents the necessary network cost of data collection. In the second scenario, the performance of the DCRSQ process is discussed in terms of \emph{number of accessed objects} and \emph{number of messages}. Additionally, the correctness of the DCRSQ result is discussed in terms of \emph{precision} and \emph{recall}. In both scenarios, the following important factors are discussed: \emph{density (number of sensor nodes)}, \emph{number of queries}, \emph{query range}, and \emph{transmission range}. For validating the DCRSQ process, an additional factor, \emph{node speed}, is also discussed. \subsection{Scenario I: Performance of DRSQ Process} In the first scenario, we discuss the performance of the DRSQ process. There are $100$ mobile sensor nodes in a $400m \times 400m$ square sensing area. The default transmission range is $75m$ and the node speed is $5 m/s$. Initially, the mobile sensor nodes and queries are placed randomly in the area. The basic simulation settings for the first scenario are shown in Table~\ref{simulation_parameters_1}, and we execute the simulation 200 times to obtain the average results and the $95\%$ confidence intervals under each setting.
\begin{table}[ht] \caption{Simulation Parameters for Scenario I} \label{simulation_parameters_1} \centering \footnotesize \begin{tabular}{p{3.7cm}cc} \hline \textbf{Parameter} & \textbf{Default Value} & \textbf{Range (type)}\\ \hline \hline Sensing Area ($m \times m$) & 400 $\times$ 400 & --\\ Number of Sensor Nodes & 100 & 50, 100, 150, 200\\ Number of Queries & 1 & 1, 2, 3, 4, 5\\ Query Range, $R$ ($m$) & 80 & 60, 80, 100, 120\\ Transmission Range, $r$ ($m$) & 75 & 50, 75, 100, 125\\ Node Speed ($m/s$) & 2 & 1, 2, 3, 4, 5\\ $TTL$ of Messages (centralized approach) & 5 & --\\ Bandwidth ($Mb/s$) & 2 & --\\ \hline \end{tabular} \end{table} To the best of our knowledge, none of the existing works has proposed a method for processing range-skyline queries in such an environment, where the databases, CPUs, and memory are fully distributed. We hence compare the proposed approach, the DRSQ process, with a centralized approach as a baseline. Note that the centralized approach does not use a powerful server; we assume that the query node is a sink node and can process the query with the received information. The other mobile nodes only forward the query and response messages without processing local range-skylines. \subsubsection{DRSQ: Density} Fig.~\ref{fig:sensor_num:time} shows that the response time of our proposed distributed approach, the DRSQ process, is $10\%$ better than that of the centralized approach when the density becomes denser (the number of sensor nodes increases). Although both the centralized approach and the DRSQ process need to collect the information from the other sensor nodes, the DRSQ process spends less time on data collection. There are two reasons. One is that the DRSQ process only needs to access the sensor nodes around the query range. Conversely, the centralized approach asks all the sensor nodes in the considered environment during data collection. The other reason is that the query node using the centralized approach needs to process many data objects, and the computation overhead is thus heavy. Fig.~\ref{fig:sensor_num:messages} shows that the DRSQ process is almost $75\%$ better than the centralized approach in terms of the number of messages. In the DRSQ process, each mobile sensor node collects the information of its neighbors and derives a local RSQ result before sending a response message to the query node. Such a process can effectively prune a lot of irrelevant information from the data objects (sensor nodes) and thus incurs fewer messages when returning the local range-skylines. On the contrary, the centralized approach just floods query messages and collects the information from all the neighboring mobile sensor nodes to derive the range-skyline. It thus wastes more network cost on data collection.
\begin{figure}[t] \centering \subfigure[Response Time]{\label{fig:sensor_num:time}\includegraphics[width=0.245\textwidth]{./DRSQ_Sensor_Num_Time.eps}} \subfigure[Number of Messages]{\label{fig:sensor_num:messages}\includegraphics[width=0.245\textwidth]{./DRSQ_Sensor_Num_Messages.eps}} \caption{Impact of the number of sensor nodes on \subref{fig:sensor_num:time} response time and \subref{fig:sensor_num:messages} number of messages} \label{fig:sensor_num} \vspace{-10pt} \end{figure} \begin{figure}[t] \centering \subfigure[Response Time]{\label{fig:query_num:time}\includegraphics[width=0.245\textwidth]{./DRSQ_Query_Num_Time.eps}} \subfigure[Number of Messages]{\label{fig:query_num:messages}\includegraphics[width=0.245\textwidth]{./DRSQ_Query_Num_Messages.eps}} \caption{Impact of the number of queries on \subref{fig:query_num:time} response time and \subref{fig:query_num:messages} number of messages} \label{fig:query_num} \vspace{-10pt} \end{figure} \begin{figure}[t] \centering \subfigure[Response Time]{\label{fig:query_range:time}\includegraphics[width=0.245\textwidth]{./DRSQ_Query_Range_Time.eps}} \subfigure[Number of Messages]{\label{fig:query_range:messages}\includegraphics[width=0.245\textwidth]{./DRSQ_Query_Range_Messages.eps}} \caption{Impact of query range on \subref{fig:query_range:time} response time and \subref{fig:query_range:messages} number of messages} \label{fig:query_range} \vspace{-10pt} \end{figure} \begin{figure}[t] \centering \subfigure[Response Time]{\label{fig:radio_range:time}\includegraphics[width=0.245\textwidth]{./DRSQ_Radio_Range_Time.eps}} \subfigure[Number of Messages]{\label{fig:radio_range:messages}\includegraphics[width=0.245\textwidth]{./DRSQ_Radio_Range_Messages.eps}} \caption{Impact of transmission range on \subref{fig:radio_range:time} response time and \subref{fig:radio_range:messages} number of messages} \label{fig:radio_range} \vspace{-10pt} \end{figure} \subsubsection{DRSQ: Number of Queries} In this subsection, we discuss the impact of the number of queries. The number of queries indicates the maximum number of queries concurrently processed in the system. Fig.~\ref{fig:query_num:time} shows that the response time of the DRSQ process is much better than that of the centralized approach as the number of queries increases. When the number of queries is $5$, the DRSQ process performs almost $30\%$ faster than the centralized approach does. The DRSQ process only needs to access the sensor nodes which are around the query range. In contrast, the centralized approach floods query messages to ask all the sensor nodes in the considered environment for data collection. As the number of queries increases, the large amount of flooding messages harms the network routing performance and thus increases the response time. In addition, the query node using the centralized approach needs to process more information from data objects, so the overhead of RSQ computation on the query node becomes heavy. This can be verified by the experimental results shown in Fig.~\ref{fig:query_num:messages}. As the result indicates, the DRSQ process costs $20\%$ to $50\%$ fewer messages than the centralized approach does, since it can avoid irrelevant data objects during data collection and thus reduces a large number of duplicated message transmissions. \subsubsection{DRSQ: Query Range} Different values of the query range also influence the performance of query processing.
Fig.~\ref{fig:query_range:time} shows that the DRSQ process outperforms the centralized approach by $2\%$ to $10\%$ in terms of response time for all the different range values. When the query range $R$ is smaller than the transmission range $r$, the probability that the whole query range falls in the transmission range is high, and it is thus easier for the query node to obtain the RSQ result by collecting sufficient information from its one-hop neighbors. When the query range $R$ is larger than the transmission range $r$, the DRSQ process only needs to access the sensor nodes which are around the query range instead of accessing all the sensor nodes as the centralized approach does. So, the DRSQ process can outperform the centralized approach on response time. Fig.~\ref{fig:query_range:messages} shows that DRSQ saves almost $80\%$ of the transmission cost in comparison with the centralized approach. The reason is that the DRSQ process can effectively prune the irrelevant information from data objects during data (local skyline) collection. \subsubsection{DRSQ: Transmission Range} The last important factor is the transmission range $r$ of a sensor node. As Fig.~\ref{fig:radio_range:time} indicates, the distributed approach outperforms the centralized approach by $5\%$ to $10\%$ with different transmission ranges in terms of the response time. Unlike the dramatically increasing response time of the centralized approach, the response time of the DRSQ process increases more gently. The centralized approach has to do the dominance checks after it receives a large amount of information from the neighboring sensor nodes, so it incurs more computation overhead. Instead, the DRSQ process can avoid the irrelevant data objects in a distributed way during the information collection. The query node only processes the one-hop neighbors' local range-skyline sets, whose sizes are much smaller than the sizes of the data sets processed on the query node in the centralized approach. Effectively pruning irrelevant data objects in a distributed way also reduces a lot of the messages required for returning the local range-skyline sets to the query node, and this trend is demonstrated in Fig.~\ref{fig:radio_range:messages}. DRSQ can save $60\%$ to $70\%$ of the transmission cost on data collection. \subsection{Scenario II: Performance of DCRSQ Process} In the second scenario, we present the performance results of the DCRSQ process. The duration of each query, $\varDelta t$, is $10$ seconds and the total duration of the simulation is 60 seconds. Initially, the mobile sensor nodes and query nodes are placed randomly in a $500m \times 500m$ square area. For each simulation set, we execute the simulation 200 times to obtain the average results and the $95\%$ confidence intervals. The $t_0$ of each query's $\varDelta t$ is randomly generated from second 1 to 50. The other important settings are shown in Table~\ref{simulation_parameters_2}. The system will continuously return results for a CRSQ query within the time period $\varDelta t$, so it is difficult to measure the response time precisely. Instead, we observe the number of accessed objects (collected data objects) on each query node. If the number of accessed objects on the query node is small, the efficiency of the DCRSQ process is better, since a large number of irrelevant data objects are skipped during message routing. For the DCRSQ process in a mobile environment, the node speed is one of the important factors.
If the node speed becomes fast, it may lead to the answer changing more frequently, and the overhead of processing CRSQ queries also becomes heavier. We thus discuss the impact of node speed on the performance of the DCRSQ process. In addition, we use a server to check the correctness of the results generated by the DCRSQ process and the centralized approach, respectively, in terms of precision and recall. Note that the server has global knowledge of all the data objects and always generates the correct answer for a query. \begin{table}[ht] \caption{Simulation Parameters for Scenario II} \label{simulation_parameters_2} \centering \footnotesize \begin{tabular}{p{3.7cm}cc} \hline \textbf{Parameter} & \textbf{Default Value} & \textbf{Range (type)}\\ \hline \hline Sensing Area ($m \times m$) & 500 $\times$ 500 & --\\ Simulation time (seconds) & 60 & --\\ Number of Sensor Nodes & 60 & 30, 60, 90, 120\\ Number of Queries & 1 & 1 to 10\\ Query Range, $R$ ($m$) & 100 & 50, 100, 150, 200\\ $\varDelta t$ of a Query (seconds) & 10 & --\\ Transmission Range, $r$ ($m$) & 75 & 50, 75, 100, 125\\ Maximum Node Speed ($m/s$) & 10 & 5, 10, 15, 20, 25, 30\\ $TTL$ of Messages (centralized approach) & 5 & --\\ Bandwidth ($Mb/s$) & 2 & --\\ \hline \end{tabular} \end{table} \subsubsection{DCRSQ: Density} Fig.~\ref{fig:dcrsq:sensor_num:access} shows that the DCRSQ process is better than the centralized approach in terms of the number of accessed objects. When the number of mobile sensors increases, the total number of accessed objects in the proposed method remains roughly constant at no more than 1,100. In contrast, using the centralized way, a query point needs to access about 2 to 5.5 times more objects to derive the final range-skyline. This is because the centralized approach has no process for discarding irrelevant moving objects during data collection (local range-skyline processing). In summary, DCRSQ can save $50\%$ to $80\%$ of the computational cost on average for a query. Similarly, Fig.~\ref{fig:dcrsq:sensor_num:messages} shows that the DCRSQ process is better than the centralized approach in terms of the number of messages. In the DCRSQ process, each mobile sensor node collects the information of its neighbors and derives a local RSQ result before sending a response message to the query node. Such a process can prune a lot of moving objects which will not be candidates and thus incurs fewer messages when returning the local range-skylines. On the other hand, the centralized approach just floods query messages and collects the information of all neighboring mobile sensor nodes for deriving the final range-skyline. So, the centralized approach wastes $10\%$ to $20\%$ more network cost on data collection. Regarding accuracy, each mobile sensor node in the centralized approach does not consider the predicted locations of its neighbor nodes; each sensor node only forwards the collected information to the query point. The resulting final range-skyline may be inaccurate, so we compare the results of the DCRSQ process and the centralized approach with the answer computed on a server to measure precision and recall. Fig.~\ref{fig:dcrsq:sensor_num:precision} and Fig.~\ref{fig:dcrsq:sensor_num:recall} show that both the precision and the recall of the centralized approach are worse than those of the DCRSQ process by $10\%$ to $20\%$. Moreover, the precision and recall of the DCRSQ process are almost $100\%$ when the number of sensor nodes is large.
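As a concrete illustration of this correctness measurement, the short Python helper below compares the answer set of one time interval against the server's ground truth; the aggregation across intervals and queries used in our experiments is more involved, so this is only a sketch. The example sets are taken from the running example of Section~\ref{crsq}, where the predicted answer for $[t_1,t_2)$ was $\{s_1,s_3,s_{11}\}$ while the correct one is $\{s_1,s_8,s_{11}\}$.
\begin{verbatim}
def precision_recall(answer, truth):
    # answer / truth: the node-id sets reported by an approach and by the
    # omniscient server for one time interval of a CRSQ.
    tp = len(answer & truth)
    precision = tp / len(answer) if answer else 1.0
    recall = tp / len(truth) if truth else 1.0
    return precision, recall

print(precision_recall({"s1", "s3", "s11"}, {"s1", "s8", "s11"}))
# -> (0.666..., 0.666...)
\end{verbatim}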
\begin{figure*}[t] \centering \subfigure[Number of Accessed Objects]{\label{fig:dcrsq:sensor_num:access}\includegraphics[width=0.245\textwidth]{./DCRSQ_Sensor_Num_Access.eps}} \subfigure[Number of Messages]{\label{fig:dcrsq:sensor_num:messages}\includegraphics[width=0.245\textwidth]{./DCRSQ_Sensor_Num_Messages.eps}} \subfigure[Precision]{\label{fig:dcrsq:sensor_num:precision}\includegraphics[width=0.245\textwidth]{./DCRSQ_Sensor_Num_Precision.eps}} \subfigure[Recall]{\label{fig:dcrsq:sensor_num:recall}\includegraphics[width=0.245\textwidth]{./DCRSQ_Sensor_Num_Recall.eps}} \caption{Impact of the number of sensor nodes on \subref{fig:dcrsq:sensor_num:access} number of accessed objects, \subref{fig:dcrsq:sensor_num:messages} number of messages, \subref{fig:dcrsq:sensor_num:precision} precision, and \subref{fig:dcrsq:sensor_num:recall} recall} \label{fig:dcrsq:sensor_num} \vspace{-10pt} \end{figure*} \begin{figure*}[t] \centering \subfigure[Number of Accessed Objects]{\label{fig:dcrsq:query_num:access}\includegraphics[width=0.245\textwidth]{./DCRSQ_Query_Num_Access.eps}} \subfigure[Number of Messages]{\label{fig:dcrsq:query_num:messages}\includegraphics[width=0.245\textwidth]{./DCRSQ_Query_Num_Messages.eps}} \subfigure[Precision]{\label{fig:dcrsq:query_num:precision}\includegraphics[width=0.245\textwidth]{./DCRSQ_Query_Num_Precision.eps}} \subfigure[Recall]{\label{fig:dcrsq:query_num:recall}\includegraphics[width=0.245\textwidth]{./DCRSQ_Query_Num_Recall.eps}} \caption{Impact of the number of queries on \subref{fig:dcrsq:query_num:access} number of accessed objects, \subref{fig:dcrsq:query_num:messages} number of messages, \subref{fig:dcrsq:query_num:precision} precision, and \subref{fig:dcrsq:query_num:recall} recall} \label{fig:dcrsq:query_num} \vspace{-10pt} \end{figure*} \subsubsection{DCRSQ: Number of Queries} In this simulation experiment, we are interested in evaluating the performance under different numbers of queries issued simultaneously. We set the number of queries from 1 to 10, which means that there are at most 10 queries within the time period $\varDelta t$ in the simulation. Fig.~\ref{fig:dcrsq:query_num:access} and Fig.~\ref{fig:dcrsq:query_num:messages} show that, as the number of queries increases, the number of messages and the number of accessed objects grow for both approaches. However, the DCRSQ process only needs to access $50\%$ to $70\%$ of the data objects in the derivation compared to the centralized approach. DCRSQ outperforms the centralized approach and saves about $30\%$ of the network cost when the number of queries is smaller than 7. If the number of queries exceeds 7, DCRSQ costs more network messages. The reason is that the neighboring nodes cooperatively process the local results of the CRSQ and some of them store duplicated local results (objects); that is, the query node may receive many duplicated reply messages. Fig.~\ref{fig:dcrsq:query_num:precision} and Fig.~\ref{fig:dcrsq:query_num:recall} show that the DCRSQ process is better than the centralized approach in terms of precision and recall. As the number of queries increases, the DCRSQ process can still achieve almost $90\%$ correctness and outperforms the centralized approach by $12\%$ to $25\%$ in terms of precision and recall, respectively. The reason is that the DCRSQ process does not process the irrelevant data objects, since they have already been filtered by the neighboring mobile nodes.
Thus, a query point only performs dominance checks on objects that are guaranteed candidates for the final range-skyline. The other reason is that each mobile sensor node in the centralized approach just gathers the information of its neighbors and sends it back to the query node for deriving the final range-skyline. Therefore, the query node may have to process a large number of irrelevant data objects, which makes the precision and recall of the centralized approach worse than those of the DCRSQ process. Although DCRSQ still incurs the redundant transmissions mentioned above (see Fig.~\ref{fig:dcrsq:query_num:messages}), such duplicated local results significantly help recover from transmission failures and thus increase the precision and recall of the query result. \subsubsection{DCRSQ: Query Range} In this simulation set, we discuss the results of varying the query range. In Fig.~\ref{fig:dcrsq:query_range:access}, the number of accessed objects in the centralized approach remains around 3,000 for deriving the final range-skyline at the query point, whereas in DCRSQ the number of accessed objects increases only slightly as the query range becomes larger. This means that the query node in DCRSQ can save almost $30\%$ to $80\%$ of the computational cost in comparison with the centralized approach. As shown in Fig.~\ref{fig:dcrsq:query_range:messages}, when the query range increases, the number of messages in the centralized approach stays at about $1.7\times 10^5$ because it always floods messages to the whole sensing area. The DCRSQ process needs much less network cost than the centralized approach when the query range is smaller than 150 meters and performs only slightly worse when the query range is 200 meters. The possible reason is that DCRSQ still spends too many network messages on exchanging information between irrelevant nodes which are very far away from the query node. As for the accuracy shown in Fig.~\ref{fig:dcrsq:query_range:precision} and Fig.~\ref{fig:dcrsq:query_range:recall}, the trends of the centralized approach, in comparison with the previous two measurements, are different. With a wider query range, the recall of the centralized approach drops rapidly by about 40$\%$. Recall that all the neighboring sensor nodes have to report their information to the query node continuously for CRSQ queries in the centralized approach. As the query range becomes larger, more neighboring sensor nodes need to continuously report their information without data pruning. In such a scenario, the query node becomes a bottleneck of the system and thus many messages may be dropped. For this reason, the precision and recall of the centralized approach decrease significantly. On the contrary, the DCRSQ process computes the range-skyline locally, and the local range-skyline candidate sets are continuously sent back to the query node for deriving the final range-skyline. This reduces a large amount of network cost and avoids the bottleneck problem on the query node. Thus, the accuracy of our approach remains above $92\%$.
\begin{figure*}[t] \centering \subfigure[Number of Accessed Objects]{\label{fig:dcrsq:query_range:access}\includegraphics[width=0.245\textwidth]{./DCRSQ_Query_Range_Access.eps}} \subfigure[Number of Messages]{\label{fig:dcrsq:query_range:messages}\includegraphics[width=0.245\textwidth]{./DCRSQ_Query_Range_Messages.eps}} \subfigure[Precision]{\label{fig:dcrsq:query_range:precision}\includegraphics[width=0.245\textwidth]{./DCRSQ_Query_Range_Precision.eps}} \subfigure[Recall]{\label{fig:dcrsq:query_range:recall}\includegraphics[width=0.245\textwidth]{./DCRSQ_Query_Range_Recall.eps}} \caption{Impact of query range on \subref{fig:dcrsq:query_range:access} number of accessed objects, \subref{fig:dcrsq:query_range:messages} number of messages, \subref{fig:dcrsq:query_range:precision} precision, and \subref{fig:dcrsq:query_range:recall} recall} \label{fig:dcrsq:query_range} \vspace{-10pt} \end{figure*} \begin{figure*}[t] \centering \subfigure[Number of Accessed Objects]{\label{fig:dcrsq:trans_range:access}\includegraphics[width=0.245\textwidth]{./DCRSQ_Transmission_Range_Access.eps}} \subfigure[Number of Messages]{\label{fig:dcrsq:trans_range:messages}\includegraphics[width=0.245\textwidth]{./DCRSQ_Transmission_Range_Messages.eps}} \subfigure[Precision]{\label{fig:dcrsq:trans_range:precision}\includegraphics[width=0.245\textwidth]{./DCRSQ_Transmission_Range_Precision.eps}} \subfigure[Recall]{\label{fig:dcrsq:trans_range:recall}\includegraphics[width=0.245\textwidth]{./DCRSQ_Transmission_Range_Recall.eps}} \caption{Impact of transmission range on \subref{fig:dcrsq:trans_range:access} number of accessed objects, \subref{fig:dcrsq:trans_range:messages} number of messages, \subref{fig:dcrsq:trans_range:precision} precision, and \subref{fig:dcrsq:trans_range:recall} recall} \label{fig:dcrsq:trans_range} \vspace{-10pt} \end{figure*} \begin{figure*}[t] \centering \subfigure[Number of Accessed Objects]{\label{fig:dcrsq:node_speed:access}\includegraphics[width=0.245\textwidth]{./DCRSQ_Node_Speed_Access.eps}} \subfigure[Number of Messages]{\label{fig:dcrsq:node_speed:messages}\includegraphics[width=0.245\textwidth]{./DCRSQ_Node_Speed_Messages.eps}} \subfigure[Precision]{\label{fig:dcrsq:node_speed:precision}\includegraphics[width=0.245\textwidth]{./DCRSQ_Node_Speed_Precision.eps}} \subfigure[Recall]{\label{fig:dcrsq:node_speed:recall}\includegraphics[width=0.245\textwidth]{./DCRSQ_Node_Speed_Recall.eps}} \caption{Impact of node speed on \subref{fig:dcrsq:node_speed:access} number of accessed objects, \subref{fig:dcrsq:node_speed:messages} number of messages, \subref{fig:dcrsq:node_speed:precision} precision, and \subref{fig:dcrsq:node_speed:recall} recall} \label{fig:dcrsq:node_speed} \vspace{-10pt} \end{figure*} \subsubsection{DCRSQ: Transmission Range} The transmission range (or sensing range) of each object is also an important factor. We therefore measure the performance with different values of the transmission range. As shown in Fig.~\ref{fig:dcrsq:trans_range:access}, the number of accessed objects in the DCRSQ process is less than that of the centralized approach by up to $80\%$ when the transmission range is $125$ meters. Filtering unnecessary data objects in a distributed way reduces a lot of the messages required for returning the local range-skyline sets to the query point. Fig.~\ref{fig:dcrsq:trans_range:messages} shows that the DCRSQ process outperforms the centralized approach by up to $25\%$ in terms of the number of messages under the different settings of the transmission range.
Unlike the dramatic increase in the number of messages in the centralized approach, the number of messages in the DCRSQ process increases more slowly. The reason is that each mobile node in the centralized approach carries a large number of data objects from neighboring sensor nodes before forwarding them back to the query node. Fig.~\ref{fig:dcrsq:trans_range:precision} and Fig.~\ref{fig:dcrsq:trans_range:recall} show the accuracy of both approaches. For the centralized approach, the precision and recall are under $50\%$ when $r = 50$, and jump to more than $90\%$ if $r$ becomes larger than $100$ meters. The reason is that each mobile sensor node may not successfully transmit information to others when the transmission range becomes too small. In contrast, the DCRSQ process can achieve $98\%$ precision and recall, since it uses the safe time to predict the locations of its neighbors and thus provides more accurate final range-skyline sets. \subsubsection{DCRSQ: Node Speed} The last simulation experiment investigates the effect of the mobile sensor nodes' speed. We vary the maximum node speed from $5$ to $30$ $m/s$ in the simulation. Fig.~\ref{fig:dcrsq:node_speed} indicates that all the trends gradually decrease and that our proposed method, the DCRSQ process, always performs better than the centralized approach. Fig.~\ref{fig:dcrsq:node_speed:access} and Fig.~\ref{fig:dcrsq:node_speed:messages} show that DCRSQ outperforms the centralized approach in terms of the number of accessed objects and the number of messages. Note that each mobile sensor node in the DCRSQ process knows in advance when the neighboring nodes enter and leave the query range. It thus can save almost $20\%$ to $90\%$ of the message cost on query processing and reduce the number of accessed objects for deriving the final skyline result on the query node by $60\%$. In addition, when each mobile node moves faster, the neighbors change more frequently and the wireless connections between mobile sensor nodes become more unstable. Hence, in high-speed scenarios, it is more difficult for the query node to collect sufficient information to derive an accurate result. Fig.~\ref{fig:dcrsq:node_speed:precision} and Fig.~\ref{fig:dcrsq:node_speed:recall} show that both DCRSQ and the centralized approach have similar performance trends on precision and recall. Compared to the centralized approach, DCRSQ achieves a $20\%$ improvement on average in precision and recall. \section{Conclusion} \label{conclusion} In IoMT, very few works discuss RSQ and CRSQ queries while simultaneously considering the following constraints: distributed computing units and databases on different mobile sensor nodes, moving data objects, and mobile queries. We hence propose a Distributed Continuous Range-Skyline Query process (DCRSQ process) for deriving the results of RSQ and CRSQ queries. The main idea of the proposed DCRSQ process is to predict the time, the safe-time, at which the answer of a query changes. We apply such a prediction to each mobile sensor node and query node, so each mobile sensor node can compute the local result more precisely and then provide the local result to the query node for computing the final result. Instead of processing the information of all the neighboring mobile sensor nodes on the query node, the proposed distributed and cooperative approach with safe-time prediction can effectively reduce the computation overhead of the query node.
The performance of the DCRSQ process is also validated by extensive simulation experiments. In some scenarios, the performance of the DCRSQ process is almost 80\% better than that of the centralized approach in terms of the number of accessed objects. The DCRSQ process saves more than 15\% of the network cost in terms of the number of messages in general. In most scenarios, the DCRSQ process outperforms the centralized approach by 10\% to 25\% in accuracy (precision and recall). In this work, we propose a prototype of a distributed query process for CRSQ and simply use 2-dimensional data objects (distance and sensing value) to validate the performance in the simulation. Each sensor node needs to buffer the received query information and then help with the data filtering and local computation until the query expires. Hence, one possible future research direction is to find the relation between the minimum requirements (CPU and memory/storage) of each sensor and a parameterized (report/sense rate, number of sensor nodes, number of queries, etc.) IoT environment. Another possible future research direction is to implement distributed multi-criteria decision services on modern open-source IoT constrained platforms~\citep{Mosquitto} or simulators~\citep{7496514}. Such implementations can help the research and open-source communities with evaluation. In the future, we are going to develop distributed approaches for monitoring different spatial queries in practical drone-assisted IoMT applications~\citep{8642333}. \appendices \ifCLASSOPTIONcompsoc \section*{Acknowledgments} \else \section*{Acknowledgment} \fi This research is partially supported by the Ministry of Science and Technology under the Grants MOST 107-2221-E-027-099-MY2 and MOST 108-2634-F-009-006- through Pervasive Artificial Intelligence Research (PAIR) Labs, Taiwan. \ifCLASSOPTIONcaptionsoff \newpage \fi \bibliographystyle{ieeetr} \bibpreamble \renewcommand{\bibfont}{\footnotesize}
\section{Introduction} Interest in the options market is growing steadily. In the general case, brokers create financial portfolios based on a combination of standard European call and put options, cash, and the underlying asset itself, which are not tied to a particular investor's goal. Sometimes this goal is simply to insure and hedge \cite{Bar2012, Kaur2016}, while in other cases the investor wishes to gain access to cash without currently paying tax \cite{Hull2002}, the manager wants to choose an investment technology \cite{Ju2013}, or the purposes are speculative \cite{Dash2007}. Option structures appeared in the 1990s and became a popular tool of protection against falling prices \cite{Dash2007, Garrett2013, Griffin2007}. Most of the companies' hedges were conducted through option portfolios (three-way collars), which involve selling a call, buying a put, and selling a put \cite{Bar2012, Kaur2016, Griffin2007}. In \cite{Bar2012} a theoretical model of a zero-cost option strategy for hedging sales was demonstrated; in the model, the option prices were evaluated by banks. Seven different cases were studied, and it was shown that the put option was not exercised in any of them. Therefore, it is difficult to speak of a positive effect of hedging. In \cite{Kaur2016} the authors provide an empirical analysis of zero-cost collar option contracts for commodity hedging and an analysis of their financial impact. The authors assessed the option portfolio as a hedging instrument on a quantitative basis using two scenarios in which asset prices change within a certain range. In \cite{Ederington2002} the authors showed that option combinations are very popular on major option markets; the most popularly traded combinations, in order of contract volume, are straddles, ratio spreads, vertical spreads, and strangles. If European options were available with every single possible strike, any \emph{smooth} payoff function could be created \cite{Carr2002}. The authors of \cite{Carr2002} give a decomposition formula for the replication of a certain payoff: it was shown that any twice differentiable payoff function can be written as a sum of the payoffs from a static position in bonds, calls, and puts. Note that the authors did not impose any assumption regarding the stochastic price path. Analytical forms and graphs of the typical payoff profiles of option trading strategies can be found in \cite{Hull2002, Topaloglou2011}. The portfolio optimization problem is a basic problem of financial analysis. In modern portfolio theory, developed by H.~Markowitz, investors attempt to construct portfolios by taking the following alternatives into account: a) the portfolios with the lowest variance corresponding to their preferred expected returns, or b) the portfolios with the highest expected returns corresponding to their preferred variance. The Markowitz optimization is usually carried out using historical data. The objective is to optimize the security weights so that the overall portfolio variance is minimal for a given portfolio return. In this approach, options and structured products have no chance of being included in optimal mean-variance portfolios, but they do have a place in optimal behavioral portfolios \cite{Carr2002}. Theoretical option pricing models generally assume that the underlying asset return follows a normal distribution.
Another possible alternative is to apply certain criteria to the payoff function as well as to the initial cost of a portfolio: for example, a market risk measure \cite{Topaloglou2011, Das2013}, a probabilistic criterion \cite{Kibzun2015}, or a fuzzy goal \cite{Lin2016}. In \cite{Das2013} the authors proposed a method to optimize portfolios without the normal distribution assumption on the portfolio's return. The objective function of the portfolio optimization problem is the expected return, written as an integral of the product of the portfolio weights, the underlying asset returns, and the joint density function of the underlying asset returns. The optimization problem also includes constraints on short sales as well as on the probability of not reaching thresholds. In \cite{Dash2007} the authors described a class of stock and option strategies involving a long or short position in a stock combined with a long or short position in an option. It was found that only the standard deviation, skewness, and kurtosis of the returns distribution of the underlying stock affected the optimal strategy, i.e., the one that yields maximum returns. In \cite{Topaloglou2011} the first four moments (mean, variance, skewness, and kurtosis) were likewise used to approximate the empirical distributions of the returns. The authors of \cite{Topaloglou2011} then stated a multi-asset stochastic portfolio optimization model that incorporates European options, where the portfolio has a multi-currency structure. The objective is to minimize the tail risk of the portfolio's value at the end of the strategy term, $T$; the model uses the Conditional Value-at-Risk (CVaR) metric of tail losses over the strategy term, so the hedge option portfolio is optimal in the sense of the CVaR metric. However, the proposed model depends on the quality of a scenario tree, which must additionally be tested for arbitrage opportunities. The authors of \cite{Topaloglou2011, Das2013} found that optimal behavioral portfolios are likely to include a combination of derivative securities: put options, call options, and other structured products. Moreover, it must be noted that the portfolios might include put options as well as call options on the same underlying assets. In \cite{Davari-Ardakani2016} a model is proposed for constructing a multi-period hedged portfolio that includes European-type options. The model takes the features of long-term investment into account: the risk-aversion level is added to the objective function, and option contracts with different times to maturity are used. In \cite{Kibzun2015} the two-step problem of optimal investment with probability as the optimality criterion was studied. Various distributions of returns were investigated, and it was found that the structure of the optimal investment portfolio is almost identical regardless of the distribution. In \cite{Lin2016} a model for the construction of an option-hedged portfolio was proposed under a fuzzy objective function. The model does not explicitly take transaction costs into account, but the introduced membership functions are aimed at minimizing transaction lots; thus, the authors implicitly tried to reduce the potential transaction costs.
In \cite{Ju2013} a model based on exotic options (lookback call options) was proposed. The authors noted that lookback calls have positive payoffs for both the maximum and the minimum asset prices, and thus have features similar to a portfolio of call and put options. In \cite{Moshenets2016} a computer-based system for identifying informed trader activities in European-style options and their underlying asset was proposed, and a mathematical procedure for monitoring informed trader activity was built. Previous studies considered the set of option prices as a price function of an underlying asset: $(1+\delta)\times S_t$, where $|\delta| \le 0.1$. In contrast, in this paper the optimization is performed over the ask- and bid-prices and their combinations, and we do not use the historical empirical distribution of returns. Options are popular in Europe, the USA, Russia, India, and Taiwan \cite{Kaur2016, Ju2013, Dash2007, Ederington2002, Das2013, Moshenets2016}. In \cite{Ederington2002} the authors collected market data sets and showed that more than 55~percent of the trades of 100~contracts or larger are option combinations, and that such combinations account for almost 75~percent of the trading volume attributable to trades of 100~contracts or larger. In the numerical examples section of this article we will use the prices quoted on the Taiwan Futures Exchange (TAIFEX). According to the Report\footnote{\url{http://www.taifex.com.tw/eng/eng3/eng3_3.asp}}, in 2016 options amounted to almost 70~percent of the total volume of the derivatives market in Taiwan. The purpose of this study is to construct an option strategy with a piecewise linear payoff function. With this in mind, the purpose of this study can be stated as the following problem: How can a personal investor's goal be realized with an option portfolio? In order to answer this, two research questions have been put forward. \begin{itemize} \item \textbf{Q1}: How can the method of establishing the option strategy be described from a theoretical perspective? \item \textbf{Q2}: How can the proposed method be validated on an options market? \end{itemize} The remainder of this paper is organized as follows. In Section~2 we present the main definitions and assumptions which allow us to formulate the model as an optimization problem, including the objective function and the constraints of the option strategy. Section~3 then demonstrates the results of numerical experiments, including the data description and the integer solution of the optimization problem. Finally, Section~4 presents conclusions and future research. \section{Basic Definitions and Method}\label{Method} Option contracts were originally developed and put into circulation in order to reduce the financial risks (hedging, insurance) associated with the underlying assets. In addition to the existing standard option strategies \cite{Hull2002}, trading models oriented to the objectives of a particular trader (speculative trading) \cite{Dash2007} and constructions of synthetic positions \cite{Topaloglou2011} are actively being developed. An \emph{option} is a contract that gives the option holder a right to buy (or sell) the underlying asset. An \emph{option premium} is the amount of money that investors pay for a call or put option. A \emph{call option} is a contract that gives its holder a right, but not the obligation, to purchase at a specified time in the future certain identified underlying assets at a previously agreed price.
A \emph{put option} is a security that gives its holder a right, but not the obligation, to sell at a specified time in the future certain identified underlying assets at a previously agreed price. The \emph{strike price} (exercise price) is defined as the price at which the holder of an option can buy (in the case of a call option) or sell (in the case of a put option) the underlying security when the option is exercised. An \emph{American option} may be exercised at the discretion of the option buyer at any time before the option expires. In contrast, a \emph{European option} can only be exercised on the day the contract expires. A \emph{covered option} involves the purchase of an underlying asset (equity, bond, or currency) and the writing of a call option on that same asset. \emph{Short selling} is the sale of a security that is not owned by a trader, or that a trader has borrowed. A \emph{zero-cost option strategy} is an option trading strategy in which one can take a free option position for hedging or speculating in equity, forex, and commodity markets. In terms of the time to expiration, the main types of option portfolios in real-world applications are American-type options, European-type options \cite{Hajizadeh2016}, and their combinations. The various option combinations represent strategies designed to exploit expected changes of the option values: the price of the underlying asset, its volatility, the time to expiration, the risk-free interest rate, and the cost to enter \cite{Hull2002, Ederington2002}. There is a large number of possible option combinations. When there are only two possible strike prices and two times to expiration, we can design $36$ combinations from one call and one put which may be either bought or sold. This number of combinations increases significantly when any of the option values (strike prices, times to expiration, underlying assets) are expanded even slightly. In this study we propose a strategy which involves European call and put options on the same underlying asset with the same maturity date $T$, but different strikes in a series. Let $K_c=\{k^i_c \in \mathbb{Z}_{>0}, i \in I\}$ and $K_p=\{k^i_p \in \mathbb{Z}_{>0}, i \in I\}$ be the call and put strikes, where $K_c$ and $K_p$ are increasing sequences of positive integers: $$k^i_c<k^{i+1}_c, k^i_p<k^{i+1}_p, \forall i=1, 2, \ldots, n-1,$$ $K=\{K_c \cup K_p\}$ is the set of unique strikes, and $\mathbb{Z}_{>0}=\{x \in \mathbb{Z}: x > 0 \}$ denotes the set of positive integers. Let the numbers of call and put options be $$X_c=\{x_i^c \in \mathbb{Z}: L\le x_i^c \le U,L<0,U>0, i \in I\},$$ $$X_p=\{x_i^p \in \mathbb{Z}: L \le x_i^p \le U,L<0,U>0, i \in I\},$$ with $x_i^c, x_i^p>0$ for buying and $x_i^c, x_i^p<0$ for selling; if $x_i^c$ or $x_i^p$ equals $0$, the corresponding contract is not included in the portfolio. Here $L$ and $U$ represent the lower and upper bounds of the integer search space, respectively, $I=\{1,2, \ldots, n\}$ is the set of indices, $S_t$ is the price of the underlying asset at calendar time $t$, $0 \le t \le T$, and $\hat{S}_T$ is the expected (forecast) price of the underlying asset at the end of the strategy term, $T$ (single period). Prices $S_t, \hat{S}_T \in \mathbb{R}_{>0}$, where $\mathbb{R}_{>0}=\{x \in \mathbb{R}: x > 0 \}$ denotes the set of positive real numbers. We assume that the initial capital $W$ is given and that no funds are added to or extracted from the portfolio for $0 \le t \le T$.
In order to determine the numbers of call and put options $X=\{X_c$, $X_p\}$ for the implementation of the individual investor's goal, we propose the following assumptions that have an impact on the payoff $V(T,X)$ and the initial cost $C(t,X)$ of the portfolio: \begin{itemize} \item (i) the strategy should have protection on the downside and upside of the strike prices, \item (ii) the strategy should effectively limit the upside earnings and the downside risk with a maximal loss, $\mathcal{L}$, \item (iii) the strategy should have a certain initial cost to enter, $C(t,X)$, at time $t=0$. \end{itemize} \subsection{Objective Function of Payoff} To establish the strategy we propose to use a combination of long and short positions in put and call contracts based on the same underlying asset with different strike prices. We consider that one can take a static position (buy-and-hold), and the portfolio can include $x_i^c$, $x_i^p$ units of European call and put options, $i \in I$. Its value at the maturity time $T$ is given by the formula \begin{equation}\label{eqV} V(T,X)= \sum_{i=1}^{n} x_i^c (S_T - k_c^i)^{+} + x_i^p (k_p^i - S_T)^{+}, \end{equation} where the first term is the value of the call option payoffs, the second term is the value of the put option payoffs, and $$X^+=\max(X, 0).$$ Let the best ask- and bid-prices for buying and selling call and put options at time $t=0$ be $$A_c=\{a_c^i \in \mathbb{R}_{>0}, i \in I\}, B_c=\{b_c^i \in \mathbb{R}_{>0}, i\in I\},$$ $$A_p=\{a_p^i \in \mathbb{R}_{>0}, i \in I\}, B_p=\{b_p^i \in \mathbb{R}_{>0}, i \in I\},$$ which we will name the \emph{input constants}: $$b_c^i<a_c^i, b_p^i<a_p^i, i\in I \text{ and}$$ $$a_c^i > a_c^{i+1}, a_p^i < a_p^{i+1}, b_c^i > b_c^{i+1}, b_p^i < b_p^{i+1},$$ $i=1,2, \ldots, n-1$. The initial cost of the portfolio at time $t=0$ can be expressed as \begin{equation}\label{eqC} C(t, X)=\sum_{i=1}^{n} x_i^c \cdot g_c(x_i^c)+ x_i^p \cdot g_p(x_i^p), \end{equation} where the functions $g_c(x_i^c)$ and $g_p(x_i^p)$ are defined as \begin{equation}\label{eq_g_c} g_c(x_i^c) = \begin{cases} a_c^i \in A_c, &\text{if } x_i^c>0, \\ b_c^i \in B_c, &\text{if } x_i^c\le0,\\ \end{cases} \end{equation} \begin{equation}\label{eq_g_p} g_p(x_i^p) = \begin{cases} a_p^i \in A_p, &\text{if } x_i^p>0, \\ b_p^i \in B_p, &\text{if } x_i^p\le0,\\ \end{cases} \end{equation} $i \in I$. Thus, taking into account Eq.~(\ref{eqV}) and Eq.~(\ref{eqC}), the overall profit and loss at time $T$ is the final payoff minus the initial cost. So the objective function can be expressed as \begin{eqnarray}\label{eqF} F(X)=V(T, X)-C(t, X) =\sum_{i=1}^{n} x_i^c\left((\hat{S}_T-k_c^i)^{+} -g_c(x_i^c)\right) +x_i^p\left((k_p^i - \hat{S}_T)^{+} -g_p(x_i^p)\right), \end{eqnarray} which is a linear function of the \emph{decision} variable $X =\{X_c, X_p\}$. The objective functional $F(X)$ maps the entire stochastic process (cash flow) to a single real number, $$F(X): \mathbb{Z}^n \to \mathbb{R}, \text{ where } \mathbb{Z}^n = \mathbb{Z}\times \stackrel{n}{\cdots} \times \mathbb{Z}.$$ \subsection{Selection of Input Parameters} Using the conditional functions $g_c(\cdot)$ and $g_p(\cdot)$ in the objective function Eq.~(\ref{eqF}) leads us to solve a sequence of optimization problems. There are four input parameter values for each pair of call and put options: ($A_c$ or $B_c$) and ($A_p$ or $B_p$). In this case, the number of permutations based on the selection between the alternative prices (ask or bid) and the possible contracts (call or put) equals $N=2^n \times 2^n= 2^{2n}$.
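As a minimal illustration of Eqs.~(\ref{eqV})--(\ref{eqF}), the following Python sketch computes the payoff $V$, the initial cost $C$ with the conditional ask/bid prices $g_c$ and $g_p$, and the overall profit $F$ at the forecast price; all quotes and positions below are hypothetical, not market data.
\begin{verbatim}
def payoff(S_T, k_c, k_p, x_c, x_p):
    # Terminal value V(T, X) of the option positions at asset price S_T.
    return (sum(x * max(S_T - k, 0.0) for x, k in zip(x_c, k_c)) +
            sum(x * max(k - S_T, 0.0) for x, k in zip(x_p, k_p)))

def initial_cost(x_c, x_p, ask_c, bid_c, ask_p, bid_p):
    # C(t, X): long positions pay the ask, short positions receive the bid.
    g_c = [a if x > 0 else b for x, a, b in zip(x_c, ask_c, bid_c)]
    g_p = [a if x > 0 else b for x, a, b in zip(x_p, ask_p, bid_p)]
    return (sum(x * g for x, g in zip(x_c, g_c)) +
            sum(x * g for x, g in zip(x_p, g_p)))

# Hypothetical two-strike example: a bull call spread.
k_c = k_p = [8100, 8200]
ask_c, bid_c = [95.0, 50.0], [90.0, 47.0]
ask_p, bid_p = [55.0, 100.0], [52.0, 96.0]
x_c, x_p = [1, -1], [0, 0]
S_hat = 8300.0
F = (payoff(S_hat, k_c, k_p, x_c, x_p) -
     initial_cost(x_c, x_p, ask_c, bid_c, ask_p, bid_p))
print(F)  # payoff 100 minus cost (1*95 - 1*47) = 52
\end{verbatim}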
In the numerical examples section of this article (Section~\ref{Example}) we will use the ask- and bid-prices quoted on the Taiwan Futures Exchange (TAIFEX). Let $\mathcal{C}$ denote the set of all $2n$-tuples of elements of the given ordered sets of ask- and bid-prices $A_c$, $B_c$, $A_p$, and $B_p$. The set $\mathcal{C}$ can be expressed as \begin{align}\label{setC} \mathcal{C}=\{(x_1, x_2,\dots, x_n, y_1, y_2, \ldots, y_n): x_i=a_c^i \text{ or } b_c^i, y_i =a_p^i \text{ or } b_p^i, i \in I\}. \end{align} Thus $\mathcal{C}=\{c_1, c_2, \ldots, c_{N} \}$ is the set of ordered permutations without replacement of the two $n$-element sets $A_c$, $B_c$ and the two $n$-element sets $A_p$, $B_p$. The calculation of the portfolio's terminal payoff under each price combination takes, in vector notation, the form \begin{align}\label{eqFvector} \max_{X}\{X_c^{\top}\left((\hat{S}_T-K_c)^{+} - G_c(X_c)\right) + X_p^{\top}\left((K_p - \hat{S}_T)^{+} - G_p(X_p)\right)\} =\max_{X}\{F_{\mathcal{C}}(X)\}, \end{align} where $\mathcal{C}$ denotes the set of ordered permutations of the model input constants Eq.~(\ref{setC}), and $G_c(X_c)$, $G_p(X_p)$ are the vector notation of the conditional functions defined in Eq.~(\ref{eq_g_c}) and Eq.~(\ref{eq_g_p}). The objective function Eq.~(\ref{eqFvector}) maximizes the option payoff over the holding period $[0, T]$. \subsection{System of Constraints} Each objective function $F_{\mathcal{C}}(X)$ from the set~Eq.~(\ref{eqFvector}) is a piecewise linear function. Taking into account the assumptions mentioned in Section~\ref{Method}, we should determine the slope of the objective function Eq.~(\ref{eqFvector}) on the intervals between unique strikes. We separately investigate the intervals $0\le S_T \le k_1$, $k_1\le S_T\le k_2$, $\ldots$, $k_m\le S_T<+\infty$, where $k_1=\min(K_c, K_p)$ is the smallest strike and $k_m=\max(K_c, K_p)$ is the largest strike. The horizontal (zero) slope of the function (\ref{eqFvector}) on the first closed interval $[0, k_1]$ and on the left-closed interval $[k_m, +\infty)$ is specified respectively by: \begin{eqnarray}\label{eq1} \sum_{i=1}^n x_i^p&=&0, \mbox{ if } S_T \in [0, k_1], \\ \sum_{i=1}^n x_i^c&=&0, \mbox{ if } S_T \in [k_m, +\infty), \end{eqnarray} since only the put (respectively, call) positions contribute to the slope below (above) all strikes. Positive and negative slopes of the function (\ref{eqFvector}) on the interior intervals $[k_q, k_{q+1}]$ are provided by: \begin{equation}\label{eq2} \sum_{i: k_c^i \leq k_q } {x_i^c} -\sum_{j: k_p^j \geq k_{q+1} } {x_j^p} \text{ is } \begin{cases} \geq 0, &\text{if $k_q \le k$}, \\ \leq 0, &\text{if $k_q > k$}, \end{cases} \end{equation} where $k \in K$ is an inflection point of the function (\ref{eqFvector}). The next balance constraint bounds the downside risk with a maximal loss, $\mathcal{L}$, over the holding period $0\le t \le T$, attained when $S_T$ falls on the flat tails of the payoff outside the strike range: \begin{equation}\label{eq3} V(T, X)= -\mathcal{L}. \end{equation} The payoff at the terminal time $T$, evaluated at the forecast price, must be positive: \begin{equation}\label{eq4} V(T, X)>0, S_T=\hat{S}_T. \end{equation} The model also has liquidity constraints: we assume that an investor can buy at most $U$ and sell at most $|L|$ contracts at each strike price $k_c^i, k_p^i \in K$: \begin{equation}\label{eq5} L \le x_i^c, x_i^p \le U, L <0, U > 0, i \in I. \end{equation} The inflection point $k$ introduced in Eq.~(\ref{eq2}), the maximal loss $\mathcal{L}$, and the expected (forecast) price $\hat{S}_T$ in Eq.~(\ref{eq4}) should be specified by the investor.
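To make the slope conditions concrete, the small Python helper below computes the payoff slope on each interval between consecutive unique strikes, which is exactly the left-hand side of Eq.~(\ref{eq2}); the strikes and positions in the usage line are hypothetical.
\begin{verbatim}
def interval_slopes(k_c, k_p, x_c, x_p):
    # Slope on [k_q, k_{q+1}]: calls struck at or below k_q contribute +x,
    # puts struck at or above k_{q+1} contribute -x (cf. Eq. for slopes).
    ks = sorted(set(k_c) | set(k_p))
    slopes = []
    for q in range(len(ks) - 1):
        s = (sum(x for x, k in zip(x_c, k_c) if k <= ks[q]) -
             sum(x for x, k in zip(x_p, k_p) if k >= ks[q + 1]))
        slopes.append(((ks[q], ks[q + 1]), s))
    return slopes

print(interval_slopes([8100, 8200], [8100, 8200], [1, -1], [0, 0]))
# -> [((8100, 8200), 1)]  # a bull call spread rises between the strikes
\end{verbatim}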
Thus, to address research question \textbf{Q1}, we formulated the option portfolio selection problem~(\ref{eqFvector}) as an integer linear program, subject to the portfolio constraints~(\ref{eq1})--(\ref{eq5}). We assume that an investor can use the money received from the sale of some contracts to buy other contracts in the portfolio; then the initial cost of the portfolio $C(t, X)$ Eq.~(\ref{eqC}) can be positive, zero, or even negative. In Section~\ref{Example} we present a series of numerical experiments for these three cases. Thus, the system of constraints~(\ref{eq1})--(\ref{eq5}) can be (optionally) extended with a constraint on the initial cost of the portfolio: $$C(t, X) \gtreqqless 0.$$ Another series of numerical experiments will be conducted to determine the sensitivity of the solution to the liquidity constraints Eq.~(\ref{eq5}). \begin{figure*}[t] \centering \includegraphics[width=0.95\textwidth]{fig1.jpg}\\ \caption{\textbf{Option Snapshot Quotes of the Taiwan Futures Exchange at May 16, 2016.}}\label{fig1} \end{figure*} \section{Data Collection and Processing}\label{Example} To address research question \textbf{Q2} we apply the optimization problem (\ref{eqFvector})--(\ref{eq5}) and construct an option strategy with a certain payoff function on the derivatives of the Taiwan Futures Exchange. All options in the Taiwan market are European-style. The expiration periods of TXO options comprise the spot month, the next two months, and the next two quarterly calendar months\footnote{\url{http://www.taifex.com.tw/eng/eng4/Calendar.asp}}. We design the option portfolio from the daily closing ask- and bid-prices for TXO options\footnote{\url{http://www.taifex.com.tw/eng/eng2/TXO.asp}}. We select TXO options because they are liquid assets and comprise more than $60\%$ of the trading volume of TAIFEX. Here we have taken a single date, May 16, 2016, selected at random, to illustrate the portfolio design in practice. The strikes and the ask- and bid-prices of options are the required inputs to the portfolio optimization model. The available TXO prices are denominated in New Taiwan Dollars (NTD). The TAIFEX index closed at $8{,}067.60$ on May 16, 2016 (Fig.~\ref{fig1}), and the May options contracts expired $9$~days later, on May 25, 2016. The price of the underlying asset equals $S_0=8{,}067.60$~NTD on May 16, 2016. Next, we consider two cases for the expected price, $\hat{S}_T$: suppose that the price moves up significantly, to either 1) $\hat{S}_T = 8{,}300.00$~NTD or 2) $\hat{S}_T = 8{,}400.00$~NTD on May 25, 2016. The investor then wants to enter the position at a cost of $C(t,X)=100$~NTD at the time of purchase, $t=0$, and to limit the maximum loss to $\mathcal{L}=-100$~NTD if the price of the underlying asset leaves the range from $8{,}000$ to $8{,}400$~NTD. The margin requirement for short positions and transaction costs are not accounted for.
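The enumeration-plus-ILP scheme formulated above is straightforward to prototype. The following Python fragment is a simplified sketch with hypothetical quotes: it keeps only the zero-slope equalities of Eq.~(\ref{eq1}) and the liquidity bounds of Eq.~(\ref{eq5}), omitting the remaining constraints, and solves one integer subproblem per ask/bid assignment with SciPy's \texttt{milp}. Restricting the sign of $x_i$ to match the chosen price keeps the conditional functions $g_c$, $g_p$ linear in each subproblem:
\begin{verbatim}
# Simplified sketch of the enumeration over ask/bid assignments; toy n = 2.
import itertools
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

K_c, K_p = np.array([8150.0, 8250.0]), np.array([7950.0, 8050.0])
ask = np.array([120.0, 70.0, 50.0, 90.0])   # (calls, puts) ask quotes
bid = np.array([110.0, 62.0, 45.0, 84.0])   # (calls, puts) bid quotes
S_hat, L_, U_ = 8300.0, -10, 10             # forecast price, liquidity bounds

payoff = np.concatenate([np.maximum(S_hat - K_c, 0.0),
                         np.maximum(K_p - S_hat, 0.0)])
A_eq = np.array([[1.0, 1.0, 0.0, 0.0],      # total calls sum to zero
                 [0.0, 0.0, 1.0, 1.0]])     # total puts sum to zero

best = (-np.inf, None)
for choice in itertools.product([True, False], repeat=4):
    buy = np.array(choice)                  # True: priced at ask, x >= 0
    c = payoff - np.where(buy, ask, bid)    # linear F(X) coefficients
    lb = np.where(buy, 0, L_).astype(float) # sign restriction keeps g valid
    ub = np.where(buy, U_, 0).astype(float)
    res = milp(-c,                          # milp minimises, so negate F(X)
               constraints=LinearConstraint(A_eq, [0, 0], [0, 0]),
               integrality=np.ones(4), bounds=Bounds(lb, ub))
    if res.success and -res.fun > best[0]:
        best = (-res.fun, res.x)
print("max F(X) =", best[0], "at X =", best[1])
\end{verbatim}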
\begin{table}[t] \centering \caption{Optimal portfolios with the different initial costs, $C$, and the expected price $\hat{S}_T$, NTD} \label{tab1} \begin{tabular}{crrrrrrrrrrrr} \hline \multicolumn{1}{l}{} & \multicolumn{6}{c}{$\hat{S}_T=8400$} & \multicolumn{6}{c}{$\hat{S}_T=8300$} \\ \hline \multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}Strike \\ Price\end{tabular}} & \multicolumn{2}{c}{$C=100$} & \multicolumn{2}{c}{$C=-100$}& \multicolumn{2}{c}{$C=0$} & \multicolumn{2}{c}{$C=100$} & \multicolumn{2}{c}{$C=-100$} & \multicolumn{2}{c}{$C=0$} \\ \cline{2-13} & \multicolumn{1}{c}{Call} & \multicolumn{1}{c}{Put} & \multicolumn{1}{c}{Call} & \multicolumn{1}{c}{Put} & \multicolumn{1}{c}{Call} & \multicolumn{1}{c}{Put} & \multicolumn{1}{l}{Call} & \multicolumn{1}{l}{Put} & \multicolumn{1}{l}{Call} & \multicolumn{1}{l}{Put} & \multicolumn{1}{l}{Call} & \multicolumn{1}{l}{Put} \\ \hline 7850 & & 0 & & 0 & & 0 & & 0 & & 0 & & 0 \\ 7950 & & 0 & & 0 & & 0 & & 0 & & 0 & & 0 \\ \textbf{8050} & 4 & -3 & 7 & -6 & 3 & -3 & 7 & -7 & 3 & -2 & 5 & -5 \\ 8150 & -8 & 8 & -10 & 9 & 1 & 2 & -8 & 9 & -2 & 4 & -4 & 6 \\ 8250 & 10 & -5 & 4 & 0 & -5 & 6 & 4 & 3 & 0 & 0 & 0 & 4 \\ 8350 & -8 & 0 & -3 & -3 & -5 & -5 & -3 & -5 & -1 & -2 & -5 & -5 \\ 8400 & -5 & & -2 & & 2 & & -9 & & -7 & & -2 & \\ 8500 & 7 & & 4 & & 4 & &9 & & 7 & & 6 & \\ \hline \multicolumn{1}{l}{$\max\limits_{X}\{F_{\mathcal{C}}(X)\}$} & \multicolumn{2}{c}{\textbf{700}} & \multicolumn{2}{c}{400} & \multicolumn{2}{c}{600} & \multicolumn{2}{c}{\textbf{400}} & \multicolumn{2}{c}{250} & \multicolumn{2}{c}{300} \\ \hline \multicolumn{1}{l}{Total number of contracts}& \multicolumn{2}{c}{58} & \multicolumn{2}{c}{48} & \multicolumn{2}{c}{36} & \multicolumn{2}{c}{64} & \multicolumn{2}{c}{28} & \multicolumn{2}{c}{42} \\ \hline \end{tabular} \end{table} \begin{table}[] \centering \caption{\textbf{Optimal portfolios with the different liquidity constraints, $|L|=U$, contracts, and the expected price $\hat{S}_T$, NTD}} \label{tab2} \begin{tabular}{crrrrrrrrrrrr} \hline \multicolumn{1}{l}{} & \multicolumn{6}{c}{$\hat{S}_T=8400$} & \multicolumn{6}{c}{$\hat{S}_T=8300$} \\ \hline \multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}Strike \\ Price\end{tabular}} & \multicolumn{2}{c}{$|L|=10^{ a)}$} & \multicolumn{2}{c}{$|L|=50$} & \multicolumn{2}{c}{$|L|=100$} & \multicolumn{2}{c}{$|L|=10^{ a)}$} & \multicolumn{2}{c}{$|L|=50$} & \multicolumn{2}{c}{$|L|=100$} \\ \cline{2-13} & \multicolumn{1}{c}{Call} & \multicolumn{1}{c}{Put} & \multicolumn{1}{c}{Call} & \multicolumn{1}{c}{Put} & \multicolumn{1}{c}{Call} & \multicolumn{1}{c}{Put} & \multicolumn{1}{l}{Call} & \multicolumn{1}{l}{Put} & \multicolumn{1}{l}{Call} & \multicolumn{1}{l}{Put} & \multicolumn{1}{l}{Call} & \multicolumn{1}{l}{Put} \\ \hline 7850 & & 0 & & 0 & & 0 & &0 & &0 & &0 \\ 7950 & & 0 & & 0 & & 0 & &0 & &0 & &0 \\ \textbf{8050} & 4 & -3 & -24 & 24 & -51 & 51 & 7& -7& -24& 24& -50& 50 \\ 8150 & -8 & 8 & 50 & -50 & 100 & -100 & -8& 9& 47& -47& 100& -100 \\ 8250 & 10 & -5 & -11 & 30 & -4 & 49 & 4& 3& -4& 22& -12& 50 \\ 8350 & -8 & 0 & -27 & -4 & -89 & 0 & -3& -5& -21& 1& -40& 0 \\ 8400 & -5 & & -1 & & 21 & & -9& & -15& & -35& \\ 8500 & 7 & & 13 & & 23 & & 9& & 17& & 37& \\ \hline \multicolumn{1}{l}{$\max\limits_{X}\{F_{\mathcal{C}}(X)\}$}& \multicolumn{2}{c}{700} & \multicolumn{2}{c}{1800} & \multicolumn{2}{c}{4400} & \multicolumn{2}{c}{400} & \multicolumn{2}{c}{800} & \multicolumn{2}{c}{1800} \\ \hline \multicolumn{1}{l}{Total number of contracts}& \multicolumn{2}{c}{58} & \multicolumn{2}{c}{234} & 
\multicolumn{2}{c}{488} & \multicolumn{2}{c}{64} & \multicolumn{2}{c}{222} & \multicolumn{2}{c}{474} \\ \hline \multicolumn{13}{l}{$^{a)}$ The column $|L|=10$ corresponds to the column $C=100$ of Table~\ref{tab1}.} \end{tabular} \end{table} To establish the proposed strategy we use $12$ strike prices: $n=6$ sequential strike prices correspond to calls, $$K_c=(\textbf{8050}, 8150, 8250, 8350, 8400, 8500),$$ and $n=6$ sequential strike prices correspond to puts, $$K_p=(7850, 7950, \textbf{8050}, 8150, 8250, 8350),$$ with the same expiration date May~25, 2016. The central strike of the option, $K=8050$, is marked in bold. The number of combinations of ask- and bid-prices for the $12$ options is equal to $2^n \times 2^n =2^6 \times 2^6 = 4096$; thus the cardinality of the set of price combinations is $|\mathcal{C}|=4096$. The cardinality of the set $|K|=|K_p \cup K_c|$ is $8$; therefore, we have $7$ pairs of sequential strike prices $[k_q, k_{q+1}]$, these pairs produce the system of $7$ inequalities from Eq.~(\ref{eq2}), and we add the two equalities on the first closed interval and the left-closed interval from Eq.~(\ref{eq1}). In our example, we assumed that one can buy or sell at most $L=-10$, $U=10$ contracts at each strike price. This assumption does not limit our approach because the total volume (\emph{TtlVol}, Fig.~\ref{fig1}) is larger for all strike prices. Then we take the price for each call and put in accordance with the specific strike (Fig.~\ref{fig1}). Next, we maximized the objective function $F_\mathcal{C}(X)$, proposed in Eq.~(\ref{eqFvector}), with the system of constraints~(\ref{eq1})--(\ref{eq5}). The optimal portfolio was obtained in approximately two minutes on a personal computer, using the MATLAB software. As a result, the optimum solution in the case $\hat{S}_T=8400$, $C=100$ is $$X=(\underbrace{4,-8,10,-8,5,7}_{\text{call}}, \underbrace{0,0,-3,8,-5,0}_{\text{put}}),$$ where the first six elements correspond to call options and the second six to put options; the total number of contracts is 58, of which 34 are bought and 24 are sold, with an objective function value equal to 700~NTD (bold in Table~\ref{tab1}). The optimum solution in the case $\hat{S}_T=8300$, $C=100$ is $$X=(\underbrace{7,-8,4,-3,-9,9}_{\text{call}}, \underbrace{0,0,-7,9,3,-5}_{\text{put}});$$ the total number of contracts is 64, of which 32 are bought and 32 are sold, with an objective function value equal to 400~NTD (bold in Table~\ref{tab1}). From Table~\ref{tab1} one can see that the maximum values of the payoff function are achieved at $C = 100$~NTD, and the minimum values at $C = -100$~NTD. At the end of the strategy term, May 25, 2016, the price of the underlying index had increased to $S_T=8{,}396.20$~NTD. The forecast came true, and in any case the amount of loss would have been limited by the maximal loss $\mathcal{L}=-100$~NTD. \subsection{The Sensitivity of the Solution to the Constraints Variation} The initial cost $C(t, X)$ Eq.~(\ref{eqC}) can be positive, zero, or even negative. We calculated the alternative portfolios with the different initial costs $C(t,X)=\{-100, 0, 100\}$ with fixed liquidity constraints $|L|=U=10$~Eq.~(\ref{eq5}), and then the values of the liquidity constraints were varied, $|L|=U=\{10, 50, 100\}$, with the fixed initial cost $C(t, X)=100$; the results are presented in Tables~\ref{tab1},~\ref{tab2}. Table~\ref{tab2} shows a strong dependence: as the liquidity bound
$|L|=U$ increases, the maximum value of the payoff function grows as well. \begin{figure}[t] \centering \includegraphics[width=1.0\textwidth]{fig2.jpg} \caption{\textbf{Payoff functions of proposed option's portfolios, underlying $S_0=8{,}067.60$~NTD. Different initial costs $C(t,X): 100, 0, -100$~NTD: a) $\hat{S}_T = 8{,}400.00$~NTD, b) $\hat{S}_T = 8{,}300.00$~NTD. Different liquidity constraints $L$, $U$, contracts: 10, 50, 100: c) $\hat{S}_T = 8{,}400.00$~NTD, d) $\hat{S}_T = 8{,}300.00$~NTD.}}\label{fig2} \end{figure} Fig.~\ref{fig2} shows the payoff functions of the proposed option portfolios, taking into account the different values of: a), b)~the initial cost $C(t, X)$, Eq.~(\ref{eqC}), and c), d)~the liquidity constraints $L$, $U$, Eq.~(\ref{eq5}), respectively. Strike prices are on the $x$-axis and the payoff functions on the $y$-axis. The expansion of the boundaries of the liquidity constraints $|L|=U \in \{10,50,100\}$ makes the option portfolio more attractive from the point of view of the terminal payoff amount. On the other hand, there is the difficulty of forming the strategy in view of buying/selling a sufficiently large number of contracts, a transaction which is difficult to implement within a short time. \section{Conclusion and Future Research} In this study, we have described a method for constructing an option strategy with a piecewise linear payoff function. We take into account the following set of assumptions for the proposed option strategy: the strategy should effectively limit the upside earnings and downside risks; the strategy should have an initial cost to enter; the strategy should have protection on the downside and upside of the underlying asset price. To address research question \textbf{Q1} we have formulated the mathematical model as an integer linear programming problem which includes a system of constraints in the form of equalities and inequalities. The optimum solution was obtained using the MATLAB software. To address research question \textbf{Q2} we have demonstrated the applicability of the proposed model on the European-style TXO options of the Taiwan Futures Exchange. In this study we do not use the historical empirical distribution of returns. Our approach is static, quite general, and has the potential to design option portfolios on financial markets. In option strategies, in addition to the forecast of the price of the underlying asset, various parameters can be taken into account: exercise price, volatility, time to expiration, the risk-free interest rate, option premium, and transaction costs. In this case, even an insignificant change of the parameter values can lead to a significant change in the number of possible combinations of option strategies. In our numerical experiments the total number of contracts for the different cases varies from 28 to 64 (Table~\ref{tab1}) and from 58 to 488 (Table~\ref{tab2}). In papers \cite{Davari-Ardakani2016, Goyal2007, Primbs2009}, it is noted that transaction costs in the dynamic management of a portfolio of options are one of the key factors without which it is impossible to assess the feasibility of using the proposed models. The use of option strategies, including covered options, leads to deformation of the initial distribution of returns: it becomes truncated and asymmetric. The payoff of the option portfolio is asymmetric and non-linear; therefore, from the point of view of risk management,
the use of a portfolio with various option contracts is preferable and effective, but the problem of choosing an optimal portfolio becomes significantly more complicated. In this paper, we used European-type options of a single series only. Pricing and valuation of the American option, even the single-asset option, is a hard problem in quantitative finance \cite{Hajizadeh2016, Mitchell2014}. One possible approach to pricing and accounting for the early exercise of the American option is dynamic programming. Further research can be continued in the following directions. First, portfolio optimization under transaction costs (exchange commissions, brokerage fees) and the margin requirement for short positions, which are essential in the options market. Second, the use of options with different times to expiration. Third, extending the system of constraints with a budget constraint. Such extensions would make the proposed approach more realistic and flexible. \section{Acknowledgments} Thanks to the editor and referees for several comments and suggestions that were instrumental in improving the paper. We are grateful to Dr. Sergey V.~Kurochkin (National Research University Higher School of Economics, Russia) and Mr.~Ashu Prakash (Indian Institute of Technology, Kanpur, India) for valuable comments and suggestions that improved the work and resulted in a better presentation of the material. \providecommand*{\BibDash}{---} \bibliographystyle{gost2003}
\section{Introduction} \label{sec:mot} Quantum Chromodynamics predicts objects composed entirely of valence gluons, so-called {\em glueballs}. Their existence, however, has not been confirmed experimentally to the present day. One of the goals of the COMPASS experiment~\cite{com07} at CERN is to study the existence and signatures of glueballs, in continuation of the efforts that were made at the CERN Omega spectrometer in the late 1990s~\cite{kir97}. Since Pomerons are considered to have no valence quark contribution, Pomeron-Pomeron fusion was proposed to be well suited for the production of glueballs. This process can be realised in a fixed-target experiment by the scattering of a proton beam on a proton target, where a system of particles is produced centrally (cf. Figure~\ref{fig:cp}). \begin{figure}[ht] \begin{minipage}{.55\textwidth} \begin{center} \vspace{.5cm} \includegraphics[width=\textwidth]{central_production.png} \vspace{-.5cm} \caption{\em Central production} \label{fig:cp} \end{center} \end{minipage} \begin{minipage}{.44\textwidth} \begin{center} \includegraphics[width=\textwidth]{xf_15_20.pdf} \caption[{\em Feynman $x_F$}]{\em Feynman $x_F$ distributions}\label{fig:xf} \end{center} \end{minipage} \end{figure} Approximately $30\%$ of the total COMPASS data recorded with a $190\,\textrm{GeV}/c$ proton beam impinging on a liquid hydrogen target was used for the presented analysis. In order to separate the central $\pi^+\pi^-$ system from the fast proton $p_f$, a cut on the invariant mass combinations $M(p\pi) > 1.5\,\mathrm{GeV}/c^2$ was introduced. The effect of this kinematic cut on the Feynman $x_F$ distributions for the fast (yellow) and the slow proton (red) as well as the di-pion system (blue) is shown in Figure~\ref{fig:xf}. The $\pi^+\pi^-$ system lies within $\left \vert x_F \right \vert \le 0.25$ and can therefore be considered as centrally produced. However, central production comprises several production processes, their characteristic signature being the different dependence of the cross section on the centre-of-mass energy $\sqrt s$ of the reaction. While Pomeron-Pomeron scattering should be only weakly dependent on $s$, Pomeron-Reggeon scattering is predicted to scale with $1/\sqrt s$ and Reggeon-Reggeon scattering with $1/s$~\cite{kle07}. This behaviour can be observed experimentally, e.g. by comparing the mass spectra obtained at COMPASS energy to those the Omega spectrometer~\cite{arm91} obtained at different energies (cf. Figure~\ref{fig:two}). \begin{figure}[ht] \begin{center} \subfigure[{\em $85\,\mathrm{GeV}/c$}~\cite{arm91}]{\includegraphics[width=.2\textwidth]{pippim_85.png}} \subfigure[{\em $190\,\mathrm{GeV}/c$}]{\includegraphics[width=.4\textwidth]{M.pdf}} \subfigure[{\em $300\,\mathrm{GeV}/c$}~\cite{arm91}]{\includegraphics[width=.2\textwidth]{pippim_300.png}} \end{center} \caption[{\em $\pi^+\pi^-$ Invariant Mass}]{\em Invariant mass of the $\pi^+\pi^-$ system for three beam energies} \label{fig:two} \end{figure} The dominant features of the $\pi^+\pi^-$ invariant mass distribution, the $\rho$(770), the $f_2$(1270) and the sharp drop in intensity in the vicinity of the $f_0$(980), can be observed at all three different centre-of-mass energies. The relative yield of $\rho$-production decreases rapidly with increasing $\sqrt s$, which is explained by a Reggeised pion-pion contribution ($\approx 1/s^2$).
On the other hand, the enhancement at low masses as well as the $f_0$(980) remain practically unchanged, a fact that is characteristic of $s$-independent Pomeron-Pomeron scattering. \section{Partial-Wave Analysis} The partial-wave analysis has been performed assuming that the $\pi^+\pi^-$ system is produced by the collision of two particles emitted by the scattered protons. They are also referred to as exchanged particles, and carry the squared four-momentum transfer $t_1$ from the beam proton and $t_2$ from the target proton, respectively. We make the strong assumption that $t_1$ only transmits the helicity $\lambda = 0$. In this limit, $t_1$ can be seen as a vacuum-like Pomeron with $J^{PC}=0^{++}$, which is treated like an {\em external particle}. The beam and the fast outgoing proton $p_f$ are irrelevant in this picture. If we accept the space-like Pomeron as the incoming beam, we can construct a Gottfried-Jackson frame~\cite{got64} for the di-pion system. Hence, the exchanged particle $t_1$ in the centre-of-mass frame of the $\pi^+\pi^-$ system defines the $z$-axis for the reaction. We have to emphasise that we fixed the choice to $t_1$, in contrast to previous experiments (e.g. \cite{bar99}), to be able to correct for the different $t$ acceptances of the trigger. The $y$-axis of the right-handed coordinate system is defined by the cross product of the momentum vectors of the two exchange particles in the $pp$ centre-of-mass. Two variables specify the decay process, namely the polar and azimuthal angles ($\cos(\theta), \phi$) of the $\pi^-$ in the di-pion centre-of-mass frame relative to these axes. Figure~\ref{fig:pwa} illustrates the distribution of these decay variables as a function of the di-pion mass. \begin{figure}[ht] \subfigure[{\em $\cos(\theta)$}]{\includegraphics[width=.45\textwidth]{costheta.pdf}} \subfigure[{\em $\phi$}]{\includegraphics[width=.45\textwidth]{phi.pdf}} \caption{\em Decay variables as a function of the $\pi^+\pi^-$ mass} \label{fig:pwa} \end{figure} The decay amplitudes for a two-pseudoscalar system with relative angular momentum $\ell$ and its projection on the quantisation axis $m$ are given by the spherical harmonics $Y^\ell_m(\theta, \phi)$ in spherical coordinates. To profit from the fact that the strong interaction conserves parity, a reflection operator was introduced in \cite{chu75} as a parity operator followed by a rotation around the $y$-axis such that all momenta relevant to the production process are recovered. Introducing a quantum number $\varepsilon$ which is called {\em reflectivity} and which can have the values $\pm 1$, the eigenstates of this reflection operator can be constructed as: \begin{equation} Y^{\varepsilon \ell}_m(\theta, \phi) \equiv c_m \left [ Y^\ell_m (\theta , \phi) - \varepsilon (-1)^m Y^\ell_{-m}(\theta, \phi) \right ] \label{eq:eigen} \end{equation} with a normalisation constant $c_m$. The two classes of states $\varepsilon = \pm1$ correspond to different production processes in the asymptotic limit~\cite{got64} and can consequently not interfere.
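As a concrete illustration, the reflectivity eigenstates of Eq.~(\ref{eq:eigen}) are easy to evaluate numerically. The Python sketch below uses SciPy's spherical harmonics; note that \texttt{scipy.special.sph\_harm} takes the azimuthal angle first and the polar angle second, and the normalisation $c_m$ (left generic in the text) is taken here as $1/\sqrt{2}$ for $m>0$ and $1/2$ for $m=0$, which is one common convention:
\begin{verbatim}
# Minimal sketch of the reflectivity basis; the c_m convention is an
# assumption, see the accompanying text.
import numpy as np
from scipy.special import sph_harm

def Y_refl(eps, l, m, theta, phi):
    """Reflectivity eigenstate Y^{eps l}_m(theta, phi), eps = +1 or -1."""
    c_m = 0.5 if m == 0 else 1.0 / np.sqrt(2.0)
    # sph_harm(order m, degree l, azimuthal angle, polar angle)
    return c_m * (sph_harm(m, l, phi, theta)
                  - eps * (-1) ** m * sph_harm(-m, l, phi, theta))

# Example: D-wave (l = 2), m = 1, negative reflectivity
print(Y_refl(-1, 2, 1, np.deg2rad(60.0), np.deg2rad(30.0)))
\end{verbatim}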
With the complex {\em transition amplitudes} $T_{\varepsilon \ell m}$ and an explicit incoherent sum over the reflectivities, the intensity in mass bins can be expanded in terms of partial waves: \begin{equation} I(\theta, \phi) = \sum_\varepsilon \left \vert \sum_{\ell m} T_{\varepsilon \ell m}Y^{\varepsilon \ell}_m(\theta, \phi) \right \vert^2 \end{equation} We adopt the notation $J^\varepsilon_m$ from \cite{bar99} to construct the basic wave set $\mathbf{S^-_0}, P^-_0, P^-_1, D^-_0, D^-_1$ and $\mathbf{P^+_1}, D^+_1$. Since the overall phase for each reflectivity is indeterminate, one wave in each class can be defined real, which means the imaginary part of that transition amplitude is fixed to zero. These so-called {\em anchor} waves are emphasised in bold font. An {\em extended likelihood} fit in $20\,\mathrm{MeV}/c^{2}$ mass bins is used to find the parameters $T_{\varepsilon \ell m}$, such that the acceptance corrected model $I(\theta, \phi)$ matches the measured data best. \section{Ambiguities} As shown in \cite{sad91,chu97}, the angular dependency of the intensity $I(\Omega)$ for two-pseudoscalar final states can generally be expressed in terms of $|G(u)|^2$ by the introduction of a variable $u \equiv \tan(\theta/2)$. The function $G(u)$ is a complex polynomial of order $2\ell$, where $\ell$ is the highest considered spin in the system. This polynomial can be factorised in terms of its complex roots $u_k$, the so-called {\em Barrelet-zeros}~\cite{bar72}. Since the function $G(u)$ only enters as an absolute square in the expression for the angular distribution, the complex conjugate of a root $u_k^*$ is an equally valid solution. This means that there are in general $2^{2\ell-1}$ different ambiguous solutions which result in exactly the same angular distribution. This ambiguity has to be resolved by physical arguments. The system of $S$, $P$ and $D$ waves used in the presented analysis has eight ambiguous solutions. By using different random starting values for the fit to the same data and MC sample, it was shown experimentally that very different solutions can be obtained. However, the fitted production amplitudes $T_{\varepsilon \ell m}$ for one single attempt can be used to calculate all eight solutions analytically~\cite{chu97}. In order to find the complex polynomial roots numerically, {\em Laguerre's method} was applied. Figure~\ref{fig:roots} illustrates the real and imaginary parts of these four roots for all mass bins, where a sorting depending on the real part has been performed. They are well separated from each other and do not require a special linking procedure. The imaginary parts do not cross the real axis, hence bifurcation of the solutions does not pose a problem and the solutions can be uniquely identified. \begin{figure}[t] \begin{center} \subfigure[]{ \includegraphics[width=.45\textwidth]{re_nice.pdf} } \subfigure[]{ \includegraphics[width=.45\textwidth]{im_nice.pdf} } \end{center} \caption[{\em Barrelet Zeros}]{{\em The (a) real and (b) imaginary parts of the Barrelet-zeros as a function of the $\pi^+\pi^-$ mass}} \label{fig:roots} \end{figure} By fixing $u_1$ and allowing $u_{2,3,4}$ to undergo complex conjugation, the entire set of eight ambiguous solutions is calculated. For six solutions, most of the intensity is formed by one single wave. This is clearly unphysical, since we know of at least one resonance in each of the spins $0$, $1$ and $2$. The choice among the remaining solutions is not evident; the intensity and phase distributions are very similar.
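The conjugation bookkeeping is simple to reproduce. The sketch below (Python, with a toy degree-4 polynomial standing in for the fitted $G(u)$, i.e. $\ell=2$) finds the Barrelet-zeros and generates all eight ambiguous solutions; NumPy's root finder uses the companion matrix rather than Laguerre's method, but any reliable polynomial root finder serves:
\begin{verbatim}
# Minimal sketch of generating the 2^(2l-1) = 8 ambiguous solutions
# from one fitted solution; the coefficients are toy values.
import itertools
import numpy as np

coeffs = np.array([1.0, 0.3 - 0.2j, -0.5 + 0.1j, 0.2j, 0.05])  # toy G(u)
roots = np.roots(coeffs)               # Barrelet-zeros u_1..u_4
roots = roots[np.argsort(roots.real)]  # sort by real part for bookkeeping

solutions = []
for flips in itertools.product([False, True], repeat=len(roots) - 1):
    u = roots.copy()                   # u_1 stays fixed
    for k, flip in enumerate(flips, start=1):
        if flip:
            # conjugating a zero leaves |G(u)|^2 unchanged for real u
            u[k] = np.conj(u[k])
    solutions.append(np.poly(u) * coeffs[0])  # rebuild the polynomial

print(len(solutions), "ambiguous solutions")
\end{verbatim}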
\begin{figure}[ht!] \begin{center} \includegraphics[width=.93\textwidth]{physical_nice4} \end{center} \caption[{\em Physical Solution}]{\em Solution compatible with physical constraints, intensities (blue) and relative phases (red)} \label{fig:phys} \end{figure} As a last step, the calculated values for one chosen solution are reintroduced to the fit as starting values, which probes the convergence and provides the correct covariance matrix. Figure~\ref{fig:phys} shows the intensities and phases for one solution compatible with the physical constraints, depicted in a matrix form. A clear signal from $f_2$(1270) can be seen in the $D^-_0$ wave, accompanied by a phase motion with respect to the $S^-_0$. There is also evidence for $\rho$(770) in both $P^-_0$ and $P^-_1$. At masses well below $1\,\mathrm{GeV}/c^2$, however, the scalar wave does not show the expected threshold enhancement. To evaluate the fit quality, the decay amplitudes of phase-space Monte-Carlo events are weighted by the production amplitudes $T_i$ obtained in the minimisation. The projected angular distributions show very good agreement with the data. \section{Outlook} \label{sec:DaO} With these proceedings, we want to show that we are able to select a centrally produced sample and describe the main features of the data in terms of partial waves. Using the methods from~\cite{bar99}, we can reproduce the analysis with comparable results. The complicated problem of central production was approached following a model employed by WA102, an earlier experiment at CERN, inheriting some limitations and assumptions. In the meantime, other ideas were developed based on their data and analyses which can help to determine spin and parity of particles produced in central exclusive processes~\cite{kai03}. In addition to measuring the decay products, it can have particular advantages to study angular correlations between the outgoing protons. In the long run, we might study the data with a combination of both. In order to obtain a complete picture of the light scalar meson sector, the decay of identified resonances into different final states is needed to understand their composition. Since the trigger was not selecting a particular final state and the generalisation of the analysis of the two-pion system to other two-pseudoscalar final states is trivial, the next natural step will be the analysis of the $K^+K^-$ system. Other available channels are $K_sK_s$, $\pi^0\pi^0$ and $\eta\eta$. We expect that COMPASS will be able to shed light on this sector, outperforming previous experiments in terms of data as well as resolution.
\section{Introduction} In the landmark E144 experiment at SLAC in the mid-1990s, a $46.6\,\trm{GeV}$ electron beam was collided with a weakly-focussed laser of moderate intensity parameter ($\xi \approx 0.3$) in a near head-on collision \cite{E144:1996enr,burke97,bamber99}. This provided the first measurement of nonlinear Compton scattering (NLC) of real high-energy photons and their transformation into electron-positron pairs via the nonlinear Breit-Wheeler (NBW) process in the multiphoton regime. This sequence of processes is the `two-step' part of the nonlinear trident process $e^{\pm}\to e^{\pm}+e^{+}e^{-}$. Recently, the NA63 experiment at CERN measured the two-step \emph{nonlinear} trident process with $\xi \sim O(10)$, at strong-field parameter $\chi\approx2.4$ in the collision of $200\,\trm{GeV}$ electrons with germanium single crystals \cite{Nielsen:2022bws}. It is planned, at the E320 experiment at SLAC \cite{chen22} and the LUXE experiment at DESY \cite{Abramowicz:2021zja}, to measure this two-step process in the intermediate intensity regime $\xi\sim\mathcal{O}(1)$. Since the intensity parameter represents the charge-field coupling, these new experiments will probe strong-field quantum electrodynamics (QED) effects where the charge-field interaction is non-perturbative and can no longer be described using leading-order multiphoton contributions. Although exact solutions for the complete nonlinear trident process have been calculated in a plane wave pulse \cite{hu10,ilderton11,Hu:2013yrz, Dinu:2017uoj,Mackenroth:2018smh,Dinu:2019wdw,Torgrimsson:2022ndq} also including radiation reaction \cite{Torgrimsson:2022ndq} (as well as in a constant crossed field \cite{baier72,ritus72,morozov77,king13b,king18c}, a magnetic field \cite{Novak:2012zz}, a Coulomb field \cite{Torgrimsson:2020wlz} and also studied using an adiabatic approximation \cite{Titov:2021kbj} and in trains of laser pulses \cite{Kaminski:2022uoi}), they are computationally expensive and a more efficient and versatile method is required to model experiments. Therefore, a simulational approach that calculates particle trajectories classically, and adds first-order QED effects such as the NLC and NBW processes via Monte-Carlo methods, is typically favoured in designing and analysing experiments \cite{bamber99, cole18,poder18,DiPiazza:2019vwb,2021NJPh...23j5002S,chen22,Abramowicz:2021zja}. For second- (and higher-) order processes, the correct factorisation of the $n$-step subprocess requires taking into account the polarisation of intermediate particles. For the nonlinear trident process, this means the polarisation of the intermediate photon must be included in the NLC and NBW steps~\cite{ritus72}. The inclusion of photon-polarised strong-field QED processes into simulational approaches is a relatively recent development \cite{king13a,PhysRevLett.124.014801,PhysRevResearch.2.032049,Seipt-2021,Seipt-2022,Nielsen:2022bws} and is so far achieved using the locally constant field approximation (LCFA)~\cite{ritus85,Seipt:2020diz,Fedotov:2022ely}. However, at intermediate intensity values $\xi \sim O(1)$, the LCFA for the first-order processes is known to become inaccurate for typical experimental parameters \cite{harvey15,DiPiazza:2017raw,Ilderton:2018nws,DiPiazza:2018bfu,King:2019igt}. In the current paper, we assess the importance of photon polarisation in modelling the two-step nonlinear trident process.
This is achieved by adapting the locally monochromatic approximation (LMA) \cite{Heinzl:2020ynb,Blackburn:2021cuq,Blackburn:2021rqm}, which is being used to model the interaction-point physics of the LUXE experiment \cite{Abramowicz:2021zja} as well as strong-field QED experiments at the BELLA PW laser \cite{Turner:2022hch} (also appearing in other forms \cite{Yokoya:1991qz,Hartin:2018egj} and being used in the modelling of the E144 experiment \cite{bamber99}), to include photon-polarised rates for the NLC and NBW processes. Photon polarisation has been studied in NLC in monochromatic backgrounds \cite{Ivanov:2004fi} and in plane-wave pulses \cite{TangPRA022809,TANG2020135701}; photon polarisation in NBW in the LMA and in plane-wave backgrounds has also been studied \cite{Tang:2022a,TangPRD056003}; here we gather and present the LMA formulas in a circularly-polarised and linearly-polarised background. A second focus of this paper is to consider the role of harmonics: whether the harmonic structure of nonlinear Compton, which is a key experimental observable, e.g. in LUXE, can be found in the pair spectrum in the nonlinear trident process. The LMA was chosen as it has been shown that first-order subprocesses can be described more accurately at intermediate intensity values, which are planned to be used in upcoming experiments \cite{Blackburn:2021cuq,Blackburn:2021rqm}. Although it is known that there are effects missed by the LMA that are related to the finite bandwidth of the background pulse, such as harmonic broadening, low-energy photons produced in NLC~\cite{King:2020hsk} and pairs created by the low intensity part of the pulse \cite{Tang:2021qht}, these should be negligible for realistic set-ups with the parameters we consider here. (For more background on strong-field QED, we direct the reader to the reviews \cite{ritus85,dipiazza12,Fedotov:2022ely,RMP2022_045001}.) Overall, trident is a $1\to 3$ process: $e^{\pm} \to e^{\pm} + e^{+}e^{-}$, but we are also interested in the intermediate steps, which are both $1\to 2$ processes: $e^{\pm} \to e^{\pm} + \gamma$ followed by $\gamma \to e^{+}e^{-}$, and in the two-step case, we take the photon to be on-shell. In this paper, we assume the initial particle is an electron (analogous conclusions follow for a positron). The total probability will depend on: i) the incoming particle's energy parameter $\eta = \varkappa \cdot p / m^{2}$, where $\varkappa$ is the plane wave background wavevector, $p$ is the probe electron momentum, $m$ is the mass of the electron; ii) the intensity parameter $\xi$ of the background, which will be defined in the following in terms of the gauge potential and iii) the number of cycles, $N$, of the background. We mainly focus on the total probability, $\tsf{P} = \tsf{P}(\xi,\eta,N)$, and the lightfront momentum spectrum of the two-step trident process. The parameter space is large and we will also be studying the impact of photon polarisation on the total probability. Therefore, in this paper we will restrict our attention to the most relevant parameter ranges, given below. The initial particle energy will mainly be chosen to be $16.5\,\trm{GeV}$ and the laser photon energy to be $1.55\,\trm{eV}$ ($800\,\trm{nm}$ wavelength), or its third harmonic $4.65\,\trm{eV}$. This particle energy is chosen as it is the energy of electrons planned to be used at LUXE, and is similar to the energy of $13\,\trm{GeV}$ used at E320.
To investigate higher harmonics, we will calculate examples with $80\,\trm{GeV}$ and $500\,\trm{GeV}$ electrons, which are both currently much higher than planned for in experiments. The investigation into potential harmonic structure is particularly relevant for our approach using the LMA, where such effects are captured, in comparison to approaches based on the LCFA, which do not capture harmonic effects. For convenience, in this paper we will use a cosine-squared pulse envelope. We note that this pulse shape has a much wider bandwidth than can be transmitted through optical elements in an experiment. As a consequence, when $\xi\lesssim 1$ and initial electron energies are lower than around $10\,\trm{GeV}$, an exact plane-wave calculation will show that bandwidth effects beyond the LMA can become dominant. However, these effects will not be present in an experiment. For consistency, we consider initial electron energies higher than this. (See \appref{app:lowEnergy} for more details on this point.) The intensity parameter will be varied in the `intermediate regime', $\xi\sim O(1)$, which we take to be $\xi = 0.5 ... 5$. This is significant because the LMA is accurate in this regime, whereas studies of trident based on the LCFA are limited to higher values of $\xi$. This parameter regime will also be probed by the E320 and LUXE experiments. The number of laser cycles $N$ is a less important parameter in the LMA because it can be factored out and occurs just as a pre-integral over the laser phase. However, $N$ must be large enough that the LMA is a good approximation to the full QED probability (it is assumed that $N\gg 1$ for this to be the case). We will consider a gauge potential $a=|e|A$ describing the background as a plane wave pulse. The potential depends upon the phase $\phi=\varkappa \cdot x$ in the form: \begin{equation} \mbf{a}^{\perp} = \begin{cases} m\xi f(\phi) (\cos \phi, \varsigma\sin\phi ) & |\phi| < \Phi/2 \\ 0 & \trm{otherwise},\label{eqn:adef} \end{cases} \end{equation} where $a = (0, \mbf{a}^{\perp},0)$ and $\varsigma \in \{-1,0,1\}$ is chosen to switch between linear ($\varsigma=0$) or circular ($\varsigma=\pm 1$) polarisation (we note that the time-averaged amplitude of the potential is different for these two choices). In this paper, we will always use $f(\phi) = \cos^{2}(\pi\phi/\Phi)$ and $\varsigma=1$ for circular polarisation, where $\Phi = 2\pi N$, and will choose $N=16$, which corresponds to a full-width-at-half-maximum duration of $21.3\,\trm{fs}$ for a wavelength of $\lambda = 2\pi/\varkappa^{0}=800\,\trm{nm}$. Let us denote the lightfront momentum fraction of the photon $s_{\gamma}=\varkappa \cdot \ell/\varkappa\cdot p$, where $\ell$ is the photon momentum. The probability for nonlinear Compton scattering of a photon in polarisation state $\sigma$ is $\tsf{P}_{\gamma}^{\sigma}$, and nonlinear Breit-Wheeler pair-creation from a photon in the same state is $\tsf{P}_{e}^{\sigma}$. Then the total probability $\tsf{P}$ for the two-step trident process can be written \cite{ritus72}: \begin{eqnarray} \tsf{P} = \sum_{\sigma}\int_{0}^{1} ds_{\gamma} \int d\phi\, \frac{d^{2}\tsf{P}^{\sigma}_{\gamma}(\phi; s_{\gamma})}{ds_{\gamma}\,d\phi}~\tsf{P}^{\sigma}_{e}(\phi; s_{\gamma}),\label{eqn:tri2} \end{eqnarray} where $\tsf{P}^{\sigma}_{e}(\phi; s_{\gamma}) = -\int_{\phi} \,d\phi'~ d\tsf{P}^{\sigma}_{e}(\phi';s_{\gamma})/d\phi'$ and ${\sigma\in\{\parallel,\perp\}}$ refers to the two polarisation eigenstates of the background.
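For concreteness, the background of \eqnref{eqn:adef} is simple to tabulate directly. The short Python sketch below is just a transcription of that equation with the parameter values assumed in this paper, covering both the circular and linear polarisation choices discussed next:
\begin{verbatim}
# Minimal sketch of the pulsed plane-wave potential, in units of m.
import numpy as np

def a_perp(phi, xi=1.0, N=16, varsigma=1):
    """a_perp(phi)/m with a cos^2 envelope; varsigma = 0 gives linear,
    varsigma = +/-1 circular polarisation."""
    Phi = 2.0 * np.pi * N
    f = np.cos(np.pi * phi / Phi) ** 2        # slow pulse envelope
    inside = np.abs(phi) < Phi / 2.0
    ax = np.where(inside, xi * f * np.cos(phi), 0.0)
    ay = np.where(inside, xi * f * varsigma * np.sin(phi), 0.0)
    return ax, ay

phi = np.linspace(-60.0, 60.0, 7)
print(a_perp(phi))   # zero outside |phi| < Phi/2
\end{verbatim}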
For a linearly-polarised background, the designation $\parallel$ ($\perp$) refers to the photon polarisation eigenstate being in the same (opposite) polarisation state as the background with a fixed direction in lab co-ordinates. For a circularly-polarised background, these directions rotate with $\parallel$ ($\perp$) referring to when a photon collides head-on with the background having a polarisation that rotates in the same (opposite) direction as the background\footnote{We note that in terms of helicity states, $\parallel$ ($\perp$) refer to the photon having the opposite (same) helicity as the background.}. (The polarised NBW probability $\tsf{P}^{\sigma}_{e}$ is defined such that the total unpolarised probability, $\tsf{P}_{e} = \tsf{P}^{\parallel}_{e} + \tsf{P}^{\perp}_{e}$, i.e. the usual $1/2$ polarisation averaging factor \cite{ritus72} is included already in $\tsf{P}^{\sigma}_{e}$, and analogously for $\tsf{P}_{\gamma}^{\sigma}$.) In order to assess the importance of photon polarisation, we will also be interested in the approximation to this probability that uses unpolarised probabilities, $\tsf{P}_{\gamma}$ and $\tsf{P}_{e}$ for the nonlinear Compton and Breit-Wheeler steps respectively. We denote: \begin{eqnarray} \widetilde{\tsf{P}} = \int_{0}^{1} ds_{\gamma} \int d\phi\, \frac{d^{2}\tsf{P}_{\gamma}(\phi; s_{\gamma})}{ds_{\gamma}\,d\phi}~\tsf{P}_{e}(\phi; s_{\gamma}). \end{eqnarray} (This approximation to the two-step trident probability has often been used in numerical simulations of trident and QED cascades \cite{PRL2008200403,Kirk_2009,nerush11,elkina11,TangPRA2014,gonoskov17,Samsonov:2018nff,Khudik:2018hkr}.) Since the total probability, \mbox{\eqnref{eqn:tri2}}, involves a sum over polarisation eigenstates of the background, to assess the importance of photon polarisation, one should consider different background polarisations. Here, we study the cases of a circularly- and a linearly-polarised plane wave pulse. Probabilities for sub-processes will be calculated within the locally monochromatic approximation \cite{Heinzl:2020ynb,Tang:2022a}. This is an adiabatic approximation that neglects derivatives of the slow timescale of the pulse envelope, but includes the fast timescale of the carrier frequency exactly. It can be shown to be equal to the leading order term in an expansion of the total probability in $\Phi^{-1}$ \cite{Torgrimsson:2020gws}. This paper is organised as follows. In Sec. II the results are presented for the two-step nonlinear trident process in a circularly-polarised (Sec. II A) and a linearly-polarised (Sec. II B) background. The results are discussed in Sec. III and conclusions drawn in Sec. IV. In Appendix A, the LMA formulas for photon-polarised nonlinear Compton scattering and photon-polarised nonlinear Breit-Wheeler are given and Appendix B contains further details about bandwidth effects in the LMA. \section{LMA for two-step trident process} In this section, we combine the LMA formulas (shown in \appref{AppA}) for the NLC and NBW processes~\cite{Heinzl:2020ynb,Tang:2022a} to calculate the yield of the two-step trident process. The lightfront momentum is conserved in the interaction with the plane wave. We use the variable $0 \leq s \leq 1$ to denote what fraction a particle's lightfront momentum is of the initial electron lightfront momentum.
This can be written: \begin{equation} 1 = s_{e'} + s_{\gamma}; \quad s_{\gamma} = s_{q}+s_{e''}, \label{eqn:sdef} \end{equation} where $s_{\gamma}$ is the lightfront momentum fraction of the photon, $s_{e'}$ of the scattered electron, $s_{e''}$ of the created electron and $s_{q}$ of the positron. In this paper, only $s_{\gamma}$ and $s_{q}$ will appear explicitly; the other lightfront momentum fractions will be integrated out. Then we can write the total probability as an integral over the double differential probability: \begin{equation}\tsf{P} =\int^{1}_{0}\mathrm{d} s_{q}\int^{1}_{s_{q}} \mathrm{d} s_{\gamma} ~ \frac{\mathrm{d}^{2}\tsf{P}}{\mathrm{d} s_{q}\mathrm{d} s_{\gamma}}\,, \end{equation} where the double differential involves two integrals over the average phase positions of the incoming particle in NLC and NBW: \begin{equation} \frac{\mathrm{d}^{2}\tsf{P}}{\mathrm{d} s_{q}\mathrm{d} s_{\gamma}} = \sum_{\sigma}\int^{\phi_{f}}_{\phi_{i}} \mathrm{d}\phi \frac{\mathrm{d}^{2}\tsf{P}^{\sigma}_{\gamma}(\phi; s_{\gamma})}{\mathrm{d} s_{\gamma}\mathrm{d}\phi} \int^{\phi_{f}}_{\phi} \mathrm{d}\phi' \frac{\mathrm{d}^{2}\tsf{P}^{\sigma}_{e}(\phi';s_{q})}{\mathrm{d} s_{q}\mathrm{d}\phi'} \label{Eq-QED-trident} \end{equation} (where again $\sigma \in \{\parallel,\perp\}$). The production of the positron depends not only on the energy of the NLC photon, but also on the polarisation of the photon. The polarisation degree of the NLC photon is defined as \begin{equation} \Gamma(s_{\gamma})=\frac{\mathrm{d} \tsf{P}^{{\scriptscriptstyle \parallel}}_{\gamma}/\mathrm{d} s_{\gamma} - \mathrm{d} \tsf{P}^{{\scriptscriptstyle \perp}}_{\gamma}/\mathrm{d} s_{\gamma}}{\mathrm{d} \tsf{P}_{\gamma}/\mathrm{d} s_{\gamma}} \label{eqn:GammasDef} \end{equation} where $\mathrm{d} \tsf{P}_{\gamma}/\mathrm{d} s_{\gamma}=\mathrm{d} \tsf{P}^{{\scriptscriptstyle \parallel}}_{\gamma}/\mathrm{d} s_{\gamma} + \mathrm{d} \tsf{P}^{{\scriptscriptstyle \perp}}_{\gamma}/\mathrm{d} s_{\gamma}$. Following the definition in \eqnref{eqn:GammasDef}, we define the polarisation degree for a distribution of photons to be $\Gamma$ where $\Gamma= (\tsf{P}^{{\scriptscriptstyle \parallel}}_{\gamma}-\tsf{P}^{{\scriptscriptstyle \perp}}_{\gamma})/\tsf{P}_{\gamma}$ and $\tsf{P}_{\gamma} = \tsf{P}^{{\scriptscriptstyle \parallel}}_{\gamma}+\tsf{P}^{{\scriptscriptstyle \perp}}_{\gamma}$. For photons in the $\parallel$ polarisation eigenstate, $\Gamma=1$; if they are in the $\perp$ eigenstate then $\Gamma=-1$ and if they are completely unpolarised, $\Gamma=0$. \subsection{Circularly polarised background} First, we investigate the potential harmonic structure in the energy spectrum of the outgoing positron. For the harmonic structure from both NLC and NBW steps to be discernible: i) pairs must be produced by photons with momentum fractions around the first NLC harmonic $s_{\gamma,1} = 2\eta/(2\eta +1 + \xi^2)$; ii) the photon momentum fraction must be larger than the threshold for pair creation from a single laser photon, i.e. linear Breit-Wheeler from the carrier frequency, which requires $2(1+\xi^{2})/(\eta s_{\gamma,1})<1$. Combining these conditions, one arrives at the requirement on the initial electron's energy parameter of $\eta>(1+\sqrt{2})(1+\xi^2)$. For $\xi=1$, and a standard laser carrier frequency of $1.55\,\trm{eV}$, this corresponds already to $407\,\trm{GeV}$ electrons (or an energy parameter of $\eta=4.83$).
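These kinematic conditions are easy to check numerically. The following Python sketch (using $\eta \approx 2\omega E/m^2$ for a head-on collision, with the parameters quoted in this paper) reproduces the threshold energies and the position of the first Compton harmonic:
\begin{verbatim}
# Minimal sketch of the threshold eta > (1 + sqrt(2)) (1 + xi^2) and
# the first NLC harmonic position; head-on collision assumed.
import numpy as np

m_e = 0.511e-3          # electron mass in GeV
xi = 1.0

def eta(E_GeV, omega_eV):
    """Energy parameter eta ~ 2 omega E / m^2 for a head-on collision."""
    return 2.0 * (omega_eV * 1e-9) * E_GeV / m_e**2

eta_thr = (1.0 + np.sqrt(2.0)) * (1.0 + xi**2)
for omega in (1.55, 4.65):   # carrier frequencies in eV
    E_thr = eta_thr * m_e**2 / (2.0 * omega * 1e-9)
    print(f"omega = {omega} eV: threshold at E = {E_thr:.0f} GeV")
    # -> about 407 GeV at 1.55 eV and about 136 GeV at 4.65 eV

eta_500 = eta(500.0, 1.55)   # eta = 5.94 for the case studied below
s_gamma_1 = 2 * eta_500 / (2 * eta_500 + 1 + xi**2)
print("first Compton harmonic at s_gamma,1 =", round(s_gamma_1, 2))
\end{verbatim}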
Therefore, to demonstrate harmonic structure in the pair spectrum, we consider $500\,\trm{GeV}$ electrons colliding head-on with a $16$ cycle, $\xi=1$ circularly-polarised plane wave with carrier frequency $1.55\,\trm{eV}$ (i.e. $\eta=5.94$). \begin{figure}[h!!] \includegraphics[width=8cm]{figure1.pdf} \caption{Differential probability for the two-step trident process in a $\xi=1$, $N=16$, circularly polarised pulse of frequency $1.55\,\trm{eV}$ colliding head on with a $500\,\trm{GeV}$ electron, corresponding to an energy parameter of $\eta= 5.94$. (a) Double lightfront spectrum, $\mathrm{d}^{2}\tsf{P}/\mathrm{d} s_{q}/\mathrm{d} s_{\gamma}$ is plotted for the polarised intermediate photon and compared to the NLC spectrum (red overlaid line). The black dashed lines give the location of the first two harmonics in the photon spectrum at $s_{\gamma,n}=2n\eta /(2n\eta + 1 + \xi^2)$ for $n=1,2$; the magenta dotted line denotes the location of the first harmonic in the positron spectrum created by the photon with the energy $\eta s_{\gamma}$ at $s_{q} = \{1\pm[1-2(1+\xi^2)/(s_{\gamma}\eta)]^{1/2}\}/2$. (b) Positron energy spectrum for the two-step trident process $\mathrm{d}\trm{P}/\mathrm{d} s_{q}$ and for the NBW process $\mathrm{d}\trm{P}_{e}/\mathrm{d} s_{q}$ created by the first-harmonic photon (with the energy parameter $\eta_{\gamma}=\eta s_{\gamma,1}$) in the NLC spectrum. } \label{Fig-Tri-E500-H1-Dist} \end{figure} The double differential lightfront spectrum, $\mathrm{d}^{2}\tsf{P}/\mathrm{d} s_{q}/\mathrm{d} s_{\gamma}$, for this case is plotted in Fig.~\ref{Fig-Tri-E500-H1-Dist} (a). We see the main features: i) the harmonic structures from the NLC spectrum are observed around the black dashed lines, namely $s_{\gamma,n}=2n\eta/(2n\eta + 1 + \xi^2)$ for $n=1,2$. This comparison is made manifest by the red solid line which plots the NLC spectrum of photons before the pair is created; ii) the double differential spectrum rises around the magenta dotted line, $s_{q} = \{1\pm[1-2(1+\xi^2)/(s_{\gamma}\eta)]^{1/2}\}/2$, corresponding to the edge of the first NBW harmonic triggered by the intermediate photon with the momentum fraction $s_{\gamma}>2(1+\xi^2)/\eta$. This is illustrated by the location of harmonic peaks in the NBW positron spectrum [magenta dot-dashed line plotted in \mbox{Fig.~\ref{Fig-Tri-E500-H1-Dist} (b)}] created by a photon with energy equal to the edge of the first NLC harmonic range (the `Compton edge' \cite{Harvey09}), with energy $\eta_{\gamma}=\eta s_{\gamma,1}$. The double differential spectrum peaks around the crossing points between the first NLC-harmonic (black dashed) line and the first NBW-harmonic (magenta dotted) line in Fig.~\ref{Fig-Tri-E500-H1-Dist} (a). After integrating over the photon lightfront momentum $s_{\gamma}$, the lower harmonic peak ($s_{q}=0.23$) remains in the positron spectrum of the nonlinear two-step trident process as shown in Fig.~\ref{Fig-Tri-E500-H1-Dist} (b) (but the upper harmonic peak around $s_{q}=0.62$ is smoothed out). \begin{figure}[h!!] \includegraphics[width=8cm]{figure2.pdf} \caption{Differential probability for the two-step trident process in a $\xi=1$, $N=16$, circularly polarised pulse of frequency $4.65\,\trm{eV}$ colliding head on with a $80\,\trm{GeV}$ electron, corresponding to an energy parameter of $\eta=2.85$. The double lightfront spectra $\mathrm{d}^{2}\tsf{P}/\mathrm{d} s_{q}/\mathrm{d} s_{\gamma}$ for (a) a polarised and (b) an unpolarised photon, compared to the NLC spectrum (red overlaid line).
The magenta dotted lines denote the location of the leading-NBW harmonic in the positron spectrum created by the photon with energy $\eta s_{\gamma}$ at $s_{q} = \{1\pm[1-2(1+\xi^2)/(2s_{\gamma}\eta)]^{1/2}\}/2$. (c) Dependence of the probability on the lightfront momentum of a polarised (blue dashed line) and unpolarised (blue solid line) photon: for the polarised case, the polarisation degree is given by the magenta dashed-dotted line. (d) Energy spectrum of the positron, $\mathrm{d}\trm{P}/\mathrm{d} s_{q}$, for unpolarised ($\Gamma=0$) and polarised ($\Gamma=\Gamma_{\gamma}$) intermediate photons. The shaded areas underneath the photon-polarised spectrum denote the contribution from each polarisation eigenstate. In (a)-(c), the black dashed lines give the location of the first two harmonics in the photon spectrum at $s_{\gamma}=2n\eta /(2n\eta + 1 + \xi^2)$ for $n=1,2$.} \label{Fig-Tri-E80-H3-Dist} \end{figure} In Figs.~\ref{Fig-Tri-E80-H3-Dist}\,(a) and (b), the double differential spectrum for a lower-energy case is plotted, in which an $80\,\trm{GeV}$ electron collides head-on with a $16$ cycle circularly polarised plane wave pulse with carrier frequency at the third harmonic of the laser at $4.65\,\trm{eV}$, corresponding to an energy parameter of $\eta=2.85$. In this case, harmonic structure is again present in the NLC photon spectrum (illustrated by the red solid line), but the energy parameter is significantly lower, $\eta<2(1+\xi^2)$, so that pairs are only created via the nonlinear process, i.e. requiring higher harmonics from the laser. The leading NBW harmonic is now the second order with smooth harmonic edges, as shown in the figure by the magenta dotted line plotted at $s_{q} = \{1\pm[1-2(1+\xi^2)/(2s_{\gamma}\eta)]^{1/2}\}/2$. Therefore, there is no noticeable NBW-harmonic structure remaining in the double differential spectrum of the trident process. In Fig.~\ref{Fig-Tri-E80-H3-Dist}\,(c), the single differential spectrum $\mathrm{d}\tsf{P}/\mathrm{d} s_{\gamma}$ is plotted, which shows the spectrum of photons that created pairs in the two-step trident process. The locations of the Compton harmonics can be clearly seen. We note that the major contribution originates from the middle of the $s_{\gamma}$-range. There is clearly a competition between: i) more photons being produced at low values of $s_{\gamma}$ and ii) pairs being created more easily for photons with higher values of $s_{\gamma}$. The optimum region of $s_{\gamma}$ falls between these two extremes. For the parameters in Fig.~\ref{Fig-Tri-E80-H3-Dist}, the low-order harmonics in the photon spectrum coincide with the optimal region, and therefore clear harmonic structures from the NLC spectrum can be observed in the double differential plots in Figs.~\ref{Fig-Tri-E80-H3-Dist} (a) and (b). (As we will see later in Fig.~\ref{Fig-Tri-E165-H3-Dist}, if the initial electron energy is lowered to $16.5\,\trm{GeV}$, this optimal region of the photon spectrum corresponds to higher harmonic order and hence no harmonic structure is observable.) Despite the appearance of harmonic structure in the double differential for the $80\,\trm{GeV}$ electron case due to NLC, the harmonic structure is not passed on to the energy spectrum of the produced pair. This is because: i) the Compton spectrum is integrated over and ii) pairs can only be created at higher harmonic order. \begin{figure}[t!!!!]
\center{\includegraphics[width=0.45\textwidth]{figure3.pdf} \caption{Differential probability for the two-step trident process in a $\xi=1$, $N=16$, circularly polarised pulse of frequency $4.65\,\trm{eV}$ colliding head on with a $16.5\,\trm{GeV}$ electron, corresponding to an energy parameter of $\eta= 0.59$. The double lightfront spectra, $\mathrm{d}^{2}\trm{P}/\mathrm{d} s_{q}/\mathrm{d} s_{\gamma}$, are plotted for (a) polarised and (b) unpolarised photons and compared to the NLC spectrum (red overlaid line). (c) The dependence of the probability on the lightfront momentum of a polarised (blue dashed line) and unpolarised (blue solid line) photon is plotted: for the polarised case, the polarisation degree is given by the magenta dashed-dotted line. (d) The energy spectrum of the positron, $\mathrm{d}\trm{P}/\mathrm{d} s_{q}$, for unpolarised ($\Gamma=0$) and polarised ($\Gamma=\Gamma_{\gamma}$) intermediate photons is plotted. The shaded areas underneath the polarised spectrum denote the contributions from each polarisation eigenstate. In (a)-(c), the black dashed lines give the location of the first three harmonics in the photon spectrum at $s_{\gamma}=2n\eta /(2n\eta + 1 + \xi^2)$ for $n=1,2,3$.}} \label{Fig-Tri-E165-H3-Dist} \end{figure} Now we consider the effect of photon polarisation. For some parameters, the photon from NLC can be highly polarised \cite{TangPRA022809,TANG2020135701}, and since NBW can be strongly affected by the polarisation of the photon \cite{Toll:1952rq,Seipt:2020diz,Tang:2022a,TangPRD056003}, we expect photon polarisation to play a role in the two-step trident process. The dependence of the differential probability on the polarisation of the photon is plotted in Fig.~\ref{Fig-Tri-E80-H3-Dist}. First, we see that the harmonic position in the double differential spectrum is unchanged when the photon is unpolarised in Fig.~\ref{Fig-Tri-E80-H3-Dist} (b), which is to be expected, since polarisation can affect energy spectra, but not kinematic ranges. However, in Fig.~\ref{Fig-Tri-E80-H3-Dist} (c), we see that the \emph{height} of the harmonic peak is reduced. This has a straightforward explanation when compared to the photon polarisation degree at the harmonic peak. At this energy, photons are \emph{more} likely to be produced by NLC in the $\parallel$ polarisation state, which is the state \emph{least} likely to produce pairs via NBW. This is why, for these parameters, including photon polarisation leads to a suppression of pairs at the leading Compton harmonic. In general, we can see that photon polarisation can have the effect of enhancing or suppressing different parts of the pair spectrum. For example, in the same plot, we see that at photon energies lower than the first Compton harmonic, the spectrum of pairs is instead \emph{enhanced}. Again, this can be understood by comparing with the polarisation degree, where we see that NLC is more likely to produce photons polarised in the $\perp$ polarisation state, which is \emph{more} likely to lead to NBW pair creation. Whilst photon polarisation can influence the pair spectrum, the total yield is not always affected: in Fig.~\ref{Fig-Tri-E80-H3-Dist} (d) it can be seen that the integrals of the polarised and unpolarised spectra are approximately equal. However, for higher energies, e.g. those in Fig.~\ref{Fig-Tri-E500-H1-Dist}, the yield can be increased due to photon polarisation.
In Fig.~\ref{Fig-Tri-E500-H1-Dist}, because the first Compton harmonic appears at $s_{\gamma,1}\to1$, the effect of the pair suppression due to the photon polarisation in the regime $s_{\gamma}>s_{\gamma,1}$ can be weaker overall than the enhancement due to the photon polarisation in the regime $s_{\gamma}<s_{\gamma,1}$. In contrast, for lower photon energies, as we will show in Fig.~\ref{Fig-Tri-E165-H3-Dist}, the total trident yield can be noticeably reduced by photon polarisation as only photons with $s_{\gamma}>s_{\gamma,1}$ can contribute effectively to the production process. In Fig.~\ref{Fig-Tri-E165-H3-Dist}, we present the results for a $16.5\,\trm{GeV}$ electron. For this lower energy, harmonic structures in the double differential lightfront spectrum are no longer observable because the main contribution is from higher-order harmonics (around $s_{\gamma}=0.64$, the harmonic order is $n=3$). The contribution from the most visible NLC harmonic at $n=1$ is strongly suppressed. Also in this case, because of the photon polarisation, the yield of pairs is slightly reduced around the central peak in the double-differential spectrum and, because the contribution from photons at energies below the first NLC harmonic is so suppressed, the part of the spectrum where photon polarisation \emph{enhances} pair production is also suppressed. As a result, in Fig.~\ref{Fig-Tri-E165-H3-Dist} (d), it can be seen that the integral of the polarised case is now noticeably smaller than in the unpolarised case. The other notable difference from the high-energy case in Fig.~\ref{Fig-Tri-E80-H3-Dist} is that the photons polarised in the $\parallel$-eigenstate have a larger contribution to the total positron yield from the NLC-polarised photons, as shown in Fig.~\ref{Fig-Tri-E165-H3-Dist} (d). \begin{figure}[t!!!!] \center{\includegraphics[width=0.45\textwidth]{figure4.pdf}} \caption{Positron yield as a function of laser intensity for the two-step trident process in a head-on collision of a $16.5\,\trm{GeV}$ electron and a $16$-cycle circularly-polarised plane wave pulse with carrier frequency $\omega_{l}=4.65\,\trm{eV}$ (corresponding to $\eta=0.59$). The plot shows the yield for photons that are unpolarised ($\Gamma=0$), polarised by NLC ($\Gamma=\Gamma_{\gamma}$) or polarised in one of the eigenstates of the laser background ($\Gamma=\pm1$). The inset shows the ratio of the yield for photons in a polarisation eigenstate to the total yield for the NLC-polarised photons. } \label{Fig-Trident-LMA-E165-H3-Numb} \end{figure} In~\figref{Fig-Trident-LMA-E165-H3-Numb} the total yield of positrons is calculated as a function of intensity for various photon polarisation choices. With the increase of the laser intensity, the positron yield increases significantly. For single-vertex processes, it is known that as $\xi$ increases, the LMA tends to the LCFA for a plane-wave background \cite{Heinzl:2020ynb}. Since a locally constant field is the same for different circularly-polarised background helicities, we expect that as $\xi$ increases, the difference between each photon helicity state should decrease. This is indeed what we observe in the inset in Fig.~\ref{Fig-Trident-LMA-E165-H3-Numb}. We also observe the converse: as $\xi$ is reduced past $\xi=1$, the relative importance of photon helicity increases. This is another example of physics that is beyond the locally constant-field approach.
(The relative importance of the photon polarisation with the change of the laser intensity for the two-step trident process will be discussed later in Fig.~\ref{Fig-Trident-LMA-E165-H3-Numb_Rela_lin_cir}.) \subsection{Linearly polarised background} The LMA for a linearly-polarised background is more computationally expensive to calculate than the LMA for a circularly-polarised background due to the latter having azimuthal symmetry in the transverse plane. As a consequence, one must calculate generalised Bessel functions \cite{ritus85,Korsch_2006,PhysRevE.79.026707} in the integrand for a linearly-polarised background, compared to just standard Bessel functions of the first kind for a circularly-polarised background. \begin{figure}[t!!!!!] \includegraphics[width=8cm]{figure5.pdf} \caption{Differential probability for the two-step trident process in a $\xi=1$, $N=16$, \emph{linearly} polarised pulse of frequency $4.65\,\trm{eV}$ colliding head-on with a $16.5\,\trm{GeV}$ electron, corresponding to an energy parameter of $\eta= 0.59$. The double lightfront spectra, $\mathrm{d}^{2}\trm{P}/\mathrm{d} s_{q}/\mathrm{d} s_{\gamma}$, are plotted for (a) a polarised and (b) an unpolarised photon and compared to the NLC spectrum (red overlaid line). (c) Dependence of the probability on the lightfront momentum of a polarised (blue dashed line) and unpolarised (blue solid line) photon: for the polarised case, the polarisation degree is given by the magenta dashed-dotted line. (d) Energy spectra of the positron, $\mathrm{d}\trm{P}/\mathrm{d} s_{q}$, in the two-step trident process for an unpolarised ($\Gamma=0$) and a polarised ($\Gamma=\Gamma_{\gamma}$) photon. The shaded areas underneath the polarised spectrum denote the contributions from each polarisation eigenstate. In (a)-(c), the black dashed lines give the location of the first six harmonics in the photon spectrum at $s_{\gamma}=2n\eta /(2n\eta + 1 + \xi^2)$ for integers $n=1$ to $n=6$. } \label{Fig-Tri-QED-LMA-E165-H3-Dist-lin} \end{figure} Here we perform a similar analysis as in the previous section but focus on the case of a $16.5\,\trm{GeV}$ electron colliding head-on with a $\xi=1$, $N=16$ \emph{linearly} polarised pulse of frequency $4.65\,\trm{eV}$, corresponding to an energy parameter of $\eta\approx 0.59$. In Fig.~\ref{Fig-Tri-QED-LMA-E165-H3-Dist-lin}, rich harmonic structures are observed in the double differential plots, $\mathrm{d}^{2}\tsf{P}/\mathrm{d} s_{q}/\mathrm{d} s_{\gamma}$, for a polarised [Fig.~\ref{Fig-Tri-QED-LMA-E165-H3-Dist-lin}\,(a)] and unpolarised [Fig.~\ref{Fig-Tri-QED-LMA-E165-H3-Dist-lin}\,(b)] intermediate photon. Again, these harmonic structures correspond to harmonics in the NLC spectrum and cannot be observed in the energy spectrum of the produced pair. Similar to the circularly polarised case, the effect of photon polarisation is to decrease the height of the harmonic peak in the double differential plots. However, in linearly polarised laser backgrounds, the photon is always more likely to be produced in the $\parallel$ polarisation state ($\Gamma_{\gamma}>0$), as shown by the magenta line in Fig.~\ref{Fig-Tri-QED-LMA-E165-H3-Dist-lin} (c). Therefore, even with very high-energy electrons, in a linearly-polarised background, the effect of photon polarisation is always to \emph{reduce} the probability of nonlinear trident in the two-step process.
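As a brief aside on the computational point raised at the start of this subsection: the generalised Bessel functions entering the linearly-polarised LMA can be evaluated either from an integral representation or from a rapidly converging series of products of ordinary Bessel functions. The following sketch is our own illustration (not the code used for the figures), and assumes one common two-variable convention; conventions in the literature can differ by signs and argument scalings:
\begin{verbatim}
import numpy as np
from scipy.special import jv
from scipy.integrate import quad

def gen_bessel_series(n, u, v, kmax=60):
    """J_n(u, v) = sum_k J_{n-2k}(u) J_k(v), truncated at |k| <= kmax."""
    k = np.arange(-kmax, kmax + 1)
    return np.sum(jv(n - 2 * k, u) * jv(k, v))

def gen_bessel_integral(n, u, v):
    """(1/2pi) * integral over [-pi, pi] of cos(u sin t + v sin 2t - n t)."""
    f = lambda t: np.cos(u * np.sin(t) + v * np.sin(2 * t) - n * t) / (2 * np.pi)
    return quad(f, -np.pi, np.pi, limit=200)[0]

# The two evaluations agree to quadrature accuracy:
print(gen_bessel_series(3, 1.2, -0.3))
print(gen_bessel_integral(3, 1.2, -0.3))
\end{verbatim}
In practice the series form is convenient inside the LMA integrand, since the ordinary Bessel functions can be tabulated once per phase point.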
Similar to the low-energy case in circularly polarised backgrounds, the photons in the $\parallel$-polarisation eigenstate have a larger contribution to the total positron yield for the NLC-polarised photons in Fig.~\ref{Fig-Tri-QED-LMA-E165-H3-Dist-lin} (d). \begin{figure}[t!!!!] \center{\includegraphics[width=0.45\textwidth]{figure6.pdf}} \caption{Positron yield as a function of laser intensity for the two-step trident process in a head-on collision of a $16.5\,\trm{GeV}$ electron and a $16$-cycle \emph{linearly}-polarised plane wave pulse with carrier frequency $\omega=4.65\,\trm{eV}$, corresponding to $\eta=0.59$. In plot (a), the yield for photons that are unpolarised ($\Gamma=0$), polarised by NLC ($\Gamma=\Gamma_{\gamma}$) or polarised in one of the eigenstates of the laser background ($\Gamma=\pm1$) is shown; the inset shows the ratio of the yield for photons produced in a polarisation eigenstate to the total yield for the NLC-polarised photons. In plot (b), the locally monochromatic approximation is compared to the locally constant field approximation for different laser carrier frequencies. } \label{Fig-Trident-LMA-E165-H3-Numb_lin} \end{figure} The positron yield is plotted against the background intensity parameter $\xi$ in~\figref{Fig-Trident-LMA-E165-H3-Numb_lin}. As in the circularly polarised case, the positron yields for the unpolarised ($\Gamma=0$) and polarised ($\Gamma=\Gamma_{\gamma}$) photon increase significantly with laser intensity, and in the photon-polarised case, the difference between the yield for each photon polarisation eigenstate becomes smaller at higher intensities. However, in the linearly polarised background, the decrease of this difference is much slower than that in the circularly polarised background, which implies that the polarisation effects in linearly polarised backgrounds persist at much higher laser intensities (we will investigate this in the discussion in Sec. III). As the direction of polarisation does not change with the phase in linearly polarised backgrounds, there is a well-defined constant field limit. In Fig.~\ref{Fig-Trident-LMA-E165-H3-Numb_lin} (b), the difference between the LMA and LCFA for predicting the positron yield is plotted. This illustrates that the LMA agrees with the LCFA at large intensity parameter $\xi$, while at small intensities the LMA predicts a higher positron yield than the LCFA. The threshold intensity for the agreement between the LMA and LCFA depends on the frequency of the background field: the larger the laser frequency, the higher the intensity at which the LMA result matches the LCFA. \section{Discussion} One of the motivations for the current work was to investigate harmonic structure in the two-step trident process. We found that the harmonic structure from the Compton step is generally washed out in the pair creation step, because the photon energies most likely to create pairs correspond to higher Breit-Wheeler harmonics where harmonic structure is no longer visible. The implication of this for $n$-step higher-order processes and cascades is that harmonic structure from lower-order processes becomes washed out in later generations due to the lightfront momentum being reduced. For a pair-creation stage, this is clear: to see harmonic structure requires the energy parameter to be of the order of $2(1+\xi^{2})$; already for $\xi=1$ and a triple-harmonic laser frequency of $4.65\,\trm{eV}$ this corresponds to around $130\,\trm{GeV}$ electrons.
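The origin of this estimate can be made explicit (as rough bookkeeping rather than a sharp threshold). The most energetic photon, from the first Compton harmonic, carries the energy parameter
$$
\eta_{\gamma} = s_{\gamma,1}\,\eta = \frac{2\eta^{2}}{2\eta+1+\xi^{2}},
$$
and demanding $\eta_{\gamma} = 2(1+\xi^{2})$ with $\xi=1$ gives $\eta^{2}-4\eta-4=0$, i.e. $\eta = 2+2\sqrt{2} \approx 4.8$. Using $\eta \approx 2\omega_{l}E_{e}/m^{2}$ for a head-on collision with $\omega_{l}=4.65\,\trm{eV}$, this translates into $E_{e}\approx 136\,\trm{GeV}$, consistent with the figure quoted above.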
This washing-out of harmonic structure must also apply to multiple nonlinear Compton scattering of hard photons, where the Compton edge is at energy parameter $2\eta^{2}/(2\eta+1+\xi^{2})$; as the energy parameter $\eta$ is reduced in each stage of the cascade, the Compton edge is pushed to lower energies, which are less likely to seed the next generation of the cascade. In order to obtain harmonic structures in later generations, which carry information of the subprocesses in earlier generations, the energy parameter needs to be very high. We demonstrated this for the two-step trident process by considering a $500\,\trm{GeV}$ electron in a circularly-polarised background with frequency $1.55\,\trm{eV}$. Another motivation for the current work was to study the effect due to the intermediate photon being polarised. It is known from past studies of the trident process \cite{king13a,king13b} that in a constant crossed field the relative importance of photon polarisation is an effect of order $10\%$. Indeed this is what we found using the LMA when the intensity parameter was increased towards the $\xi\gg 1$ region. However, with the LMA, we could also study the effect of photon polarisation in the range when $\xi \not \gg 1$, which is the parameter regime of topical experiments such as E320 and LUXE that will employ electron beams from a linac. \begin{figure}[t!!!!] \centering \includegraphics[width=8cm]{figure7.pdf} \caption{The dependence on the intensity parameter of the `relative importance', $(\tsf{P}^{\Gamma=0}-\tsf{P}^{\Gamma=\Gamma_{\gamma}})/\tsf{P}^{\Gamma=\Gamma_{\gamma}}$, of the polarisation of the intermediate photon in the two-step trident process, illustrated for differently polarised laser backgrounds. $N=16$, $E_{e}=16.5\,\trm{GeV}$.} \label{Fig-Trident-LMA-E165-H3-Numb_Rela_lin_cir} \end{figure} In Fig.~\ref{Fig-Trident-LMA-E165-H3-Numb_Rela_lin_cir}, we compare the relative importance of the polarisation property of the intermediate photon in the two-step trident process in differently polarised laser backgrounds. In a \emph{linearly}-polarised background, the effect of photon polarisation in the two-step trident for likely future experimental parameters is to \emph{lower} the trident rate by around $15\%$ in the intermediate intensity regime of $\xi \sim O(1)$. This agrees with a similar analysis performed for a constant crossed field background \cite{king13a}. In a linearly-polarised background, the direction of polarisation does not change with phase in the lab frame, and therefore there is a well-defined constant field limit. However, for circular polarisation, there is no well-defined constant-crossed field limit. Therefore, if the LCFA becomes more accurate as $\xi$ is increased, the importance of photon polarisation in circular backgrounds must decrease for larger $\xi$, which we indeed find in Fig.~\ref{Fig-Trident-LMA-E165-H3-Numb_Rela_lin_cir}. For $\xi\lesssim 1$ in circular backgrounds, the polarisation effect can also reduce the trident rate, in principle by even more than $10\%$. Here we see the value of the LMA, which allows the polarisation effect to be calculated in the intermediate intensity region and in circularly-polarised backgrounds. Agreement between the LMA and LCFA in the high-intensity region is shown in Fig.~\ref{Fig-Trident-LMA-E165-H3-Numb_Rela_lin_cir}. Since the applicability of the LCFA also depends on the energy parameter, we include in the plot the leading and third harmonic of the background frequency.
We see little difference in the relative importance although the energy parameter differs by a factor of three. This is perhaps due to the fact that, when all other parameters are fixed, for increasing energy parameter, the LCFA for NLC becomes \emph{less} accurate, but the LCFA for NBW becomes \emph{more} accurate. So whilst linear polarisation is an attractive background to use for trident experiments since, for fixed laser pulse energy, it allows for $\xi$ to be increased by a factor $\sqrt{2}$ compared to a circularly-polarised background, we see that there is a slight cost to this increase when one takes into account the polarisation of intermediate photons. One may ask whether knowledge of the influence of photon polarisation can be used to optimise the two-step process. Since nonlinear Compton most abundantly produces photons in the polarisation state that is least likely to create pairs, if a single laser pulse is used, there is a natural compensation against the effects of polarisation. However, if one uses a double laser pulse, with each sub-pulse being linearly polarised in a plane orthogonal to the polarisation of the other pulse, and if one chooses the properties of the pulses such that in the first pulse mainly only nonlinear Compton scattering takes place, then one can engineer a situation in which the most abundantly produced polarisation in nonlinear Compton scattering is also the polarisation most likely to create pairs in the second pulse. Such an approach can presumably be generalised to $n$-stage processes, depending on which sub-process is favoured in each generation. The total yield of pairs in this two-step scenario can be calculated as in the previous sections using \eqnref{eqn:tri2}, but now where the corresponding phase integrals for NLC and NBW correspond to an integral over different laser pulses. In Fig.~\ref{Fig-Opt-Spec-NLC-NBW-a2}, we compare the positron yield for double laser pulses with parallel and perpendicular linear polarisations. As we can see, by using laser pulses with orthogonal polarisations, one can improve the positron yield by about $32\%$ compared to that from two laser pulses with parallel polarisation. \begin{figure}[t!!!] \center{\includegraphics[width=0.48\textwidth]{figure8.pdf}} \caption{Positron yield in the two-step trident scenario, in which a beam of $16.5\,\trm{GeV}$ electrons collides head-on with one plane-wave pulse to scatter highly polarised $\gamma$ photons via nonlinear Compton scattering, which then collide with a second plane-wave pulse to create pairs via the nonlinear Breit-Wheeler process. Both plane waves are $16$-cycle linearly polarised pulses with intensity parameter $\xi=2$ and carrier frequency $1.55\,\trm{eV}$, which are polarised in the plane parallel or perpendicular to each other. The relative difference in the positron yield between the two cases is shown by the magenta line. $\trm{P}_{\trm{para}}$ ($\trm{P}_{\trm{perp}}$) denotes the positron yield in the case with the parallel (perpendicular) polarisation.} \label{Fig-Opt-Spec-NLC-NBW-a2} \end{figure} \section{Conclusion} In the two-step trident process, photon polarisation can have an $O(10\%)$ effect on the total rate. This conclusion agrees with previous studies in a constant crossed field and via the locally constant field approximation, which one would expect to be a good approximation when $\xi\gg1$.
What is new here is that we have used the locally monochromatic approximation, which allowed us to analyse $\xi \sim O(1)$ as well as circularly-polarised backgrounds. We find that the effect of photon polarisation increases slightly for a linearly polarised background as $\xi$ is made smaller; for a circularly polarised background, photon polarisation is only an $O(10\%)$ effect for $\xi\lesssim 1$: as $\xi$ is increased above this, the effect of photon polarisation disappears. In order to reach this conclusion we have derived photon-polarised rates for nonlinear Compton scattering and nonlinear Breit-Wheeler pair creation, in both a circularly-polarised and a linearly-polarised background. Such expressions can be useful in extending simulation codes based on the locally monochromatic approximation to include photon-polarised subprocesses. If the energy of the scattered electrons and positrons were measured in experiment, the double-differential probability could in principle show harmonic structure from the Compton step intersecting with harmonic structure from nonlinear Breit-Wheeler. However, this can only be achieved if the centre-of-mass energy for a single background photon colliding with the electron is significantly above the threshold for linear Breit-Wheeler. We illustrated this with an example of $500\,\trm{GeV}$ electrons and a $1.55\,\trm{eV}$ laser frequency. In general we conclude that harmonic structure will be washed out in later generations in any $n$-step process such as a QED cascade. Finally, one way to exploit the dependence on photon polarisation to enhance the two-step trident process is to collide initial electrons with two linearly-polarised laser pulses which have perpendicular polarisations. This can lead to an enhancement of around $30\%$ compared to two pulses with the same polarisation. \section{Acknowledgments} ST acknowledges support from the Shandong Provincial Natural Science Foundation, Grant No. ZR202102280476. BK acknowledges the hospitality of the DESY theory group and support from the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy -- EXC 2121 ``Quantum Universe'' -- 390833306. The work was carried out at the Marine Big Data Center of the Institute for Advanced Ocean Study of Ocean University of China.
\section{Introduction} \vskip .1in This paper focuses on a special 3D anisotropic magnetohydrodynamic (MHD) system. The velocity field obeys the 3D Navier-Stokes equation with one-directional dissipation while the magnetic field satisfies the induction equation with two-directional magnetic diffusion. More precisely, the MHD system concerned here reads \begin{equation}\label{mhd0} \begin{cases} \p_t u + u \cdot \nabla u - \partial_1^2 u + \nabla P = B \cdot \nabla B, \qquad x \in \mathbb{R}^3, t > 0,\\ \p_t B + u \cdot \nabla B - \partial_1^2 B-\partial_2^2 B = B \cdot \nabla u, \\ \nabla \cdot u = \nabla \cdot B = 0,\\ u(x,0) =u_0(x), \quad B(x,0) =B_0(x), \end{cases} \end{equation} where $u=(u_1,u_2,u_3)^T$ represents the velocity field, $P$ the total pressure and $B=(B_1,B_2,B_3)^T$ the magnetic field. The MHD equations reflect the basic physics laws governing the motion of electrically conducting fluids such as plasmas, liquid metals and electrolytes. They are a combination of the Navier-Stokes equations of fluid dynamics and Maxwell's equations of electromagnetism (see, e.g., \cite{Bis,Davi, Pri}). The MHD system (\ref{mhd0}) focused on here is relevant in the modeling of reconnecting plasmas (see, e.g., \cite{CLi1,CLi2}). The Navier-Stokes equations with anisotropic viscous dissipation arise in several physical circumstances. They can model the turbulent diffusion of rotating fluids in Ekman layers. More details on the physical backgrounds of anisotropic fluids can be found in \cite{CDGG,Ped}. \vskip .1in The goal here is to establish the global well-posedness and stability near a background magnetic field. More precisely, the background magnetic field refers to the special steady-state solution $(u^{(0)}, B^{(0)})$, where $$ u^{(0)} \equiv 0, \quad B^{(0)} \equiv (0,1, 0) := e_2. $$ Any perturbation $(u, b)$ near $(u^{(0)}, B^{(0)})$ with $$ b= B- B^{(0)} $$ is governed by \begin{equation}\label{mhd} \begin{cases} \p_t u + u \cdot \nabla u - \partial_1^2 u + \nabla P = b \cdot \nabla b + \partial_2 b, \quad x \in \mathbb{R}^3, t > 0,\\ \p_t b + u \cdot \nabla b - \Delta_h b = b \cdot \nabla u + \partial_2 u, \\ \nabla \cdot u = \nabla \cdot b = 0,\\ u(x,0) =u_0(x), \quad b(x,0) =b_0(x), \end{cases} \end{equation} where, for notational convenience, we have written $\Delta_h = \partial_1^2 + \partial_2^2 $ and we shall also write $\na_h =(\p_1, \p_2)$. Our motivation for this study comes from two sources. The first is to gain a better understanding on the well-posedness problem on the 3D Navier-Stokes equation with dissipation in only one direction. The second is to reveal and rigorously establish the stabilizing phenomenon exhibited by electrically conducting fluids. Extensive physical experiments and numerical simulations have been performed to understand the influence of the magnetic field on the bulk turbulence involving various electrically conducting fluids such as liquid metals (see, e.g., \cite{AMS,Alex,Alf,BSS,Bur, Davi0,Davi1,Davi,Gall,Gall2}). These experiments and simulations have observed a remarkable phenomenon: a background magnetic field can smooth and stabilize turbulent electrically conducting fluids. We intend to establish these observations as mathematically rigorous facts on the system (\ref{mhd}). \vskip .1in Mathematically, the problem we are attempting appears to be impossible.
The velocity satisfies the 3D incompressible forced Navier-Stokes equations with dissipation in only one direction $$ \p_t u + u \cdot \nabla u - \partial_1^2 u + \nabla P = b \cdot \nabla b + \partial_2 b, \quad x \in \mathbb{R}^3,\,\, t>0. $$ However, when the spatial domain is the whole space $\mathbb{R}^3$, the small data global well-posedness of the 3D Navier-Stokes equations with only one-directional dissipation, \beq\label{ns} \begin{cases} \p_t u + u\cdot\na u = -\na p + \p_1^2 u, \quad x\in \mathbb R^3, \,\,t>0, \\ \na\cdot u=0, \\ u(x,0) =u_0(x) \end{cases} \eeq remains an outstanding open problem. The difficulty is immediate. The dissipation in one direction is simply not sufficient to control the nonlinearity when the spatial domain is the whole space $\mathbb R^3$. \vskip .1in In a beautiful work \cite{ZHANG1}, Paicu and Zhang were able to deal with the case when the spatial domain is bounded in the $x_1$-direction with Dirichlet type boundary conditions. They successfully established the small data global well-posedness by observing a crucial Poincar\'{e} type inequality. This inequality allows one to bound $u$ in terms of $\p_1 u$ and thus leads to the control of the nonlinearity. However, such Poincar\'{e} type inequalities are not valid for the whole space case. \vskip .1in If we increase the dissipation to be in two directions, say $$ \begin{cases} \p_t u + u\cdot\na u = -\na p + (\p_1^2+ \p_2^2) u, \quad x\in \mathbb R^3, \,\,t>0, \\ \na\cdot u=0, \end{cases} $$ then any sufficiently small initial data in a suitable Sobolev or Besov space always leads to a global (not necessarily stable) solution. There are substantial developments on the 3D anisotropic Navier-Stokes equations with two-directional dissipation. Significant progress has been made on the global existence of small solutions and on the regularity criteria on general large solutions in various Sobolev and Besov settings (see, e.g., \cite{CDGG,If,ZHANG2,ZHANG1, Pai2}). \vskip .1in We return to the well-posedness and stability problem proposed here. Clearly, if the coupling with the magnetic field does not generate extra smoothing and stabilizing effect, then the problem focused on here would be impossible. Fortunately, we discover in this paper that the magnetic field does help stabilize the conducting fluids, as observed by physical experiments and numerical simulations. To unearth the stabilization effect, we take advantage of the coupling and interaction in the MHD system to convert (\ref{mhd}) into the following wave equations \begin{equation}\label{wav} \begin{cases} \p_t^2 u - (\p_1^2 + \Delta_h) \p_t u + \p_1^2 \Delta_h u- \p_2^2 u= (\p_t -\Delta_h) N_1 + \p_2 N_2,\\ \p_t^2 b - (\p_1^2 + \Delta_h) \p_t b + \p_1^2 \Delta_h b- \p_2^2 b= (\p_t -\p_1^2) N_2 + \p_2 N_1, \end{cases} \end{equation} where $N_1$ and $N_2$ are the nonlinear terms $$ N_1 = \mathbb P (-u\cdot\na u+ b\cdot \na b), \quad N_2 = -u\cdot\na b + b\cdot\na u $$ with $\mathbb P=I -\na\Delta^{-1} \na\cdot$ being the Leray projection. The system (\ref{wav}) is derived by taking the time derivative of (\ref{mhd}) and making several substitutions; a sketch at the linearized level is given below. In comparison with (\ref{mhd}), the wave structure in (\ref{wav}) exhibits much richer regularity properties. In particular, the linearized wave equation of $u$, $$ \p_t^2 u - (\p_1^2 + \Delta_h) \p_t u + \p_1^2 \Delta_h u- \p_2^2 u=0 $$ reveals the regularization of $u$ in the $x_2$-direction, although the original system (\ref{mhd}) involves only the dissipation in the $x_1$-direction.
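To indicate how the wave structure arises, here is a sketch of the derivation at the linearized level (the nonlinear terms are tracked in exactly the same way and give rise to $N_1$ and $N_2$). Dropping the nonlinearity and applying the Leray projection $\mathbb P$ to eliminate the pressure (note that $\p_2 b$ is divergence-free, so $\mathbb P\, \p_2 b = \p_2 b$), the system (\ref{mhd}) reduces to
$$
\p_t u = \p_1^2 u + \p_2 b, \qquad \p_t b = \Delta_h b + \p_2 u.
$$
Differentiating the first equation in time and substituting the second,
$$
\p_t^2 u = \p_1^2\, \p_t u + \p_2\, \p_t b = \p_1^2\, \p_t u + \Delta_h\, \p_2 b + \p_2^2 u,
$$
and then replacing $\p_2 b = \p_t u - \p_1^2 u$ from the first equation yields
$$
\p_t^2 u - (\p_1^2 + \Delta_h)\, \p_t u + \p_1^2 \Delta_h u - \p_2^2 u = 0,
$$
namely the linear part of the first equation in (\ref{wav}); the wave equation for $b$ follows in the same manner.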
\vskip .1in Unfortunately, the extra regularization in the $x_2$-direction is not sufficient to control the Navier-Stokes nonlinearity. The regularity from the wave structure is in general one derivative order lower than that of the standard dissipation. More precisely, when we seek solutions in the Sobolev space $H^4(\mathbb R^3)$, the dissipation in the $x_1$-direction yields the time integrability term \beq\label{jj} \int_0^t \|\p_1 u(\tau)\|^2_{H^4}\,d\tau, \eeq but the extra regularity in the $x_2$-direction due to the background magnetic field and the coupling can only allow us to bound \beq\label{jj1} \int_0^t \|\p_2 u(\tau)\|^2_{H^3}\,d\tau. \eeq But (\ref{jj}) and (\ref{jj1}) may not be sufficient to control some of the terms resulting from the nonlinearity in the estimate of $\|u\|_{H^4}$ such as $$ \int_{\mathbb R^3} \p_3 u_3 \,\p_3^4 u_1\, \p_3^4 u_1\,dx. $$ Naturally we use the divergence-free condition $\p_3 u_3 = -\p_1 u_1 -\p_2 u_2$ to eliminate the bad derivative $\p_3$, but this process generates a new difficult term $$ \int_{\mathbb R^3} \p_2 u_2 \,\p_3^4 u_1\, \p_3^4 u_1\,dx, $$ which cannot be bounded in terms of (\ref{jj1}). If we integrate by parts, we generate fifth-order derivatives of the velocity, which cannot be controlled. Indeed, integration by parts gives $$ \int_{\mathbb R^3} \p_2 u_2 \,\p_3^4 u_1\, \p_3^4 u_1\,dx = -2 \int_{\mathbb R^3} u_2\, \p_3^4 u_1\, \p_2 \p_3^4 u_1\,dx, $$ and the five-derivative term $\p_2 \p_3^4 u_1$ is beyond the reach of both (\ref{jj}) and (\ref{jj1}). We call this phenomenon the derivative loss problem. The above analysis reveals that the extra regularity gained through the background magnetic field and the nonlinear coupling is not sufficient to deal with the derivative loss problem. \vskip .1in This paper creates several new techniques to combat the derivative loss problem. As a consequence, we are able to offer suitable upper bounds on the Navier-Stokes nonlinearity and solve the desired global well-posedness and stability problem. We state our main result and then describe these techniques. \begin{theorem} \label{main} Consider \eqref{mhd} with the initial datum $(u_0, b_0) \in H^4(\mathbb{R}^3)$ and $\nabla \cdot u_0 = \nabla \cdot b_0 = 0$. Then there exists a constant $\epsilon>0$ such that, if \begin{equation*}\label{initialdata} \|u_0\|_{H^4} + \|b_0\|_{H^4} \leq \epsilon, \end{equation*} system \eqref{mhd} has a unique global classical solution $(u, b)$ satisfying, for any $t>0$, $$ \|u(t)\|_{H^4}^2 + \|b(t)\|_{H^4}^2 + \int_0^t \left(\|\partial_1 u\|_{H^4}^2 + \|\na_h b\|^2_{H^4} + \|\p_2 u\|^2_{H^3}\right) \,d\tau \leq \;\epsilon. $$ \end{theorem} We make two remarks. Theorem \ref{main} does not solve the small data global well-posedness problem on the 3D Navier-Stokes equations in (\ref{ns}). The MHD system in (\ref{mhd}) with $b\equiv 0$ does not reduce to (\ref{ns}). It is hoped that the new ideas and techniques discovered in this paper will shed light on the open problem on (\ref{ns}). \vskip .1in A previous work of Wu and Zhu \cite{WuZhu} successfully resolved the small data global well-posedness and stability problem on the 3D MHD system with horizontal dissipation and vertical magnetic diffusion near an equilibrium, \begin{equation}\label{mhd1} \begin{cases} \p_t u + u \cdot \nabla u - \Delta_h u + \nabla P = b \cdot \nabla b + \partial_1 b, \quad x \in \mathbb{R}^3, t > 0,\\ \p_t b + u \cdot \nabla b - \p_3^2 b = b \cdot \nabla u + \partial_1 u, \\ \nabla \cdot u = \nabla \cdot b = 0. \end{cases} \end{equation} Although the small data global well-posedness problem on (\ref{mhd1}) is highly nontrivial, it is clear that the current problem on (\ref{mhd}) is different and can be even more difficult.
One simple reason is that the equation of $u$ in (\ref{mhd1}) contains dissipation in two directions and all the nonlinear terms in the estimate of the Sobolev norms involve $u$ as a component. However, (\ref{mhd}) has only one-directional velocity dissipation and the velocity nonlinear term involves only $u$ (no other more regularized components). The methods in \cite{WuZhu} are not sufficient for the problem of this paper. \vskip .1in Since the pioneering work of F. Lin and P. Zhang \cite{LinZh} and F. Lin, L. Xu and P. Zhang \cite{LinZhang1}, many efforts have since been devoted to the small data global well-posedness and stability problems on partially dissipated MHD systems. MHD systems with various levels of dissipation and magnetic diffusion near several steady-states have been thoroughly investigated and a rich array of results have been established (see, e.g., \cite{AZ,BSS, CaiLei, HeXuYu,HuWang, JiangARMA, JiangJMPA, LinZhang1,LinZh, PanZhouZhu,Ren, Ren2, Tan, WeiZ, WuWu, WuWuXu, ZT1, ZT2,ZhouZhu}). In addition, global well-posedness on the MHD equations with general large initial data has also been actively pursued and important progress has been made (see, e.g., {\cite{CaoWuYuan,CMZ,DongLiWu1, DongLiWu0, FNZ, Fefferman1, Fefferman2, HuangLi, JNW, JiuZhao2, LinDu, WuMHD2018, Yam1, YuanZhao, Ye})}. Needless to say, the references listed here represent only a small portion of the very large literature on the global well-posedness and related problems concerning the MHD equations. \vskip .2in We briefly outline the proof of Theorem \ref{main}. We have chosen the Sobolev space $H^4$ as the functional setting for our solutions since $H^3$ does not appear to be regular enough to accommodate our approach. Since the local well-posedness of (\ref{mhd}) in $H^4$ follows from a standard procedure (see, e.g., \cite{MaBe}), the proof focuses on the global $H^4$-bound. The framework of the proof for the global $H^4$-bound is the bootstrapping argument (see, e.g., \cite[p.20]{Tao}). The process starts with the setup of a suitable energy functional. Besides the standard $H^4$-energy, we also need to include the extra regularization term in (\ref{jj1}) resulting from the background magnetic field and the coupling. More precisely, we set \beq\label{ed} \mathscr{E}(t) = \mathscr{E}_1(t) + \mathscr{E}_2(t), \eeq where \begin{align*} & \mathscr{E}_1 (t)=\sup_{0 \leq \tau \leq t} \big( \|u\|_{H^4}^2 + \|b\|_{H^4}^2 \big)+\int_0^t \Big( \|\p_1u\|_{H^4}^2 + \|\na_h b\|_{H^4}^2\Big) d\tau,\\ & \mathscr{E}_2(t) =\int_0^t \|\p_2 u\|_{H^3}^2 d\tau. \end{align*} Our main efforts are devoted to showing that, for some constant $C_0>0$ and for all $t>0$, \begin{equation}\label{bb} \mathscr{E} (t) \leq C_0\, \mathscr{E} (0)+ C_0\, \mathscr{E}^{\frac32} (0) + C_0\,\mathscr{E}^{\frac{3}{2}} (t) + C_0\, \mathscr{E}^2 (t). \end{equation} Then an application of the bootstrapping argument to (\ref{bb}) would yield the desired result, namely, for a sufficiently small $\epsilon>0$, $$ \|u_0\|_{H^4} + \|b_0\|_{H^4} \le \epsilon \quad \mbox{or}\quad \mathscr{E}(0) \le \epsilon^2 $$ would imply, for a constant $C>0$ and for any $t>0$, $$ \mathscr{E}(t) \le C\,\epsilon^2. $$ It then yields the global uniform $H^4$-bound on $(u, b)$ and the stability.
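For completeness, here is a minimal sketch of how the bootstrapping argument closes (the constants are inessential and chosen for convenience). Suppose that on some time interval one has the ansatz $\mathscr{E}(t) \le \delta$, where $\delta>0$ is small enough that $C_0\big(\delta^{\frac12}+\delta\big) \le \frac12$. Then the last two terms in (\ref{bb}) are bounded by $\frac12\,\mathscr{E}(t)$ and can be absorbed into the left-hand side, giving
$$
\mathscr{E}(t) \le 2C_0\left(\mathscr{E}(0)+\mathscr{E}^{\frac32}(0)\right) \le 4C_0\,\epsilon^2 \qquad \mbox{for } \epsilon \le 1.
$$
Choosing $\epsilon$ so small that $4C_0\,\epsilon^2 \le \frac{\delta}{2}$ strictly improves the ansatz, and the continuity of $\mathscr{E}$ in time then propagates the bound $\mathscr{E}(t) \le 4C_0\,\epsilon^2$ to all $t>0$.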
\vskip .1in The proof of (\ref{bb}) is highly nontrivial. It is achieved by establishing the following two inequalities for positive constants $C_1$ through $C_8$, \begin{align} &{\mathscr{E}_{2}(t)}\le C_1\, \mathscr{E}(0) + C_2\,\mathscr{E}_1(t) + C_3\,\mathscr{E}_1^{\frac32}(t) + C_4 \,\mathscr{E}_2^{\frac32}(t), \label{lb}\\ &{\mathscr{E}_{1} (t)}\le C_5\, \mathscr{E}(0) + C_6\, \mathscr{E}^{\frac32} (0) + C_7\, \mathscr{E}^{\frac32}(t) + C_8\, \mathscr{E}^2 (t), \label{hb} \end{align} which clearly lead to (\ref{bb}) by adding (\ref{hb}) to a suitable multiple of (\ref{lb}). Due to the equivalence of the norms $$ \|f\|_{H^k} {\; \sim \; } \|f\|_{L^2} + \sum_{i=1}^3 \|\p_i^k f\|_{L^2}, $$ the verification of (\ref{lb}) is naturally divided into the estimates of $$ \int_0^t \|\p_2 u\|_{L^2}^2 \,d\tau \quad \mbox{and} \quad \sum_{i=1}^3\int_0^t \|\p_i^3 \p_2 u\|_{L^2}^2 \,d\tau. $$ One key strategy is to take advantage of the coupling and interaction of the MHD system in (\ref{mhd}) to shift the time integrability. More precisely, we replace $\p_2 u$ by the evolution of $b$, $$ \p_2 u = \p_t b +u\cdot\nabla b-\Delta_h b-b\cdot\nabla u $$ and convert the time integral of $\|\p_2 u\|_{L^2}^2$ into time integrals of more regular terms, $$ \int_0^t \|\p_2 u\|_{L^2}^2 \,d\tau = \int_0^t \int_{\mathbb R^3} (\p_t b +u\cdot\nabla b-\Delta_h b-b\cdot\nabla u)\cdot \p_2u \;dx\,d\tau. $$ We then further shift the time derivative from $b$ to $\p_2 u$ and involve the equation of $u$. This process generates many more terms, but it replaces the less time-integrable terms by more integrable nonlinear ones. More details can be found in Section \ref{e2b}. \vskip .1in It is much more difficult to verify (\ref{hb}). Undoubtedly the most difficult term is generated by the Navier-Stokes nonlinear term. We use the vorticity formulation to take advantage of certain symmetries. Since $\|\om\|_{L^2} = \|\na u\|_{L^2}$ for the vorticity $\om =\na\times u$, it suffices to control $\|\om\|_{\dot H^3}$. One of the most troublesome terms is given by $$ \int \p_3 u_3\, \p_3^3 \om \cdot \p_3^3 \om \,dx . $$ Naturally we eliminate the bad derivative $\p_3$ via the divergence-free condition $\p_3 u_3 = -\p_1 u_1 -\p_2 u_2$, but this leads to another uncontrolled term \beq\label{aa} \int \p_2 u_2\, \p_3^3 \om \cdot \p_3^3 \om \,dx. \eeq As aforementioned, $\mathscr{E}_{2}$ contains only fourth-order derivatives and integrating by parts in (\ref{aa}) would generate uncontrollable fifth-order derivatives. Obtaining a suitable bound on (\ref{aa}) appears at first sight to be impossible; this is again the derivative loss problem. This paper introduces several new techniques to unearth the hidden structure in the nonlinearity. We exploit the coupling in the system to recast the nonlinearity and induce cancellations through the construction of artificial symmetries. More precisely, we are able to establish the desired upper bounds stated in the following proposition. For notational convenience, we use $A \lesssim B$ to mean $A\le C\, B$ for a pure constant $C>0$. \begin{proposition}\label{lem-important} Let $(u, b)\in H^4$ be a solution of (\ref{mhd}). Let $\omega = \nabla \times u$ and $H = \nabla \times b$ be the corresponding vorticity and current density, respectively. Let $\mathscr{E}(t)$ be defined as in \eqref{ed}.
Let $\mathcal{W}^{ijk}(t)$ be the interaction-type terms defined as follows, $$ \mathcal{W}^{ijk}(t) = \int_{\mathbb{R}^3} \p_3^{3}\omega_i \p_2u_j \p_3^{3}\omega_k \; dx \quad \text{for} \; \; i,j,k \in \{1,2,3\}.$$ Then the time integral of $\mathcal{W}^{ijk}$ admits the following bound, $$ \int_0^t \mathcal{W}^{ijk} (\tau) \; d\tau \lesssim \mathscr{E}^\frac{3}{2}(0) +\mathscr{E}^{\frac{3}{2}}(t) + \mathscr{E}^2 (t). $$ \end{proposition} The proof of this proposition is not trivial. For the 3D Navier-Stokes equations with one-directional dissipation alone, this term cannot be suitably bounded, which is why the small-data global well-posedness remains an open problem in that setting. The advantage of working with the MHD system in (\ref{mhd}) is the coupling and interaction. We take advantage of this coupling to replace $\p_2 u_j$ via the equation of $b$, \begin{equation*}\label{W} \begin{split} \mathcal{W}^{ijk}(t)=\; & \int_{\mathbb{R}^3} \p_3^{3}\omega_i \Big[\p_t b_j +u\cdot\nabla b_j -\Delta_h b_j -b\cdot\nabla u_j\Big] \p_3^{3}\omega_k \; dx \\ = \; & \frac{d}{dt} \int_{\mathbb{R}^3} \p_3^{3}\omega_i b_j \p_3^{3}\omega_k dx- \int_{\mathbb{R}^3} b_j \p_t(\p_3^{3}\omega_i \p_3^{3}\omega_k)dx \\ &+ \int_{\mathbb{R}^3} \p_3^{3}\omega_i u\cdot\nabla b_j \p_3^{3}\omega_kdx - \int_{\mathbb{R}^3} \p_3^{3}\omega_i b\cdot\nabla u_j \p_3^{3}\omega_kdx \\ & -\int_{\mathbb{R}^3} \p_3^{3}\omega_i \Delta_h b_j \p_3^{3}\omega_kdx. \end{split} \end{equation*} Immediately we encounter the new difficult term $$ -\int_{\mathbb{R}^3} b_j \p_t(\p_3^{3}\omega_i \p_3^{3}\omega_k)dx, $$ which is further converted into integrals of many more terms after invoking the evolution of the vorticity, \begin{equation}\nonumber \begin{split} &-\int_{\mathbb{R}^3} b_j \p_t(\p_3^{3}\omega_i \p_3^{3}\omega_k)dx\\ &=\int_{\mathbb{R}^3} b_j \p_3^{3}\omega_i \p_3^{3}\big( u\cdot\nabla\omega_k - \omega\cdot\nabla u_k -\p_1^2 \omega_k -b\cdot\nabla H_k +H \cdot\nabla b_k -\p_2 H_k \big) \\ \;& \qquad + b_j \,\p_3^{3}\omega_k \p_3^{3}\big( u\cdot\nabla\omega_i - \omega\cdot\nabla u_i -\p_1^2 \omega_i -b\cdot\nabla H_i +H \cdot\nabla b_i -\p_2 H_i \big)\,dx. \end{split} \end{equation} Some of the terms above can be paired together to form symmetries to deal with the derivative loss problem. This process also allows us to convert some of the cubic nonlinearity into quartic nonlinearity, which helps improve the time integrability. However, there are terms that cannot be paired into a symmetric structure, for example, $$ \int_{\mathbb{R}^3} b_j \big[ -\p_3^{3}\omega_i b\cdot\nabla \p_3^{3}H_k - \p_3^{3}\omega_k b\cdot\nabla\p_3^{3} H_i\big]\; dx. $$ Our idea in dealing with such terms is to construct artificial symmetries by adding and subtracting suitable terms. This strategy eventually allows us to overcome the derivative loss problem. The technical details are very complicated and are left to the proof of Proposition \ref{lem-important} in Section \ref{proof-lem}. \vskip .1in The rest of this paper is divided into four sections. Section \ref{e2b} proves (\ref{lb}), one of the two key energy inequalities, while Section \ref{proof-high} establishes the second key energy inequality, namely (\ref{hb}). Section \ref{proof-lem} provides the proof of Proposition \ref{lem-important} and deals with some of the most difficult terms in the Navier-Stokes nonlinearity in Lemma \ref{lem-assistant}. The last section, Section \ref{proof-thm}, finishes the proof of Theorem \ref{main}.
\vskip .3in \section{Proof of (\ref{lb})} \label{e2b} This section proves (\ref{lb}), namely, for four positive constants $C_1$ through $C_4$, \beq\label{bb1} \mathscr{E}_{2}\le C_1\, \mathscr{E}(0) + C_2\,\mathscr{E}_1(t) + C_3\,\mathscr{E}_1^{\frac32}(t) + C_4 \,\mathscr{E}_2^{\frac32}(t) \quad\mbox{for all $t>0$.} \eeq The following lemma provides a powerful tool to control the triple products in terms of anisotropic upper bounds. \begin{lemma}\label{lem-anisotropic-est} The following inequalities hold when the right-hand sides are all bounded, \begin{equation}\nonumber \begin{split} \int_{\mathbb{R}^3} |f g h| \;dx \lesssim \;& \|f\|_{L^2}^{\frac{1}{2}} \|\partial_1 f\|_{L^2}^{\frac{1}{2}}\|g\|_{L^2}^{\frac{1}{2}} \|\partial_2 g\|_{L^2}^{\frac{1}{2}}\|h\|_{L^2}^{\frac{1}{2}} \|\partial_3 h\|_{L^2}^{\frac{1}{2}},\\ {\int_{\mathbb{R}^3} |f g h| \;dx \lesssim }\;& {\|f\|_{L^2}^{\frac{1}{4}} \|\partial_i f\|_{L^2}^{\frac{1}{4}} \|\partial_j f\|_{L^2}^{\frac{1}{4}}\|\partial_i\p_j f\|_{L^2}^{\frac{1}{4}}\|g\|_{L^2}^{\frac{1}{2}} \|\partial_k g\|_{L^2}^{\frac{1}{2}}\|h\|_{L^2}}\\ \lesssim \;& {\|f\|_{H^1}^{\frac{1}{2}} \|\partial_i f\|_{H^1}^{\frac{1}{2}}\|g\|_{L^2}^{\frac{1}{2}} \|\partial_k g\|_{L^2}^{\frac{1}{2}}\|h\|_{L^2},}\\ \left(\int_{\mathbb{R}^3} |f g|^2 \;dx\right)^\frac{1}{2} \lesssim \;&\|f\|_{L^2}^{\frac{1}{4}} \|\partial_i f\|_{L^2}^{\frac{1}{4}} \|\partial_j f\|_{L^2}^{\frac{1}{4}} \|\partial_i \partial_j f\|_{L^2}^{\frac{1}{4}} \|g\|_{L^2}^{\frac{1}{2}} \|\partial_k g\|_{L^2}^{\frac{1}{2}}\\ \lesssim \;&\|f\|_{H^1}^{\frac{1}{2}} \|\partial_i f\|_{H^1}^{\frac{1}{2}} \|g\|_{L^2}^{\frac{1}{2}} \|\partial_k g\|_{L^2}^{\frac{1}{2}}, \\ \int_{\mathbb{R}^3} |f g h v| \;dx \lesssim \;& \|f\|_{L^2}^{\frac{1}{4}} \|\partial_i f\|_{L^2}^{\frac{1}{4}} \|\partial_j f\|_{L^2}^{\frac{1}{4}} \|\partial_i \partial_j f\|_{L^2}^{\frac{1}{4}}\\ & \cdot\|g\|_{L^2}^{\frac{1}{4}} \|\partial_i g\|_{L^2}^{\frac{1}{4}} \|\partial_j g\|_{L^2}^{\frac{1}{4}} \|\partial_i \partial_j g\|_{L^2}^{\frac{1}{4}} \\ & \cdot \|h\|_{L^2}^{\frac{1}{2}} \|\partial_k h\|_{L^2}^{\frac{1}{2}} \|v\|_{L^2}^{\frac{1}{2}} \|\partial_k v\|_{L^2}^{\frac{1}{2}} \\ \lesssim \;& \|f\|_{H^1}^{\frac{1}{2}} \|\partial_i f\|_{H^1}^{\frac{1}{2}} \|g\|_{H^1}^{\frac{1}{2}} \|\partial_i g\|_{H^1}^{\frac{1}{2}} \|h\|_{L^2}^{\frac{1}{2}} \|\partial_k h\|_{L^2}^{\frac{1}{2}} \|v\|_{L^2}^{\frac{1}{2}} \|\partial_k v\|_{L^2}^{\frac{1}{2}}, \\ \end{split} \end{equation} where $i$, $j$ and $k$ are distinct elements of $\{1,2,3\}$. \end{lemma} The proof of Lemma \ref{lem-anisotropic-est} relies on the following one-dimensional interpolation inequality, for $f \in H^1(\mathbb R)$, \begin{equation}\nonumber \|f\|_{L^\infty (\mathbb{R})} \leq \sqrt{2} \|f\|_{L^2(\mathbb{R})}^\frac{1}{2} \|f'\|_{L^2(\mathbb{R})}^\frac{1}{2}, \end{equation} which follows from writing $f^2(x) = 2\int_{-\infty}^{x} f(y)\, f'(y)\,dy$ and applying the Cauchy--Schwarz inequality. A detailed proof of this lemma can be found in \cite{WuZhu}. We remark that similar anisotropic inequalities are also available for two-dimensional functions (see \cite{CaoWu}). \vskip .1in We are now ready to prove (\ref{bb1}). \begin{proof}[Proof of (\ref{bb1})] Due to the norm equivalence, namely for any integer $k>0$, \beq\label{ne} \|f\|_{H^k}^2 { \;\sim \;} \|f\|_{L^2}^2 + \sum_{i=1}^3 \|\p_i^k f\|_{L^2}^2, \eeq it suffices to bound $$ \int_0^t \|\p_2u\|_{L^2}^2\,d\tau \quad \mbox{and}\quad \sum_{i=1}^3 \int_0^t \| \p_i^{3} \p_2 u \|_{L^2}^2\,d\tau.
$$ Recalling the equations in (\ref{mhd}), \begin{equation}\label{er} \begin{split} &\p_2 u = \p_t b+u\cdot\nabla b-\Delta_h b-b\cdot\nabla u, \\ &\p_t u = {-u\cdot\nabla u + \p_1^2 u -\na P +b\cdot\nabla b +\p_2b ,} \end{split} \end{equation} we have \begin{equation}\label{4eq2} \begin{split} \|\p_2u\|_{L^2}^2 =& \int_{\mathbb{R}^3} (\p_t b+u\cdot\nabla b-\Delta_h b-b\cdot\nabla u)\cdot \p_2u \;dx \\ =& \; \frac{d}{dt} \int_{\mathbb{R}^3} b \cdot \p_2u \;dx +\int_{\mathbb{R}^3}\p_2b \cdot \p_t u dx\\ &+\int_{\mathbb{R}^3}(u\cdot\nabla b-\Delta_h b-b\cdot\nabla u)\cdot \p_2u \;dx\\ =& \; \frac{d}{dt} \int_{\mathbb{R}^3} b \cdot \p_2u \;dx \\ & + \int_{\mathbb{R}^3} \p_2b \cdot (\p_2b+b\cdot\nabla b-\nabla P + \p_1^2u-u\cdot\nabla u)\;dx\\ &+\int_{\mathbb{R}^3}(u\cdot\nabla b-\Delta_h b-b\cdot\nabla u)\cdot \p_2u \;dx. \end{split} \end{equation} Due to $\na\cdot b=0$, $$ \int_{\mathbb{R}^3} \p_2b \cdot \na P\,dx =0. $$ We bound the nonlinear terms. By Sobolev's inequality and Lemma \ref{lem-anisotropic-est}, \begin{equation}\nonumber \begin{split} &\int_{\mathbb{R}^3} \p_2b \cdot (b\cdot\nabla b)\;dx =\int_{\mathbb{R}^3}\big( \p_2b \cdot (b_h \cdot \na_hb) + \p_2b \cdot (b_3\p_3b) \big)\;dx \\ & \lesssim \|\na_hb\|_{L^2}^2\|b\|_{H^2} + \|\p_2b\|_{L^2}^\frac{1}{2}\|{\bf \p_3}\p_2 b\|_{L^2}^\frac{1}{2} \|b_3\|_{L^2}^\frac{1}{2}\|{\bf \p_1} b_3\|_{L^2}^\frac{1}{2} \|\p_3b\|_{L^2}^\frac{1}{2}\|{\bf \p_2} \p_3b\|_{L^2}^\frac{1}{2} \\ & \lesssim \|b\|_{H^2} \|\na_h b\|_{H^1}^2, \end{split} \end{equation} \begin{equation}\nonumber \begin{split} &\int_{\mathbb{R}^3} \p_2b \cdot (u\cdot\nabla u) \;dx =\int_{\mathbb{R}^3}{\big(} \p_2b \cdot (u_h \cdot \na_hu)+\p_2b \cdot (u_3\p_3u) {\big)}\;dx \\ & \lesssim \|\p_2b\|_{L^2}\|\na_hu\|_{L^2}\|u\|_{H^2} + \|\p_2 b\|_{L^2}^\frac{1}{2} \| {\bf \p_3} \p_2 b\|_{L^2}^\frac{1}{2} \|u_3\|_{L^2}^\frac{1}{2}\|{\bf \p_2} u_3\|_{L^2}^\frac{1}{2} \|\p_3 u\|_{L^2}^\frac{1}{2} \|{\bf \p_1}\p_3 u\|_{L^2}^\frac{1}{2} \\ & \lesssim \|\p_2b\|_{L^2}\|\na_hu\|_{L^2}\|u\|_{H^2} + \|\p_2 b\|_{H^1} \|\p_2 u\|_{L^2}^\frac{1}{2} \|\p_1 u\|_{H^1}^\frac{1}{2}\|u\|_{H^1}, \end{split} \end{equation} \begin{equation}\nonumber \begin{split} & \int_{\mathbb{R}^3} (u\cdot\nabla b) \cdot \p_2u\;dx \\ = & \int_{\mathbb{R}^3} {\big(} (u_h\cdot \na_h b) \cdot \p_2u+ (u_3\p_3b) \cdot \p_2u { \big)} \;dx \\ \lesssim \; & \|\na_hb\|_{L^2}\|\p_2u\|_{L^2}\|u\|_{H^2} + \|u_3\|_{L^2}^\frac{1}{2} \|{\bf \p_1} u_3\|_{L^2}^\frac{1}{2} \|\p_3 b\|_{L^2}^\frac{1}{2} \|{\bf \p_2} \p_3 b\|_{L^2}^\frac{1}{2} \|\p_2 u\|_{L^2}^\frac{1}{2} \|\p_2 {\bf \p_3} u\|_{L^2}^\frac{1}{2} \end{split} \end{equation} and \begin{equation}\nonumber \begin{split} & \int_{\mathbb{R}^3} (b\cdot\nabla u) \cdot \p_2u \;dx \\ = & \int_{\mathbb{R}^3} (b_h\cdot \na_h u) \cdot \p_2u+ (b_3\p_3u) \cdot \p_2u dx \\ \lesssim \; & \|b\|_{H^2} \|\na_h u\|_{L^2}^2 + \|b_3\|_{L^2}^\frac{1}{2} \|{\bf \p_2} b_3 \|_{L^2}^\frac{1}{2} \|\p_3 u\|_{L^2}^\frac{1}{2}\|{\bf \p_1} \p_3 u\|_{L^2}^\frac{1}{2} \|\p_2 u\|_{L^2}^\frac{1}{2} \|\p_2 {\bf \p_3} u\|_{L^2}^\frac{1}{2}. 
\end{split} \end{equation} Inserting these bounds in (\ref{4eq2}), integrating in time and using the simple bound $$ \int_0^t {\frac{d}{d\tau}} \int_{\mathbb{R}^3} b \cdot \p_2u \;dx \; {d\tau} \le \|b(t)\|_{L^2} \|{\p_2 u(t)}\|_{L^2} + \|b_0\|_{L^2} \|\p_2 u_0 \|_{L^2} \le \mathscr{E}_{1}(t) + \mathscr{E}_{1}(0), $$ we obtain \begin{equation}\label{4eq4} \begin{split} \int_0^t \|\p_2 u\|_{L^2}^2 \; d\tau \lesssim \; & \mathscr{E}_{1}(0) +\mathscr{E}_{1}(t) + \mathscr{E}_{1}^\frac{3}{2}(t) + \mathscr{E}_{1}(t)\mathscr{E}_{2}^\frac{1}{2}(t)\\ & + \mathscr{E}_{1}^\frac{1}{2}(t)\mathscr{E}_{2}(t)+\mathscr{E}_{1}^\frac{5}{4}(t)\mathscr{E}_{2}^{\frac{1}{4}}(t)\\ \lesssim \; & \mathscr{E}_{1}(0) +\mathscr{E}_{1}(t)+ \mathscr{E}_{1}^\frac{3}{2}(t) +\mathscr{E}_{2}^\frac{3}{2}(t). \end{split} \end{equation} Here we have used several H\"{o}lder's inequalities such as \begin{align*} & \sup_{0 \leq \tau \leq t} \|b(\tau)\|_{H^2} \int_0^t \|\na_h b\|_{H^1}^2 \; d\tau \le \mathscr{E}_{1}^\frac{3}{2}(t), \\ & \sup_{0 \leq \tau \leq t} \|u(\tau)\|_{H^2} \int_0^t \|\p_2 b\|_{L^2} \|\p_2 u\|_{L^2} \; d\tau \le \mathscr{E}_{1}(t)\mathscr{E}_{2}^\frac{1}{2}(t) \le \mathscr{E}_{1}^\frac{3}{2}(t) +\mathscr{E}_{2}^\frac{3}{2}(t). \end{align*} We now turn to the bound for the highest-order derivatives. By \eqref{er}, \begin{equation}\label{5eq6} \begin{split} \sum_{i=1}^3 \| \p_i^{3} \p_2 u \|_{L^2}^2 = & \sum_{i=1}^3 \int_{\mathbb{R}^3} \p_i^{3} \Big(\p_t b +u\cdot\nabla b -\Delta_hb -b\cdot\nabla u \Big) \cdot \p_i^{3}\p_2u dx\\ =& \sum_{i=1}^3 \frac{d}{dt} \int_{\mathbb{R}^3} \p_i^{3}b \cdot \p_i^{3}\p_2 u dx + \sum_{i=1}^3 \int_{\mathbb{R}^3} \p_i^{3}\p_2 b \cdot \p_i^{3}\p_tu\, dx \\ & + \sum_{i=1}^3 \int_{\mathbb{R}^3} \p_i^{3} \Big( u\cdot\nabla b -\Delta_hb -b\cdot\nabla u \Big)\cdot \p_i^{3}\p_2u dx. \end{split} \end{equation} The estimates for the terms with $i = 1, 2$ (those containing $\p_1^{3}$ or $\p_2^{3}$ derivatives) are simple. We focus on the terms with $i=3$ (those with the bad derivative $\p_3$), namely, \begin{equation}\label{nn} \begin{split} & \frac{d}{dt} \int_{\mathbb{R}^3} \p_3^{3}b \cdot \p_3^{3}\p_2 u dx + \int_{\mathbb{R}^3} \p_3^{3}\p_2 b \cdot \p_3^{3}\p_t u\, dx \\ & + \int_{\mathbb{R}^3} \p_3^{3} \Big( u\cdot\nabla b -\Delta_hb -b\cdot\nabla u \Big)\cdot \p_3^{3}\p_2u dx. \end{split} \end{equation} The second part of (\ref{nn}) can be handled as follows. By Lemma \ref{lem-anisotropic-est}, \begin{equation}\nonumber \begin{split} &\int_{\mathbb{R}^3} \p_3^{3}\p_2 b \cdot \p_3^{3} \p_t u\, dx \\ = &\int_{\mathbb{R}^3} \p_3^{3}\p_2 b \cdot \p_3^{3}(\p_2b + b\cdot\nabla b -\nabla P +\p_1^2u-u\cdot\nabla u) dx \\ \lesssim & \; \|\p_2 b\|_{H^{3}}{\big(}\|\p_2 b\|_{H^{3}} + \|\p_1 u\|_{H^4}{\big)} \\ & + \|\p_2 b\|_{H^{3}}^\frac{1}{2} \|{\bf \p_3} \p_2 b\|_{H^{3}}^\frac{1}{2} {\big(}\|b\|_{H^{3}}^\frac{1}{2}\|{\bf \p_2} b\|_{H^{3}}^\frac{1}{2} \|b\|_{H^4}^\frac{1}{2}\|{\bf \p_1} b\|_{H^4}^\frac{1}{2} \\ & + \|u\|_{H^{3}}^\frac{1}{2}\|{\bf \p_2} u\|_{H^{3}}^\frac{1}{2} \|u\|_{H^4}^\frac{1}{2}\|{\bf \p_1} u\|_{H^4}^\frac{1}{2}{\big)}.
\end{split} \end{equation} The last part in (\ref{nn}) can be bounded by \begin{equation}\nonumber \begin{split} & \int_{\mathbb{R}^3} \p_3^{3} \Big( u\cdot\nabla b -\Delta_hb -b\cdot\nabla u \Big) \cdot \p_3^{3}\p_2u dx \\ \lesssim & \; \|\p_2 u\|_{H^{3}} \big( \|\na_h b\|_{H^4} + \|u\|_{H^{3}}^\frac{1}{4} \|{\bf \p_1} u\|_{H^{3}}^\frac{1}{4} \|{\bf \p_3} u\|_{H^{3}}^\frac{1}{4}\|{\bf \p_1 \p_3} u\|_{H^{3}}^\frac{1}{4} \|b\|_{H^4}^\frac{1}{2}\|{\bf \p_2} b\|_{H^4}^\frac{1}{2} \\ & + \|b\|_{H^{3}}^\frac{1}{4} \|{\bf \p_2} b\|_{H^{3}}^\frac{1}{4} \|{\bf \p_3} b\|_{H^{3}}^\frac{1}{4}\|{\bf \p_2 \p_3} b\|_{H^{3}}^\frac{1}{4} \|u\|_{H^4}^\frac{1}{2}\|{\bf \p_1} u\|_{H^4}^\frac{1}{2}\big). \end{split} \end{equation} The terms with $i=1,2$ in \eqref{5eq6} are simpler and can be bounded similarly by applying Lemma \ref{lem-anisotropic-est}. Inserting the bounds above in \eqref{5eq6} and integrating in time yields \begin{equation}\label{5eq7} {\sum_{i=1}^3\int_0^t \|\p_i^3\p_2 u\|_{L^2}^2 }d\tau \lesssim \mathscr{E}_{1}(0) +\mathscr{E}_{1}(t)+ \mathscr{E}_{1}^\frac{3}{2}(t) +\mathscr{E}_{2}^\frac{3}{2}(t). \end{equation} Combining (\ref{4eq4}) and (\ref{5eq7}) gives (\ref{bb1}). This finishes the proof for \eqref{lb}. \end{proof} \vskip .3in \section{Proof of (\ref{hb})} \label{proof-high} \vskip .1in This section proves the energy inequality in (\ref{hb}). \begin{proof}[Proof of (\ref{hb})] Due to the norm equivalence (\ref{ne}), it suffices to bound $$ \sup_{0\leq \tau \leq t} \big(\|u\|_{L^2}^2 +\|b\|_{L^2}^2) + \int_0^t \Big( \|\p_1u\|_{L^2}^2 + \|\na_hb\|_{L^2}^2 \Big)d\tau $$ and \begin{equation}\nonumber \sup_{0 \leq \tau \leq t} \sum_{i = 1}^3 \big( \|\p_i^{3}\omega\|_{L^2}^2 + \|\p_i^{3}H\|_{L^2}^2 \big)+\sum_{i = 1}^3 \int_0^t \Big( \|\p_i^{3}\p_1\omega\|_{L^2}^2 + \|\p_i^3\na_h H\|_{L^2}^2 \Big)d\tau, \end{equation} where $\omega = \nabla\times u$ and $ H = \nabla\times b$ are the vorticity and the current density, respectively. As aforementioned, $\|\om\|_{L^2}= \|\na u\|_{L^2}$ and $\|H\|_{L^2} = \|\na b\|_{L^2}$. \vskip .1in Taking the inner product of $(u, b)$ with the first two equations of \eqref{mhd}, integrating by parts and using $\na\cdot u=\na\cdot b=0$, and then integrating in time, we find \begin{equation}\label{4eq1} \|u\|_{L^2}^2 +\|b\|_{L^2}^2+2 \int_0^t \Big( \|\p_1u\|_{L^2}^2 + \|\na_hb\|_{L^2}^2 \Big)d\tau {=} \|u_0\|_{L^2}^2 + \|b_0\|_{L^2}^2. \end{equation} Applying the operator $\nabla\times$ to \eqref{mhd}, we obtain the system governing $(\omega,H)$, \begin{equation}\label{5eq1} \begin{cases} \p_t \omega + u \cdot \nabla \omega - \omega\cdot\nabla u- \partial_1^2 \omega = b \cdot \nabla H - H\cdot\nabla b + \partial_2 H,\\ \p_t H + \nabla\times(u\cdot\nabla b)-\Delta_h H = \nabla\times(b\cdot\nabla u)+\p_2\omega. 
\end{cases} \end{equation} Applying $\p_i^{3}$ with $i=1,2,3$ to \eqref{5eq1} and taking the inner product of $(\p_i^{3}\omega,\p_i^{3}H)$ with the resulting equations, we have, after integration by parts, \begin{equation}\label{5eq2} \begin{split} &\frac{1}{2}\frac{d}{dt} \sum_{i=1}^3 \Big[ \| \p_i^{3}\omega\|_{L^2}^2 + \| \p_i^{3}H\|_{L^2}^2\Big] + \sum_{i=1}^3 \Big[ \| \p_i^{3}\p_1 \omega\|_{L^2}^2 + \| \p_i^{3}\na_h H\|_{L^2}^2 \Big]\\ = & \sum_{i=1}^3 \int_{\mathbb{R}^3} \p_i^{3} \big[ - u \cdot \nabla \omega + \omega\cdot\nabla u + b \cdot \nabla H - H\cdot\nabla b + \partial_2 H \big] \cdot \p_i^{3} \omega dx \\ & + \sum_{i=1}^3 \int_{\mathbb{R}^3} \p_i^{3}\big[ -\nabla\times(u\cdot\nabla b)+ \nabla\times(b\cdot\nabla u)+\p_2\omega \big]\cdot \p_i^{3} H dx \\ =& \; I_1+I_2+I_3+I_4+I_5+I_6+I_7+I_8. \end{split} \end{equation} The first term $I_1$ can be split into three parts, $$\int_{\mathbb{R}^3} \p_1^{3} ( - u \cdot \nabla \omega)\cdot \p_1^{3} \omega dx, \; \int_{\mathbb{R}^3} \p_2^{3} ( - u \cdot \nabla \omega)\cdot \p_2^{3} \omega dx,\; \int_{\mathbb{R}^3} \p_3^{3} ( - u \cdot \nabla \omega)\cdot \p_3^{3} \omega dx.$$ The first and second parts above behave well and can be bounded easily, \begin{equation}\nonumber \begin{split} & \int_0^t \int_{\mathbb{R}^3} \p_1^{3} ( - u \cdot \nabla \omega)\cdot \p_1^{3} \omega dx \;d\tau + \int_0^t \int_{\mathbb{R}^3} \p_2^{3} ( - u \cdot \nabla \omega)\cdot \p_2^{3} \omega dx \;d\tau \\ & \lesssim \sup_{0 \leq \tau \leq t} \|u(\tau)\|_{H^4} \int_0^t \|\na_h \omega\|_{\dot H^{2}}^2 \; d\tau \lesssim \mathscr{E}^{\frac{3}{2}} (t). \end{split} \end{equation} We then turn to the hard term $\int_{\mathbb{R}^3} \p_3^{3} ( - u \cdot \nabla \omega)\cdot \p_3^{3} \omega dx $ in $I_1$. It is difficult to control since there is no dissipation in the $x_3$-direction. We further decompose it into three parts, \begin{equation*}\label{5eq3} \begin{split} &\int_{\mathbb{R}^3} \p_3^{3} ( - u \cdot \nabla \omega)\cdot \p_3^{3} \omega dx \\ = & \sum_{k = 1}^{3}\int_{\mathbb{R}^3} - \mathcal{C}_{3}^k \p_3^k u \cdot \nabla \p_3^{3-k} \omega\cdot \p_3^{3} \omega dx \\ = & -\Big\{ \sum_{k=2}^{3} \mathcal{C}_{3}^k \int_{\mathbb{R}^3}\p_3^k u_h \cdot \na_h\p_3^{3-k}\omega \cdot \p_3^{3}\omega dx + 3\int_{\mathbb{R}^3}\p_3 u_h \cdot \na_h \p_3^{2}\omega \cdot \p_3^{3}\omega dx \Big\} \\ & - \sum_{k=2}^{3} \mathcal{C}_{3}^k \int_{\mathbb{R}^3}\p_3^k u_3\p_3^{4-k}\omega \cdot \p_3^{3}\omega dx - 3\int_{\mathbb{R}^3}\p_3u_3\p_3^{3}\omega \cdot \p_3^{3}\omega dx \\ =& \; I_{11}+ I_{12}+ I_{13}. \end{split} \end{equation*} By Lemma \ref{lem-anisotropic-est}, \begin{equation}\nonumber \begin{split} {|I_{11}| } \lesssim & \sum_{k=2}^{3} \|\p_3^k u_h\|_{L^2}^\frac{1}{2} \|{\bf \p_2} \p_3^k u_h\|_{L^2}^\frac{1}{2} \|\na_h \p_3^{3-k} \omega\|_{L^2}^\frac{1}{2} \|{\bf \p_3} \na_h \p_3^{3-k} \omega\|_{L^2}^\frac{1}{2} \|\p_3^{3}\omega\|_{L^2}^\frac{1}{2}\|{\bf \p_1}\p_3^{3}\omega\|_{L^2}^\frac{1}{2} \\ & + \|\p_3 u_h\|_{L^2}^\frac{1}{4} \|{\bf \p_2} \p_3 u_h\|_{L^2}^\frac{1}{4} \|{\bf \p_3} \p_3u_h\|_{L^2}^\frac{1}{4}\|{\bf \p_2\p_3} \p_3u_h\|_{L^2}^\frac{1}{4}\|\na_h \p_3^{2} \omega\|_{L^2}\|\p_3^{3}\omega\|_{L^2}^\frac{1}{2}\|{\bf \p_1}\p_3^{3}\omega\|_{L^2}^\frac{1}{2}. \end{split} \end{equation} Integrating in time and applying H\"{o}lder's inequality yields \begin{equation}\nonumber \int_0^t {|I_{11}| } d\tau \lesssim \mathscr{E}^{\frac{3}{2}} (t).
\end{equation} Similarly, \begin{equation}\nonumber \begin{split} {I_{12} }= & \sum_{k=2}^{3} \mathcal{C}_{3}^k \int_{\mathbb{R}^3}\p_3^{k-1} \nabla_h \cdot u_h \p_3^{4-k}\omega \cdot \p_3^{3}\omega dx \\ \lesssim & \sum_{k=2}^{3} \|\p_3^{k-1} \nabla_h \cdot u_h\|_{L^2}^\frac{1}{2} \|{\bf \p_3} \p_3^{k-1} \nabla_h \cdot u_h\|_{L^2}^\frac{1}{2}\\ &\cdot \|\p_3^{4-k} \omega\|_{L^2}^\frac{1}{2}\|{\bf \p_2}\p_3^{4-k} \omega\|_{L^2}^\frac{1}{2}\|\p_3^{3}\omega\|_{L^2}^\frac{1}{2}\|{\bf \p_1}\p_3^{3}\omega\|_{L^2}^\frac{1}{2}. \end{split} \end{equation} Therefore, \begin{equation}\nonumber \int_0^t | { I_{12} } | d\tau \lesssim \mathscr{E}^{\frac{3}{2}} (t). \end{equation} By the divergence-free condition $\nabla\cdot u=0$, we can further split $ I_{13} $ into two parts, \begin{equation}\nonumber \begin{split} {I_{13} }= -\int_{\mathbb{R}^3}\p_1u_1\p_3^{3}\omega\cdot \p_3^{3}\omega dx- \int_{\mathbb{R}^3}\p_2u_2\p_3^{3}\omega\cdot \p_3^{3}\omega dx = \; {I_{131} + I_{132}}. \end{split} \end{equation} Using integration by parts and Lemma \ref{lem-anisotropic-est}, we have \begin{equation}\nonumber \begin{split} \int_0^t { I_{131}}\; d\tau = & \; 2 \int_0^t \int_{\mathbb{R}^3} u_1\p_3^{3}\omega \cdot \p_1\p_3^{3}\omega dx\, d\tau \\ \lesssim & \int_0^t \|u_1\|_{L^2}^\frac{1}{4} \|{\bf \p_2} u_1\|_{L^2}^\frac{1}{4} \|{\bf \p_3} u_1\|_{L^2}^\frac{1}{4} \|{\bf \p_2 \p_3} u_1\|_{L^2}^\frac{1}{4} \\ &\qquad \cdot \|\p_3^{3}\omega\|_{L^2}^\frac{1}{2} \|{\bf \p_1} \p_3^{3}\omega\|_{L^2}^\frac{1}{2} \|\p_1\p_3^{3}\omega\|_{L^2} \;d\tau \\ \lesssim & \; \mathscr{E}^\frac{3}{2}(t). \end{split} \end{equation} However, $I_{132}$ cannot be treated in the same way as $I_{131}$. Integration by parts would generate $\p_2 \p_3^3 \om$, which cannot be bounded by either $\mathscr{E}_1$ or $\mathscr{E}_2$. This is precisely the derivative loss problem described in the introduction, and overcoming it is the main challenge of our proof. By creating several new techniques, we are able to deal with terms of this type in Proposition \ref{lem-important}, which gives $$ \int_0^t I_{132}\; d\tau \lesssim \mathscr{E}^{\frac{3}{2}} (0) + \mathscr{E}^{\frac{3}{2}} (t) + \mathscr{E}^2 (t). $$ Collecting all the upper bounds for various parts of $I_1$, we obtain \begin{equation}\label{I1} \int_0^t |I_1(\tau) | d\tau \lesssim \mathscr{E}^{\frac{3}{2}} (0) + \mathscr{E}^{\frac{3}{2}} (t) + \mathscr{E}^2 (t). \end{equation} \vskip .1in We turn to the second term $I_2$. Naturally we divide it into the following three parts, $$ \int_{\mathbb{R}^3} \p_1^{3}(\omega \cdot\nabla u)\cdot \p_1^{3}\omega dx, \quad \int_{\mathbb{R}^3} \p_2^{3}(\omega \cdot\nabla u)\cdot \p_2^{3}\omega dx \quad \text{and} \quad \int_{\mathbb{R}^3} \p_3^{3}(\omega \cdot\nabla u)\cdot \p_3^{3}\omega dx. $$ The first two parts above can be controlled easily as before, \begin{equation}\nonumber \begin{split} \int_0^t \int_{\mathbb{R}^3} \na_h^{3}(\omega \cdot\nabla u)\cdot \na_h^{3}\omega \; dx\, d\tau \lesssim \int_0^t \|u\|_{H^3} \|\na_h u\|_{H^{3}}\|\na_h^{3} \omega\|_{L^2} \; d\tau \lesssim \mathscr{E}^\frac{3}{2}(t).
\end{split} \end{equation} The last part is further split into three terms as follows, \begin{equation}\label{5eq4} \begin{split} \int_{\mathbb{R}^3} \p_3^{3}(\omega \cdot\nabla u)\cdot \p_3^{3}\omega dx = &\int_{\mathbb{R}^3} \p_3^{3}(\omega_1 \p_1 u)\cdot \p_3^{3}\omega dx + \int_{\mathbb{R}^3} \p_3^{3}(\omega_2 \p_2 u)\cdot \p_3^{3}\omega dx \\ & + \int_{\mathbb{R}^3} \p_3^{3}(\omega_3 \p_3 u)\cdot \p_3^{3}\omega dx \\ = & \; I_{21} + I_{22} + I_{23}. \end{split} \end{equation} The estimate for $I_{21}$ is not difficult. By integration by parts, \begin{equation}\nonumber \begin{split} & \int_0^t I_{21}(\tau) \; d\tau = \sum_{k = 0}^{2} \mathcal{C}_{3}^k \int_0^t \int_{\mathbb{R}^3} \p_3^k \omega_1 \p_1 \p_3^{3-k} u \cdot \p_3^{3}\omega \; dx d\tau\\ &\qquad\qquad\qquad + \int_0^t \int_{\mathbb{R}^3} \p_3^{3} \omega_1 \p_1 u\cdot \p_3^{3}\omega \; dx d\tau \\ \lesssim & \sum_{k = 0}^{ 2}\int_0^t \|\p_3^k \omega_1\|_{L^2}^\frac{1}{2} \|{\bf \p_2} \p_3^k \omega_1\|_{L^2}^\frac{1}{2}\|\p_1 \p_3^{3-k} u\|_{L^2}^\frac{1}{2} \|{\bf \p_3}\p_1 \p_3^{3-k} u\|_{L^2}^\frac{1}{2}\\ &\qquad \quad \cdot \|\p_3^{3}\omega\|_{L^2}^\frac{1}{2}\|{\bf \p_1}\p_3^{3}\omega\|_{L^2}^\frac{1}{2} \; d\tau \\ & + \int_0^t \| u \|_{L^2}^{\frac{1}{4}} \| {\bf \p_2} u \|_{L^2}^{\frac{1}{4}} \| {\bf \p_3} u \|_{L^2}^{\frac{1}{4}} \| {\bf \p_2 \p_3} u \|_{L^2}^{\frac{1}{4}} \|\p_1 \p_3^{3}\omega \|_{L^2}\\ &\qquad \quad \cdot \|\p_3^{3}\omega \|_{L^2}^{\frac{1}{2}} \|{\bf \p_1}\p_3^{3}\omega \|_{L^2}^{\frac{1}{2}} \; d\tau\\ \lesssim & \; \mathscr{E}^\frac{3}{2}(t). \end{split} \end{equation} $I_{22}$ contains a difficult term that has to be dealt with by Proposition \ref{lem-important}. By Lemma \ref{lem-anisotropic-est} and Proposition \ref{lem-important}, \begin{equation}\nonumber \begin{split} &\int_0^t I_{22}(\tau) \; d\tau \\ = & {\sum_{k = 0}^{2}} \mathcal{C}_{3}^k \int_0^t \int_{\mathbb{R}^3} \p_3^k \omega_2 \p_2 \p_3^{3-k} u \cdot \p_3^{3}\omega \; dx d\tau + \int_0^t \int_{\mathbb{R}^3} \p_3^{3} \omega_2 \p_2 u \cdot \p_3^{3}\omega \; dx d\tau\\ \lesssim & \int_0^t \|\omega_2\|_{H^2}^\frac{1}{2} \|{\bf \p_2} \omega\|_{H^2}^\frac{1}{2} \|\p_2 u\|_{H^3} \|\p_3^{3}\omega\|_{L^2}^\frac{1}{2} \|{\bf \p_1}\p_3^{3}\omega\|_{L^2}^\frac{1}{2}\;d\tau +{\mathscr{E}^{\frac{3}{2}} (0) } + \mathscr{E}^\frac{3}{2}(t) + \mathscr{E}^2(t) \\ \lesssim & \; \mathscr{E}^{\frac{3}{2}} (0) +\mathscr{E}^\frac{3}{2}(t) + \mathscr{E}^2(t). \end{split} \end{equation} We now deal with the last part in \eqref{5eq4}, namely, \begin{equation}\nonumber I_{23} = \sum_{k=1}^{3}\mathcal{C}_{3}^k \int_{\mathbb{R}^3} \p_3^k \omega_3 \p_3^{4-k} u \cdot \p_3^{3}\omega \; dx + \int_{\mathbb{R}^3} \omega_3 \p_3^4 u \cdot \p_3^{3}\omega \; dx = I_{231} + I_{232}. \end{equation} Due to the divergence-free condition on $\omega$, $I_{231}$ is easy to control.
Hence, \begin{equation}\nonumber \begin{split} I_{231} = & - \sum_{k=1}^{2}\mathcal{C}_{3}^k \int_{\mathbb{R}^3} \p_3^{k-1}(\p_1 \omega_1 + \p_2 \omega_2) \p_3^{4-k} u \cdot \p_3^{3}\omega \; dx\\ & - \int_{\mathbb{R}^3} \p_3^{2}(\p_1 \omega_1 + \p_2 \omega_2) \p_3 u \cdot \p_3^{3}\omega \; dx \\ \lesssim & \; \|\na_h \omega\|_{H^{1}}^\frac{1}{2} \|{\bf \p_3} \na_h \omega\|_{H^{1}}^\frac{1}{2} \|u\|_{H^{3}}^\frac{1}{2} \|{\bf \p_2} u\|_{H^{3}}^\frac{1}{2} \|\p_3^{3}\omega\|_{L^2}^\frac{1}{2}\|{\bf \p_1}\p_3^{3}\omega\|_{L^2}^\frac{1}{2} \\ & + \|\p_3^{2} \na_h \omega\|_{L^2} \|\p_3 u\|_{L^2}^\frac{1}{4}\|{\bf \p_2}\p_3 u\|_{L^2}^\frac{1}{4}\|{\bf \p_3}\p_3 u\|_{L^2}^\frac{1}{4}\|{\bf \p_2 \p_3}\p_3 u\|_{L^2}^\frac{1}{4} \|\p_3^{3}\omega\|_{L^2}^\frac{1}{2}\|{\bf \p_1}\p_3^{3}\omega\|_{L^2}^\frac{1}{2}. \end{split} \end{equation} $I_{232}$ cannot be bounded in a similar way. We write $\omega_3 = \p_1 u_2 -\p_2 u_1$ in $I_{232}$, \begin{equation}\nonumber \int_0^t I_{232}(\tau) \; d\tau = \int_0^t \int_{\mathbb{R}^3} (\p_1u_2-\p_2u_1)\p_3^4 u\cdot \p_3^{3}\omega dx d\tau. \end{equation} The estimate for $ \int_0^t \int_{\mathbb{R}^3} \p_1u_2\p_3^4 u \cdot \p_3^{3}\omega dx d\tau$ is similar to that of $\int_0^t \int_{\mathbb{R}^3}\p_3^{3}\omega_1 \p_1 u \cdot \p_3^{3}\omega dx d\tau$ in $I_{21}$. We can apply integration by parts and Lemma \ref{lem-anisotropic-est} to control it by $\mathscr{E}^\frac{3}{2}(t)$. The term $ \int_0^t \int_{\mathbb{R}^3} \p_2u_1\p_3^4 u\cdot \p_3^{3}\omega dx d\tau$ remains difficult due to the derivative loss problem. We split this term into three parts again. \begin{equation}\label{uu} \begin{split} \int_0^t \int_{\mathbb{R}^3} \p_2u_1 \p_3^4 u \cdot \p_3^{3}\omega dx d\tau=& \int_0^t \int_{\mathbb{R}^3}\p_2u_1 \p_3^4 u_1 \p_3^{3}\omega_1 dxd\tau\\ & +\int_0^t \int_{\mathbb{R}^3}\p_2u_1 \p_3^4 u_2\p_3^{3}\omega_2 dx d\tau \\ & +\int_0^t \int_{\mathbb{R}^3}\p_2u_1 \p_3^4 u_3 \p_3^{3}\omega_3 dxd\tau. \end{split} \end{equation} The last part $\int_0^t \int_{\mathbb{R}^3}\p_2u_1 \p_3^4 u_3 \p_3^{3}\omega_3 dxd\tau$ can easily be controlled by $\mathscr{E}^\frac{3}{2}(t)$ by making use of the divergence free property of $u$. The process is simpler than that for $I_{231}$. Now, we focus on the first two parts in \eqref{uu}. We write them as \begin{equation}\label{UU} \begin{split} & \int_0^t \int_{\mathbb{R}^3}\p_2u_1 \p_3^4 u_1 \p_3^{3}\omega_1 dx d\tau +\int_0^t \int_{\mathbb{R}^3}\p_2u_1 \p_3^4 u_2\p_3^{3}\omega_2 dx d\tau\\ = & \int_0^t \int_{\mathbb{R}^3}\p_2u_1 \p_3^{3}\big(\p_3u_1-\p_1u_3 +\p_1u_3\big) \p_3^{3}\omega_1 dx d\tau \\ & +\int_0^t \int_{\mathbb{R}^3}\p_2u_1 \p_3^{3} \big(\p_3u_2-\p_2u_3+\p_2u_3 \big)\p_3^{3}\omega_2 dx d\tau\\ = & \int_0^t \int_{\mathbb{R}^3}\p_2u_1 \p_3^{3}\omega_2 \p_3^{3}\omega_1 dx d\tau -\int_0^t \int_{\mathbb{R}^3}\p_2u_1 \p_3^{3}\omega_1 \p_3^{3}\omega_1 dx d\tau\\ & + \int_0^t \int_{\mathbb{R}^3}\p_2u_1 \p_3^{3} \p_1u_3 \p_3^{3}\omega_1 dx d\tau + \int_0^t \int_{\mathbb{R}^3}\p_2u_1 \p_3^{3} \p_2u_3 \p_3^{3}\omega_1 dx d\tau.
\end{split} \end{equation} The first two terms on the right hand side of \eqref{UU} can be handled by Proposition \ref{lem-important} while the remaining two terms can be bounded by Sobolev's inequality, \begin{align*} &\int_0^t \int_{\mathbb{R}^3}\p_2u_1 \p_3^{3}\omega_2 \p_3^{3}\omega_1 dx d\tau -\int_0^t \int_{\mathbb{R}^3}\p_2u_1 \p_3^{3}\omega_1 \p_3^{3}\omega_1 dx d\tau \lesssim \mathscr{E}^{\frac{3}{2}} (0) +\mathscr{E}^{\frac{3}{2}} (t) + \mathscr{E}^2 (t), \\ &\int_0^t \int_{\mathbb{R}^3}\p_2u_1 \p_3^{3} \p_1u_3 \p_3^{3}\omega_1 dx d\tau + \int_0^t \int_{\mathbb{R}^3}\p_2u_1 \p_3^{3} \p_2u_3 \p_3^{3}\omega_1 dx d\tau \lesssim \mathscr{E}^{\frac{3}{2}} (t). \end{align*} Taking all the inequalities above into consideration, we then obtain \begin{equation}\label{I2} \int_0^t |I_2(\tau) | d\tau \lesssim \mathscr{E}^{\frac{3}{2}} (0) +\mathscr{E}^{\frac{3}{2}} (t) + \mathscr{E}^2 (t). \end{equation} We shall combine the estimates of $I_3$ and $I_7$ to take advantage of cancellations. Naturally $I_3 = \sum_{i=1}^3 \int_{\mathbb{R}^3} \p_i^{3} (b\cdot\nabla H)\cdot \p_i^{3}\omega dx $ is divided into the following three terms, $$ \int_{\mathbb{R}^3} \p_1^{3} (b\cdot\nabla H)\cdot \p_1^{3}\omega dx, \quad \int_{\mathbb{R}^3} \p_2^{3} (b\cdot\nabla H)\cdot \p_2^{3}\omega dx \quad \text{and}\,\, \int_{\mathbb{R}^3} \p_3^{3} (b\cdot\nabla H)\cdot \p_3^{3}\omega dx .$$ Each term above can be split into two parts as follows. $$ I_{31} + \tilde{I}_{31} = \sum_{k = 1}^{3}\mathcal{C}_{3}^k \int_{\mathbb{R}^3}\p_1^k b\cdot\nabla \p_1^{3-k}H \cdot \p_1^{3}\omega dx + \int_{\mathbb{R}^3} b\cdot\nabla \p_1^{3} H\cdot \p_1^{3}\omega dx, $$ $$ I_{32} + \tilde{I}_{32} =\sum_{k = 1}^{3}\mathcal{C}_{3}^k \int_{\mathbb{R}^3}\p_2^k b\cdot\nabla \p_2^{3-k}H \cdot \p_2^{3}\omega dx + \int_{\mathbb{R}^3} b\cdot\nabla \p_2^{3} H \cdot \p_2^{3}\omega dx, $$ $$ I_{33} + \tilde{I}_{33} =\sum_{k = 1}^{3}\mathcal{C}_{3}^k \int_{\mathbb{R}^3}\p_3^k b\cdot\nabla \p_3^{3-k}H \cdot \p_3^{3}\omega dx + \int_{\mathbb{R}^3} b\cdot\nabla \p_3^{3} H \cdot \p_3^{3}\omega dx. $$ $I_{31}$ and $I_{32}$ have the good derivatives $\na_h$ and can be bounded directly, \begin{equation}\nonumber I_{31} + I_{32} \lesssim \|\na_h b\|_{H^2} \|b\|_{H^4} \|\na_h \omega\|_{H^{2}}. \end{equation} To deal with $I_{33}$, we also use the divergence free property $\nabla \cdot b = 0$ to write \begin{equation}\nonumber \begin{split} I_{33} = & \sum_{k = 1}^{3}\mathcal{C}_{3}^k \int_{\mathbb{R}^3}\p_3^k b_h \cdot\nabla_h \p_3^{3-k}H \cdot \p_3^{3}\omega dx - \sum_{k = 1}^{3}\mathcal{C}_{3}^k \int_{\mathbb{R}^3}\p_3^{k-1}\nabla _h \cdot b_h\p_3^{4-k}H \cdot \p_3^{3}\omega dx. \end{split} \end{equation} We have converted some of the $\p_3$ derivatives into the good derivatives $\na_h$. By Lemma \ref{lem-anisotropic-est}, \begin{equation}\nonumber \begin{split} I_{33} \lesssim & (\|b_h\|_{H^{3}}^\frac{1}{2}\|{\bf \p_2} b_h\|_{H^{3}}^\frac{1}{2}\|\na_h H\|_{H^{2}}^\frac{1}{2}\|{\bf \p_3}\na_h H\|_{H^{2}}^\frac{1}{2} {+\|\na_h b\|_{H^{2}}^\frac{1}{2}\|{\bf \p_3} \na_h b\|_{H^{2}}^\frac{1}{2}\| H\|_{H^{3}}^\frac{1}{2} \|{\bf \p_2} H\|_{H^{3}}^\frac{1}{2}})\\ &\cdot\|\p_3^{3}\omega\|_{L^2}^\frac{1}{2}\|{\bf \p_1}\p_3^{3}\omega\|_{L^2}^\frac{1}{2}. \end{split} \end{equation} The remaining terms $\tilde{I}_{31}$, $\tilde{I}_{32}$ and $\tilde{I}_{33}$ cannot be controlled directly, but will be canceled by the corresponding terms in $I_7$.
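Before turning to $I_7$, we record the elementary mechanism behind this cancellation. For any sufficiently smooth functions $f$ and $g$ decaying at infinity, integration by parts together with $\nabla\cdot b = 0$ yields \begin{equation}\nonumber \int_{\mathbb{R}^3} b\cdot\nabla f \, g \, dx + \int_{\mathbb{R}^3} b\cdot\nabla g \, f \, dx = \int_{\mathbb{R}^3} b\cdot\nabla (f g) \, dx = -\int_{\mathbb{R}^3} (\nabla\cdot b)\, f g \, dx = 0. \end{equation} The terms $\tilde{I}_{31}$, $\tilde{I}_{32}$ and $\tilde{I}_{33}$ will pair with the corresponding terms in $I_7$, componentwise, in exactly this way.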
We now focus on $I_7= \sum_{i=1}^3 \int_{\mathbb{R}^3} \p_i^{3} [ \nabla \times(b\cdot\nabla u)]\cdot\p_i^{3}H dx $. Notice that $$ \nabla \times(b\cdot\nabla u)=b\cdot\nabla \omega + \mathcal{R} $$ where $\mathcal{R}$ stands for the vector with its $i$-th component given by $$ \mathcal{R}_i = \sigma_{ijk} \p_j b\cdot \nabla u_k. $$ Here $\sigma_{ijk}$ is the Levi-Civita symbol, \begin{equation}\label{sigma} \sigma_{ijk} = \left\{ \begin{array}{ll} 1, & \hbox{$ijk=123,231,312;$} \\ -1, & \hbox{$ijk=321,213,132;$} \\ 0, & \hbox{otherwise.} \end{array} \right. \end{equation} Following the process for $I_3$, we can split $I_7$ into the following nine parts, \begin{equation}\nonumber \begin{split} & I_{71} + \tilde{I}_{71} + \tilde{\tilde{I}}_{71} \\ = & \sum_{k = 1}^{3}\mathcal{C}_{3}^k\int_{\mathbb{R}^3} \p_1^k b\cdot\nabla \p_1^{3-k} \omega\cdot \p_1^{3}H dx + \int_{\mathbb{R}^3} b\cdot\nabla \p_1^{3} \omega \cdot \p_1^{3}H dx + \int_{\mathbb{R}^3} \p_1^{3} ( \mathcal{R} )\cdot \p_1^{3}H dx, \end{split} \end{equation} \begin{equation}\nonumber \begin{split} & I_{72} + \tilde{I}_{72} + \tilde{\tilde{I}}_{72} \\ = & \sum_{k = 1}^{3}\mathcal{C}_{3}^k\int_{\mathbb{R}^3} \p_2^k b\cdot\nabla \p_2^{3-k} \omega \cdot \p_2^{3}H dx + \int_{\mathbb{R}^3} b\cdot\nabla \p_2^{3} \omega\cdot \p_2^{3}H dx + \int_{\mathbb{R}^3} \p_2^{3} ( \mathcal{R} )\cdot \p_2^{3}H dx, \end{split} \end{equation} \begin{equation}\nonumber \begin{split} & I_{73} + \tilde{I}_{73} + \tilde{\tilde{I}}_{73} \\ = & \sum_{k = 1}^{3}\mathcal{C}_{3}^k\int_{\mathbb{R}^3} \p_3^k b\cdot\nabla \p_3^{3-k} \omega\cdot \p_3^{3}H dx + \int_{\mathbb{R}^3} b\cdot\nabla \p_3^{3} \omega \cdot \p_3^{3}H dx + \int_{\mathbb{R}^3} \p_3^{3} ( \mathcal{R} )\cdot \p_3^{3}H dx . \end{split} \end{equation} By integration by parts and $\na\cdot b=0$, $$ \tilde{I}_{31} + \tilde{I}_{71}=0, \quad \tilde{I}_{32} + \tilde{I}_{72}=0 \quad \text{and} \quad \tilde{I}_{33} + \tilde{I}_{73}=0.$$ $I_{71}$ and $I_{72}$ are easy to control while $I_{73}$ can be written as \begin{equation}\nonumber \begin{split} I_{73} = \sum_{k = 1}^{3}\mathcal{C}_{3}^k\int_{\mathbb{R}^3} \p_3^k b_h \cdot \nabla_h \p_3^{3-k} \omega\cdot \p_3^{3}H dx - \sum_{k = 1}^{3}\mathcal{C}_{3}^k\int_{\mathbb{R}^3} \p_3^{k-1}\nabla_h \cdot b_h \p_3^{4-k} \omega\cdot \p_3^{3}H dx. \end{split} \end{equation} Now $I_{71}, I_{72}$ and $I_{73}$ all contain good derivatives $\na_h$ and can be handled exactly like $I_{31}, I_{32}$ and $I_{33}$. We omit the repeated details. We now turn to the remaining terms containing $\mathcal{R}$, namely $\tilde{\tilde{I}}_{71}$, $\tilde{\tilde{I}}_{72}$ and $\tilde{\tilde{I}}_{73}$. By H\"{o}lder's inequality and the Sobolev embedding theorem, \begin{equation}\nonumber \begin{split} \tilde{\tilde{I}}_{71} + \tilde{\tilde{I}}_{72} \lesssim {\big(} \|\na_h b\|_{H^{3}} \|u\|_{H^4} + \|\na_h u\|_{H^{3}} \|b\|_{H^4} {\big)}\|\na_h H\|_{H^{3}}. \end{split} \end{equation} $\tilde{\tilde{I}}_{73}$ is more complex and is further split into two parts, \begin{equation}\nonumber \begin{split} \tilde{\tilde{I}}_{73} = & \sum_{i=1}^3\int_{\mathbb{R}^3} \p_3^{3} (\sigma_{ijk} \p_j b_h \cdot \nabla_h u_k) \p_3^{3}H_i \;dx + \int_{\mathbb{R}^3} \p_3^{3} (\sigma_{3jk} \p_j b_3 \p_3 u_k) \p_3^{3}H_3 \;dx \\ &+ \sum_{i \neq 3} \int_{\mathbb{R}^3} \p_3^{3} (\sigma_{ijk} \p_j b_3 \p_3 u_k) \p_3^{3}H_i \;dx = \tilde{\tilde{I}}_{731} + \tilde{\tilde{I}}_{732}.
\end{split} \end{equation} Noticing $H_3 = \p_1 b_2 - \p_2 b_1$ and applying Lemma \ref{lem-anisotropic-est}, we can bound $\tilde{\tilde{I}}_{731}$ by \begin{equation}\nonumber \begin{split} \tilde{\tilde{I}}_{731} \lesssim & \|\na_h u\|_{H^{3}} \|b_h\|_{H^{3}}^\frac{1}{4}\|{\bf \p_2}b_h\|_{H^{3}}^\frac{1}{4} \|{\bf \p_3}b_h\|_{H^{3}}^\frac{1}{4} \|{\bf \p_2\p_3}b_h\|_{H^{3}}^\frac{1}{4} \|\p_3^{3}H\|_{L^2}^\frac{1}{2} \|{\bf \p_1}\p_3^{3} H\|_{L^2}^\frac{1}{2} \\ & + {\|\na_h u\|_{L^{2}}^\frac{1}{2}\|{\bf \p_3} \na_h u\|_{L^{2}}^\frac{1}{2}} \|b_h\|_{H^4}^\frac{1}{2}\|{\bf \p_2}b_h\|_{H^4}^\frac{1}{2}\|\p_3^{3} H\|_{L^2}^\frac{1}{2} \|{\bf \p_1}\p_3^{3} H\|_{L^2}^\frac{1}{2}\\ & + \|b_3 \|_{H^4}^\frac{1}{2} \|{\bf \p_2} b_3\|_{H^4}^\frac{1}{2} \|u\|_{H^4}^\frac{1}{2} \|{\bf \p_1} u\|_{H^4}^\frac{1}{2} \|\na_h b\|_{H^{3}}^\frac{1}{2} \|{\bf \p_3} \na_h b\|_{H^{3}}^\frac{1}{2}. \end{split} \end{equation} By the definition of $\sigma_{ijk}$, if $i\neq 3$, we will have $j=3$ or $k=3$. Indeed, \begin{equation}\nonumber \begin{split} \tilde{\tilde{I}}_{732} \lesssim & \|\p_3 b_3\|_{H^{3}}^\frac{1}{2}\|{\bf \p_3}\p_3 b_3\|_{H^{3}}^\frac{1}{2}\|u\|_{H^4}^\frac{1}{2}\|{\bf \p_1}u\|_{H^4}^\frac{1}{2} \|\p_3^{3}H\|_{L^2}^\frac{1}{2}\|{\bf \p_2}\p_3^{3}H\|_{L^2}^\frac{1}{2} \\ & + \|\p_3 u_3\|_{H^{2}}^\frac{1}{2}\|{\bf \p_3}\p_3 u_3\|_{H^{2}}^\frac{1}{2}\|b_3\|_{H^4}^\frac{1}{2}\|{\bf \p_1}b_3\|_{H^4}^\frac{1}{2} \|\p_3^{3}H\|_{L^2}^\frac{1}{2}\|{\bf \p_2}\p_3^{3}H\|_{L^2}^\frac{1}{2} \\ & + \|\p_3 u_3 \|_{H^{3}} \|b_3\|_{H^1}^\frac{1}{4}\|{\bf \p_1}b_3\|_{H^1}^\frac{1}{4} \|{\bf \p_3}b_3\|_{H^1}^\frac{1}{4}\|{\bf \p_1 \p_3}b_3\|_{H^1}^\frac{1}{4}\|\p_3^{3}H\|_{L^2}^\frac{1}{2}\|{\bf \p_2}\p_3^{3}H\|_{L^2}^\frac{1}{2}. \end{split} \end{equation} Collecting the upper bounds for $I_3$ and $I_7$ above, we find \begin{equation}\label{I3I7} \int_0^t |I_3(\tau) + I_7(\tau) | d\tau \lesssim \mathscr{E}^{\frac{3}{2}} (t). \end{equation} Next we deal with $I_4$, which is naturally divided into the following three terms $$ - \int_{\mathbb{R}^3} \p_1^3(H\cdot\nabla b)\cdot \p_1^3\omega dx, \quad - \int_{\mathbb{R}^3} \p_2^3 (H\cdot\nabla b)\cdot \p_2^3\omega dx, \quad - \int_{\mathbb{R}^3} \p_3^3 (H\cdot\nabla b)\cdot \p_3^3\omega dx. $$ The first two terms already contain the good derivatives $\na_h$ and can be bounded directly. The third term is further decomposed into three terms \begin{equation}\nonumber \begin{split} - \int_{\mathbb{R}^3} \p_3^{3} (H\cdot\nabla b)\cdot \p_3^{3}\omega dx =& - \int_{\mathbb{R}^3} \p_3^{3} (H_1 \p_1 b)\cdot \p_3^{3}\omega dx - \int_{\mathbb{R}^3} \p_3^{3} (H_2 \p_2 b)\cdot \p_3^{3}\omega dx \\ &- \int_{\mathbb{R}^3} \p_3^{3} (H_3 \p_3 b)\cdot \p_3^{3}\omega dx. \end{split} \end{equation} The first two terms above have either $\p_1 b$ or $\p_2b$ and can thus be bounded by applying Lemma \ref{lem-anisotropic-est}. The third term involves $H_3 = \p_1b_2-\p_2b_1$ and thus also contains the good horizontal derivatives on $b$. Therefore, all of them admit suitable upper bounds and \begin{equation}\label{I4} \int_0^t |I_4(\tau) | d\tau \lesssim \mathscr{E}^{\frac{3}{2}} (t). \end{equation} $I_5$ and $I_8$ cancel each other by integration by parts, \begin{equation}\label{I5I8} I_5 + I_8 = \sum_{i=1}^3 \Big[ \int_{\mathbb{R}^3} \p_i^{3}\p_2H \cdot \p_i^{3} \omega dx + \int_{\mathbb{R}^3} \p_i^{3}\p_2 \omega\cdot \p_i^{3} H dx \Big] =0. \end{equation} It remains to deal with $I_6$. 
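Both for $I_7$ above and for $I_6$ below, the curl of a convection term is expanded via the following elementary identity: for smooth vector fields $v$ and $w$, the Leibniz rule gives, in components, \begin{equation}\nonumber [\nabla \times(v\cdot\nabla w)]_i = \sigma_{ijk}\, \p_j (v_l \p_l w_k) = v\cdot\nabla\, [\nabla \times w]_i + \sigma_{ijk}\, \p_j v_l \, \p_l w_k, \end{equation} where repeated indices are summed and $\sigma_{ijk}$ is the Levi-Civita symbol defined in \eqref{sigma}. The remainder $\sigma_{ijk}\, \p_j v_l \, \p_l w_k$ is precisely what is denoted by $\mathcal{R}_i$ (with $v=b$, $w=u$) and by $\tilde{\mathcal{R}}_i$ (with $v=u$, $w=b$) below.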
As in $I_7$, we can write $\nabla \times (u\cdot \nabla b) = u \cdot \nabla H + \tilde {\mathcal{R}}$, where $\tilde{\mathcal{R}}_i = \sigma_{ijk}\p_j u \cdot \nabla b_k$ and $\sigma_{ijk}$ is defined in \eqref{sigma}. Thus, \begin{equation}\nonumber \begin{split} I_6 = & -\sum_{k = 1}^{3}\mathcal{C}_{3}^k\int_{\mathbb{R}^3} \p_1^k u \cdot \nabla \p_1^{3-k}H\cdot \p_1^{3}H dx - \int_{\mathbb{R}^3} \p_1^{3}(\tilde {\mathcal{R}}) \cdot \p_1^{3} H dx\\ & -\sum_{k = 1}^{3}\mathcal{C}_{3}^k\int_{\mathbb{R}^3} \p_2^k u \cdot \nabla \p_2^{3-k}H \cdot \p_2^{3}H dx - \int_{\mathbb{R}^3} \p_2^{3}(\tilde {\mathcal{R}})\cdot \p_2^{3} H dx\\ & -\sum_{k = 1}^{3}\mathcal{C}_{3}^k\int_{\mathbb{R}^3} \p_3^k u_h \cdot \nabla_h \p_3^{3-k}H \cdot \p_3^{3}H dx -\sum_{k = 1}^{3}\mathcal{C}_{3}^k\int_{\mathbb{R}^3} \p_3^k u_3 \p_3^{4-k}H \cdot \p_3^{3}H dx \\ & - \int_{\mathbb{R}^3} \p_3^{3}(\tilde {\mathcal{R}}) \cdot \p_3^{3} H dx \\ = & \; I_{61}+I_{62} + I_{63} + I_{64}. \end{split} \end{equation} By H\"{o}lder's inequality and the Sobolev embedding theorem, \begin{equation}\nonumber \begin{split} I_{61}+I_{62} \lesssim& \|\na_h u\|_{H^{3}} \|H\|_{H^3} \|\na_h H\|_{H^{3}}\\ & + \big(\|\na_h u\|_{H^3} \|b\|_{H^4} + {\|u\|_{H^2}} \|\na_h b\|_{H^3} \big) \|\na_h H\|_{H^{3}}. \end{split} \end{equation} The estimate for $I_{63}$ is similar to that for $I_{73}$, \begin{equation}\nonumber \begin{split} I_{63} \lesssim & \|u_h\|_{H^{3}}^\frac{1}{2} \|{\bf \p_1} u_h\|_{H^{3}}^\frac{1}{2}\|\na_h H\|_{H^{2}}^\frac{1}{2} \|{\bf \p_3} \na_h H\|_{H^{2}}^\frac{1}{2} \|\p_3^{3} H\|_{L^2}^\frac{1}{2} \|{\bf \p_2} \p_3^{3} H\|_{L^2}^\frac{1}{2} \\ & + \|\nabla_h \cdot u_h\|_{H^{2}}^\frac{1}{2} \|{\bf \p_3} \nabla_h \cdot u_h\|_{H^{2}}^\frac{1}{2} \|H\|_{H^{3}} \|{\bf \na_h} H\|_{H^{3}}. \end{split} \end{equation} $I_{64}$ is bounded similarly to $\tilde{\tilde{I}}_{73}$, \begin{equation}\nonumber \begin{split} I_{64} \lesssim & \; \|u_h\|_{H^4}^\frac{1}{2} \|{\bf \p_1} u_h\|_{H^4}^\frac{1}{2} \|\na_h b\|_{H^3}^\frac{1}{2} \|{\bf \p_3} \na_h b\|_{H^3}^\frac{1}{2} \|\p_3^{3}H\|_{L^2}^\frac{1}{2}\|{\bf \p_2}\p_3^{3}H\|_{L^2}^\frac{1}{2}\\ &+ \|u_3 \|_{H^4}^\frac{1}{2} \|{\bf \p_1} u_3\|_{H^4}^\frac{1}{2} \|b\|_{H^4}^\frac{1}{2} \|{\bf \p_2} b\|_{H^4}^\frac{1}{2} \|\na_h b\|_{H^{3}}^\frac{1}{2} \|{\bf \p_3} \na_h b\|_{H^{3}}^\frac{1}{2} \\ & + \|\p_3 b_3\|_{H^{3}}^\frac{1}{2}\|{\bf \p_3}\p_3 b_3\|_{H^{3}}^\frac{1}{2}\|u\|_{H^4}^\frac{1}{2}\|{\bf \p_1}u\|_{H^4}^\frac{1}{2} \|\p_3^{3}H\|_{L^2}^\frac{1}{2}\|{\bf \p_2}\p_3^{3}H\|_{L^2}^\frac{1}{2} \\ & + \|\p_3 u_3\|_{H^{2}}^\frac{1}{2}\|{\bf \p_3}\p_3 u_3\|_{H^{2}}^\frac{1}{2}\|b\|_{H^4}^\frac{1}{2}\|{\bf \p_1}b\|_{H^4}^\frac{1}{2} \|\p_3^{3}H\|_{L^2}^\frac{1}{2}\|{\bf \p_2}\p_3^{3}H\|_{L^2}^\frac{1}{2} \\ & + \|\p_3 u_3 \|_{H^{3}} \|b\|_{H^1}^\frac{1}{4}\|{\bf \p_1}b\|_{H^1}^\frac{1}{4} \|{\bf \p_3}b\|_{H^1}^\frac{1}{4}\|{\bf \p_1 \p_3}b\|_{H^1}^\frac{1}{4}\|\p_3^{3}H\|_{L^2}^\frac{1}{2}\|{\bf \p_2}\p_3^{3}H\|_{L^2}^\frac{1}{2}. \end{split} \end{equation} Therefore, \begin{equation}\label{I6} \int_0^t |I_6(\tau) | d\tau \lesssim \mathscr{E}^{\frac{3}{2}} (t).
\end{equation} Integrating \eqref{5eq2} in time on the interval $[0,t]$ and invoking the upper bounds in \eqref{I1}, \eqref{I2}, \eqref{I3I7}, \eqref{I4}, \eqref{I5I8} and \eqref{I6}, we obtain \begin{equation}\label{5eq5} \sup_{{0\le\tau\le t}} \big( \|u\|_{\dot H^4}^2 + \|b\|_{\dot H^4}^2 \big)+\int_0^t \Big( \|\p_1u\|_{\dot H^4}^2 + \|\na_h b\|_{\dot H^4}^2 \Big) d\tau \lesssim \mathscr{E}(0) + \mathscr{E}^{\frac{3}{2}} (0) +\mathscr{E} ^{\frac{3}{2}}(t)+ \mathscr{E}^2(t). \end{equation} Adding (\ref{4eq1}) and (\ref{5eq5}) yields the desired inequality in (\ref{hb}), namely $$ \mathscr{E}_1(t)\lesssim \mathscr{E}(0) + \mathscr{E}^{\frac{3}{2}} (0) +\mathscr{E} ^{\frac{3}{2}}(t)+ \mathscr{E}^2(t). $$ This finishes the proof. \end{proof} \vskip .3in \section{Proof of Proposition \ref{lem-important}} \label{proof-lem} This section is devoted to the proof of Proposition \ref{lem-important}, which provides suitable upper bounds for the time integral of the interaction terms. We have used Proposition \ref{lem-important} extensively in the crucial energy estimates in the previous sections. \vskip .1in The proof of Proposition \ref{lem-important} deals with some of the most difficult terms emanating from the velocity nonlinearity. We single out two of the most troublesome terms and deal with them in the following lemma. We will state and prove this lemma, and then prove Proposition \ref{lem-important}. \begin{lemma}\label{lem-assistant} Let $(u, b)\in H^4$ be a solution of (\ref{mhd}). Let $\omega = \nabla \times u$ and $H = \nabla \times b$ be the corresponding vorticity and current density, respectively. Let $\mathscr{E}(t)$ be defined as in \eqref{ed}. Then, for all $i,j,k \in \{1,2,3\}$, $$ \left| \int_0^t \int_{\mathbb{R}^3} b_j \p_t(\p_3^{3}\omega_i \p_3^{3}\omega_k) \; dxd\tau \right|, \, \left| \int_0^t \int_{\mathbb{R}^3} \p_2 u_j \p_t(\p_3^{3}\omega_i\p_3^{3}\omega_k) \;dx d\tau \right| \lesssim {\mathscr{E}^{\frac{3}{2}}(0)}+ \mathscr{E}^{\frac{3}{2}}(t) + \mathscr{E}^2 (t).$$ \end{lemma} \begin{proof}[Proof of Lemma \ref{lem-assistant}] We will deal with the two terms above simultaneously. By the equation of $\omega$ in (\ref{5eq1}), \begin{equation}\label{6eq2} \begin{split} & - \p_t(\p_3^{3}\omega_i\p_3^{3}\omega_k)\\ = \;& \p_3^{3}\omega_i \p_3^{3}\big( u\cdot\nabla\omega_k - \omega\cdot\nabla u_k -\p_1^2 \omega_k -b\cdot\nabla H_k +H \cdot\nabla b_k -\p_2 H_k \big) \\ \;& + \p_3^{3}\omega_k \p_3^{3}\big( u\cdot\nabla\omega_i - \omega\cdot\nabla u_i -\p_1^2 \omega_i -b\cdot\nabla H_i +H \cdot\nabla b_i -\p_2 H_i \big). \end{split} \end{equation} Multiplying (\ref{6eq2}) by $b_j$ or $\p_2 u_j$ and integrating in space, we can write \begin{equation}\label{J12345} - \int_{\mathbb{R}^3} \p_t(\p_3^{3}\omega_i\p_3^{3}\omega_k) (b_j|\p_2 u_j )dx =J_1 + J_2 +J_3+ J_4+J_5, \end{equation} where the notation $(b_j|\p_2 u_j)$ stands for either $b_j$ or $ \p_2 u_j$ and we will use it throughout the rest of the proof. The explicit expressions for $J_1$ through $J_5$ are given below. We first deal with $J_1$ given by \begin{equation}\nonumber J_1 = \int_{\mathbb{R}^3} \big[\p_3^{3}\omega_i \p_3^{3}( u\cdot\nabla\omega_k) + \p_3^{3}\omega_k \p_3^{3} ( u\cdot\nabla\omega_i) \big] (b_j|\p_2 u_j )dx.
\end{equation} The part of $J_1$ involving the highest-order derivatives of $\omega$, labeled $J_{11}$, can be dealt with using Lemma \ref{lem-anisotropic-est}, \begin{equation}\nonumber \begin{split} J_{11} = &\int_{\mathbb{R}^3} \Big[ \p_3^{3}\omega_i u\cdot\nabla \p_3^{3}\omega_k + \p_3^{3}\omega_k u\cdot\nabla \p_3^{3}\omega_i \Big] (b_j|\p_2 u_j )dx \\ =& \int_{\mathbb{R}^3} u\cdot\nabla \big( \p_3^{3}\omega_i\p_3^{3}\omega_k \big)(b_j|\p_2 u_j )dx \\ =& - \int_{\mathbb{R}^3} \p_3^{3}\omega_i\p_3^{3}\omega_k u\cdot\nabla (b_j|\p_2 u_j) dx \\ \lesssim & \; \| {\bf \p_1} \omega\|_{H^{3}} \|\omega\|_{H^{3}} \|{\bf \p_2} u\|_{H^1}^{\frac{1}{2}} \|u\|_{H^1}^{\frac{1}{2}} \|{\bf \p_2} (b|\p_2 u)\|_{H^2}^{\frac{1}{2}} \| (b|\p_2 u)\|_{H^2}^{\frac{1}{2}}. \end{split} \end{equation} The remaining parts in $J_1$ can be written as \begin{equation}\nonumber \begin{split} J_{12} =& \sum_{l = 1}^{3} \mathcal{C}_{3}^l\int_{\mathbb{R}^3} \big[\p_3^{3}\omega_i \p_3^{l} u\cdot\nabla \p_3^{3-l} \omega_k + \p_3^{3}\omega_k \p_3^{l} u\cdot\nabla \p_3^{3-l} \omega_i \big](b_j|\p_2 u_j )dx \\ = &\sum_{l = 1}^{2} \mathcal{C}_{3}^l\int_{\mathbb{R}^3} \big[\p_3^{3}\omega_i \p_3^{l} u\cdot\nabla \p_3^{3-l} \omega_k + \p_3^{3}\omega_k \p_3^{l} u\cdot\nabla \p_3^{3-l} \omega_i \big](b_j|\p_2 u_j )dx \\ & + \int_{\mathbb{R}^3} \big[\p_3^{3}\omega_i \p_3^{3} u\cdot\nabla \omega_k + \p_3^{3}\omega_k \p_3^{3} u\cdot\nabla \omega_i \big](b_j|\p_2 u_j )dx, \end{split} \end{equation} which is easily controlled by \begin{equation}\nonumber \begin{split} &\|\p_3^{3} \omega\|_{L^2}^\frac{1}{2}\|{\bf \p_1}\p_3^{3} \omega\|_{L^2}^\frac{1}{2} \|u\|_{H^4}^\frac{1}{2}\|{\bf \p_1} u\|_{H^4}^\frac{1}{2} \|u\|_{H^3}^\frac{1}{2}\|{\bf \p_2} u\|_{H^3}^\frac{1}{2} \| (b|\p_2 u)\|_{H^1}^{\frac{1}{2}} \| {\bf \p_2}(b|\p_2 u)\|_{H^1}^{\frac{1}{2}}. \end{split} \end{equation} $J_2$ is defined and bounded as follows, \begin{equation}\nonumber \begin{split} J_2 &= -\int_{\mathbb{R}^3} \p_3^{3}\omega_i \p_3^{3}( \omega \cdot\nabla u_k)(b_j|\p_2 u_j )dx -\int_{\mathbb{R}^3}\p_3^{3}\omega_k \p_3^{3}( \omega \cdot\nabla u_i)(b_j|\p_2 u_j )dx\\ & \lesssim\|\p_3^{3} \omega\|_{L^2}^\frac{1}{2}\|{\bf \p_1}\p_3^{3} \omega\|_{L^2}^\frac{1}{2} \|u\|_{H^4}^\frac{1}{2}\|{\bf \p_1}u\|_{H^4}^\frac{1}{2} \|u\|_{H^{2}}^\frac{1}{4}\|{\bf \p_2}u\|_{H^{2}}^\frac{1}{4}\|{\bf \p_3}u\|_{H^{2}}^\frac{1}{4}\|{\bf \p_2 \p_3}u\|_{H^{2}}^\frac{1}{4} \\ &\quad \cdot \| (b|\p_2 u)\|_{L^2}^{\frac{1}{4}} \| {\bf \p_2}(b|\p_2 u)\|_{L^2}^{\frac{1}{4}} \| {\bf \p_3}(b|\p_2 u)\|_{L^2}^{\frac{1}{4}} \| {\bf \p_2 \p_3}(b|\p_2 u)\|_{L^2}^{\frac{1}{4}}. \end{split} \end{equation} By integration by parts, \begin{equation}\nonumber \begin{split} J_3=&\int_{\mathbb{R}^3} \big[-\p_3^{3}\omega_i \p_3^{3} \p_1^2 \omega_k -\p_3^{3}\omega_k \p_3^{3} \p_1^2 \omega_i\big] (b_j|\p_2 u_j ) \;dx\\ =& \; 2 \int_{\mathbb{R}^3} \p_1\p_3^{3}\omega_i \p_1 \p_3^{3}\omega_k (b_j|\p_2 u_j )dx \\ &+ \int_{\mathbb{R}^3} \big[\p_3^{3}\omega_i \p_1 \p_3^{3}\omega_k + \p_3^{3}\omega_k \p_1 \p_3^{3}\omega_i\big]\p_1(b_j|\p_2 u_j )dx \\ \lesssim & \; \| \p_1 \omega \|_{H^{3}}^2 \| (b|\p_2 u) \|_{H^2} + \|\omega\|_{H^{3}}\|\p_1\omega\|_{H^{3}} \|\p_1(b|\p_2 u)\|_{H^2}. \end{split} \end{equation} $J_4$ is given by $$ J_4 =\int_{\mathbb{R}^3} \big[ -\p_3^{3}\omega_i \p_3^{3}( b\cdot\nabla H_k) -\p_3^{3}\omega_k \p_3^{3} ( b\cdot\nabla H_i) \big] (b_j|\p_2 u_j) \; dx . $$ This is a difficult term.
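To see the difficulty, note that the leading part of $\p_3^{3}(b\cdot\nabla H_k)$ contains $b_3\, \p_3^{4} H_k$, which involves four vertical derivatives of $H$ (that is, five derivatives of $b$) and is controlled by neither $\mathscr{E}_1$ nor $\mathscr{E}_2$. Integrating by parts in $x_3$ does not help either: it merely moves the extra vertical derivative onto $\p_3^{3}\omega$, producing an equally uncontrollable term. This is again the derivative loss problem encountered earlier.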
First we separate $J_4$ into two parts, \begin{equation}\nonumber \begin{split} J_4 = &\int_{\mathbb{R}^3}\big[ -\p_3^{3}\omega_i b\cdot\nabla \p_3^{3}H_k - \p_3^{3}\omega_k b\cdot\nabla\p_3^{3} H_i\big] (b_j | \p_2 u_j)\; dx\\ & + \sum_{l = 1}^{3} \mathcal{C}_{3}^l\int_{\mathbb{R}^3} \big[-\p_3^{3}\omega_i \p_3^{l} b\cdot\nabla \p_3^{3-l} H_k - \p_3^{3}\omega_k \p_3^{l} b\cdot\nabla \p_3^{3-l} H_i \big](b_j|\p_2 u_j )dx\\ =& J_{41} + J_{42}. \end{split} \end{equation} $J_{42}$ can be bounded directly. By Lemma \ref{lem-anisotropic-est}, \begin{equation}\label{j42b} \begin{split} |J_{42}| \lesssim & \| \omega \|_{H^3}^{\frac{1}{2}} \| {\bf \p_1} \omega \|_{H^3}^{\frac{1}{2}}\| H \|_{H^3}^{\frac{1}{2}} \| {\bf \p_1} H \|_{H^3}^{\frac{1}{2}}\| b \|_{H^4}^{\frac{1}{2}} \| {\bf \p_2} b \|_{H^4}^{\frac{1}{2}} \\ &\cdot \| (b| \p_2 u)\|_{H^1}^{\frac{1}{2}} \| {\bf \p_2} (b| \p_2 u) \|_{H^1}^{\frac{1}{2}}. \end{split} \end{equation} The estimate for $J_{41}$ is at the core of this section. $J_1$, $J_2$ and $J_3$ are symmetric in the sense that, when we switch $i$ and $k$ in any one of these terms, they remain the same. As we have seen in the estimates above, terms with symmetric structure are relatively easy to deal with. However, $J_{41}$ is not symmetric and we can no longer make easy cancellations. To overcome this essential difficulty, we construct some artificial symmetry to take full advantage of the cancellations. \begin{equation}\label{6eq3} \begin{split} J_{41} = &\int_{\mathbb{R}^3}\big[ -\p_3^{3}\omega_i b\cdot\nabla \p_3^{3}H_k - \p_3^{3}\omega_k b\cdot\nabla\p_3^{3} H_i\big] (b_j | \p_2 u_j)\; dx\\ = & \int_{\mathbb{R}^3} \big[\p_3^{3} H_i b\cdot\nabla \p_3^{3}\omega_k + \p_3^{3} H_k b\cdot\nabla \p_3^{3}\omega_i \big] (b_j | \p_2 u_j) \; dx\\ &-\int_{\mathbb{R}^3}\big[ b\cdot\nabla (\p_3^{3}\omega_i\p_3^{3} H_k) + b\cdot\nabla (\p_3^{3}\omega_k\p_3^{3} H_i)\big](b_j | \p_2 u_j) \; dx \\ = & \; J_{411} + J_{412}. \end{split} \end{equation} $J_{412}$ can be handled through integration by parts and Lemma \ref{lem-anisotropic-est}, \begin{equation}\label{j412b} \begin{split} J_{412}= & \int_{\mathbb{R}^3} \big[ \p_3^{3}\omega_i\p_3^{3} H_k +\p_3^{3}\omega_k\p_3^{3} H_i \big] b\cdot\nabla(b_j|\p_2 u_j ) dx \\ \lesssim & \; \|\p_3^{3} \omega\|_{L^2}^\frac{1}{2}\|{\bf \p_1} \p_3^{3} \omega\|_{L^2}^\frac{1}{2}\|\p_3^{3} H\|_{L^2}^\frac{1}{2}\|{\bf \p_1} \p_3^{3} H\|_{L^2}^\frac{1}{2}\|b\|_{H^1}^\frac{1}{2}\|{\bf \p_2}b\|_{H^1}^\frac{1}{2}\\ &\quad \cdot \|(b|\p_2 u)\|_{H^2}^\frac{1}{2}\|{\bf \p_2}(b|\p_2 u)\|_{H^2}^\frac{1}{2}. \end{split} \end{equation} $J_{411}$ is extremely difficult. As our first step, we invoke the equation of $H$ in (\ref{5eq1}) \begin{equation}\label{6eq4} \begin{split} & \p_t H_{k} + u\cdot\nabla H_k + \sum_{p=1}^3 \nabla u_p \times \p_p b_k -\Delta_h H_k \\ & = b\cdot\nabla \omega_k + \sum_{p=1}^3 \nabla b_p \times \p_p u_k +\p_2 \omega_k, \end{split} \end{equation} where we have used the simple identities \begin{equation}\nonumber \begin{split} &\nabla \times (b\cdot\nabla u) = b\cdot\nabla \omega + \sum_{p=1}^3 \nabla b_p \times \p_p u, \\ &\nabla \times (u\cdot\nabla b) = u \cdot\nabla H + \sum_{p=1}^3 \nabla u_p \times \p_p b.
\end{split} \end{equation} Applying $\p_3^{3}$ to \eqref{6eq4} then yields \begin{equation}\label{6eq5} \begin{split} & (\p_3^{3} H_k)_t + \p_3^{3} (u\cdot\nabla H_k) +\p_3^{3}(\sum_{p=1}^3 \nabla u_p \times \p_p b_k) -\p_3^{3}\Delta_h H_k\\ =& \; b\cdot\nabla \p_3^{3} \omega_k + \sum_{l = 1}^{3} \mathcal{C}_{3}^l\p_3^l b\cdot\nabla \p_3^{3-l}\omega_k + \p_3^{3}( \sum_{p=1}^3 \nabla b_p\times \p_p u_k ) + \p_3^{3} \p_2 \omega_k. \end{split} \end{equation} Similarly, \begin{equation}\label{6eq6} \begin{split} & (\p_3^{3} H_i)_t + \p_3^{3} (u\cdot\nabla H_i) +\p_3^{3}(\sum_{p=1}^3 \nabla u_p \times \p_p b_i) -\p_3^{3}\Delta_h H_i\\ =& \; b\cdot\nabla \p_3^{3} \omega_i + \sum_{l = 1}^{3} \mathcal{C}_{3}^l\p_3^l b\cdot\nabla \p_3^{3-l}\omega_i + \p_3^{3}( \sum_{p=1}^3 \nabla b_p\times \p_p u_i ) + \p_3^{3} \p_2 \omega_i . \end{split} \end{equation} Multiplying \eqref{6eq5} by $\p_3^{3} H_i$ and \eqref{6eq6} by $\p_3^{3} H_k$, and summing them up, we obtain \begin{equation}\label{6eq7} \begin{split} & \p_3^{3} H_i b\cdot\nabla \p_3^{3}\omega_k + \p_3^{3} H_k b\cdot\nabla \p_3^{3}\omega_i \\ =\; & \big( \p_3^{3}H_i \p_3^{3}H_k \big)_t +\p_3^{3}H_i \p_3^{3} (u\cdot\nabla H_k) + \p_3^{3}H_k \p_3^{3} (u\cdot\nabla H_i)\\ & + \p_3^{3}H_i\p_3^{3}(\sum_{p=1}^3 \nabla u_p \times \p_p b_k) + \p_3^{3}H_k\p_3^{3}(\sum_{p=1}^3 \nabla u_p \times \p_p b_i)\\ & - \p_3^{3}H_i\p_3^{3}( \sum_{p=1}^3 \nabla b_p\times \p_p u_k )-\p_3^{3}H_k\p_3^{3}( \sum_{p=1}^3 \nabla b_p\times \p_p u_i ) \\ & -\p_3^{3}H_i \sum_{l = 1}^{3} \mathcal{C}_{3}^l\p_3^l b\cdot\nabla \p_3^{3-l}\omega_k -\p_3^{3}H_k \sum_{l = 1}^{3} \mathcal{C}_{3}^l\p_3^l b\cdot\nabla \p_3^{3-l}\omega_i\\ & - \p_3^{3}H_i\p_3^{3}\Delta_h H_k -\p_3^{3}H_k\p_3^{3}\Delta_h H_i - \p_3^{3}H_i\p_3^{3} \p_2 \omega_k -\p_3^{3}H_k\p_3^{3} \p_2 \omega_i . \end{split} \end{equation} We can then replace the nonlinear terms in $J_{411}$ by the right hand side of \eqref{6eq7}. This complicated substitution generates many more terms, but this important process converts some of the seemingly intractable terms into other terms that can be bounded suitably. This is what we would call artificial cancellation through substitutions. Next we give the estimate for the terms corresponding to the right hand side of \eqref{6eq7}. The first term we come across is $$ K_1 =\int_{\mathbb{R}^3} \big( \p_3^{3}H_i \p_3^{3}H_k \big)_t (b_j | \p_2 u_j) \; dx. $$ Using integration by parts and invoking the equation of $b$ in \eqref{mhd}, we can rewrite the term containing $b_j$ as follows, \begin{equation}\nonumber \begin{split} K_1 = & \; \frac{d}{dt}\int_{\mathbb{R}^3} \big( \p_3^{3}H_i \p_3^{3}H_k \big) b_j \; dx - \int_{\mathbb{R}^3} \big( \p_3^{3}H_i \p_3^{3}H_k \big) \p_t b_j \; dx\\ = & \; \frac{d}{dt}\int_{\mathbb{R}^3} \big( \p_3^{3}H_i \p_3^{3}H_k \big) b_j \; dx \\ & - \int_{\mathbb{R}^3} \big( \p_3^{3}H_i \p_3^{3}H_k \big) (-u\cdot \nabla b_j + \Delta_h b_j + b\cdot \nabla u_j + \p_2 u_j) \; dx.
\end{split} \end{equation} By Lemma \ref{lem-anisotropic-est}, the last line in the equality above can be bounded by \begin{equation}\nonumber \begin{split} &\|\p_3^{3}H\|_{L^2}\|\p_1 \p_3^{3}H\|_{L^2}\|u\|_{H^2}^\frac{1}{2} \|\p_2 u\|_{H^2}^\frac{1}{2}\|b\|_{H^2}^\frac{1}{2} \|\p_2 b\|_{H^2}^\frac{1}{2} \\ & + \|\p_3^{3}H\|_{L^2}^\frac{1}{2}\|{\bf \p_1} \p_3^{3}H\|_{L^2}^\frac{1}{2}\|\p_3^{3}H\|_{L^2}^\frac{1}{2}\|{\bf \p_2} \p_3^{3}H\|_{L^2}^\frac{1}{2}\\ &\quad \times \Big(\|\Delta_h b\|_{L^2}^\frac{1}{2}\|{\bf \p_3} \Delta_h b\|_{L^2}^\frac{1}{2} +\|\p_2 u\|_{L^2}^\frac{1}{2}\|{\bf \p_3} \p_2 u\|_{L^2}^\frac{1}{2}\Big). \end{split} \end{equation} The estimate for the term containing $\p_2 u_j$ is similar. We move on to the second part $$ K_2 = \int_{\mathbb{R}^3} \big[ \p_3^{3}H_i \p_3^{3} (u\cdot\nabla H_k) + \p_3^{3}H_k \p_3^{3} (u\cdot\nabla H_i) \big] (b_j | \p_2 u_j) \; dx. $$ It can be handled similarly to $J_{11}$. Due to its symmetric property, \begin{equation}\nonumber \begin{split} &\int_{\mathbb{R}^3} \Big[ \p_3^{3}H_i u\cdot\nabla \p_3^{3} H_k + \p_3^{3}H_k u\cdot\nabla \p_3^{3}H_i \Big] (b_j|\p_2 u_j )dx \\ =& \int_{\mathbb{R}^3} u\cdot\nabla \big( \p_3^{3}H_i\p_3^{3}H_k \big)(b_j|\p_2 u_j )dx \\ =& - \int_{\mathbb{R}^3} \p_3^{3}H_i\p_3^{3}H_k u\cdot\nabla (b_j|\p_2 u_j) dx \\ \lesssim & \; \| {\bf \p_1} H\|_{H^{3}} \|H\|_{H^{3}} \| {\bf \p_2} u\|_{H^{1}}^{\frac{1}{2}} \|u\|_{H^1}^{\frac{1}{2}} \| {\bf \p_2} (b|\p_2 u)\|_{H^{2}}^{\frac{1}{2}} \| (b|\p_2 u)\|_{H^{2}}^{\frac{1}{2}} \end{split} \end{equation} and \begin{equation}\nonumber \begin{split} &\sum_{l = 1}^3 \mathcal{C}_3^l \int_{\mathbb{R}^3} \Big[ \p_3^{3}H_i \p_3^l u\cdot\nabla \p_3^{3-l} H_k + \p_3^{3}H_k \p_3^l u\cdot\nabla \p_3^{3-l}H_i \Big] (b_j|\p_2 u_j )dx \\ \lesssim & \; \|\p_3^3 H\|_{L^2}^\frac{1}{2}\|{\bf \p_2}\p_3^3 H\|_{L^2}^\frac{1}{2} \|H\|_{H^3}^\frac{1}{2}\|{\bf \p_2}H\|_{H^3}^\frac{1}{2} \|u\|_{H^4}^\frac{1}{2}\|{\bf \p_1}u\|_{H^4}^\frac{1}{2} \|(b|\p_2 u)\|_{H^1}^\frac{1}{2}\|{\bf \p_1}(b|\p_2 u)\|_{H^1}^\frac{1}{2}. \end{split} \end{equation} The next term $K_3$ contains six parts of (\ref{6eq7}) and is defined as follows. \begin{equation}\nonumber \begin{split} K_3 = & \int_{\mathbb{R}^3} \Big(\p_3^{3}H_i\p_3^{3}(\sum_{p=1}^3 \nabla u_p \times \p_p b_k) + \p_3^{3}H_k\p_3^{3}(\sum_{p=1}^3 \nabla u_p \times \p_p b_i)\\ & - \p_3^{3}H_i\p_3^{3}( \sum_{p=1}^3 \nabla b_p\times \p_p u_k )-\p_3^{3}H_k\p_3^{3}( \sum_{p=1}^3 \nabla b_p\times \p_p u_i ) \\ & -\p_3^{3}H_i \sum_{l = 1}^{3} \mathcal{C}_{3}^l\p_3^l b\cdot\nabla \p_3^{3-l}\omega_k -\p_3^{3}H_k \sum_{l = 1}^{3} \mathcal{C}_{3}^l\p_3^l b\cdot\nabla \p_3^{3-l}\omega_i \Big) \cdot (b_j | \p_2 u_j) \; dx. \end{split} \end{equation} By Lemma \ref{lem-anisotropic-est}, it is easy to derive \begin{equation}\nonumber \begin{split} K_3 \lesssim & \; \|H\|_{H^{3}}^\frac{1}{2}\|{\bf \p_1} H\|_{H^{3}}^\frac{1}{2} \big(\|u\|_{{ H^{4}}}^\frac{1}{2}\|{\bf \p_1} u\|_{{H^{4}}}^\frac{1}{2} {\|b\|_{H^{3}}^\frac{1}{2} \|{\bf \p_2}b\|_{H^{3}}^\frac{1}{2}}\\ & + \|b\|_{ {H^{4}} }^\frac{1}{2}\|{\bf \p_1} b\|_{ {H^{4}} }^\frac{1}{2} {\|u\|_{H^{3}}^\frac{1}{2} \|{\bf \p_2}u\|_{H^{3}}^\frac{1}{2}} \big) \cdot \|(b|\p_2u)\|_{H^1}^\frac{1}{2}\|{\bf \p_2}(b|\p_2u)\|_{H^1}^\frac{1}{2}.
\end{split} \end{equation} The last four parts of (\ref{6eq7}) are included in $K_4$, \begin{equation}\nonumber \begin{split} K_4 = & \int_{\mathbb{R}^3} \Big[ - \p_3^{3}H_i\p_3^{3}\Delta_h H_k -\p_3^{3}H_k\p_3^{3}\Delta_h H_i \Big] (b_j|\p_2 u_j )dx \\ & + \int_{\mathbb{R}^3} \Big[- \p_3^{3}H_i\p_3^{3} \p_2 \omega_k -\p_3^{3}H_k\p_3^{3} \p_2 \omega_i \Big] (b_j|\p_2 u_j )dx \\ \lesssim &\; \|\na_h H\|_{H^{3}}^2 \|(b |\p_2 u)\|_{H^2} + \|H\|_{H^{3}}\|\na_h H\|_{H^{3}} \|\na_h(b|\p_2 u)\|_{H^2} \\ & + \| \p_2 \p_3^{3}H \|_{L^2} \| \p_3^{3} \omega \|_{L^2}^{\frac{1}{2}} \| {\bf \p_1} \p_3^{3} \omega \|_{L^2}^{\frac{1}{2}} \| (b|\p_2 u )\|_{H^1}^{\frac{1}{2}} \| {\bf \p_2}(b|\p_2 u )\|_{H^1}^{\frac{1}{2}} \\ & + \|\p_3^{3}H \|_{L^2}^{\frac{1}{2}} \|{\bf \p_2}\p_3^{3}H \|_{L^2}^{\frac{1}{2}} \| \p_3^{3} \omega \|_{L^2}^{\frac{1}{2}} \| {\bf \p_1} \p_3^{3} \omega \|_{L^2}^{\frac{1}{2}} \| \p_2(b|\p_2 u ) \|_{L^2}^{\frac{1}{2}} \| {\bf \p_3}\p_2(b|\p_2 u ) \|_{L^2}^{\frac{1}{2}}. \end{split} \end{equation} We have estimated all the terms corresponding to the right hand side of \eqref{6eq7} and thus obtained a suitable upper bound for $J_{411}$ in (\ref{6eq3}). Integrating the upper bounds on $K_1$ through $K_4$ in time yields \begin{equation}\nonumber \int_0^t |J_{411}| d\tau \lesssim {\mathscr{E}^{\frac{3}{2}}(0)}+\mathscr{E}^{\frac{3}{2}}(t) + \mathscr{E}^2(t). \end{equation} Together with the estimate (\ref{j412b}) for $J_{412}$, we conclude that \begin{equation}\nonumber \int_0^t |J_{41}| d\tau \lesssim { \mathscr{E}^{\frac{3}{2}}(0)}+ \mathscr{E}^{\frac{3}{2}}(t) + \mathscr{E}^2(t). \end{equation} $J_{42}$ has been estimated before in (\ref{j42b}). Thus, \begin{equation}\nonumber \int_0^t |J_{4}| d\tau \lesssim { \mathscr{E}^{\frac{3}{2}}(0)}+ \mathscr{E}^{\frac{3}{2}}(t) + \mathscr{E}^2(t). \end{equation} Finally, we deal with the last term, \begin{equation}\nonumber \begin{split} J_5 = \int_{\mathbb{R}^3} \big[\p_3^3 \omega_i \p_3^3 (H \cdot \nabla b_k - \p_2 H_k) + \p_3^3 \omega_k \p_3^3 (H \cdot \nabla b_i - \p_2 H_i) \big] (b_j | \p_2 u_j) \; dx. \end{split} \end{equation} By Lemma \ref{lem-anisotropic-est}, \begin{equation}\nonumber \begin{split} J_5 \lesssim &\|\omega\|_{H^3}^\frac{1}{2}\|{\bf \p_1} \omega\|_{H^3}^\frac{1}{2} \|b\|_{H^4}^\frac{1}{2}\|{\bf \p_1} b\|_{H^4}^\frac{1}{2}\|b\|_{H^3}^\frac{1}{2}\|{\bf \p_2} b\|_{H^3}^\frac{1}{2}\|(b|\p_2 u)\|_{H^1}^\frac{1}{2}\|{\bf \p_2} (b|\p_2 u)\|_{H^1}^\frac{1}{2}\\ &+{\|\p_3^3\omega\|_{L^2}^{\frac {1}{2}}\|{\bf\p_1 }\p_3^3\omega\|_{L^2}^{\frac {1}{2}} \|\p_3^3\p_2H\|_{L^2} \|(b|\p_2 u)\|_{H^1}^\frac{1}{2}\|{\bf \p_2} (b|\p_2 u)\|_{H^1}^\frac{1}{2}}. \end{split} \end{equation} Integrating in time yields \begin{equation}\nonumber \int_0^t |J_{5}| d\tau \lesssim \mathscr{E}^{\frac{3}{2}}(t) + \mathscr{E}^2(t). \end{equation} Integrating (\ref{J12345}) in time over $[0, t]$ and combining all the bounds for $J_1$ through $J_5$, we are led to the conclusion of Lemma \ref{lem-assistant}. \end{proof} \vskip .1in We are now ready to prove Proposition \ref{lem-important}.
\begin{proof}[Proof of Proposition \ref{lem-important}] Recall that the goal here is to bound the interaction terms $\mathcal{W}^{ijk}(t)$ defined by $$ \mathcal{W}^{ijk}(t) \triangleq \int_{\mathbb{R}^3} \p_3^{3}\omega_i \p_2u_j \p_3^{3}\omega_k \; dx, \quad i,j,k \in \{1,2,3\}.$$ We replace $\p_2 u_j$ via the equation of $b_j$ in (\ref{mhd}) to obtain \begin{equation*}\label{W} \begin{split} \mathcal{W}^{ijk}(t)=\; & \int_{\mathbb{R}^3} \p_3^{3}\omega_i \Big[\p_t b_j +u\cdot\nabla b_j -\Delta_h b_j -b\cdot\nabla u_j\Big] \p_3^{3}\omega_k \; dx \\ = \; & \frac{d}{dt} \int_{\mathbb{R}^3} \p_3^{3}\omega_i b_j \p_3^{3}\omega_k dx- \int_{\mathbb{R}^3} b_j \p_t(\p_3^{3}\omega_i \p_3^{3}\omega_k)dx \\ &+ \int_{\mathbb{R}^3} \p_3^{3}\omega_i u\cdot\nabla b_j \p_3^{3}\omega_kdx - \int_{\mathbb{R}^3} \p_3^{3}\omega_i b\cdot\nabla u_j \p_3^{3}\omega_kdx \\ & -\int_{\mathbb{R}^3} \p_3^{3}\omega_i \Delta_h b_j \p_3^{3}\omega_kdx \\ =\; & \mathcal{W}_1^{ijk}+\mathcal{W}_2^{ijk}+\mathcal{W}_3^{ijk}+\mathcal{W}_4^{ijk}+\mathcal{W}_5^{ijk} . \end{split} \end{equation*} By H\"{o}lder's inequality, \begin{equation}\nonumber \int_0^t \mathcal{W}_1^{ijk}(\tau) \;d\tau \lesssim \mathscr{E}^\frac{3}{2}(0) + \mathscr{E}^\frac{3}{2}(t). \end{equation} $\mathcal{W}_2^{ijk}$ is an extremely difficult term. Fortunately, we have bounded it in Lemma \ref{lem-assistant}, \begin{equation}\nonumber \left|\int_0^t \mathcal{W}_2^{ijk}(\tau) d\tau \right| \lesssim {\mathscr{E}^{\frac{3}{2}}(0)}+ \mathscr{E}^{\frac{3}{2}}(t) + \mathscr{E}^2(t). \end{equation} By Lemma \ref{lem-anisotropic-est}, \begin{equation}\nonumber \begin{split} \mathcal{W}_3^{ijk} \lesssim & \; \|\p_3^{3}\omega_i \|_{L^2}^{\frac{1}{2}}\|{\bf \p_1}\p_3^{3}\omega_i \|_{L^2}^{\frac{1}{2}} \|\p_3^{3}\omega_k \|_{L^2}^{\frac{1}{2}}\|{\bf \p_1}\p_3^{3}\omega_k \|_{L^2}^{\frac{1}{2}}\\ &\cdot {\|u\|_{H^1}^{\frac{1}{2}} \|{\bf \p_2}u\|_{H^1}^{\frac{1}{2}} \| \nabla b_j\|_{H^1}^{\frac{1}{2}} \| {\bf \p_2}\nabla b_j\|_{H^1}^{\frac{1}{2}} } \end{split} \end{equation} and \begin{equation}\nonumber \begin{split} \mathcal{W}_4^{ijk} \lesssim & \; \|\p_3^{3}\omega_i \|_{L^2}^{\frac{1}{2}}\|{\bf \p_1}\p_3^{3}\omega_i \|_{L^2}^{\frac{1}{2}} \|\p_3^{3}\omega_k \|_{L^2}^{\frac{1}{2}}\|{\bf \p_1}\p_3^{3}\omega_k \|_{L^2}^{\frac{1}{2}} \\ &\cdot {\|b\|_{H^1}^{\frac{1}{2}} \|{\bf \p_2}b\|_{H^1}^{\frac{1}{2}} \| \nabla u_j\|_{H^1}^{\frac{1}{2}} \| {\bf \p_2}\nabla u_j\|_{H^1}^{\frac{1}{2}} }. \end{split} \end{equation} Integrating in time yields \begin{equation}\nonumber \int_0^t |\mathcal{W}_3^{ijk}(\tau)| d\tau, \,\, \int_0^t |\mathcal{W}_4^{ijk}(\tau)| d\tau\lesssim \mathscr{E}^2(t). \end{equation} We divide $\mathcal{W}_5^{ijk}$ into two parts, \begin{equation}\nonumber \begin{split} \mathcal{W}_5^{ijk} = & \int_{\mathbb{R}^3} \p_1 b_j \p_1 (\p_3^{3}\omega_i \p_3^{3}\omega_k)dx - \int_{\mathbb{R}^3} \p_2^2 b_j \p_3^{3}\omega_i\p_3^{3}\omega_kdx \\ = & \; \mathcal{W}_{51}^{ijk} + \mathcal{W}_{52}^{ijk}. \end{split} \end{equation} The estimate for $\mathcal{W}_{51}^{ijk}$ is easy, \begin{equation}\nonumber \begin{split} \mathcal{W}_{51}^{ijk} \lesssim \|\p_1 b\|_{H^2} \|\p_1 \omega\|_{H^{3}} \|\omega\|_{H^{3}}. \end{split} \end{equation} For $\mathcal{W}_{52}^{ijk}$, we replace $\p_2 b_j$ via the equation of $u_j$ in \eqref{mhd}, \begin{equation}\nonumber \mathcal{W}_{52}^{ijk} = - \int_{\mathbb{R}^3} \p_2 \big( \p_tu_j +u\cdot\nabla u_j -\p_1^2u_j -b\cdot\nabla b_j + \p_j P\big) \p_3^{3}\omega_i\p_3^{3}\omega_kdx .
\end{equation} By Lemma \ref{lem-anisotropic-est} and Sobolev's inequality, \begin{equation}\nonumber \begin{split} & - \int_{\mathbb{R}^3} \p_2 \big(u\cdot\nabla u_j -\p_1^2u_j -b\cdot\nabla b_j + \p_j P\big) \p_3^{3}\omega_i\p_3^{3}\omega_kdx \\ \lesssim & \; \|\p_1 u\|_{H^4} \|\p_1 \omega\|_{H^{3}} \|\omega\|_{H^3} + \|\p_3^{3}\omega \|_{L^2}\|{\bf \p_1}\p_3^{3}\omega \|_{L^2} \big(\|u\|_{H^3} \|{\bf \p_2}u\|_{H^3} + \|b\|_{H^3}\|{\bf \p_2}b\|_{H^3}\big)\\ & + \|\p_2 \nabla P\|_{H^1}^\frac{1}{2} \|{\bf \p_2} \p_2 \nabla P\|_{H^1}^\frac{1}{2}\|\p_3^{3}\omega \|_{L^2}^\frac{3}{2}\|{\bf \p_1}\p_3^{3}\omega \|_{L^2}^\frac{1}{2}. \end{split} \end{equation} Taking the divergence of the velocity equation in (\ref{mhd}) yields a representation of the pressure $P$, $$ P = \sum_{i,j=1}^3 (-\Delta)^{-1} \p_i \p_j (u_i u_j - b_i b_j). $$ Using the fact that Riesz operators are bounded on $L^q$ for $1< q < \infty$ and the third inequality in Lemma \ref{lem-anisotropic-est}, we have \begin{equation}\nonumber \begin{split} \|\p_2 P\|_{L^2} \lesssim & \sum_{v = u, b} \|\p_2(v \otimes v)\|_{L^2} \lesssim \sum_{v = u, b}\|\p_2 v\|_{L^2}^\frac{1}{2} \|{\bf \p_1} \p_2 v\|_{L^2}^\frac{1}{2}\|v\|_{H^1}^\frac{1}{2} \|{\bf \p_2} v\|_{H^1}^\frac{1}{2}, \\ \|\p_2^2 P\|_{L^2} \lesssim & \sum_{v = u, b} \|\p_2^2(v \otimes v)\|_{L^2} \lesssim \sum_{v = u, b}\big(\|\p_2 v\|_{L^4}^2+ \|\p_2^2 v\|_{L^2}^\frac{1}{2}\|{\bf \p_1} \p_2^2 v\|_{L^2}^\frac{1}{2} \|v\|_{H^1}^\frac{1}{2}\|{\bf \p_2} v\|_{H^1}^\frac{1}{2}\big). \end{split} \end{equation} Hence, \begin{equation}\nonumber \begin{split} & - \int_{\mathbb{R}^3} \p_2 \big(u\cdot\nabla u_j -\p_1^2u_j -b\cdot\nabla b_j + \p_j P\big) \p_3^{3}\omega_i\p_3^{3}\omega_kdx \\ \lesssim & \; \|\p_1 u\|_{H^4} \|\p_1 \omega\|_{H^{3}} \|\omega\|_{H^3} + \|\p_3^{3}\omega \|_{L^2}\|{\bf \p_1}\p_3^{3}\omega \|_{L^2} \big(\|u\|_{H^3} \|{\bf \p_2}u\|_{H^3} + \|b\|_{H^3}\|{\bf \p_2}b\|_{H^3}\big)\\ & + \sum_{v = u, b} \big(\|\p_2 v\|_{H^2}^\frac{1}{2} \|{\bf \p_1} \p_2 v\|_{H^2}^\frac{1}{2}\|v\|_{H^3}^\frac{1}{2} \|{\bf \p_2} v\|_{H^3}^\frac{1}{2}\big)^\frac{1}{2} \\ & \cdot \big(\|\p_2 v\|_{H^3}^2+ \|\p_2^2 v\|_{H^2}^\frac{1}{2}\|{\bf \p_1} \p_2^2 v\|_{H^2}^\frac{1}{2} \|v\|_{H^3}^\frac{1}{2}\|{\bf \p_2} v\|_{H^3}^\frac{1}{2}\big)^\frac{1}{2} \|\p_3^{3}\omega \|_{L^2}^\frac{3}{2}\|{\bf \p_1}\p_3^{3}\omega \|_{L^2}^\frac{1}{2}. \end{split} \end{equation} This implies that we only have one term left in the estimate for $\mathcal{W}_{52}^{ijk}$, namely $$- \int_{\mathbb{R}^3} \p_2\p_tu_j\; \p_3^{3}\omega_i\p_3^{3}\omega_kdx.$$ We shift the time derivative as follows: $$-\int_{\mathbb{R}^3} \p_2\p_t u_j \p_3^{3}\omega_i\p_3^{3}\omega_kdx = -\frac{d}{dt}\int_{\mathbb{R}^3} \p_2 u_j \p_3^{3}\omega_i\p_3^{3}\omega_kdx + \int_{\mathbb{R}^3} \p_2 u_j \p_t(\p_3^{3}\omega_i\p_3^{3}\omega_k)dx.$$ The last term above is a difficult term and is bounded via Lemma \ref{lem-assistant}. Integrating in time and invoking the upper bounds, we find $$ \int_0^t |\mathcal{W}_5^{ijk}(\tau)| d\tau \lesssim \mathscr{E}^\frac{3}{2}(0) + \mathscr{E}^\frac{3}{2}(t) + \mathscr{E}^2(t). $$ This completes the proof of Proposition \ref{lem-important}. \end{proof} \vskip .3in \section{Proof of Theorem \ref{main}} \label{proof-thm} This section completes the proof of Theorem \ref{main}, which is achieved by applying the bootstrapping argument to the energy inequalities obtained in the previous sections.
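For the reader's convenience, we briefly recall the abstract form of the bootstrapping argument. Suppose $\mathscr{E}(t)$ is continuous in $t$ with $\mathscr{E}(0)\le \frac{M}{2}$, and suppose that the ansatz always implies a stronger conclusion, namely, for every $t>0$, \begin{equation}\nonumber \sup_{0\le \tau \le t} \mathscr{E}(\tau) \le M \quad \Longrightarrow \quad \sup_{0\le \tau \le t} \mathscr{E}(\tau) \le \frac{M}{2}. \end{equation} Then $\mathscr{E}(t) \le \frac{M}{2}$ for all $t>0$. Indeed, the set of times $t$ for which $\sup_{0\le\tau\le t}\mathscr{E}(\tau) \le M$ is nonempty, closed, and, by the implication above together with the continuity of $\mathscr{E}$, also open in $[0,\infty)$; hence it is all of $[0,\infty)$. In the proof below, this principle is applied with $M = \frac{1}{16 C_0^2}$.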
\begin{proof}[Proof of Theorem \ref{main}] As mentioned in the introduction, the local well-posedness of (\ref{mhd}) in $H^4$ follows from a standard procedure (see, e.g., \cite{MaBe}) and our attention is exclusively focused on the global bound of the $H^4$-norm. This is accomplished by the bootstrapping argument. The key components are the two energy inequalities established previously in Sections \ref{e2b} and \ref{proof-high}, \begin{align} & {\mathscr{E}_{2}(t)}\le C_1\, \mathscr{E}(0) + C_2\,\mathscr{E}_1(t) + C_3\,\mathscr{E}_1^{\frac32}(t) + C_4 \,\mathscr{E}_2^{\frac32}(t), \label{ineq1}\\ &{\mathscr{E}_{1}(t)} \le C_5\, \mathscr{E}(0) + C_6 \,\mathscr{E}^{\frac32}(0) + C_7\, \mathscr{E}^{\frac32}(t) + C_8\, \mathscr{E}^2 (t), \label{ineq2} \end{align} where $\mathscr{E} = \mathscr{E}_{1} +\mathscr{E}_{2}$. Adding (\ref{ineq2}) to $1/(2C_2)$ of (\ref{ineq1}) yields, for a constant $C_0>0$ and for any $t>0$, \begin{equation}\label{en} {\mathscr{E} (t)}\le C_0\, \mathscr{E}(0) +C_0 \,\mathscr{E}^{\frac32}(0) + C_0 \,\mathscr{E}^{\frac32}(t) + C_0\, \mathscr{E}^2 (t). \end{equation} Without loss of generality, we assume $C_0\ge 1$. Applying a bootstrapping argument to (\ref{en}) then implies that, if $\|(u_0, b_0)\|_{H^4}$ is sufficiently small, say \begin{equation}\label{inc} \mathscr{E}(0) \le \frac1{128 C^3_0}\quad \mbox{or}\quad \|(u_0, b_0)\|_{H^4} \le \epsilon :=\frac1{\sqrt{128 C_0^3}}, \end{equation} then, for any $t>0$, $$ \mathscr{E}(t)\le \frac1{32 C^2_0}, \quad \mbox{in particular}\quad \|(u(t), b(t))\|_{H^4} \le 2 \sqrt{C_0}\, \epsilon. $$ In fact, if we make the ansatz that \begin{equation}\label{ans} \mathscr{E}(t)\le \frac1{16 C^2_0} \end{equation} and insert (\ref{ans}) in (\ref{en}), we obtain $$ \mathscr{E}(t) \le C_0\, \mathscr{E}(0) +C_0 \,\mathscr{E}^{\frac32}(0) + \frac12 \mathscr{E}(t), $$ which, together with (\ref{inc}), implies \begin{equation}\label{con} \mathscr{E}(t) \le\frac1{32 C^2_0}. \end{equation} The bootstrapping argument then implies that (\ref{con}) actually holds for all $t>0$. This completes the proof of Theorem \ref{main}. \end{proof} \vskip .3in \section*{Acknowledgement} Lin is partially supported by NSFC under Grant 11701049. Wu is partially supported by the National Science Foundation of the United States under grant DMS 2104682 and the AT\&T Foundation at Oklahoma State University. Zhu is partially supported by NSFC under Grant 11801175. \vskip .4in
\section{Introduction} The last decade has witnessed significant progress in the 3D object detection task via different types of sensors, such as monocular images~\cite{chen2016monocular,xu2018multi}, stereo cameras~\cite{chen20173d}, and LiDAR point clouds~\cite{zhou2018voxelnet,luo2018fast,yang2018pixor}. Camera images usually contain plenty of semantic features~(\textit{e.g.}, color, texture) while suffering from the lack of depth information. LiDAR points provide depth and geometric structure information, which are quite helpful for understanding 3D scenes. However, LiDAR points are usually sparse, unordered, and unevenly distributed. Fig.~\ref{fig:introduction}(a) illustrates a typical example of leveraging the camera image to improve the 3D detection task. It is challenging to distinguish between the closely packed white and yellow chairs with only the LiDAR point cloud due to their similar geometric structure, resulting in chaotically distributed bounding boxes. In this case, utilizing the color information is crucial to locate them precisely. This motivates us to design an effective module to fuse different sensors for a more accurate 3D object detector. \begin{figure}[t] \begin{center} \includegraphics[width=1.0\linewidth]{figs/introduction.pdf} \end{center} \setlength{\abovecaptionskip}{2pt} \caption{Illustration of (a)~the benefit and (b)~potential interference information of the camera image. (c) demonstrates the inconsistency of the classification confidence and localization confidence. Green box denotes the ground truth. Blue and yellow boxes are predicted bounding boxes.} \label{fig:introduction} \end{figure} However, fusing the representations of LiDAR and camera image is a non-trivial task for two reasons. On the one hand, they possess highly different data characteristics. On the other hand, the camera image is sensitive to illumination, occlusion, \textit{etc.}~(see Fig.~\ref{fig:introduction}(b)), and thus may introduce interfering information that is harmful to the 3D object detection task. Previous works usually fuse these two sensors with the aid of image annotations~(namely 2D bounding boxes). According to different ways of utilizing the sensors, we summarize previous works into two main categories, including 1) cascading approaches using different sensors in different stages~\cite{qi2018frustum,xu2017pointfusion,zhao20193d}, and 2) fusion methods that jointly reason over multi-sensor inputs~\cite{liang2019multi,Liang2018ECCV}. Although effective, these methods have several limitations. Cascading approaches cannot leverage the complementarity among different sensors, and their performance is bounded by each stage. Fusion methods~\cite{liang2019multi,Liang2018ECCV} need to generate BEV data through perspective projection and voxelization, inevitably leading to information loss. Besides, they can only approximately establish a relatively coarse correspondence between the voxel features and semantic image features. We propose a LiDAR-guided Image Fusion~(LI-Fusion) module to address both issues mentioned above. The LI-Fusion module establishes the correspondence between raw point cloud data and the camera image in a point-wise manner, and adaptively estimates the importance of the image semantic features. In this way, useful image features are utilized to enhance the point features while interfering image features are suppressed.
Compared with previous methods, our solution possesses four main advantages, including 1) achieving fine-grained point-wise correspondence between LiDAR and camera image data through a simpler pipeline without a complicated procedure for BEV data generation; 2) keeping the original geometric structure without information loss; 3) addressing the issue of the interfering information that may be brought by the camera image; 4) being free of image annotations, namely 2D bounding box annotations, as opposed to previous works~\cite{qi2018frustum,Liang2018ECCV}. Besides multi-sensor fusion, we observe the issue of the inconsistency between the classification confidence and localization confidence, which represent whether an object exists in a bounding box and how much overlap it shares with the ground truth, respectively. As shown in Fig.~\ref{fig:introduction}(c), the bounding box with higher classification confidence possesses lower localization confidence instead. This inconsistency will lead to degraded detection performance since the Non-Maximum Suppression~(NMS) procedure automatically filters out boxes with large overlaps but low classification confidence. However, this problem is rarely discussed in the 3D detection task. Jiang \textit{et al.}~\cite{jiang2018acquisition} attempt to alleviate this problem by improving the NMS procedure. They introduce a new branch to predict the localization confidence and replace the ranking score for the NMS process with the multiplication of the classification and localization confidences. Though effective to some extent, this strategy imposes no explicit constraint to force the consistency of these two confidences. Different from~\cite{jiang2018acquisition}, we present a consistency enforcing loss~(CE loss) to guarantee the consistency of these two confidences explicitly. With its aid, boxes with high classification confidence are encouraged to possess large overlaps with the ground truth, and vice versa. This approach has two advantages. First, our solution is easy to implement without any modifications to the architecture of the detection network. Second, our solution is entirely free of learnable parameters and extra inference time overhead. Our key contributions are as follows: \begin{enumerate} \item Our LI-Fusion module operates on LiDAR points and camera images directly and effectively enhances the point features with corresponding semantic image features in a point-wise manner without image annotations. \item We propose a CE loss to encourage the consistency between the classification and localization confidence, leading to more accurate detection results. \item We integrate the LI-Fusion module and CE loss into a new framework named EPNet, which achieves state-of-the-art results on two common 3D object detection benchmark datasets, \textit{i.e.}, the KITTI dataset~\cite{geigerwe} and SUN-RGBD dataset~\cite{song2015sun}. \end{enumerate} \section{Related Work} \noindent\textbf{3D object detection based on camera images.}~Recent 3D object detection methods pay much attention to camera images, such as monocular~\cite{ma2019accurate,qin2019monogrnet,ku2019monopsr,li2019gs3d,liu2019deep} and stereo images~\cite{licvpr2019,wangcvpr2019}. Chen \textit{et al.}~\cite{chen2016monocular} obtain 2D bounding boxes with a CNN-based object detector and infer their corresponding 3D bounding boxes with semantic, context, and shape information.
Mousavian \textit{et al.}~\cite{mousavian20173d} estimate localization and orientation from 2D bounding boxes of objects by exploiting the constraints of projective geometry. However, methods based on the camera image have difficulty in generating accurate 3D bounding boxes due to the lack of depth information. \noindent\textbf{3D object detection based on LiDAR.}~Many LiDAR-based methods~\cite{yang2018pixor,lasernet,std2019yang} have been proposed in recent years. VoxelNet~\cite{zhou2018voxelnet} divides a point cloud into voxels and employs stacked voxel feature encoding layers to extract voxel features. SECOND~\cite{yan2018second} introduces a sparse convolution operation to improve the computational efficiency of~\cite{zhou2018voxelnet}. PointPillars~\cite{lang2019pointpillars} converts the point cloud to a pseudo-image and gets rid of time-consuming 3D convolution operations. PointRCNN~\cite{shi2019pointrcnn} is a pioneering two-stage detector, which consists of a region proposal network~(RPN) and a refinement network. The RPN predicts the foreground points and outputs coarse bounding boxes, which are then refined by the refinement network. However, LiDAR data is usually extremely sparse, posing a challenge for accurate localization. \noindent\textbf{3D object detection based on multiple sensors.} Recently, much progress has been made in exploiting multiple sensors, such as camera image and LiDAR. Qi \textit{et al.}~\cite{qi2018frustum} propose the cascading approach F-PointNet, which first produces 2D proposals from camera images and then generates corresponding 3D boxes based on LiDAR point clouds. However, cascading methods need extra 2D annotations, and their performance is bounded by the 2D detector. Many methods attempt to reason over camera images and BEV jointly. MV3D~\cite{chen2017multi} and AVOD~\cite{ku2018joint} refine the detection box by fusing BEV and camera feature maps for each ROI. ContFuse~\cite{Liang2018ECCV} proposes a novel continuous fusion layer that achieves the voxel-wise alignment between BEV and image feature maps. Different from previous works, our LI-Fusion module operates on LiDAR data directly and establishes a finer point-wise correspondence between the LiDAR and camera image features. \section{Method} Exploiting the complementary information of multiple sensors is important for accurate 3D object detection. Besides, it is also valuable to resolve the performance bottleneck caused by the inconsistency between the localization and classification confidence. In this paper, we propose a new framework named EPNet to improve the 3D detection performance from these two aspects. EPNet consists of a two-stream RPN for proposal generation and a refinement network for bounding box refining, which can be trained end-to-end. The two-stream RPN effectively combines the LiDAR point feature and semantic image feature via the proposed LI-Fusion module. Besides, we provide a consistency enforcing loss~(CE loss) to improve the consistency between the classification and localization confidence. In the following, we present the details of our two-stream RPN and refinement network in subsection~\ref{sec:image fusion} and subsection~\ref{sec:refinement}, respectively. Then we elaborate our CE loss and the overall loss function in subsections~\ref{sec:intertwined 3d iou loss} and~\ref{sec:overall loss function}.
\subsection{Two-stream RPN} \label{sec:image fusion} \begin{figure*}[t] \begin{center} \includegraphics[width=0.95\linewidth]{figs/two_stream_rpn.pdf} \end{center} \setlength{\abovecaptionskip}{2pt} \caption{Illustration of the architecture of the two-stream RPN which is composed of a geometric stream and an image stream. We employ several LI-Fusion modules to enhance the LiDAR point features with corresponding semantic image features in multiple scales. $N$ represents the number of LiDAR points. $H$ and $W$ denote the height and width of the input camera image, respectively. } \label{fig:two-stream RPN} \end{figure*} Our two-stream RPN is composed of a geometric stream and an image stream. As shown in Fig.~\ref{fig:two-stream RPN}, the geometric stream and the image stream produce the point features and semantic image features, respectively. We employ multiple LI-Fusion modules to enhance the point features with corresponding semantic image features in different scales, leading to more discriminative feature representations. \noindent\textbf{Image Stream.}~The image stream takes camera images as input and extracts the semantic image information with a set of convolution operations. We adopt an especially simple architecture composed of four lightweight convolutional blocks. Each convolutional block consists of two $3\times3$ convolution layers followed by a batch normalization layer~\cite{ioffe2015batch} and a ReLU activation function. We set the second convolution layer in each block to stride 2 to enlarge the receptive field and save GPU memory. $F_i$~($i$=1,2,3,4) denotes the outputs of these four convolutional blocks. As illustrated in Fig.~\ref{fig:two-stream RPN}, $F_i$ provides sufficient semantic image information to enrich the LiDAR point features in different scales. We further employ four parallel transposed convolution layers with different strides to recover the image resolution, leading to feature maps with the same size as the original image. We combine them in a concatenation manner and obtain a more representative feature map $F_U$ containing rich semantic image information with different receptive fields. As is shown later, the feature map $F_U$ is also employed to enhance the LiDAR point features to generate more accurate proposals. \noindent\textbf{Geometric Stream.}~The geometric stream takes LiDAR point cloud as input and generates the 3D proposals. The geometric stream comprises four paired Set Abstraction~(SA)~\cite{qi2017pointnet++} and Feature Propagation~(FP)~\cite{qi2017pointnet++} layers for feature extraction. For the convenience of description, the outputs of SA and FP layers are denoted as $S_i$ and $P_i$~($i$=1,2,3,4), respectively. As shown in Fig.~\ref{fig:two-stream RPN}, we combine the point features $S_i$ with the semantic image features $F_i$ with the aid of our LI-Fusion module. Besides, the point feature $P_4$ is further enriched by the multi-scale image feature $F_U$ to obtain a compact and discriminative feature representation, which is then fed to the detection heads for foreground point segmentation and 3D proposal generation. \noindent\textbf{LI-Fusion Module.}~The LiDAR-guided image fusion module consists of a grid generator, an image sampler, and a LI-Fusion layer. As illustrated in Fig.~\ref{fig:LI-Fusion}, the LI-Fusion module involves two parts, \textit{i.e.}, point-wise correspondence generation and LiDAR-guided fusion. Concretely, we project the LiDAR points onto the camera image and denote the mapping matrix as $M$.
The grid generator takes a LiDAR point cloud and a mapping matrix $M$ as inputs, and outputs the point-wise correspondence between the LiDAR points and the camera image under different resolutions. In more detail, for a particular point $p(x,y,z)$ in the point cloud, we can get its corresponding position $p'(x', y')$ in the camera image, which can be written as: \begin{equation} \label{1} p' = M \times p , \end{equation} where $M$ is of size $3\times 4$. Note that we convert $p'$ and $p$ into 3-dimensional and 4-dimensional vectors in homogeneous coordinates in the projection process of Eq.~(\ref{1}). \begin{figure}[t] \begin{center} \includegraphics[width=1.0\linewidth]{figs/LI_Fusion.pdf} \end{center} \setlength{\abovecaptionskip}{2pt} \caption{Illustration of the LI-Fusion module, which consists of a grid generator, an image sampler, and a LI-Fusion layer.} \label{fig:LI-Fusion} \end{figure} After establishing the correspondence, we propose to use an image sampler to get the semantic feature representation for each point. Specifically, our image sampler takes the sampling position $p'$ and the image feature map $F$ as inputs to produce a point-wise image feature representation $V$ for each sampling position. Considering that the sampling position may fall between adjacent pixels, we use bilinear interpolation to get the image feature at the continuous coordinates, which can be formulated as follows: \begin{equation} V^{(p)}=\mathcal{K}(F^{(\mathcal{N}(p'))}), \end{equation} where $V^{(p)}$ is the corresponding image feature for point $p$, $\mathcal{K}$ denotes the bilinear interpolation function, and $F^{(\mathcal{N}(p'))}$ represents the image features of the neighboring pixels for the sampling position $p'$. Fusing the LiDAR feature and the point-wise image feature is non-trivial, since the camera image is challenged by many factors, including illumination, occlusion, etc. In these cases, the point-wise image feature will introduce interfering information. To address this issue, we adopt a LiDAR-guided fusion layer, which utilizes the LiDAR feature to adaptively estimate the importance of the image feature in a point-wise manner. As illustrated in Fig.~\ref{fig:LI-Fusion}, we first feed the LiDAR feature $F_P$ and the point-wise image feature $F_I$ into a fully connected layer and map them to the same number of channels. Then we add them together to form a compact feature representation, which is compressed into a weight map $\mathbf{w}$ with a single channel through another fully connected layer. We use a sigmoid activation function to normalize the weight map $\mathbf{w}$ into the range of [0, 1]: \begin{equation} \mathbf{w} = \sigma (\mathcal{W}\tanh(\mathcal{U}F_P + \mathcal{V}F_I)), \end{equation} where $\mathcal{W}$, $\mathcal{U}$, $\mathcal{V}$ denote the learnable weight matrices in our LI-Fusion layer and $\sigma$ represents the sigmoid activation function. After obtaining the weight map $\mathbf{w}$, we combine the LiDAR feature $F_P$ and the semantic image feature $F_I$ in a concatenation manner, which can be formulated as follows: \begin{equation} F_{LI} = F_P \ || \ \mathbf{w} F_I . \end{equation} 
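For concreteness, we provide a minimal PyTorch-style sketch of the image stream and the LI-Fusion layer described above. The module names, channel widths, and tensor layouts are illustrative assumptions rather than a faithful reproduction of our implementation:
\begin{verbatim}
# Minimal PyTorch-style sketch of the image stream and LI-Fusion layer.
# Channel widths and tensor layouts are illustrative, not our exact code.
import torch
import torch.nn as nn
import torch.nn.functional as F

def conv_block(c_in, c_out):
    # Two 3x3 convolutions; the second uses stride 2 to enlarge the
    # receptive field and save GPU memory.
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1),
        nn.BatchNorm2d(c_out), nn.ReLU(),
        nn.Conv2d(c_out, c_out, 3, stride=2, padding=1),
        nn.BatchNorm2d(c_out), nn.ReLU())

class ImageStream(nn.Module):
    def __init__(self, widths=(16, 32, 64, 128)):
        super().__init__()
        chans = (3,) + widths
        self.blocks = nn.ModuleList(
            [conv_block(chans[i], chans[i + 1]) for i in range(4)])
        # Parallel transposed convolutions recover the input resolution
        # from the feature maps F_1 ... F_4 (downsampled by 2, 4, 8, 16).
        self.ups = nn.ModuleList(
            [nn.ConvTranspose2d(widths[i], 32, 2 ** (i + 1),
                                stride=2 ** (i + 1)) for i in range(4)])

    def forward(self, img):
        feats, x = [], img
        for blk in self.blocks:
            x = blk(x)
            feats.append(x)                  # F_1 ... F_4
        f_u = torch.cat([up(f) for up, f in zip(self.ups, feats)], dim=1)
        return feats, f_u                    # multi-scale features and F_U

class LIFusion(nn.Module):
    def __init__(self, c_point, c_image):
        super().__init__()
        self.U = nn.Linear(c_point, c_image)  # maps F_P to a common space
        self.V = nn.Linear(c_image, c_image)  # maps F_I to the same space
        self.W = nn.Linear(c_image, 1)        # single-channel weight map

    def forward(self, f_point, image_feat, uv):
        # f_point: (B, N, c_point); image_feat: (B, c_image, H, W);
        # uv: (B, N, 2) projected point positions, normalized to [-1, 1].
        f_img = F.grid_sample(image_feat, uv.unsqueeze(2),
                              align_corners=True)   # bilinear sampler
        f_img = f_img.squeeze(-1).transpose(1, 2)   # (B, N, c_image)
        w = torch.sigmoid(self.W(torch.tanh(self.U(f_point)
                                            + self.V(f_img))))
        return torch.cat([f_point, w * f_img], dim=-1)  # F_P || w * F_I
\end{verbatim}
In the full network, one such fusion layer is applied at each SA scale and once more between the point feature $P_4$ and the multi-scale image feature $F_U$.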
\subsection{Refinement Network} \label{sec:refinement} We employ the NMS procedure to keep the high-quality proposals and feed them into the refinement network. For each input proposal, we generate its feature descriptor by randomly selecting 512 points in the corresponding bounding box on top of the last SA layer of our two-stream RPN. For those proposals with fewer than 512 points, we simply pad the descriptor with zeros. The refinement network consists of three SA layers to extract a compact global descriptor, and two subnetworks with two cascaded $1\times 1$ convolution layers for classification and regression, respectively. \subsection{Consistency Enforcing Loss} \label{sec:intertwined 3d iou loss} Common 3D object detectors usually generate many more bounding boxes than there are real objects in the scene, which poses a great challenge for selecting the high-quality bounding boxes. NMS attempts to filter out unsatisfactory bounding boxes according to their classification confidence. In this case, it is assumed that the classification confidence can serve as an agent for the real IoU between the bounding box and the ground truth, \textit{i.e.}, the localization confidence. However, the classification confidence and the localization confidence are often inconsistent, leading to sub-optimal performance. This motivates us to introduce a consistency enforcing loss to ensure the consistency between the localization and classification confidence, so that boxes with high localization confidence possess high classification confidence, and vice versa. The consistency enforcing loss can be written as follows: \begin{equation} L_{ce} = -\log\left(c \times \frac{Area(D \cap G)}{Area(D \cup G)}\right), \end{equation} where $D$ and $G$ represent the predicted bounding box and the ground truth, and $c$ denotes the classification confidence for $D$. In optimizing this loss function, the classification confidence and localization confidence~(\textit{i.e.}, the IoU) are encouraged to be as high as possible jointly. Hence, boxes with large overlaps will possess high classification probabilities and be kept in the NMS procedure. \noindent\textbf{Relation to IoU loss.} Our CE loss is similar to the IoU loss~\cite{yu2016unitbox} in form, but completely different in motivation and function. The IoU loss attempts to generate more precise regression through optimizing the IoU metric, while the CE loss aims at ensuring the consistency between the localization and classification confidence, to assist the NMS procedure in keeping more accurate bounding boxes. Despite its simple form, the quantitative results and analyses in Sec.~\ref{sec:ablation study} demonstrate the effectiveness of our CE loss in ensuring this consistency and improving the 3D detection performance. 
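As a minimal illustration, the CE loss reduces to a few lines given per-box classification confidences and IoUs; the small constant below is an added numerical guard, not part of the formulation above:
\begin{verbatim}
# Sketch of the consistency enforcing (CE) loss: L_ce = -log(c * IoU).
import torch

def consistency_enforcing_loss(cls_conf, iou, eps=1e-6):
    # cls_conf: (M,) classification confidence c of each positive box
    # iou: (M,) IoU between each predicted box D and its ground truth G
    # Both factors are jointly driven toward 1 during optimization.
    return -torch.log(cls_conf * iou + eps).mean()
\end{verbatim}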
\subsection{Overall Loss Function} \label{sec:overall loss function} We utilize a multi-task loss function for jointly optimizing the two-stream RPN and the refinement network. The total loss can be formulated as: \begin{equation} L_{total} = L_{rpn} + L_{rcnn}, \end{equation} where $L_{rpn}$ and $L_{rcnn}$ denote the training objectives for the two-stream RPN and the refinement network, both of which adopt a similar optimization goal, including a classification loss, a regression loss, and a CE loss. We adopt the focal loss~\cite{lin2017focal} as our classification loss to balance the positive and negative samples, with the settings $\alpha=0.25$ and $\gamma=2.0$. For a bounding box, the network needs to regress its center point~$(x, y, z)$, size~$(l, h, w)$, and orientation $\theta$. Since the range of the Y-axis~(the vertical axis) is relatively small, we directly calculate its offset to the ground truth with a smooth L1 loss~\cite{girshick2015fast}. Similarly, the size of the bounding box~$(h, w, l)$ is also optimized with a smooth L1 loss. As for the X-axis, the Z-axis, and the orientation $\theta$, we adopt a bin-based regression loss~\cite{shi2019pointrcnn,qi2018frustum}. For each foreground point, we split its neighboring area into several bins. The bin-based loss first predicts which bin $b_u$ the center point falls in, and then regresses the residual offset $r_u$ within the bin. We formulate the loss functions as follows: \begin{equation} L_{rpn}= L_{cls} + L_{reg} + \lambda L_{ce} \end{equation} \begin{equation} L_{cls} = -\alpha(1-c_t)^\gamma \log c_t \end{equation} \begin{equation} L_{reg} = \sum_{u \in \{x,z,\theta\}}E(b_u, \hat{b_u}) + \sum_{u \in \{x,y,z,h,w,l,\theta\}} S(r_u, \hat{r_u}) \end{equation} where $E$ and $S$ denote the cross entropy loss and the smooth L1 loss, respectively. $c_t$ is the probability of the point under consideration belonging to the ground truth category. $\hat{b_u}$ and $\hat{r_u}$ denote the ground truth of the bins and the residual offsets. 
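As an illustration of the bin-based encoding, the following sketch splits the neighborhood of a foreground point into bins and produces the classification target $b_u$ and residual $r_u$; the search range and bin count are illustrative assumptions:
\begin{verbatim}
# Sketch of bin-based target encoding for x, z and theta.
import numpy as np

def encode_bin_target(offset, search_range=3.0, n_bins=12):
    # Split [-search_range, search_range] into n_bins equal bins.
    bin_size = 2.0 * search_range / n_bins
    shifted = np.clip(offset + search_range,
                      0.0, 2.0 * search_range - 1e-6)
    b_u = int(shifted // bin_size)          # bin index (classification)
    r_u = shifted - (b_u + 0.5) * bin_size  # residual within the bin
    return b_u, r_u                         # targets for E(.) and S(.)
\end{verbatim}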
\section{Experiments} We evaluate our method on two common 3D object detection datasets: the KITTI dataset~\cite{geigerwe} and the SUN-RGBD dataset~\cite{song2015sun}. KITTI is an outdoor dataset, while SUN-RGBD focuses on indoor scenes. In the following, we first present a brief introduction to these datasets in subsection~\ref{sec:datasets and metric}. Then we provide the implementation details in subsection~\ref{sec:implementation details}. Comprehensive analyses of the LI-Fusion module and the CE loss are elaborated in subsection~\ref{sec:ablation study}. Finally, we exhibit the comparisons with state-of-the-art methods on the KITTI dataset and the SUN-RGBD dataset in subsection~\ref{sec:experiments on kitti} and subsection~\ref{sec:experiments on sun-rgbd}, respectively. \subsection{Datasets and Evaluation Metric} \label{sec:datasets and metric} \noindent\textbf{KITTI Dataset} is a standard benchmark dataset for autonomous driving, which consists of 7,481 training frames and 7,518 testing frames. Following the same dataset split protocol as~\cite{qi2018frustum,shi2019pointrcnn}, the 7,481 frames are further split into 3,712 frames for training and 3,769 frames for validation. In our experiments, we provide the results on both the validation and the testing set for all three difficulty levels, \textit{i.e.}, Easy, Moderate, and Hard. Objects are classified into different difficulty levels according to size, occlusion, and truncation. \noindent\textbf{SUN-RGBD Dataset} is an indoor benchmark dataset for 3D object detection. The dataset is composed of 10,335 images with 700 annotated object categories, including 5,285 images for training and 5,050 images for testing. We report results on the testing set for ten main object categories, following previous works~\cite{xu2017pointfusion,qi2018frustum}, since objects of these categories are relatively large. \noindent\textbf{Metrics.} We adopt the Average Precision~(AP) as the metric, following the official evaluation protocols of the KITTI dataset and the SUN-RGBD dataset. Recently, the KITTI benchmark adopted a new evaluation protocol~\cite{simonelli2019disentangling} which uses 40 recall positions instead of the previous 11, making the evaluation fairer. We compare our method with state-of-the-art methods under this new evaluation protocol. \subsection{Implementation Details} \label{sec:implementation details} \noindent\textbf{Network Settings.} The two-stream RPN takes both the LiDAR point cloud and the camera image as inputs. For each 3D scene, the range of the LiDAR point cloud is [-40, 40], [-1, 3], [0, 70.4] meters along the X~(right), Y~(down), and Z~(forward) axes in camera coordinates, respectively, and the orientation $\theta$ lies in the range of [-$\pi$, $\pi$]. We subsample 16,384 points from the raw LiDAR point cloud as the input for the geometric stream, the same as in PointRCNN~\cite{shi2019pointrcnn}. The image stream takes images with a resolution of $1280 \times 384$ as input. We employ four set abstraction layers to subsample the input LiDAR point cloud to sizes of 4096, 1024, 256, and 64, respectively. Four feature propagation layers are used to recover the size of the point cloud for the foreground segmentation and 3D proposal generation. Similarly, we use four convolutional blocks with stride 2 to downsample the input image. Besides, we employ four parallel transposed convolutions with strides of 2, 4, 8, and 16 to recover the resolution from the feature maps at different scales. In the NMS process, we select the top 8000 boxes generated by the two-stream RPN according to their classification confidence. After that, we filter redundant boxes with an NMS threshold of 0.8 and obtain 64 positive candidate boxes, which are refined by the refinement network. For both datasets, we utilize a similar architecture design for the two-stream RPN, as discussed above. \noindent\textbf{The Training Scheme.} Our two-stream RPN and refinement network are end-to-end trainable. In the training phase, the regression loss $L_{reg}$ and the CE loss are only applied to positive proposals, \textit{i.e.}, proposals generated by foreground points for the RPN stage, and proposals sharing an IoU larger than 0.55 with the ground truth for the RCNN stage. \noindent\textbf{Parameter Optimization.}~Adaptive Moment Estimation~(Adam)~\cite{kingma2014adam} is adopted to optimize our network. The initial learning rate, weight decay, and momentum factor are set to 0.002, 0.001, and 0.9, respectively. We train the model for around 50 epochs on four Titan XP GPUs with a batch size of 12 in an end-to-end manner. The balancing weight $\lambda$ in the loss function is set to 5. \noindent\textbf{Data Augmentation.}~Three common data augmentation strategies are adopted to prevent over-fitting, including rotation, flipping, and scale transformations. First, we randomly rotate the point cloud along the vertical axis within the range of $[-\pi/18,\pi/18]$. Then, the point cloud is randomly flipped along the forward axis. Besides, each ground truth box is randomly scaled following a uniform distribution over $[0.95,1.05]$. Many LiDAR-based methods sample ground truth boxes from the whole dataset and place them into the raw 3D frames to simulate real scenes with crowded objects, following~\cite{zhou2018voxelnet,yan2018second}. Although effective, this data augmentation requires prior information about the road plane, which is usually difficult to acquire for many real scenes. Hence, we do not utilize this augmentation mechanism in our framework, for the sake of applicability and generality. \subsection{Ablation Study} \label{sec:ablation study} We conduct extensive experiments on the KITTI validation dataset to evaluate the effectiveness of our LI-Fusion module and CE loss. \noindent\textbf{Analysis of the fusion architecture.}~We remove all the LI-Fusion modules to verify the effectiveness of our LI-Fusion module. 
As shown in Table~\ref{tab:ablation study}, adding the LI-Fusion module yields an improvement of 1.73\% in terms of 3D mAP, demonstrating its effectiveness in combining the point features and semantic image features. We further present comparisons with two alternative fusion solutions in Table~\ref{tab:ablation on different fusion mechanism}. One alternative is simple concatenation~(SC). We modify the input of the geometric stream to be the combination of the raw camera image and the LiDAR point cloud instead of their feature representations. Concretely, we append the RGB channels of camera images to the spatial coordinate channels of the LiDAR point cloud in a concatenation fashion. It should be noted that no image stream is employed for SC. The other alternative is single scale~(SS) fusion, which shares a similar architecture with our two-stream RPN. The difference is that we remove all the LI-Fusion modules in the set abstraction layers and only keep the LI-Fusion module in the last feature propagation layer~(see Fig.~\ref{fig:two-stream RPN}). As shown in Table~\ref{tab:ablation on different fusion mechanism}, SC yields a decrease of 0.28\% in 3D mAP relative to the baseline, indicating that a simple combination at the input level cannot provide sufficient guidance information. Besides, our method outperforms SS by 1.31\% in 3D mAP, which suggests the effectiveness of applying the LI-Fusion modules at multiple scales. \begin{minipage}{0.94\textwidth} \hfill \begin{minipage}{0.47\textwidth} \makeatletter\def\@captype{table}\makeatother \setlength{\belowcaptionskip}{4pt}% \caption{Ablation experiments on the KITTI val dataset. } \label{tab:ablation study} \scriptsize \resizebox{\columnwidth}{!}{ \begin{tabular}{|c|c|c|c|c|c|c|} \hline LI-Fusion &CE &Easy &Moderate &Hard &3D mAP &Gain \\\hline\hline $\times$ &$\times$ &86.34 &77.52 &75.96 &79.94 &-\\ \checkmark &$\times$ &89.44 &78.84 &76.73 &81.67 &\textcolor{blue}{$\uparrow$ 1.73}\\ $\times$ &\checkmark &90.87 &81.15 &79.59 &83.87 &\textcolor{blue}{$\uparrow$ 3.93} \\ \checkmark &\checkmark &\textbf{92.28} &\textbf{82.59} &\textbf{80.14} &\textbf{85.00} &\textcolor{blue}{$\uparrow$ 5.06} \\ \hline \end{tabular} } \end{minipage} \hfill \begin{minipage}{0.47\textwidth} \centering \makeatletter\def\@captype{table}\makeatother \setlength{\belowcaptionskip}{5pt}% \caption{Analysis of different fusion mechanisms on the KITTI val dataset.} \label{tab:ablation on different fusion mechanism} \scriptsize \resizebox{\columnwidth}{!}{ \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline SC & SS & Ours &Easy &Moderate &Hard &3D mAP &Gain\\ \hline\hline $\times$ &$\times$ &$\times$ &86.34 &77.52 &75.96 &79.94 &- \\ \checkmark &$\times$ &$\times$ &85.97 &77.37 &75.65 &79.66 &\textcolor{red}{$\downarrow$ 0.28} \\ $\times$ &\checkmark &$\times$ &87.46 &78.27 &75.35 &80.36 &\textcolor{blue}{$\uparrow$ 0.42} \\ $\times$ &$\times$ &\checkmark &\textbf{89.44} &\textbf{78.84} &\textbf{76.73} &\textbf{81.67} &\textcolor{blue}{ $\uparrow$ 1.73} \\ \hline \end{tabular} } \end{minipage} \end{minipage} \noindent\textbf{Visualization of learned semantic image features.}~It should be noted that we do not add explicit supervision information~(\textit{e.g.}, annotations of 2D detection boxes) to the image stream of our two-stream RPN. The image stream is optimized together with the geometric stream, using only the supervision from the 3D boxes applied at the end of the two-stream RPN. 
Considering the distinct data characteristics of the camera image and the LiDAR point cloud, we visualize the semantic image features to figure out what the image stream learns, as presented in Fig.~\ref{fig:visualized image feature}. Although no explicit supervision is applied, the image stream, surprisingly, learns to differentiate the foreground objects from the background and extracts rich semantic features from camera images. This demonstrates that the LI-Fusion module accurately establishes the correspondence between the LiDAR point cloud and the camera image, and thus can provide complementary semantic image information to the point features. It is also worth noting that the image stream mainly focuses on the representative regions of the foreground objects, and that regions under poor illumination exhibit features very distinct from those of neighboring regions, as marked by the red arrow. This indicates that it is necessary to adaptively estimate the importance of the semantic image feature, since variance in the illumination conditions may introduce harmful interference information. Hence, we further provide an analysis of the weight map $\mathbf{w}$ for the semantic image feature in the following. \begin{figure}[t] \begin{center} \includegraphics[width=0.9\linewidth]{figs/visualized_feature.pdf} \end{center} \setlength{\abovecaptionskip}{2pt} \caption{Visualization of the learned semantic image feature. The image stream mainly focuses on the foreground objects~(cars). The red arrow marks a region under bad illumination, which shows a feature representation distinct from that of its neighboring regions. } \label{fig:visualized image feature} \end{figure} \noindent\textbf{Analysis of the weight map in the LI-Fusion layer.} In a real scene, the camera image is usually disturbed by the illumination, suffering from underexposure and overexposure. To verify the effectiveness of the weight map $\mathbf{w}$ in alleviating the interference information brought by low-quality camera images, we simulate the real environment by changing the illumination of the camera image. For each image in the KITTI dataset, we simulate the illumination variance through the transformation $y=a \cdot x+b$, where $x$ and $y$ denote the original and transformed RGB values of a pixel, and $a$ and $b$ represent the coefficient and the offset, respectively. We randomly lighten up~(resp. darken) the camera images in the KITTI dataset by setting $a$ to 3~(resp. 0.3) and $b$ to 5. The quantitative results are presented in Table~\ref{tab:ablation on influence of the weight map}. For comparison, we remove the image stream and use our model based only on the LiDAR as the baseline, which yields a 3D mAP of 83.87\%. We also provide the results of simply concatenating the RGB values and LiDAR coordinates at the input level~(denoted by SC), which leads to an obvious performance decrease of 1.08\% and demonstrates that images of poor quality are harmful to the 3D detection task. Besides, our method without estimating the weight map $\mathbf{w}$ also results in a decrease of 0.69\%. However, with the guidance of the weight map $\mathbf{w}$, our method yields an improvement of 0.65\% compared to the baseline. This indicates that introducing the weight map enables the network to adaptively select beneficial features and ignore harmful ones. 
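The perturbation itself is a per-pixel affine map; a sketch follows (clipping to the valid RGB range is an added assumption):
\begin{verbatim}
# Sketch of the illumination perturbation y = a*x + b
# (a = 3 to lighten, a = 0.3 to darken, b = 5).
import numpy as np

def perturb_illumination(image, a, b=5.0):
    # image: HxWx3 array of RGB values in [0, 255]
    return np.clip(a * image.astype(np.float32) + b, 0.0, 255.0)
\end{verbatim}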
\begin{minipage}{0.94\textwidth} \hfill \begin{minipage}{0.54\textwidth} \makeatletter\def\@captype{table}\makeatother \setlength{\belowcaptionskip}{4pt}% \caption{Comparison between the results of our LI-Fusion module with and without estimating the weight map $\mathbf{w}$.} \label{tab:ablation on influence of the weight map} \scriptsize \resizebox{\columnwidth}{!}{ \begin{tabular}{|l|c|c|c|c|c|} \hline Method &Easy &Moderate &Hard &3D mAP & Gain \\ \hline\hline only LiDAR &90.87 &81.15 &79.59 &83.87 &- \\ SC &90.73 &79.93 &77.70 &82.79 &\textcolor{red}{$\downarrow$ 1.08} \\ Ours (without $\mathbf{w}$) &91.52 &80.08 &77.95 &83.18 &\textcolor{red}{$\downarrow$ 0.69} \\ Ours &\textbf{91.65} &\textbf{81.77} &\textbf{80.13} &\textbf{84.52} &\textcolor{blue}{ $\uparrow$ 0.65} \\ \hline \end{tabular} } \end{minipage} \hfill \begin{minipage}{0.40\textwidth} \centering \makeatletter\def\@captype{table}\makeatother \setlength{\belowcaptionskip}{5pt}% \caption{The results of our approach on three benchmarks of the KITTI validation set~(\textit{Cars}).} \label{tab:results on the kitti val set} \scriptsize \resizebox{\columnwidth}{!}{ \begin{tabular}{|l|c|c|c|c|} \hline Benchmark & Easy & Moderate & Hard & mAP\\ \hline\hline 3D Detection &92.28 &82.59 &80.14 &85.00 \\ Bird's Eye View &95.51 &88.76 & 88.36 &90.88 \\ Orientation &98.48 &91.74 &91.16 &93.79 \\ \hline \end{tabular} } \end{minipage} \end{minipage} \begin{figure}[t] \begin{center} \includegraphics[width=0.9\linewidth]{figs/consistency3.pdf} \end{center} \setlength{\abovecaptionskip}{2pt} \caption{Illustration of the ratio of kept positive candidate boxes varying with different classification confidence thresholds. The CE loss leads to significantly larger ratios than those of the IoU loss, suggesting its effectiveness in improving the consistency of localization and classification confidence.} \label{fig:consistency} \end{figure} \noindent\textbf{Analysis of the CE loss.}~As shown in Table~\ref{tab:ablation study}, adding the CE loss yields a significant improvement of 3.93\% over the baseline. We further present a quantitative comparison with the IoU loss to verify the superiority of our CE loss in improving the 3D detection performance. As shown in Fig.~\ref{fig:consistency}(a), the CE loss leads to an improvement of 1.28\% in 3D mAP over the IoU loss, which indicates the benefits of ensuring the consistency of the classification and localization confidence in the 3D detection task. To figure out how the consistency between these two confidences is improved, we give a thorough analysis of the CE loss. For convenience of description, we denote predicted boxes whose overlap with the corresponding ground truth is larger than a predefined IoU threshold $\tau$ as positive candidate boxes. Moreover, we adopt another threshold $\upsilon$ to filter out positive candidate boxes with smaller classification confidence. Hence, the consistency can be evaluated by the ratio $\mathcal{R}$ of positive candidate boxes that are kept, which can be written as follows: \begin{equation} \mathcal{R} = \frac{\mathcal{N}(\{\mathbf{b} \,|\, \mathbf{b} \in \mathcal{B} \ \mathrm{and} \ \mathbf{c_{b}} > \upsilon\})}{\mathcal{N}(\mathcal{B})}, \end{equation} where $\mathcal{B}$ represents the set of positive candidate boxes, $\mathbf{c_{b}}$ denotes the classification confidence of the box $\mathbf{b}$, and $\mathcal{N}(\cdot)$ counts the number of boxes. It should be noted that all the boxes in $\mathcal{B}$ possess an overlap larger than $\tau$ with the corresponding ground truth box. 
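Given the IoUs and classification confidences of the predicted boxes, $\mathcal{R}$ can be computed as in the following sketch:
\begin{verbatim}
# Sketch of the consistency ratio R.
import numpy as np

def consistency_ratio(ious, confs, tau=0.7, upsilon=0.5):
    # ious, confs: per-box IoU with ground truth and classification
    # confidence; B is the set of positive candidate boxes (IoU > tau).
    positive = ious > tau
    if positive.sum() == 0:
        return 0.0
    return float((confs[positive] > upsilon).sum()) / float(positive.sum())
\end{verbatim}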
We provide evaluation results for two different settings, \textit{i.e.}, the model trained with the IoU loss and that trained with the CE loss. For each frame in the KITTI validation dataset, the model generates 64 boxes without the NMS procedure employed. Then we obtain the positive candidate boxes by calculating the overlaps with the ground truth boxes. We set $\tau$ to 0.7, following the evaluation protocol of the 3D detection metric. $\upsilon$ is varied from 0.1 to 0.9 to evaluate the consistency under different classification confidence thresholds. As shown in Fig.~\ref{fig:consistency}(b), the model trained with the CE loss demonstrates better consistency than that trained with the IoU loss for all settings of the classification confidence threshold $\upsilon$. \subsection{Experiments on KITTI Dataset} \label{sec:experiments on kitti} Table~\ref{tab:results on kitti testing split} presents quantitative results on the KITTI test set. The proposed method outperforms the multi-sensor based methods F-PointNet~\cite{qi2018frustum}, MV3D~\cite{chen2017multi}, AVOD-FPN~\cite{ku2018joint}, PC-CNN~\cite{8461232}, ContFuse~\cite{Liang2018ECCV}, and MMF~\cite{liang2019multi} by 10.37\%, 17.03\%, 7.71\%, 6.23\%, 9.85\%, and 2.55\% in terms of 3D mAP, respectively. It should be noted that MMF~\cite{liang2019multi} exploits multiple auxiliary tasks~(\textit{e.g.}, 2D detection, ground estimation, and depth completion) to boost the 3D detection performance, which requires many extra annotations. These experiments consistently reveal the superiority of our method over the cascading approach~\cite{qi2018frustum}, as well as over fusion approaches based on RoIs~\cite{chen2017multi,ku2018joint,8461232} and voxels~\cite{Liang2018ECCV,liang2019multi}. \begin{table}[t] \scriptsize \centering \setlength{\belowcaptionskip}{2pt} \caption{Comparisons with state-of-the-art methods on the testing set of the KITTI dataset~(\textit{Cars}). 
L and I represent the LiDAR point cloud and the camera image.} \label{tab:results on kitti testing split} \resizebox{1\columnwidth}{!}{ \begin{tabular}{|l|c|c|c|c|c|c|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{Method} &\multirow{2}{*}{Modality} & \multicolumn{4}{c|}{3D Detection} & \multicolumn{4}{c|}{Bird's Eye View} & \multicolumn{4}{c|}{Orientation}\\ \cline{3-14} & & Easy & Moderate & Hard & 3D mAP & Easy & Moderate & Hard & BEV mAP & Easy & Moderate & Hard & Ori mAP\\ \hline \hline SECOND~\cite{yan2018second}&L & 83.34 & 72.55 & 65.82 &73.90 & 89.39 & 83.77 & 78.59 &83.92 & 90.93 & 82.55 & 73.62 &82.37\\ PointPillars~\cite{lang2019pointpillars}&L & 82.58 & 74.31 & 68.99 &75.29 & 90.07 & 86.56 & 82.81 &86.48 & 93.84 & 90.70 & 87.47 &90.67\\ TANet~\cite{liu2020tanet}&L & 84.39 & 75.94 & 68.82 &76.38 & 91.58 & 86.54 & 81.19 &86.44 & 93.52 & 90.11 & 84.61 &89.41\\ PointRCNN~\cite{shi2019pointrcnn}&L & 86.96 & 75.64 & 70.70 &77.77 & 92.13 & 87.39 & 82.72 &87.41 & 95.90 & 91.77 & 86.92 &91.53\\ Fast Point R-CNN~\cite{Chen2019fastpointrcnn}&L & 85.29 & 77.40 &70.24 &77.64 & 90.87 & 87.84 & 80.52 &86.41 &- &- &- &- \\ \hline\hline F-PointNet~\cite{qi2018frustum}&L+I & 82.19 & 69.79 & 60.59 &70.86 & 91.17 & 84.67 & 74.77 &83.54 &- &- &- &-\\ MV3D~\cite{chen2017multi}&L+I & 74.97 & 63.63 & 54.00 &64.20 & 86.62 & 78.93 & 69.80 &78.45 &- &- &- &-\\ AVOD~\cite{ku2018joint}&L+I & 76.39 & 66.47 & 60.23 &67.70 & 89.75 & 84.95 & 78.32 &84.34 & 94.98 & 89.22 & 82.14 &88.78 \\ AVOD-FPN~\cite{ku2018joint}&L+I & 83.07 & 71.76 & 65.73 &73.52 & 90.99 & 84.82 & 79.62 &85.14 &94.65 & 88.61 & 83.71 &88.99 \\ ContFuse~\cite{Liang2018ECCV}&L+I &83.68 & 68.78 & 61.67 &71.38 & 94.07 & 85.35 & 75.88 &85.10 &- &- &- &- \\ PC-CNN~\cite{8461232}&L+I &85.57 &73.79 &65.65 &75.00 &91.19 &87.40 &79.35 &85.98 &- &- &- &-\\ MMF~\cite{liang2019multi}&L+I & 88.40 & 77.43 & 70.22 &78.68 & 93.67 & 88.21 & 81.99 &87.96 &- &- &- &- \\ \hline \hline Ours &L+I & \textbf{89.81} & \textbf{79.28} & \textbf{74.59} &\textbf{81.23} & \textbf{94.22} & \textbf{88.47} & \textbf{83.69} &\textbf{88.79} & \textbf{96.13} & \textbf{94.22} & \textbf{89.68} &\textbf{93.34}\\ \hline \end{tabular} } \end{table} We also provide the quantitative results on the KITTI validation split in Table~\ref{tab:results on the kitti val set} for the convenience of comparison with future work. Besides, we present the qualitative results on the KITTI validation dataset in the supplementary materials. \subsection{Experiments on SUN-RGBD Dataset} \label{sec:experiments on sun-rgbd} We further conduct experiments on the SUN-RGBD dataset to verify the effectiveness of our approach in indoor scenes. Table~\ref{tab:results on the sun-rgbd dataset.} presents the results compared with the state-of-the-art methods. Our EPNet achieves superior detection performance, outperforming PointFusion~\cite{xu2017pointfusion} by 15.7\%, COG~\cite{ren2016three} by 12.2\%, F-PointNet~\cite{qi2018frustum} by 5.8\%, and VoteNet~\cite{qi2019deep} by 2.1\% in terms of 3D mAP. The comparisons with the multi-sensor based methods PointFusion~\cite{xu2017pointfusion} and F-PointNet~\cite{qi2018frustum} are especially valuable. Both of them first generate 2D bounding boxes from camera images using 2D detectors and then output the 3D boxes in a cascading manner. Specifically, F-PointNet utilizes only the LiDAR data to predict the 3D boxes. PointFusion combines global image features and point features in a concatenation fashion. 
Different from them, our method explicitly establishes the correspondence between point features and camera image features, thus providing finer and more discriminative representations. Besides, we provide the qualitative results on the SUN-RGBD dataset in the supplementary materials. \begin{table}[t] \scriptsize \centering \setlength{\abovecaptionskip}{2pt} \setlength{\belowcaptionskip}{4pt} \caption{Quantitative comparisons with the state-of-the-art methods on the SUN-RGBD test set. P and I represent the point cloud and the camera image.} \label{tab:results on the sun-rgbd dataset.} \resizebox{1\columnwidth}{!}{ \begin{tabular}{|l|c|c|c|c|c|c|c|c|c|c|c|c|} \hline Method & Modality& bathtub & bed & bookshelf & chair & desk & dresser & nightstand & sofa & table & toilet &3D mAP \\ \hline \hline DSS~\cite{song2016deep} &P + I & 44.2 & 78.8 & 11.9 & 61.2 & 20.5 & 6.4 & 15.4 & 53.5 & 50.3 & 78.9 & 42.1\\ 2d-driven~\cite{lahoud20172d} &P + I & 43.5 & 64.5 & 31.4 & 48.3 & 27.9 & 25.9 & 41.9 & 50.4 & 37.0 & 80.4 & 45.1\\ COG~\cite{ren2016three} &P + I & 58.3 & 63.7 & 31.8 & 62.2 & \textbf{45.2} & 15.5 & 27.4 & 51.0 & \textbf{51.3} & 70.1 & 47.6 \\ PointFusion~\cite{xu2017pointfusion} &P + I & 37.3 & 68.6 & \textbf{37.7} & 55.1 & 17.2 & 24.0 & 32.3 & 53.8 & 31.0 & 83.8 & 44.1 \\ F-PointNet~\cite{qi2018frustum} &P + I & 43.3 & 81.1 & 33.3 & 64.2 & 24.7 & 32.0 & 58.1 & 61.1 & 51.1 & \textbf{90.9} & 54.0 \\ VoteNet~\cite{qi2019deep} &P &74.4 &83.0 &28.8 &\textbf{75.3} &22.0 &29.8 &62.2 &64.0 &47.3 &90.1 & 57.7 \\ \hline \hline Ours &P + I & \textbf{75.4} & \textbf{85.2} & \textbf{35.4} & 75.0 & 26.1 & 31.3 &62.0 & \textbf{67.2} &\textbf{52.1} & 88.2 & \textbf{59.8} \\ \hline \end{tabular} } \end{table} \section{Conclusion} We have presented a new 3D object detector named EPNet, which consists of a two-stream RPN and a refinement network. The two-stream RPN reasons about different sensors~(\textit{i.e.}, LiDAR point cloud and camera image) jointly and enhances the point features with semantic image features effectively by using the proposed LI-Fusion module. Besides, we address the issue of inconsistency between the classification and localization confidence by the proposed CE loss, which explicitly guarantees the consistency between the localization and classification confidence. Extensive experiments have validated the effectiveness of the LI-Fusion module and the CE loss. In the future, we plan to explore how to enhance the image feature representation with the depth information of the LiDAR point cloud, as well as its application in 2D detection tasks. \section*{Acknowledgement} This work was supported by the National Key R\&D Program of China (No.~2018YFB1004600). Xiang Bai was supported by the National Program for Support of Top-notch Young Professionals and the Program for HUST Academic Frontier Youth Team 2017QYTD08. \clearpage \bibliographystyle{splncs04}
\section{Introduction} The number of transmission channels for a single-atom contact between two metallic electrodes is simply given by the chemical valence of the atom \cite{Scheer98}. Recently, it has been determined that the number of dominant transmission channels in a single-molecule junction (SMJ) is equal to the degeneracy of the molecular orbital \cite{footnote_HOMOLUMO} closest to the metal Fermi level \cite{Bergfield11a}. In this article, we focus on ensembles of highly conductive Pt--benzene--Pt junctions \cite{Kiguchi08}, in which the lead and molecule are in direct contact. For a two-terminal SMJ, the transmission eigenvalues $\tau_n$ are eigenvalues of the elastic transmission matrix \cite{Datta95} \begin{equation} \myT(E)=\Gamma^{\rm L}(E) G(E) \Gamma^{\rm R}(E) G^\dagger(E), \label{eq:trans_matrix} \end{equation} where $G$ is the retarded Green's function \cite{Bergfield09} of the SMJ, $\Gamma^\alpha$ is the tunneling-width matrix describing the bonding of the molecule to lead $\alpha$, and the total transmission function is $T(E) = \Tr{\myT(E)}$. The number of transmission channels is equal to the rank of the matrix in Eq.~(\ref{eq:trans_matrix}), which is in turn limited by the ranks of the matrices $G$ and $\Gamma^\alpha$ \cite{Bergfield11a}. The additional two-fold spin degeneracy of each resonance is considered implicit throughout this work. As indicated by Eq.~\ref{eq:trans_matrix}, any accurate description of transport requires an accurate description of $G$, which can be calculated using either single-particle or many-body methods. In effective single-particle theories, including current implementations of density functional theory (DFT), it is often necessary \cite{Toher05,Burke06,Datta06,Geskin09} to describe the transport problem by considering an ``extended molecule,'' composed of the molecule and several electrode atoms. Although this procedure is required in order to describe transport at all, it makes it difficult, if not impossible, to assign transmission eigenchannels to individual molecular resonances, since the extended molecule's Green's function bears little resemblance to the molecular Green's function. We utilize a nonequilibrium many-body theory based on the molecular Dyson equation (MDE) \cite{Bergfield09} to investigate transport distributions of SMJ ensembles. Our MDE theory correctly accounts for the wave-particle duality of the charge carriers, simultaneously reproducing the key features of both the Coulomb blockade and coherent transport regimes and alleviating the necessity of constructing an ``extended molecule.'' Consequently, we can unambiguously assign transmission eigenchannels to molecular resonances. Conversely, we can also construct a junction's Green's function using only a single molecular resonance. The theory and efficacy of this `isolated resonance approximation' are investigated in detail. Previous applications of our MDE theory \cite{Bergfield09,Bergfield10b,Bergfield11b} to transport through SMJs utilized a semi-empirical Hamiltonian \cite{Castleton02} for the $\pi$-electrons, which accurately describes the gas-phase spectra of conjugated organic molecules. Although this approach should be adequate to describe molecules weakly coupled to metal electrodes, e.g.\ via thiol linkages, in junctions where the $\pi$-electrons bind directly to the metal electrodes \cite{Kiguchi08}, the lead-molecule coupling may be so strong that the molecule itself is significantly altered, necessitating a more fundamental molecular model. 
To address this issue, we have developed an {\em effective field theory of interacting $\pi$-electrons} ($\pi$-EFT), in which the form of the molecular Hamiltonian is derived from symmetry principles and electromagnetic theory (multipole expansion). The resulting formalism constitutes a state-of-the-art many-body theory that provides a realistic description of lead-molecule hybridization and van der Waals coupling, as well as the screening of intramolecular interactions by the metal electrodes, all of which are essential for a quantitative description of strongly-coupled SMJs \cite{Kiguchi08}. The bonding between the tip of electrode $\alpha$ and the molecule is characterized by the tunneling-width matrix $\Gamma^\alpha$, where the rank of $\Gamma^\alpha$ is equal to the number of covalent bonds formed between the two. For example, in an SMJ where a Au electrode bonds to an organic molecule via a thiol group, only a single bond is formed, and there is only one non-negligible transmission channel \cite{Djukic06,Solomon06b}. In Pt--benzene--Pt junctions, however, each Pt electrode forms multiple bonds to the benzene molecule and multiple transmission channels are observed \cite{Kiguchi08}. In such highly-conductive SMJs, the lead and molecule are in direct contact, and the overlap between the $\pi$-electron system of the molecule and {\em all} of the atomic-like wavefunctions of the atomically-sharp electrode is relevant. For each Pt tip, we consider one $s$, three $p$, and five $d$ orbitals in our calculations, which represent the evanescent tunneling modes in free space outside the apex atom of the tip. This physical model for the leads accurately describes the bonding over a wide range of junction configurations. In the next section, we outline the relevant aspects of our MDE theory and derive transport equations in the isolated resonance approximation. We then outline the details of our atomistic lead-molecule coupling approach, in which the electrostatic influence of the leads is treated using $\pi$-EFT and the multi-orbital lead-molecule bonding is described using an atomistic model of the electrode tip. Finally, the transport distributions for an ensemble of Pt--benzene--Pt junctions are shown using both the full molecular Green's function and the isolated resonance approximation. \section{Many-body theory of transport} When macroscopic leads are bonded to a single molecule, an SMJ is formed, transforming the few-body molecular problem into a full many-body problem. The bare molecular states are dressed by interactions with the lead electrons when the SMJ is formed, shifting and broadening them in accordance with the lead-molecule coupling. Until recently \cite{Bergfield09}, no theory of transport in SMJs was available which properly accounted for the {\em particle and wave character} of the electron, so that the Coulomb blockade and coherent transport regimes were considered `complementary' \cite{Geskin09}. Here, we utilize a many-body MDE theory \cite{Bergfield09,Bergfield11b} based on nonequilibrium Green's functions (NEGFs) to investigate transport in multi-channel SMJs, which correctly accounts for both aspects of the charge carriers. 
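The channel-counting statement of Eq.~(\ref{eq:trans_matrix}) is simple to make operational numerically; the following is a minimal sketch for given matrices at a fixed energy, with illustrative names:
\begin{verbatim}
# Sketch: transmission eigenvalues tau_n from Eq. (1) at fixed energy.
import numpy as np

def transmission_channels(G, gamma_L, gamma_R, tol=1e-10):
    T = gamma_L @ G @ gamma_R @ G.conj().T
    # T is similar to a Hermitian positive-semidefinite matrix, so its
    # eigenvalues are real and nonnegative.
    tau = np.sort(np.linalg.eigvals(T).real)[::-1]
    n_open = int(np.sum(tau > tol))  # bounded by rank(G), rank(Gamma)
    return tau, n_open
\end{verbatim}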
In order to calculate transport quantities of interest, we must determine the retarded Green's function $G(E)$ of the junction, which may be written as \begin{equation} G(E) = \left[ {\bf S} E - H_{\rm mol}^{(1)} - \Sigma(E)\right]^{-1}, \label{eq:G_full} \end{equation} where $H_{\rm mol}$=$H_{\rm mol}^{(1)}+H_{\rm mol}^{(2)}$ is the molecular Hamiltonian, which we formally separate into one-body and two-body terms \cite{Bergfield09, Bergfield11b}. ${\bf S}$ is an overlap matrix, which in an orthonormal basis reduces to the identity matrix, and \begin{equation} \Sigma(E) = \Sigma_{\rm T}^{\rm L}(E)+\Sigma_{\rm T}^{\rm R}(E)+\Sigma_{\rm C}(E) \end{equation} is the self-energy, including the effect of both a finite lead-molecule coupling via $\Sigma_{\rm T}^{L,R}$ and many-body interactions via the Coulomb self-energy $\Sigma_{\rm C}(E)$. The tunneling self-energy matrices are related to the tunneling-width matrices by \begin{equation} \Gamma^\alpha(E) \equiv i \left(\Sigma_{\rm T}^\alpha(E) -\left[\Sigma_{\rm T}^{\alpha}(E)\right]^\dagger\right). \label{eq:Gamma_alpha} \end{equation} Throughout this work, we invoke the wide-band limit, in which we assume that the tunneling widths are energy independent, $\Gamma^\alpha(E) \approx \Gamma^\alpha$. It is useful to define a molecular Green's function $G_{\rm mol}(E) = \lim_{\Gamma^\alpha \rightarrow 0^+} G(E)$. In the sequential tunneling regime \cite{Bergfield09}, where lead-molecule coherences can be neglected, the molecular Green's function within MDE theory is given by \begin{equation} G_{\rm mol}(E) = \left[{\bf S}E - H_{\rm mol}^{(1)} - \Sigma^{(0)}_{\rm C}(E) \right]^{-1}, \label{eq:Gmol1} \end{equation} where all one-body terms are included in $H_{\rm mol}^{(1)}$ and the Coulomb self-energy $\Sigma^{(0)}_{\rm C}$ accounts for the effect of all two-body {\em intramolecular many-body correlations exactly}. The full Green's function of the SMJ may then be found using the {\em molecular Dyson equation} \cite{Bergfield09} \begin{equation} G(E) = G_{\rm mol}(E) + G_{\rm mol}(E)\Delta \Sigma(E) G(E), \label{eq:MDE} \end{equation} where $\Delta \Sigma = \Sigma_{\rm T} + \Delta \Sigma_{\rm C}$ and $\Delta \Sigma_{\rm C}=\Sigma_{\rm C}-\Sigma_{\rm C}^{(0)}$. At room temperature and for small bias voltages, $\Delta \Sigma_{\rm C}\approx 0$ in the cotunneling regime \cite{Bergfield09} (i.e., for nonresonant transport). Furthermore, the inelastic transmission probability is negligible compared to the elastic transmission in that limit. The molecular Green's function $G_{\rm mol}$ is found by exactly diagonalizing the molecular Hamiltonian, including all charge states and excited states of the molecule. Projecting onto a basis of relevant atomic orbitals, one finds \cite{Bergfield09,Bergfield11b} \begin{equation} G_{\rm mol}(E) = \sum_{\nu, \nu'} \frac{[{\cal P}(\nu) + {\cal P}(\nu')] C(\nu,\nu')}{E-E_{\nu'}+E_{\nu}+i0^+}, \label{eq:Gmol} \end{equation} where ${\cal P}(\nu)$ is the probability that the molecular state $\nu$ is occupied, $C(\nu,\nu')$ are many-body matrix elements, and $H_{\rm mol} \left| \nu \right.\rangle = E_\nu \left| \nu \right.\rangle$. In linear response, ${\cal P}(\nu)$=$e^{-\beta \left(E_\nu - \mu N_\nu \right)}/{\cal Z}$, where ${\cal Z}$=$\sum_\nu e^{-\beta\left(E_\nu-N_\nu\mu\right)}$ is the grand canonical partition function. 
The rank-1 matrix $C(\nu,\nu')$ has elements \begin{equation} [C(\nu,\nu')]_{n\sigma,m\sigma'} = \langle \nu | d_{n\sigma} | \nu' \rangle \langle \nu' | d^\dagger_{m\sigma'} | \nu \rangle, \label{eq:manybody_element} \end{equation} where $d_{n\sigma}$ annihilates an electron of spin $\sigma$ on the $n$th atomic orbital of the molecule, and $\nu$ and $\nu'$ label molecular eigenstates with different charge. The rank of $C(\nu,\nu')$, in conjunction with Eqs.~\ref{eq:MDE} and \ref{eq:Gmol}, implies that each molecular resonance $\nu\rightarrow\nu'$ contributes at most one transmission channel in Eq.~\ref{eq:trans_matrix}, suggesting that an $M$-fold degenerate molecular resonance could sustain a maximum of $M$ transmission channels. \subsection{Isolated-resonance approximation} Owing to the position of the leads' chemical potential relative to the molecular energy levels and the large charging energy of small molecules, transport in SMJs is typically dominated by individual molecular resonances. In this subsection, we calculate the Green's function in the isolated-resonance approximation, wherein only a single (non-degenerate or degenerate) molecular resonance is considered. In addition to developing intuition and gaining insight into the transport mechanisms in an SMJ, we also find (cf.\ Results and Discussion section) that the isolated-resonance approximation can be used to accurately predict the transport. \subsubsection{Non-degenerate molecular resonance} If we consider a single non-degenerate molecular resonance, then \begin{equation} G_{\rm mol}(E) \approx \frac{\left[{\cal P}(\nu) + {\cal P}(\nu')\right] C(\nu,\nu')}{E-E_{\nu'} + E_\nu + i0^+} \equiv \frac{\tilde{\lambda} \left|\lambda \right.\rangle \langle\left. \lambda \right|}{E-\varepsilon+i0^+}, \label{eq:Gmol_nodegen} \end{equation} where $\varepsilon = E_{\nu'} - E_\nu$, $C(\nu,\nu')\equiv \lambda \left|\lambda \right.\rangle \langle\left. \lambda \right|$ is the rank-1 many-body overlap matrix, and we have set $\tilde{\lambda} = [{\cal P}(\nu) + {\cal P}(\nu')]\lambda$. In order to solve for $G$ analytically, it is useful to rewrite Dyson's equation (\ref{eq:MDE}) as follows: \begin{equation} G(E)=\left({\bf 1} - G_{\rm mol}(E)\Delta\Sigma(E)\right)^{-1}G_{\rm mol}(E). \end{equation} In the elastic cotunneling regime ($\Delta \Sigma_{\rm C}=0$), we find \begin{eqnarray} G(E) &=& \frac{\tilde{\lambda} \left|\lambda \right.\rangle \langle\left. \lambda \right|}{E-\varepsilon +i0^+} \left(1 + \frac{\tilde{\lambda} \langle\left. \lambda \right| \Sigma_{\rm T} \left|\lambda \right.\rangle}{E-\varepsilon +i0^+} + \right. \nonumber \\ && \phantom{abdefghijklmno} \left. \left[\frac{\tilde{\lambda} \langle\left. \lambda \right| \Sigma_{\rm T} \left|\lambda \right.\rangle}{E-\varepsilon +i0^+}\right]^2 + \cdots \right) \nonumber \\ &=& \frac{\tilde{\lambda} \left|\lambda \right.\rangle \langle\left. \lambda \right|}{E-\varepsilon - \tilde{\lambda} \langle \lambda \left| \Sigma_{\rm T} \right| \lambda \rangle}. \label{eq:G_isolated_dyson} \end{eqnarray} Equation~(\ref{eq:G_isolated_dyson}) can equivalently be expressed as \begin{equation} G(E) \approx \frac{\left[{\cal P}(\nu) + {\cal P}(\nu')\right] C(\nu,\nu')}{E-\varepsilon - \tilde{\Sigma}}, \label{eq:G_nodegen} \end{equation} where \begin{equation} \tilde{\Sigma}=[{\cal P}(\nu) + {\cal P}(\nu')] \Tr{C(\nu,\nu')\Sigma_{\rm T}} \end{equation} is the effective self-energy at the resonance, which includes the effect of many-body correlations via the $C(\nu,\nu')$ matrix. 
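For a non-degenerate resonance, the dressed Green's function of Eq.~(\ref{eq:G_nodegen}) can be assembled directly; a minimal sketch, assuming the normalized overlap vector $|\lambda\rangle$, the weight $\tilde{\lambda}$, and $\Sigma_{\rm T}$ are given:
\begin{verbatim}
# Sketch of the isolated-resonance Green's function, Eq. (eq:G_nodegen).
import numpy as np

def isolated_resonance_G(E, eps_res, lam_vec, lam_tilde, sigma_T):
    # lam_vec: normalized many-body overlap vector |lambda>
    # lam_tilde: [P(nu) + P(nu')] * lambda
    C = np.outer(lam_vec, lam_vec.conj())          # rank-1 projector
    sigma_eff = lam_tilde * np.trace(C @ sigma_T)  # effective self-energy
    return lam_tilde * C / (E - eps_res - sigma_eff)
\end{verbatim}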
Using Eq.~\ref{eq:trans_matrix}, the transmission in the isolated-resonance approximation is given by \begin{equation} T(E) = \frac{\tilde{\Gamma}^{\rm L} \tilde{\Gamma}^{\rm R}}{(E-\ve)^2 + \tilde{\Gamma}^2}, \label{eq:Isolated_trans} \end{equation} where \begin{equation} \tilde{\Gamma}^\alpha = [{\cal P}(\nu) + {\cal P}(\nu')] \Tr{C(\nu,\nu')\Gamma^\alpha} \end{equation} is the dressed tunneling width and $\tilde{\Gamma} = (\tilde{\Gamma}^{\rm L}+\tilde{\Gamma}^{\rm R}) /2$. As evidenced by Eq.~\ref{eq:Isolated_trans}, the isolated-resonance approximation gives an intuitive prediction for the transport. Specifically, the transmission function is a single Lorentzian resonance centered about $\ve$ with a half-width at half-maximum of $\tilde{\Gamma}$. The less-intuitive many-body aspect of the transport problem is encapsulated in the effective tunneling widths $\tilde{\Gamma}^\alpha$, where the overlap of molecular many-body eigenstates can reduce these widths and may strongly affect the predicted transport. \subsubsection{Degenerate molecular resonance} The generalization of the above results to the case of a degenerate molecular resonance is formally straightforward. For an $M$-fold degenerate molecular resonance, \begin{equation} G_{\rm mol}(E) \approx \frac{\tilde{\lambda}}{E-\varepsilon+i0^+} \sum_{l=1}^M \left|\lambda_l \right.\rangle \langle\left. \lambda_l \right|. \label{eq:Gmol_degen} \end{equation} The $M$ degenerate eigenvectors of $G_{\rm mol}$ may be chosen to diagonalize $\Sigma_{\rm T}$ on the degenerate subspace, \begin{equation} \tilde{\lambda}\langle \lambda_i | \Sigma_{\rm T} | \lambda_j \rangle = \delta_{ij} \tilde{\Sigma}_i , \end{equation} and Dyson's equation may be solved as before: \begin{equation} G(E) \approx \sum_{l=1}^M \frac{\left[{\cal P}(\nu_l) + {\cal P}(\nu')\right] C(\nu_l,\nu')}{E-E_{\nu'}+E_{\nu_l} -\tilde{\Sigma}_{l}}. \end{equation} Although $\Sigma_{\rm T}$ is diagonal in the basis of $\left| \lambda_l \right. \rangle$, $\Gamma^{\rm L}$ and $\Gamma^{\rm R}$ need not be separately diagonal. Consequently, there is no simple general expression for $T(E)$ in the case of a degenerate resonance, but ${\bf T}$ can still be computed using Eq.~\ref{eq:trans_matrix}. In this article, we focus on transport through Pt--benzene--Pt SMJs, where the relevant molecular resonances (\HO or {\sc LUMO}) are doubly degenerate. Considering the \HO resonance of benzene, \begin{equation} G(E) \approx \frac{C(\nu_1,0_6)}{E-\ve_{\rm HOMO} -\tilde{\Sigma}_{\nu_1 \nu'}} + \frac{ C(\nu_2,0_6)}{E-\ve_{\rm HOMO} -\tilde{\Sigma}_{\nu_2 \nu'}}, \label{eq:isoresapprox} \end{equation} where $\nu_{1,2} \in 0_5$ diagonalize $\Sigma_{\rm T}$ and $0_N$ is the $N$-particle ground state. \section{Pi-electron effective field theory} In order to model the degrees of freedom most relevant for transport, we have developed an effective field theory of interacting $\pi$-electron systems ($\pi$-EFT), as described in detail in Ref.~\citenum{Barr11}. Briefly, this was done by starting with the full electronic Hamiltonian of a conjugated organic molecule and dropping degrees of freedom far from the $\pi$-electron energy scale. The effective $\pi$-orbitals were then assumed to possess azimuthal and inversion symmetry, and the effective Hamiltonian was required to satisfy particle-hole symmetry and be explicitly local. 
Such an effective field theory is preferable to semiempirical methods for applications in molecular junctions because it is more fundamental, and hence can be readily generalized to include screening of intramolecular Coulomb interactions due to nearby metallic electrodes. \subsection{Effective Hamiltonian} This allows the effective Hamiltonian for the $\pi$-electrons in gas-phase benzene to be expressed as \begin{eqnarray} \label{benzeneEffectiveHamiltonian} H_{\rm mol} & = & \mu \sum_n \rho_n - t \sum_{\langle n,m \rangle, \sigma} d^\dagger_{n\sigma} d_{m\sigma} \nonumber \\ & + &\frac{1}{2} \sum_{nm} U_{nm} (\rho_n - 1) (\rho_m - 1), \end{eqnarray} where $t$ is the tight-binding matrix element, $\mu$ is the molecular chemical potential, $U_{nm}$ is the Coulomb interaction between the electrons on the $n$th and $m$th $\pi$-orbitals, and $\rho_n \equiv \sum_\sigma d^\dagger_{n\sigma} d_{n\sigma}$. The interaction matrix $U_{nm}$ is calculated via a multipole expansion keeping terms up to the quadrupole-quadrupole interaction: \begin{eqnarray*} U_{nm} & = & U_{nn}\delta_{nm} \nonumber \\ & + & (1 - \delta_{nm})\left(U^{MM}_{nm} + U^{QM}_{nm} + U^{QM}_{mn} + U^{QQ}_{nm}\right) \nonumber \\ & + & {\cal O}(r^{-6}), \end{eqnarray*} where $U^{MM}$ is the monopole-monopole interaction, $U^{QM}$ is the quadrupole-monopole interaction, and $U^{QQ}$ is the quadrupole-quadrupole interaction. For two orbitals with arbitrary quadrupole moments $Q^{ij}_n$ and $Q^{kl}_m$ separated by a displacement $\vec r$, the expressions for these are \begin{eqnarray*} U^{MM}_{nm} & = & \frac{e^2}{\epsilon r}, \label{monopoleMonopole} \\ U^{QM}_{nm} & = & \frac{-e}{2 \epsilon r^3} \sum_{ij} Q_m^{ij} \hat{r}_i\hat{r}_j, \\ U^{QM}_{mn} & = & \frac{-e}{2 \epsilon r^3} \sum_{ij} Q_n^{ij} \hat{r}_i \hat{r}_j, \\ U^{QQ}_{nm} & = & \frac{1}{12 \epsilon r^5} \sum_{ijkl} Q_n^{ij} Q_m^{kl} W_{ijkl}, \label{quadrupoleQuadrupole} \end{eqnarray*} where \begin{eqnarray*} W_{ijkl} & = & \delta_{li} \delta_{kj} + \delta_{ki} \delta_{lj} - 5 r^{-2}(r_k \delta_{li} r_j + r_k r_i \delta_{lj} + \delta_{ki} r_j r_l \\ & & + r_i \delta_{kj} r_l + r_k r_l\delta_{ij}) + 35 r^{-4} r_i r_j r_l r_k \end{eqnarray*} is a rank-4 tensor that characterizes the interaction of two quadrupoles and $\epsilon$ is a dielectric constant included to account for the polarizability of the core and $\sigma$ electrons. Altogether, this provides an expression for the interaction energy that is correct up to fifth order in the inverse interatomic distance. \begin{figure} \centering \includegraphics{0728_spectral_fcn.eps} \caption[]{Spectral functions $A(E)=-1/\pi \Tr{G(E)}$ at room temperature for gas-phase benzene (top panel) and Pt--benzene--Pt junctions (ensemble average, bottom panel). The gas-phase resonances are broadened artificially as a guide to the eye. The dashed orange lines are fixed by (left to right) the lowest-lying optical excitation of the molecular cation \cite{Kovac80, Sell78, Kobayoshi78, Schmidt77, Baltzer97}, the vertical ionization energy of the neutral molecule \cite{Howell84, Kovac80, Sell78, Kobayoshi78, Schmidt77}, and the vertical electron affinity of the neutral molecule \cite{Burrow87}. The asymmetry in the average spectral function arises because the {\sc HOMO} resonance couples more strongly on average to the Pt tip atoms than does the {\sc LUMO} resonance. The work function of the Pt(111) surface ($-5.93$eV \cite{CRC}) is shown for reference. 
} \label{fig:benzene_spectral_fcns} \end{figure} \subsection{Benzene} The adjustable parameters in our Hamiltonian for gas-phase benzene are the nearest-neighbor tight-binding matrix element $t$, the on-site repulsion $U$, the dielectric constant $\epsilon$, and the $\pi$-orbital quadrupole moment $Q$. These were renormalized by fitting to experimental values that should be accurately reproduced within a $\pi$-electron-only model. In particular, we simultaneously optimized the theoretical predictions of 1) the six lowest singlet and triplet excitations of the neutral molecule, 2) the vertical ionization energy, and 3) the vertical electron affinity. The optimal parametrization for the $\pi$-EFT was found to be $t = 2.70$ eV, $U = 9.69$ eV, $Q = -0.65$ $e${\AA}$^2$ and $\epsilon = 1.56$, with an RMS relative error of $4.2\%$ in the fit of the excitation spectrum. The top panel of Fig.~\ref{fig:benzene_spectral_fcns} shows the spectral function for gas-phase benzene within $\pi$-EFT, along with experimental values for the first optical excitation of the cation ($3.04$ eV), the vertical ionization energy ($9.23$ eV), and the vertical electron affinity ($-1.12$ eV). As a guide to the eye, the spectrum has been broadened artificially using a tunneling-width matrix of $\Gamma_{nm} = (0.2 \; \textrm{eV}) \delta_{nm}$. The close agreement between the experimental values and the maxima of the spectral function suggests our model is accurate at this energy scale. In particular, the accuracy of the theoretical value for the lowest optical excitation of the cation is noteworthy, as this quantity was not fit during the renormalization procedure but rather represents a prediction of our $\pi$-EFT. In order to incorporate screening by metallic electrodes into $\pi$-EFT, we utilized an image multipole method, whereby the interaction between an orbital and the image orbitals is included up to the quadrupole-quadrupole interaction in a screened interaction matrix $\tilde{U}_{nm}$. In particular, we chose a symmetric $\tilde{U}_{nm}$ that ensures the Hamiltonian gives the energy required to assemble the charge distribution from infinity with the electrodes maintained at fixed potential, namely \[ \tilde{U}_{nm} = U_{nm} + \delta_{nm}U_{nn}^{(i)} + \frac{1}{2}(1 - \delta_{nm})(U_{nm}^{(i)} + U_{mn}^{(i)}), \] where $U_{nm}$ is the unscreened interaction matrix and $U_{nm}^{(i)}$ is the interaction between the $n$th orbital and the image of the $m$th orbital. When multiple electrodes are present, the image of an orbital in one electrode produces images in the others, resulting in an effect reminiscent of a hall of mirrors. We deal with this by including these ``higher order'' multipole moments iteratively until the difference between successive approximations of $\tilde{U}_{nm}$ drops below a predetermined threshold. In the particular case of the Pt--benzene--Pt junction ensemble described in the next section, the electrodes of each junction are modeled as perfect spherical conductors. An orbital with monopole moment $q$ and quadrupole moment $Q^{ij}$ located a distance $r$ from the center of an electrode with radius $R$ then induces an image distribution at $\tilde{r} = \frac{R^2}{r}$ with monopole and quadrupole moments \[ \tilde{q} = -q \frac{R}{r} - \frac{R}{2r^3} \sum_{ij} Q^{ij} \hat{r}_i \hat{r}_j \] and \[ \tilde{Q}^{ij} = -\left(\frac{R}{r}\right)^5 \sum_{kl} T_{ik} T_{jl} Q^{kl} \] respectively. 
Here $T_{ik}$ is a transformation matrix representing a reflection about the plane normal to the vector $\hat{r}$: \[ T_{ik} = \delta_{ik} - 2 \hat{r}_i \hat{r}_k. \] The lower panel of Fig.~\ref{fig:benzene_spectral_fcns} shows the Pt--benzene--Pt spectral function averaged over the ensemble of junctions described in the next section using this method. Comparing this spectrum with the gas-phase spectral function shown in the top panel of Fig.~\ref{fig:benzene_spectral_fcns}, we see that screening due to the nearby Pt tips reduces the \HOLU gap by 33\% on average, from 10.39~eV in the gas phase to 6.86~eV over the junction ensemble. \begin{figure} \centering \includegraphics{0713_vdW.eps} \caption{Calculated van der Waals contribution to the binding energy of benzene adsorbed on a Pt(111) surface as a function of distance. Here the plane of the molecule is oriented parallel to the Pt surface. A phenomenological short-range repulsion $\propto r^{-12}$ has been included to model the Pauli repulsion when the $\pi$-orbitals overlap the Pt surface states.} \label{fig:vdW_vs_distance} \end{figure} The screening of intramolecular Coulomb interactions by nearby conductor(s) illustrated in Fig.~\ref{fig:benzene_spectral_fcns} leads to an attractive interaction between a molecule and a metal surface (the van der Waals interaction). By diagonalizing the molecular Hamiltonian with and without the effects of screening included in $U_{nm}$, it is possible to determine the van der Waals interaction at arbitrary temperature between a neutral molecule and a metallic electrode by comparing the expectation values of the Hamiltonian in these two cases: \[ \Delta E_{vdW} = \langle H \rangle - \langle \tilde{H} \rangle. \] This procedure was carried out at zero temperature for benzene oriented parallel to the surface of a planar Pt electrode at a variety of distances, and the results are shown in Fig.~\ref{fig:vdW_vs_distance}. Note that an additional phenomenological short-range repulsion $\propto r^{-12}$ has been included in the calculation to model the Pauli repulsion arising when the benzene $\pi$-orbitals overlap the Pt surface states. \section{The lead-molecule coupling} When an isolated molecule is connected to electrodes and a molecular junction is formed, the energy levels of the molecule are broadened and shifted as a result of the formation of lead-molecule bonds and the electrostatic influence of the leads. The bonding between lead $\alpha$ and the molecule is described by the tunneling-width matrix $\Gamma^\alpha$, and the electrostatics, including intramolecular screening and van der Waals effects, are described by the effective molecular Hamiltonian derived using the aforementioned $\pi$-EFT. Although we use the Pt--benzene--Pt junction as an example here, the techniques we discuss are applicable to any conjugated organic molecular junction. \begin{figure} \centering \includegraphics{0705_stm_contact.eps} \caption{The trace of $\Gamma^\alpha$ for a Pt electrode in contact with a benzene molecule. Nine total basis states of the Pt tip are included in this calculation (one $s$, three $p$ and five $d$ states). The tip height above the plane of the molecule is adjusted at each point such that the Pt-C distance is fixed to 2.65\AA\ (see text). $\Tr{\Gamma^\alpha}$ retains the (six-fold) symmetry of the molecule and is sharply peaked near the center of the benzene ring, indicating that the strongest bonds are formed when the lead is in the `atop' configuration. 
The benzene molecule is shown schematically with the black lines; the carbon atoms are located at each vertex.} \label{fig:benzene_contact_image} \end{figure} \subsection{Bonding} The bonding of the tip of electrode $\alpha$ with the molecule is characterized by the tunneling-width matrix $\Gamma^\alpha$ given by Eq.~\ref{eq:Gamma_alpha}. When a highly-conductive SMJ \cite{Kiguchi08} is formed, the lead and molecule are in direct contact such that the overlap between the $\pi$-electron system of the molecule and {\em all} of the atomic-like wavefunctions of the atomically-sharp electrode is relevant. In this case we may express the elements of $\Gamma^\alpha$ as \cite{Bergfield09} \begin{equation} \Gamma^\alpha_{nm}(E) = 2\pi \sum_{l \in \{s,p,d,\ldots \}} C_l V^{n}_l \left(V^{m}_l\right)^\ast \, \rho^\alpha_l(E), \label{eq:Gamma_p} \end{equation} where the sum is over evanescent tunneling modes emanating from the metal tip, labeled by their angular momentum quantum numbers, $\rho^\alpha_l(E)$ is the local density of states on the apex atom of electrode $\alpha$, and $V^{n}_l$ is the tunneling matrix element between the $n$th $\pi$-orbital and tip mode $l$ \cite{Chen93}. The constants $C_l$ can in principle be determined by matching the evanescent tip modes to the wavefunctions within the metal tip \cite{Chen93}; however, we set $C_l=C \, \forall \, l$ and determine the constant $C$ by fitting to the peak of the experimental conductance histogram \cite{Kiguchi08}. In the calculation of the matrix elements, we use the effective Bohr radius of a $\pi$-orbital $a^* = a_0/Z$, where $a_0\approx$ 0.53{\AA} is the Bohr radius and $Z=3.22$ is the effective hydrogenic charge associated with the $\pi$-orbital quadrupole moment $-0.65e$\AA$^2$ determined by $\pi$-EFT. For each Pt tip, we include one $s$, three $p$ and five $d$ orbitals in our calculations, which represent the evanescent tunneling modes in free space outside the apex atom of the tip. At room temperature, the Pt density of states (DOS) $\rho^\alpha(E) = \sum_l \rho^\alpha_l(E)$ is sharply peaked around the Fermi energy \cite{Kleber73}, with $\rho^\alpha(\varepsilon_F)$=2.88/eV \cite{Kittel_KinderBook}. In accordance with Ref.\ \citenum{Chen93}, we distribute the total DOS such that the $s$ orbital contributes 10\%, the $p$ orbitals contribute 10\%, and the $d$ orbitals contribute 80\%. \begin{figure} \centering \includegraphics{0701_Gamma1_histogram.eps} \caption{Eigenvalue decomposition of an ensemble of $\Gamma^\alpha$ matrices, showing that each lead-molecule contact has $\sim5$ channels. Note that nine orthogonal basis orbitals were included in the calculation for each lead.} \label{fig:Gamma1_eigenchannel} \end{figure} We are interested in investigating transport through stable junctions, where the `atop' binding configuration of benzene on Pt has the largest binding energy \cite{Cruz07,Morin03,Saeys02}. In this configuration, the distance between the tip atom and the center of the benzene ring is $\approx$ 2.25{\AA} \cite{Kiguchi08}, giving a tip-to-orbital distance of $\approx$ 2.65{\AA} (the C--C bonds are taken as 1.4\AA). The trace of $\Gamma^\alpha(\varepsilon_F)$ is shown as a function of tip position in Fig.~\ref{fig:benzene_contact_image}, where for each tip position the height was adjusted such that the distance to the closest carbon atom was 2.65\AA. From the figure, it is evident that the lead-molecule coupling strength is peaked when the tip is in the vicinity of the center of the benzene ring (whose outline is drawn schematically in black).
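As an illustration of Eq.~\ref{eq:Gamma_p}, the following minimal sketch assembles $\Gamma^\alpha$ from a set of tunneling matrix elements. The matrix elements themselves, which in our calculation follow from the evanescent tip modes \cite{Chen93}, are replaced here by placeholder values.

\begin{verbatim}
import numpy as np

def gamma_matrix(V, rho_l, C=1.0):
    """Gamma_nm = 2*pi*sum_l C_l V_nl conj(V_ml) rho_l, with C_l = C.
    V     : (n_orbitals, n_modes) tunneling matrix elements
    rho_l : (n_modes,) partial DOS of the apex atom at energy E."""
    return 2.0 * np.pi * C * (V * rho_l) @ V.conj().T

# Pt apex-atom DOS at the Fermi energy, split 10% s, 10% p, 80% d
rho_l = 2.88 * np.array([0.10] + [0.10 / 3] * 3 + [0.80 / 5] * 5)

V = 0.1 * np.random.randn(6, 9)       # placeholder matrix elements
Gamma = gamma_matrix(V, rho_l)
channels = np.linalg.eigvalsh(Gamma)  # at most rank(V) nonzero values
\end{verbatim}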
As shown in Ref.\ \cite{Bergfield11a}, the hybridization contribution to the binding energy is \begin{eqnarray} \Delta E_{\rm hyb} &=& \sum_{\nu\in {\cal H}_{N-1}} \int_{\mu}^\infty \frac{dE}{2\pi} \frac{\Tr{\Gamma(E)C(\nu,0_{N})}}{E-E_{0_{N}}+E_\nu} \nonumber \\ &+& \sum_{\nu'\in {\cal H}_{N+1}} \int_{-\infty}^\mu \frac{dE}{2\pi} \frac{\Tr{\Gamma(E)C(0_{N},\nu')}}{-E-E_{0_{N}}+E_{\nu'}}, \label{eq:delta_Ehybrid} \end{eqnarray} which is roughly $\propto \Tr{\Gamma(\varepsilon_F)}$. Here $\mu$ is the chemical potential of the lead metal, ${\cal H}_{N}$ is the $N$-particle molecular Hilbert space, and $0_{N}$ is the ground state of the $N$-particle manifold of the neutral molecule. The sharply peaked nature of $\Tr{\Gamma^\alpha}$ seen in Fig.~\ref{fig:benzene_contact_image} is thus consistent with the large binding energy of the atop configuration. This result motivates our procedure for generating the ensemble of junctions, in which we consider the tip position in the plane parallel to the benzene ring as a 2-D Gaussian random variable with a standard deviation of 0.25\AA, chosen to correspond to the preferred bonding observed in this region. For each position, the height of each electrode (one placed above the plane and one below) is adjusted such that the closest carbon to the apex atom of each electrode is 2.65\AA. Each lead is placed independently of the other. This procedure ensures that the full range of possible bonded junctions is included in the ensemble; a minimal sketch of it is given below. The eigenvalue distributions of $\Gamma^\alpha$ over the ensemble are shown in Fig.~\ref{fig:Gamma1_eigenchannel}. Although we include nine (orthogonal) basis orbitals for each lead, the $\Gamma$ matrix only exhibits five nonzero eigenvalues, presumably because only five linear combinations can be formed which are directed toward the molecule. Although we have shown the distribution for a single lead, the number of transmission channels for two leads, where each $\Gamma^\alpha$ matrix has the same rank, will be the same even though the overall lead-molecule coupling strength will be larger. The average coupling per orbital with two electrodes is shown in the bottom panel of Fig.~\ref{fig:benzene_Unm_Trace_Gamma}.
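The ensemble-generation procedure may be sketched as follows; the benzene geometry and the sampling parameters are those quoted above, and the sketch is illustrative rather than the production code.

\begin{verbatim}
import numpy as np

def sample_tip_positions(carbons, n_samples, sigma=0.25, d_pt_c=2.65):
    """Tip positions for the junction ensemble: the in-plane position
    is 2-D Gaussian (std 0.25 A about the ring center); the height is
    chosen so the nearest carbon lies exactly 2.65 A from the apex."""
    center = carbons.mean(axis=0)
    tips = []
    for _ in range(n_samples):
        xy = center[:2] + np.random.normal(0.0, sigma, size=2)
        d_plane = np.linalg.norm(carbons[:, :2] - xy, axis=1)
        z = np.sqrt(d_pt_c**2 - d_plane.min()**2)
        tips.append([xy[0], xy[1], z])
    return np.array(tips)

# benzene carbons on a ring of radius 1.4 A in the z = 0 plane
phi = np.linspace(0.0, 2.0 * np.pi, 6, endpoint=False)
carbons = 1.4 * np.column_stack([np.cos(phi), np.sin(phi), 0.0 * phi])
upper_tips = sample_tip_positions(carbons, 2000)  # lower lead: z -> -z
\end{verbatim}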
\subsection{Screening} The ensemble of screened interaction matrices $\tilde{U}_{nm}$ is generated using the same procedure discussed above. Each Pt electrode is modelled as a conducting sphere with radius equal to the Pt polarization radius (1.87\AA). This is equivalent to the assumption that screening is due mainly to the apex atoms of each Pt tip. The screening surface is placed such that it lies one covalent radius away from the nearest carbon atom \cite{Barr11}. \begin{figure} \centering \includegraphics{0701_benzene_Unm_and_Trace_Gamma.eps} \caption{The distribution of the charging energy $\langle \tilde{U}_{nm} \rangle$ (top panel) and $\Tr{\Gamma}$ (bottom panel) over the ensemble described in the text. Here $\Gamma=\Gamma^1+ \Gamma^2$ is the total tunneling-width matrix of the junction. The width of the $\Tr{\Gamma}$ distribution is $\sim 4\times$ that of the $\langle \tilde{U}_{nm} \rangle$ distribution. The peaks of the $\langle \tilde{U}_{nm} \rangle$ and $\Tr{\Gamma}/6$ distributions are 1.58 eV and 1.95 eV, respectively, suggesting that transport occurs in an intermediate regime where both the particle-like and wave-like character of the charge carriers must be considered.} \label{fig:benzene_Unm_Trace_Gamma} \end{figure} The average over the interaction matrix elements $\langle \tilde{U}_{nm} \rangle$ defines the ``charging energy'' of the molecule in the junction \cite{Barr11}. The charging energy $\langle \tilde{U}_{nm} \rangle$ and per-orbital $\Tr{\Gamma}$ distributions are shown in the top and bottom panels of Fig.~\ref{fig:benzene_Unm_Trace_Gamma}, respectively, where two electrodes are used in all calculations. As indicated by the figure, the $\Tr{\Gamma}/6$ distribution is roughly four times as broad as the charging-energy distribution. This fact justifies using the ensemble-averaged $\tilde{U}_{nm}$ matrix for transport calculations \cite{Bergfield11a}, an approximation which makes the calculation of thousands of junctions computationally tractable. The peak values of the $\langle \tilde{U}_{nm} \rangle$ and $\Tr{\Gamma}/6$ distributions are 1.58 eV and 1.95 eV, respectively, suggesting that transport occurs in an intermediate regime where both the particle-like and wave-like character of the charge carriers must be considered. In addition to sampling various bonding configurations, we also take the ensemble of junctions to sample all possible Pt surfaces. The work function of Pt ranges from 5.93 eV to 5.12 eV for the (111) and (331) surfaces, respectively \cite{CRC}, and we assume that $\mu_{\rm Pt}$ is distributed uniformly over this interval. \begin{figure} \centering \includegraphics{0701_benzene_conductance_histo.eps} \caption{Calculated conductance histogram for the ensemble over bonding configurations and Pt surfaces. The value of the conductance peak has been fit to match the experimental data \cite{Kiguchi08}, determining the constant $C$ in Eq.~\ref{eq:Gamma_p}. There is no peak for $G \sim 0$ because we designed an ensemble of junctions where both electrodes are strongly bound to the molecule.} \label{fig:benzene_conductance_histogram} \end{figure} Using this ensemble, the conductance histogram over the ensemble of junctions can be computed, and is shown in Fig.~\ref{fig:benzene_conductance_histogram}. The constant prefactor $C$ appearing in the tunneling matrix elements \cite{Chen93} in Eq.~\ref{eq:Gamma_p} was determined by fitting the peak of the calculated conductance distribution to that of the experimental conductance histogram \cite{Kiguchi08}. Note that the width of the calculated conductance peak is also comparable to that of the experimental peak \cite{Kiguchi08}. \section{Results and Discussion} \begin{figure*} \centering \subfloat[Full many-body MDE theory]{\label{fig:many-body_benzene} \includegraphics[width=.5\linewidth]{0701_benzene_histogram.eps}} \subfloat[Isolated ({\sc HOMO}) resonance approximation]{\label{fig:iso_res_benzene} \includegraphics[width=.5\linewidth]{0701_benzene_HOMO_histogram.eps}} \caption{The calculated eigenvalue distributions for an ensemble of $1.74\times 10^5$ (2000 bonding configurations $\times$ 87 Pt surfaces) Pt--benzene--Pt junctions using many-body theory with (a) the full spectrum and (b) the isolated-resonance approximation for the (doubly degenerate) \HO resonance. Despite each lead forming $\sim 5$ bonds (cf.
Fig.~\ref{fig:Gamma1_eigenchannel}), calculations in both cases exhibit only two dominant channels, which arise from the degeneracy of the relevant ({\sc HOMO}) resonance. The weak third channel seen in (a) is a consequence of the large lead-molecule coupling and is consistent with the measurements of Ref.~\citenum{Kiguchi08}.} \label{fig:transport_results} \end{figure*} The transmission eigenvalue distributions for ensembles of $1.74\times 10^5$ Pt--benzene--Pt junctions calculated using the full many-body spectrum and in the isolated-resonance approximation are shown in Figs.~\ref{fig:many-body_benzene} and \ref{fig:iso_res_benzene}, respectively. Despite the existence of five covalent bonds between the molecule and each lead (cf.\ Fig.~\ref{fig:Gamma1_eigenchannel}), there are only two dominant transmission channels, which arise from the two-fold degenerate \HO resonance closest to the Pt Fermi level \cite{Bergfield11a}. To demonstrate this point, we calculated the transmission eigenvalue distribution, over the same ensemble, using only the \HO resonance in the isolated-resonance approximation (Eq.~\ref{eq:isoresapprox}). The resulting transmission eigenvalue distributions, shown in Fig.~\ref{fig:iso_res_benzene}, are nearly identical to the full distribution shown in Fig.~\ref{fig:many-body_benzene}, with the exception of the small but experimentally resolvable \cite{Kiguchi08} third transmission channel. The lack of a third channel in the isolated-resonance approximation is a direct consequence of the two-fold degeneracy of the \HO resonance, which can therefore contribute at most two transmission channels. The third channel thus arises from further off-resonant tunneling. In fact, we would argue that the very observation of a third channel in some Pt--benzene--Pt junctions \cite{Kiguchi08} is a consequence of the very large lead-molecule coupling ($\sim 2$ eV per atomic orbital) in this system. Since the DOS at the Fermi level of Cu or Au electrodes, for example, is smaller than that of Pt, we expect junctions with such electrodes to exhibit only two measurable transmission channels. \begin{figure} \centering \includegraphics{0701_average_total_transmission.eps} \caption{The total transmission averaged over 2000 bonding configurations through a Pt--benzene--Pt junction, shown as a function of the leads' chemical potential $\mu_{\rm Pt}$. The isolated-resonance approximation using the \HO or \LU resonance accurately describes the full many-body transport in the vicinity of the \HO or \LU resonance, respectively. These data are in good agreement with the measurements of Ref.~\citenum{Kiguchi08}. The work-function range for the crystal planes of Pt is shaded in blue, where $-5.84~\textrm{eV} \leq \mu_{\rm Pt} \leq -5.12~\textrm{eV}$ \cite{CRC}.} \label{fig:average_transmission} \end{figure} In order to investigate the efficacy of the isolated-resonance approximation further, we calculated the average total transmission through a Pt--benzene--Pt junction. The transmission spectra calculated using the full molecular spectrum, the isolated \HO resonance and the isolated \LU resonance are each shown as a function of the leads' chemical potential $\mu_{\rm Pt}$ in Fig.~\ref{fig:average_transmission}. The spectra are averaged over 2000 bonding configurations, and the blue shaded area indicates the range of possible chemical potentials for the Pt electrodes.
The close correspondence between the full transmission spectrum and the isolated \HO resonance over this range is consistent with the accuracy of the approximate method shown in Fig.~\ref{fig:transport_results}. Similarly, in the vicinity of the \LU resonance, the isolated \LU resonance approximation accurately characterizes the average transmission. The \HOLU asymmetry in the average transmission function arises because the {\sc HOMO} resonance couples more strongly on average to the Pt tip atoms than does the {\sc LUMO} resonance. It is tempting to assume, based on the accuracy of the isolated-resonance approximation in our many-body transport theory, that an analogous ``single molecular orbital'' approximation would also be sufficient in a transport calculation based, e.g., on density-functional theory (DFT). However, this is not the case. Although the isolated-resonance approximation can also be derived within DFT, in practice it is necessary to use an ``extended molecule'' to account for charge transfer between molecule and electrodes. This is because current implementations of DFT fail to account for the {\em particle aspect} of the electron \cite{Toher05,Burke06,Datta06,Geskin09}. Analyzing transport in terms of extended molecular orbitals has proven problematic. For example, the resonances of the extended molecule in Ref.\ \citenum{Heurich02} apparently accounted for less than 9\% of the current through the junction. Using an ``extended molecule'' also makes it difficult, if not impossible, to interpret transport contributions in terms of the resonances of the molecule itself \cite{Heurich02}. Since charging effects in SMJs are well-described in our many-body theory \cite{Bergfield09, Bergfield11b}, there is no need to utilize an ``extended molecule,'' so the resonances in our isolated-resonance approximation are true molecular resonances. \begin{figure} \centering \includegraphics{0701_benzene_fano_histo.eps} \caption{The calculated Fano factor $F$ distribution for the full ensemble of $1.74\times10^5$ Pt--benzene--Pt junctions. $F$ describes the nature of the transport, where $F=0$ and $F=1$ characterize wave-like (ballistic) and particle-like transport, respectively. The peak value of this distribution, $F\sim 0.51$, indicates that we are in an intermediate regime.} \label{fig:fano_factor} \end{figure} The full counting statistics of a distribution are characterized by its cumulants. Using a single-particle theory to describe a single-channel junction, it can be shown \cite{Levitov96,Levitov93} that the first cumulant is related to the junction transmission function while the second cumulant is related to the shot-noise suppression. Often this suppression is phrased in terms of the Fano factor \cite{Schottky18} \begin{equation} F= \frac{\sum_n \tau_n (1-\tau_n)}{\sum_n \tau_n}. \end{equation} In Fig.~\ref{fig:fano_factor} we show the distribution of $F$ for our ensemble of junctions, where the $\tau_n$ have been calculated using many-body theory. Because of the fermionic character of the charge carriers, $0 \leq F \leq 1$, with $F=0$ corresponding to completely wave-like transport and $F=1$ corresponding to completely particle-like transport. From the figure, we see that $F$ is peaked at $\sim 0.51$, implying that {\em both the particle and wave} aspects of the carriers are important, a fact which is consistent with the commensurate charging energy and bonding strength (cf.\ Fig.~\ref{fig:benzene_Unm_Trace_Gamma}).
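Given the transmission eigenvalues of a junction, the Fano factor is a one-line computation; the following minimal sketch (with illustrative $\tau_n$ values) shows the evaluation used to build a histogram like that of Fig.~\ref{fig:fano_factor}.

\begin{verbatim}
import numpy as np

def fano_factor(tau):
    """F = sum_n tau_n (1 - tau_n) / sum_n tau_n for transmission
    eigenvalues tau_n in [0, 1]."""
    tau = np.asarray(tau, dtype=float)
    return np.sum(tau * (1.0 - tau)) / np.sum(tau)

# two dominant channels and a weak third one (illustrative values)
print(fano_factor([0.8, 0.5, 0.05]))   # ~0.34
\end{verbatim}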
In such an intermediate regime both of the `complementary' aspects of the charge carriers are equally important, requiring a many-body description and resulting in many subtle and interesting effects. For example, the transport in this regime displays a variety of features stemming from the interplay between Coulomb blockade and coherent interference effects, which occur simultaneously \cite{Bergfield09,Bergfield10b}. Although the Fano factor reflects the nature of the transport, it is not directly related to the shot-noise power in a many-body theory. The richness of the transport in this regime, however, suggests that a full many-body calculation of a higher-order moment, such as the shot noise, may exhibit equally interesting phenomena. \section{Conclusion} We have developed a state-of-the-art technique to model the lead-molecule coupling in highly-conductive molecular junctions. The bonding between the lead and molecule was described using an `ab initio' model in which the tunneling matrix elements between all relevant lead tip wavefunctions and the molecule were included, producing multi-channel junctions naturally from a physically motivated ensemble over contact geometries. Coulomb interactions between the molecule and the metallic leads were included using an image multipole method within $\pi$-EFT. In concert, these techniques allowed us to accurately model SMJs within our many-body theory. The transport for an ensemble of Pt--benzene--Pt junctions, calculated using our many-body theory, confirmed our statement \cite{Bergfield11a} that the number of dominant transmission channels in an SMJ is equal to the degeneracy of the molecular orbital closest to the metal Fermi level. We find that the transport through a Pt--benzene--Pt junction can be accurately described using only the relevant ({\sc HOMO}) molecular resonance. The exceptional accuracy of such an isolated-resonance approximation, however, may be limited to small molecules with large charging energies. In larger molecules, where the charging energy is smaller, further off-resonant transmission channels are expected to become more important. In metallic point contacts the number of channels is completely determined by the valence of the metal. Despite the larger number of states available for tunneling transport in SMJs, we predict that the number of transmission channels is typically more limited than in single-atom contacts because molecules are less symmetrical than atoms. Channel-resolved transport measurements of SMJs therefore offer a unique probe into the symmetry of the molecular species involved.
\section{\label{secIntroduction}Introduction} The statistical model has been used to describe particle production in high-energy collisions for more than half a century~\cite{fermiheisenberghagedorn}. In this period it has evolved into a very useful and successful model describing a large variety of data; in particular, hadron yields in central heavy-ion collisions~\cite{pbpb,overview} have been described in a very systematic and appealing way unmatched by any other model. It has also provided a very useful framework for the centrality~\cite{centrality} and system-size dependence~\cite{syssize,bec} of particle production. The applicability of the model in small systems like $p-p$~\cite{pp1} and $e^+-e^-$ annihilation~\cite{ee1} has been the subject of several recent publications \cite{ee2,ee3,pppred}. The statistical-model analysis of elementary particle interactions can be summarized by the statement that the thermal parameters show almost no energy dependence in the range of $\sqrt{s}$~=14~--~900~GeV, with the temperature being about 165 MeV and the strangeness undersaturation factor $\gamma_S$ being in the range between 0.5 and 0.7. In the context of the system-size dependence of particle production, the $p-p$~ collisions at $\sqrt{s}$~=~17~GeV~ have been analyzed in detail recently. Based on similar data sets, the extracted parameters in different publications deviated significantly from each other: in a previous analysis (Ref.~\cite{syssize,pppred}) we derived $T$~=~164~$\pm$~9~MeV and $\gamma_S$~=~0.67~$\pm$~0.07, with $\chi^2/n$~=~1.7/3, while the authors in Ref.~\cite{bec} obtained $T$~=~178~$\pm$~6~MeV and $\gamma_S$~=~0.45~$\pm$~0.02, with $\chi^2/n$~=~11/7. These findings motivated different conclusions: in Ref.~\cite{syssize} no system-size dependence of the thermal parameters was found, except for $\gamma_S$, which tends to increase when more nucleons participate in the collisions; this rise is, however, smaller than the errors on the strangeness suppression parameter. In Ref.~\cite{syssize} it was therefore concluded that the hadron gas produced in central collisions at $\sqrt{s}$~=~17~GeV~ reaches its limiting temperature. Based on Ref.~\cite{bec}, on the other hand, it was argued in Ref.~\cite{na61} that decreasing $\gamma_S$ and, in particular, increasing temperature towards smaller systems allow for probing QCD matter beyond the freeze-out curve established in Pb-Pb and Au-Au collisions~\cite{eovern,wheaton_phd}. The goal of this paper is to understand the origin of these rather different thermal-model results obtained in the analysis of $p-p$~ data. We use an up-to-date complete set of data and discuss the sensitivity of the thermal-model parameters to their values. We present systematic studies of the data used as inputs and of the methods applied in their thermal-model analysis. The paper is organised as follows: In Section~\ref{secData} we discuss the experimental data on which the different analyses are based. In Section~\ref{secAnalysis} we summarize the main features of the statistical model and present the analysis of the SPS data obtained in $p-p$~ collisions. In the final section we present our conclusions and summarize our results. \section{\label{secData}Data} \begin{figure} \includegraphics[width=0.99\linewidth]{figure1.eps} \caption{\label{fig1} Charged kaon yields (left panels), negatively charged hadrons $h^-$ and pions $\pi^-$ (upper right) and $K^-/\pi^-$ ratios (lower right panel) in $p-p$~ collisions as a function of laboratory momentum.
The charged kaon yields, $K^+$ (diamonds) and $K^-$ (crosses), are from Ref.~\cite{compK}. The lines are fits to the data. The SPS yields from Ref.~\cite{dataSPSK} (circle) and from Ref.~\cite{dataSPSK0s} (triangle) are also shown. The negatively charged hadrons are from Ref.~\cite{comph}. } \end{figure} The data used throughout this paper for hadron yields in $p-p$~ collisions at $\sqrt s$=17.3~GeV are summarized in Table~\ref{Table_data}. Data in column {\sl Set A} were used in our previous analysis~\cite{pppred}, and the corresponding references are given in the table. Where the numerical values used in the analysis of Ref.~\cite{bec} deviate, they are listed in column {\sl Set B}. The relative differences between the particle yields from sets {\sl A} and {\sl B} are also indicated in Table~\ref{Table_data}. The data commonly used in the statistical-model description of particle production in $p-p$~ collisions at the SPS are displayed below the horizontal line in Table~\ref{Table_data}. \input{table_data.tex} In Table~\ref{Table_set} the experimental data are grouped in sets which are used in Section~\ref{secAnalysis} to perform the statistical-model analysis. In the following we motivate the particular choice of data in these sets and discuss how they can influence the model predictions on the thermal conditions in $p-p$~ collisions. \input{table_set.tex} The data set~{\sl A1} is the most restricted. Firstly, the production yields of $\Xi$ and $\Omega$ are not included because their numerical values are only preliminary (Ref.~\cite{na49web}). Secondly, the $\Lambda^*$ resonance is also not included, so as to restrict the analysis to stable hadrons. Finally, the $\phi$ meson is omitted in {\sl Set~A1} since this particle is difficult to address in the statistical model due to its hidden strangeness, as discussed in Ref.~\cite{syssize}. The lower yields of charged kaons in {\sl Set B} of Table~\ref{Table_data} are taken from results published in conference proceedings \cite{dataSPSK0s}. Such kaon yields are in disagreement with the trends from data measured at lower and higher energies, as seen in Fig.~\ref{fig1}. The left panels of Fig.~\ref{fig1} show the charged kaon multiplicities from $p-p$~ interactions at lower and higher beam momenta~\cite{compK} together with the data from Table~\ref{Table_data}. The lines in this figure are simple parametrizations interpolating to SPS energies; a minimal sketch of such an interpolation is given below. The $K^-$ yield from~\cite{dataSPSK} is seen to be 7\% below the value expected from the above parametrization, but agrees with it within errors. The $K^-$ abundance from~\cite{dataSPSK0s} is 24\% lower, and its error is only 10\%. As we discuss below, such a low value for the multiplicity of charged kaons influences the statistical-model fit in an essential way. The upper right panel of Fig.~\ref{fig1} shows the negatively charged hadrons from $p-p$~ interactions at several beam momenta from Ref.~\cite{comph}. As indicated in Ref.~\cite{comph}, the $K^-$, $\bar{p}$ and $\Sigma^-$ supplement the $\pi^-$ yield. In this case, the ratio $\pi^-$/$h^-$ amounts to 91\%. Including more sources of feed-down, the $\pi^-$/$h^-$ ratio stays at the same level as long as $\Lambda$ and $K^0_S$ can be separated. Consequently, to calculate the negatively charged pions from the $h^-$ yields one can use the above 91\% scaling factor. Figure~\ref{fig1} (top right panel) shows the fit to the $h^-$ yields as a function of beam momentum and, obtained by rescaling it with this factor, the expected $p_{\rm{lab}}$ dependence of the negatively charged pions.
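The interpolation procedure can be sketched as follows; the tabulated points are placeholders standing in for the compilation of Ref.~\cite{compK}, and the power-law form is an assumption made only for illustration.

\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

# placeholder (p_lab [GeV/c], yield) points; the actual compilation
# is taken from the references quoted in the text
p_lab  = np.array([12.0, 24.0, 32.0, 70.0, 205.0, 405.0])
yields = np.array([0.03, 0.06, 0.08, 0.13, 0.22, 0.30])

def param(p, a, b):
    return a * p**b           # assumed smooth parametrization

popt, _ = curve_fit(param, p_lab, yields, p0=[0.01, 0.7])
expected = param(158.0, *popt)     # SPS: p_lab = 158 GeV/c
# relative deviation of a measured SPS yield from the interpolation
deviation = lambda measured: (measured - expected) / expected
\end{verbatim}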
The yields of $\pi^-$ at SPS from Table~\ref{Table_data} agree quite well with those expected from the interpolation line shown in Fig.~\ref{fig1}. They are only slightly higher, by 1\% for the yields taken from \cite{dataSPSpi} and by 5\% for the yields used in Ref.~\cite{bec}. The lower right panel in Fig.~\ref{fig1} shows the $K^-$/$\pi^-$ ratio at SPS compared to the interpolated data from other beam momenta. The mean value of the $K^-$/$\pi^-$ ratio used in \cite{pppred} is 8\% below the interpolated line but agrees with it within errors, while the corresponding value used in \cite{bec} is 28\% smaller and exhibits an error of only 11\%. Clearly, the above differences in the $K^-$/$\pi^-$ ratios influence the thermal-model fits. In general, a smaller kaon yield implies a stronger suppression of the strange-particle phase space, resulting in a smaller value for the strangeness undersaturation factor $\gamma_S$. If other strange particles are included, then the strong suppression caused by $\gamma_S$ has to be compensated by a higher temperature. This might be one of the origins of the different thermal fit parameters obtained in Refs.~\cite{pppred} and \cite{bec}. In order to quantify this, we have selected a data set {\sl A4} which is equivalent to {\sl Set~A1} but with the kaon yields of Ref.~\cite{dataSPSK} replaced by the values from Ref.~\cite{dataSPSK0s}. The {\sl Set~B1} is (besides the $\Lambda^*$) equivalent to {\sl A1} but with numerical values for the particle yields from column {\sl B} in Table~\ref{Table_data}. The {\sl Set~B4} is used to demonstrate the influence of the $\Lambda^*$ resonance on the thermal fit parameters. The {\sl Sets A3}, {\sl B3} and {\sl A2}, {\sl B2} are chosen to study the influence of the $\phi$ meson and the multistrange hyperons on the thermal fit parameters. \section{\label{secAnalysis} Statistical model analysis} The usual form of the statistical model formulated in the grand-canonical ensemble cannot be used when either the temperature or the volume or both are small. As a rule of thumb, one needs $VT^3>1$ for a grand-canonical description to hold~\cite{hagedornred,rafeldan}. Furthermore, even if this condition is met, canonical suppression still appears whenever the abundance of a subset of particles carrying a conserved charge is small, even though the grand-canonical description is valid for the bulk of the produced hadrons. There exists a vast literature on the subject of canonical suppression and we refer to several articles (see e.g.~\cite{overview,polishreview,can1}). The effect of canonical suppression in $p-p$~ collisions at ultra-relativistic energies is relevant for hadrons carrying strangeness. The larger the strangeness content of the particle, the stronger is the suppression of the hadron yield. This has been discussed in great detail in~\cite{hamieh}. In line with previous statistical-model studies of heavy-ion scattering at lower energies, the collisions of small ions at SPS revealed~\cite{syssize} that the experimental data show a stronger suppression of strange-particle yields than expected in the canonical model~\cite{hotQuarks,syssize,lisbon}. Consequently, an additional suppression effect had to be included in order to quantify the observed yields. Here we introduce the off-equilibrium factor $\gamma_S\leq 1$, which reduces the densities $n_s$ of hadrons carrying strangeness $s$ by $n_s \rightarrow n_s \cdot \gamma_S^{|s|}$ \cite{rafeldan}.
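In the Boltzmann approximation the effect of $\gamma_S$ on a primary yield can be sketched as follows; this toy evaluation neglects resonance feed-down and canonical corrections, and the particle properties are quoted only for illustration.

\begin{verbatim}
import numpy as np
from scipy.special import kn

def boltzmann_density(m, g, T, mu=0.0):
    """Boltzmann number density (hbar = c = 1, all energies in GeV):
    n = g/(2 pi^2) * m^2 T K_2(m/T) exp(mu/T)."""
    return g / (2.0 * np.pi**2) * m**2 * T * kn(2, m / T) * np.exp(mu / T)

def suppressed_density(m, g, s, T, gamma_S, mu=0.0):
    """Strangeness undersaturation: n_s -> n_s * gamma_S**|s|."""
    return gamma_S**abs(s) * boltzmann_density(m, g, T, mu)

# K+ (m = 0.494 GeV, g = 1, |s| = 1) at T = 164 MeV, gamma_S = 0.67
n_K = suppressed_density(0.494, 1, 1, 0.164, 0.67)
\end{verbatim}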
We investigate whether or not all quantum numbers have to be conserved exactly in $p-p$~ collisions within a canonical approach by comparing the data with two model settings: \begin{enumerate} \item [$\bullet$] {\sl Canonical (C) Model}: all conserved charges, i.e. strangeness, electric charge and baryon number, are conserved exactly within a canonical ensemble. \item [$\bullet$] {\sl Strangeness Canonical (SC) Model}: only strangeness is conserved exactly, whereas the baryon number and electric charge are conserved on average and their densities are controlled by the corresponding chemical potentials. \end{enumerate} The parameters of these models are listed in Table~\ref{Table_parameter}. In the following we compare predictions of the above statistical models with the $p-p$~ data summarized in the different sets discussed above. \input{table_parameter.tex} \subsection{\label{secStudy} Comparative study of $p-p$~ data at SPS} We start from the analysis of data set~{\sl A1} and modify it stepwise to find out how one arrives at the conclusion of a larger temperature in $p-p$~ than in central $A-A$ collisions at SPS, as indicated in Ref.~\cite{bec}. All numerical values of the model parameters are listed in Table~\ref{Table_fit}. A detailed discussion of their choice and correlations is presented in the Appendix, based on the $\chi^2/n$ systematics. \input{table_fit.tex} The fit to data set~{\sl A1} in the SC model agrees with our previous analysis from Ref.~\cite{pppred}, see also Fig.~\ref{fig2} (top). The SC model fit to these data does not change when including the $\Lambda^*$ hyperons, resulting in the same values of the thermal parameters and their errors, as summarized in Table~\ref{Table_fit}. \begin{figure} \includegraphics[width=0.99\linewidth]{figure2a.eps} \includegraphics[width=0.99\linewidth]{figure2b.eps} \caption{\label{fig2} The $\chi^2$ scan in the (T--$\gamma_S$)-plane. Starting from its minimum, $\chi^2$ increases by 2 for each contour line. Upper figure: fit to data set {\sl A1} in the model where only strangeness is conserved exactly (SC). Lower figure: fit to data set {\sl A4} in the canonical (C) model. The minima are indicated by the crosses. } \end{figure} The most striking effect on the thermal parameters is expected when replacing the kaon yields in {\sl Set A1} (Fig.~\ref{fig3}, top) by those from Ref.~\cite{dataSPSK0s}, {\sl Set A4}, Fig.~\ref{fig2} (bottom). Indeed, the smaller kaon yields cause an increase of the temperature and a decrease of $\gamma_S$. These changes come along with a reduced volume and, in the case of the SC fit, with an increase of the baryon chemical potential. The kaons from Ref.~\cite{dataSPSK0s} dominate the fit because their errors are 10\%, while the uncertainties of the $K^+$ and $K^-$ yields taken from Ref.~\cite{dataSPSK} are 21\% and 31\%, respectively. Consequently, the smaller errors dominate the statistical-model fit. \begin{figure} \includegraphics[width=0.99\linewidth]{figure3a.eps} \includegraphics[width=0.99\linewidth]{figure3b.eps} \includegraphics[width=0.99\linewidth]{figure3c.eps} \caption{\label{fig3} As in Fig. \ref{fig2} but for data {\sl Set A1} (top), {\sl Set A2} (middle), {\sl Set A3} (bottom). Canonical ensemble. } \end{figure} In the next step we add the $\Lambda^*$, $\Xi$ and $\Omega$ hyperons, resulting in {\sl Set~A2}, see Fig.~\ref{fig3}, middle panel. The measured hyperon multiplicities coincide with the model results obtained before; thus, within errors, the statistical-model parameters remain unchanged.
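The $\chi^2$ scans of Figs.~\ref{fig2} and \ref{fig3} amount to evaluating $\chi^2$ on a grid in the $(T,\gamma_S)$-plane. A minimal sketch follows; the yields, errors and model function are placeholder stand-ins for the full statistical-model calculation.

\begin{verbatim}
import numpy as np

exp_yields  = np.array([3.0, 0.20, 0.13, 0.012])   # illustrative only
errors      = np.array([0.3, 0.02, 0.02, 0.004])
strangeness = np.array([0, 1, 1, 2])

def model_yields(T, gamma_S):
    # toy parametrization standing in for the statistical model
    base = np.array([18.0, 1.1, 0.75, 0.08]) * T
    return base * gamma_S**strangeness

def chi2(T, gamma_S):
    return np.sum(((model_yields(T, gamma_S) - exp_yields) / errors)**2)

Ts     = np.linspace(0.140, 0.210, 71)             # GeV
gammas = np.linspace(0.3, 1.0, 71)
grid   = np.array([[chi2(T, g) for g in gammas] for T in Ts])
iT, ig = np.unravel_index(grid.argmin(), grid.shape)
# contour levels chi2.min() + 2, + 4, ... reproduce the spacing of Fig. 2
\end{verbatim}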
We now turn to {\sl Set~A3}, which adds the $\phi$ meson, Fig.~\ref{fig3} bottom. In this case the temperature is indeed much higher. In the SC model the thermal parameters obtained from {\sl Sets A3} and {\sl B3} appear to be meaningless and unphysical due to the $\phi$ meson contribution. In addition to the different kaon data in {\sl Sets A} and {\sl B}, there are also slightly different values for the pion and $\Lambda$ yields, see Table~\ref{Table_data}. Compared to the results obtained from data {\sl Set A4}, in the fit of {\sl Set~B1} the higher pion yield reduces the strange to non-strange particle ratios, resulting in a slightly smaller value of $\gamma_S$. The fit obtained with {\sl Set B4} yields similar results to that obtained from {\sl Set B1}, indicating that the $\Lambda^*$ resonance does not affect the model parameters. In the canonical (C) model analysis, the fits to {\sl Sets~A1} and {\sl A2} tend towards a slightly higher temperature and a smaller $\gamma_S$ than those obtained in the SC analysis. The situation is different for {\sl Sets~A3} and {\sl B3}, which include the $\phi$ meson. Here in the SC model the temperature is very high and $\gamma_S\simeq 0.5$. In the C model the temperature decreases and $\gamma_S$ drops below 0.5. We can conclude that in the case where the $\phi$ meson is included in the fit, one needs to apply the C analysis to get lower temperatures, however with very small values of $\gamma_S$ and a large $\chi^2/n$. For {\sl Set B3} the numerical results for $T$ and $\gamma_S$ summarized in Table~\ref{Table_fit} coincide with those obtained in Ref.~\cite{bec}, however with a larger $\chi^2/n$ \footnote{We compare our results to fit B from Ref.~\cite{bec}}. \section{\label{secSummary} Discussion and Summary} The statistical-model analyses of hadron yields for $p-p$~ collisions at $\sqrt{s}$~=~17~GeV~ from Refs.~\cite{syssize} and~\cite{bec} yield different results and lead to different conclusions on the system-size dependence of the thermal parameters~\cite{syssize,na61}. In this paper we have reanalyzed the $p-p$~ data and studied the sensitivity of the thermal fit to the data selection and to the model assumptions. We have shown that the different conclusions of Refs.~\cite{syssize} and~\cite{bec} are mostly due to differences in the data selection. Slightly different numerical values for the charged pions and $\Lambda$ hyperons used in Refs.~\cite{syssize} and~\cite{bec}, as well as the contribution of the $\Lambda^*$ resonance, altered the thermal parameters only within errors. However, the charged kaon yields used in the two approaches differ substantially. We have argued that the kaon yields used in Ref.~\cite{bec} deviate from the trends seen in data at different energies, resulting in a higher fitted temperature. We have shown that the higher kaon yields expected from the systematics of the energy dependence in $p-p$~ collisions are in line with the data on multi-strange baryons. Unlike for the hyperons, when adding the $\phi$ meson the thermal-model fit leaves the reasonable range of parameters, resulting in a very high temperature exceeding 190~MeV and a large $\chi^2/n$. We have quantified the modifications of these results when including an exact conservation of all quantum numbers in the canonical statistical model. We have shown that in the absence of the $\phi$ meson the thermal fits are rather weakly influenced by canonical effects due to an exact conservation of the baryon number and the electric charge, leading in some cases to a systematic increase of the freeze-out temperature.
Fits including the $\phi$ meson are sensitive to an exact conservation of all quantum numbers, resulting in lower temperatures. However, the thermal-model analysis of the data sets including this hidden-strangeness state has the largest $\chi^2/n$, indicating that this particle cannot be addressed properly in this model. From our analysis we conclude that, within the presently available data on $p-p$~ collisions at SPS energy and the uncertainties on the thermal parameters obtained from fits within the statistical model, it is rather unlikely that the temperature in $p-p$~ collisions significantly exceeds that expected in central collisions of heavy ions at the same energy. \begin{acknowledgments} K.R. acknowledges stimulating discussions with P. Braun-Munzinger and support of DFG, the Polish Ministry of Science MEN, and the Alexander von Humboldt Foundation. The financial support of the BMBF, the DFG-NRF and the South Africa -- Poland scientific collaborations is also gratefully acknowledged. \end{acknowledgments}
\section{Introduction} Super-resolution (SR) is the task of recovering a high-resolution (HR) image from a low-resolution (LR) image. Recent works have achieved significant SR performance using deep convolutional neural network (CNN) based approaches. Some of them exploit a strict content loss as the training objective for super-resolution and propose various network architectures to improve the PSNR score. However, these methods often result in overly smooth images with poor perceptual quality [6]. Another branch of works focuses on improving perceptual quality with perceptual training methods [1,6,7]. These methods employ generative adversarial networks (GAN) and perceptual loss functions to drive the network's output towards the natural image manifold of possible HR images. We assess and further improve the perceptual quality of these works. Because super-resolution is a one-to-many problem with multiple possible reconstructions for one image, methods based on a strict content loss often end up predicting the average of the possible reconstructions [6]. Perceptual-driven solutions utilize perceptual and adversarial losses, neither of which penalizes the generator for generating equally realistic images with stochastic variation. Recently, there have been discussions on the ill-posed nature of SR, namely that multiple possible reconstructions exist for a given low-resolution image. Specifically, [23] learns the distribution of the output instead of a deterministic output using normalizing flows. We believe the current GAN-based solutions for super-resolution are missing key components of a one-to-many pipeline. Specifically, we identify two incomplete aspects in the current perceptual SR pipeline. First, although the perceptual objectives do not penalize stochastic variation, the final loss is mixed with the strict content loss, which strictly penalizes these variations. Second, the generator does not have the ability to generate multiple estimates of the image despite facing a one-to-many problem. To improve the ESRGAN[1] pipeline into a one-to-many pipeline, we provide the generator with pixel-wise noise. The generator estimates the output distribution as a mapping from the random distribution. We also improve the content loss so that it does not restrict the variation in the image while still ensuring the consistency of the content. Our loss is designed so that the reconstruction of photo-realistic details is learned through GAN-oriented training objectives while a weak content loss keeps the reconstruction consistent with the input image. The key contributions of our work can be described as follows: \begin{itemize} \item We propose a weaker content loss that does not penalize generating high-frequency detail and stochastic variation in the image. \item We enable the generator to generate diverse outputs by adding scaled pixel-wise noise after each RRDB block. \item We filter blurry regions in the training data using the Laplacian activation[10]. \item We additionally provide the LR image to the discriminator to give better gradient feedback to the generator. \end{itemize} \section{Related work} \label{gen_inst} Since the pioneering work of SRCNN[9], many works have exploited pixel-wise losses and PSNR-oriented training objectives to learn the end-to-end mapping from LR to HR images. We denote such pixel-wise losses as the strict content loss. Many network architectures and techniques have been explored to improve the capacity of such networks.
Deeper network architectures[17], residual networks[6], channel attention[18], and techniques to remove batch normalization[19] were introduced. Although these works achieved state-of-the-art SR performance in the peak signal-to-noise ratio (PSNR) metric, they often produce overly smooth images. To improve the perceptual image quality of SR, SRGAN [6] proposes a perceptual loss and GAN-based training. The perceptual loss is measured using intermediate activations of the VGG-19 network, and a discriminator is used for the adversarial training process. Enhanced SRGAN (ESRGAN) further improves SRGAN by modifying the generator architecture with the Residual in Residual Dense Block (RRDB), the Relativistic GAN [16] loss, and an improved perceptual loss. Such methods are superior to PSNR-oriented methods at generating photo-realistic SR images with sharp details, achieving high perceptual scores. However, we can still often find unpleasant and problematic artifacts in the reconstructions of ESRGAN. Such cases are exemplified in Figure 4. The ill-posed nature of the super-resolution problem has also been addressed in other works, e.g., SRFlow [23]. SRFlow approaches super-resolution with normalizing flows to explicitly learn the conditional distribution of the SR output given the LR input. Normalizing flows are a method for constructing complex distributions that has also shown significance in image generation. They construct the data distribution by transforming a simple distribution with a sequence of invertible transformation functions; through the change-of-variables theorem the likelihood is tractable, which allows optimization using only the negative log-likelihood loss, unlike GANs and VAEs. Traditional metrics for assessing image quality such as PSNR and SSIM (Structural Similarity Index Measure) fail to coincide with human perception[4]. The PSNR score is calculated based on the pixel-wise MSE, so methods that minimize pixel-wise differences tend to achieve high PSNR scores [9]. However, PSNR-oriented solutions fail at generating high-frequency details and often drive the reconstruction towards the average of possible solutions, producing overly smooth images[6]. The learned perceptual image patch similarity (LPIPS) score[4] was proposed to measure the perceptual quality on various computer vision tasks. According to [2], the LPIPS score reliably coincides with human perception for assessing super-resolved images. We use the LPIPS score as an indicator of perceptual image quality in our experiments. CycleGAN[8] is a pipeline for image-to-image translation with unpaired images using generative adversarial nets and a cycle loss. CycleGAN consists of two generators $G_{1}, G_{2}$ and two discriminators $D_{1}, D_{2}$, where $G_{1}$ and $G_{2}$ each translate the input image in a cycling manner. The generators are trained to minimize the adversarial loss and the cycle loss $||G_{2}(G_{1}(x)) - x||_{1}$ between the input image and the cycled image. We were able to design a loss based on this cycle loss that reliably measures content consistency without such a complicated design. \section{Method} \label{headings} \begin{figure} \centering \includegraphics[width=140mm]{Pipeline} \caption{An overview of our method. The cycle consistency loss is measured by comparing the LR image with the downsampled SR image.
The discriminator is provided with the target image and a reference image generated by bicubic-upsampling the LR image.} \end{figure} We design a one-to-many approach for perceptual super-resolution by modifying the generator and the training objective. We also describe additional modifications to the training process and the discriminator to improve the perceptual quality of SR. \subsection{Cycle consistency loss} Most works on perceptual super-resolution[1, 6, 7] combine the content loss, adversarial loss (GAN loss), and perceptual loss for the training objective as in Equation 1. Although the strict content loss and the adversarial loss are fundamentally disagreeing objectives, relying exclusively on either loss has significant issues. The strict content loss guides the network output to be exactly consistent with the HR image, pushing the network to learn the mean of possible reconstructions, and thus tends to give overly-smoothed results. Although the GAN framework is a powerful method for photo-realistic image generation, adversarial learning is highly unstable, and while the adversarial loss and the perceptual loss guide the network to be perceptually convincing, they do not enforce the content of the super-resolved image to be consistent with the low-resolution image. \begin{equation} \label{eq:esrgan_total} L_{Total} = L_{percep} + \lambda L_{GAN} + \eta L_{1} \end{equation} We regard simply trading off these disagreeing losses as an incomplete objective for super-resolution, since the mixing of such losses will obstruct the optimization of either loss. An improved training objective must be GAN-oriented while ensuring consistent content of the image. That is, we need a content loss that does not hamper the generation of images with high-frequency details. We propose a soft content loss inspired by the cycle loss of CycleGAN[8] that ensures the output of the generator is consistent with the low-resolution image while not disturbing the generation of high-frequency information. We view the super-resolution problem as an image-to-image translation task between the LR and HR image spaces and apply the CycleGAN framework. To simplify the problem, we exploit our prior knowledge of $G_{2}: HR \rightarrow LR$. We can denote the downsampling operation as $f$ and set $G_{2}$ to be $f$ instead of learning it. Consequently, our pipeline does not require learning $D_{2}$, which is only a tool for learning $G_{2}$. This leaves only $G_{1}$ and $D_{1}$ to be learned. We can write the cycle consistency loss as Equation 2. This loss will not penalize generating high-frequency details in any way while the SR image remains consistent with the LR image. Finally, we can write our generator loss as Equation 3. \begin{equation} \label{eq:cyc} L_{cyc}(G_{1}) = ||f(G_{1}(LR)) - LR||_{1} \end{equation} \begin{equation} \label{eq:ours_total} L_{Total}(G_{1}) = L_{cyc}(G_{1}) + \lambda L_{GAN}(G_{1}, D_{1}) + \eta L_{percep} \end{equation} Intuitively, we train the generator to generate high-frequency details with the GAN and perceptual losses while the new content loss simply ensures the reconstructed image to be consistent with the original image.
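A minimal TensorFlow sketch of Equations 2 and 3 is given below; the adversarial and perceptual terms are assumed to be computed elsewhere in the training step, and fixed bicubic downsampling stands in for $f$.

\begin{verbatim}
import tensorflow as tf

def cycle_consistency_loss(lr, sr):
    """L_cyc = || f(G(LR)) - LR ||_1 with f a fixed bicubic
    downsampling operator (Equation 2)."""
    h, w = tf.shape(lr)[1], tf.shape(lr)[2]
    down = tf.image.resize(sr, [h, w], method='bicubic')
    return tf.reduce_mean(tf.abs(down - lr))

def generator_loss(lr, sr, gan_loss, percep_loss, lam=5e-3, eta=10.0):
    """Total objective of Equation 3; gan_loss and percep_loss are
    assumed to be computed elsewhere in the training step."""
    return cycle_consistency_loss(lr, sr) + lam * gan_loss \
           + eta * percep_loss
\end{verbatim}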
\subsection{Adaptive Gaussian noise to the generator} \begin{figure} \centering \includegraphics[width=140mm]{images/noise_scale_map.png} \caption{Boxplot of the scaling factors against the depth of the network trained on configuration(c) $\times4$. The desired magnitude of noise generally increases in deeper layers, while the final layers have smaller scaling factors. The sensitivity to random noise varies for each layer and channel.} \end{figure} For the generator to be capable of generating more than one solution given a single image, it must receive and apply random information. The variation between super-resolved images will mostly be stochastic variation in high-frequency textures. StyleGAN[3] achieves stochastic variation in images by adding pixel-wise Gaussian noise to the output of each layer in the generator. We adopt this method and add the noise after every RRDB layer in the generator. However, this introduces new hyperparameters with regard to the magnitude of the noise. We also observe that the sensitivity and the desired magnitude of noise differ among layers and channels. Adding the same noise directly after every layer could harm the ability of the generator; for example, a channel that detects edges would be seriously harmed by the noise. To mitigate such possible issues, we allow each channel to adaptively learn the desired magnitude of the noise. Specifically, we multiply the noise with a channel-wise scaling factor before adding the noise to the output of each layer, as sketched below. The scaling factor is learned concurrently with the network parameters. The noise is not applied at evaluation. In Figure 2, we observe that the desired magnitude indeed differs along the network depth and between channels. The early and final layers seem to have a lower noise tolerance. We suspect this is because the early layers focus on extracting the features of the image, while the final layers prefer the noise to be reduced before it affects the final reconstruction. There were also differences in magnitude between channels, as shown in the boxplot. This suggests that our method of scaling the noise adaptively effectively provides random information for generating multiple realistic reconstructions for super-resolution.
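A minimal Keras-style sketch of the scaled noise layer follows; the zero initialization of the scaling factors is an assumption made for illustration.

\begin{verbatim}
import tensorflow as tf

class ScaledNoise(tf.keras.layers.Layer):
    """Adds pixel-wise Gaussian noise scaled by a learned per-channel
    factor; inserted after each RRDB block, disabled at evaluation."""
    def build(self, input_shape):
        channels = int(input_shape[-1])
        self.scale = self.add_weight(name='scale', shape=(channels,),
                                     initializer='zeros', trainable=True)

    def call(self, x, training=None):
        if not training:          # noise is not applied at evaluation
            return x
        # one noise map per image, broadcast across channels
        noise = tf.random.normal(tf.shape(x)[:-1])[..., None]
        return x + noise * self.scale
\end{verbatim}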
\subsection{Reference image for the discriminator} Traditionally, the discriminator network receives a single image and is trained to classify whether the given image is real or generated. This setting provides the generator with gradients towards ``any natural image'' instead of towards the corresponding HR image. As an extreme example, the traditional discriminator would not penalize the generator for generating completely different but equally realistic images from an LR image. Although this is unlikely due to the existence of the other content and perceptual losses, the gradient feedback given by the discriminator is sub-optimal for the task of super-resolution. As a solution, we provide the low-resolution image as a reference along with the target image to the discriminator. This enables the discriminator to learn more important features for discriminating the generated image and to provide better gradient feedback according to the LR image. For details, refer to Figure 1. We upsample the LR image to the same size as the HR image and concatenate them, feeding a tensor of shape $(H, W, 6)$ to the discriminator. Despite its simplicity, conditioning the discriminator on the input is a crucial modification for training such a supervised problem with GAN-oriented losses. \subsection{Blur detection} We recognized that there are often severely blurry regions in the images from the DIV2K[14] and DIV8K[15] datasets. Although the authors of [15] argue that the data was collected by ``paying special attention to image quality'', there are many scenes with out-of-focus backgrounds. These blurry regions might lead the generator to learn to produce similarly blurry patches. Blurry backgrounds are often indistinguishable from finer objects based only on the LR image. Though some might argue that the blurry backgrounds must also be learned, we were able to achieve finer detail and a higher LPIPS score by detecting and removing blurry patches from both datasets. We propose to detect and remove blurry patches before the network is trained on those patches. There are various methods for blur detection, e.g., algorithmic methods and deep-learning-based approaches[11, 12]. However, most deep-learning-based works focus on predicting pixel-wise blur maps of the image, which is not suited to our needs. In practice, the algorithmic method of [10] was successful at reliably detecting blurry patches, as can be observed in Figure 3. We measure the variance of the Laplacian activation of the patch and consider patches with a variance under 100 as blurry. The algorithm detects 28.8\% blurry patches in a sample of 16,000 randomly cropped patches of size 96$\times$96 from the DIV2K dataset and 48.9\% in a sample of 140,000 patches from the DIV8K dataset. \begin{figure} \centering \includegraphics[width=140mm]{images/Blurexample.png} \caption{Randomly selected samples of the blur detection algorithm tested on image 0031 from the DIV8K dataset. The top two rows are the patches classified as clear and the bottom rows are blurry patches. Regions that are clear in the image (person, pole) are correctly considered as clear patches by the detection algorithm.} \end{figure}
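The patch filter reduces to a few lines; this sketch (using OpenCV and assuming 8-bit BGR patches) illustrates the criterion with the threshold of 100 quoted above.

\begin{verbatim}
import cv2

def is_blurry(patch, threshold=100.0):
    """Classify a patch as blurry when the variance of its Laplacian
    response falls below the threshold."""
    gray = cv2.cvtColor(patch, cv2.COLOR_BGR2GRAY)
    return cv2.Laplacian(gray, cv2.CV_64F).var() < threshold

# usage: clear_patches = [p for p in patches if not is_blurry(p)]
\end{verbatim}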
\section{Experiments} We conduct experiments to evaluate the effectiveness of our proposed techniques at $\times$4 and $\times$16 scale and compare them with the baseline ESRGAN. We first examine the effects of blur detection, then we perform an ablation study of our proposed training methods to evaluate their effectiveness. Implementation details and training logs can be found on GitHub\footnote{\url{https://github.com/krenerd/ultimate-sr}}. All our experiments were performed on a single Tesla T4 or Tesla K80 GPU. We observed that a large portion of the training time was used for loading high-resolution images, despite most of each image not being used. As an implementation detail that improves training speed significantly, we extract multiple patches and save them in a buffer while training, instead of extracting only a single patch after loading the image. We randomly pick patches from the buffer for training and discard the selected patches from the buffer. In all of our experiments, we extract 128 patches from each image and create a buffer of 1024 patches. \subsection{$\times$4 super-resolution} \subsubsection{Training details} We employ the ESRGAN network architecture with 23 RRDB blocks and most of its training configurations for the baseline of our experiments on $\times$4 super-resolution. The training process is divided into two stages. We first pretrain the PSNR-oriented models, then train the ESRGAN-based models. The PSNR-oriented models are trained with the L1 loss with a batch size of 16 for 500K iterations. We apply learning rate decay with an initial learning rate of $2\times10^{-4}$, decayed by a factor of 2 every 200\textit{k} iterations. We initialize the GAN-based model with the PSNR-oriented model. We initialize the learning rate at $1\times10^{-4}$ for both $G_{1}$ and $D_{1}$, decaying the learning rate by a factor of 2 at [50\textit{k}, 100\textit{k}, 200\textit{k}, 300\textit{k}] iterations. For optimization, we use the Adam optimizer for both the pretrained networks and the GAN-based models, with $\beta_{1}$ = 0.9 and $\beta_{2}$ = 0.99. The learning rate decay schedule corresponds to the one proposed by ESRGAN. We implement our models and methods with the TensorFlow framework. The loss function is scaled with $\eta=10$ and $\lambda=5\times10^{-3}$, which is equivalent to the training configuration of ESRGAN used in the PIRM-SR challenge. This is slightly different from the configuration of the released model, which was trained with $\eta=10^{-2}$. All of our networks are trained exclusively on the DIV2K dataset[14], while the original ESRGAN was trained with the DIV2K, Flickr2K, and OST datasets combined. We obtained the LR images by downsampling the HR images with MATLAB bicubic interpolation. We compare the effects of our methods on the LPIPS, PSNR, and SSIM scores on the Set5, Set14, BSD100, and Urban100 datasets. Scores evaluated on the Set5 and Set14 datasets are obtained by averaging the final 5 checkpoints, recorded at [480\textit{k}, 485\textit{k}, 490\textit{k}, 495\textit{k}, 500\textit{k}] iterations. \begin{table} \centering \caption{LPIPS, PSNR, SSIM scores of various configurations for $\times$4 super-resolution.} \label{tab:x4_results} \resizebox{\textwidth}{!} {% \begin{tabular}{|c|c|c|c|c|} \hline Methods & Set5 & Set14 & BSD100 & Urban100 \\ & \multicolumn{4}{c|}{(LPIPS / PSNR / SSIM)} \\ \hline Pretrained (a) & 0.1341 / 30.3603 / 0.8679 & 0.2223 / 26.7608 / 0.7525 & 0.2705 / 27.2264 / 0.7461 & 0.1761 / 24.8770 / 0.7764 \\[1mm] +Blur detection (b) & 0.1327 / 30.4582 / 0.7525 & 0.2229 / 26.8448 / 0.7547 & 0.2684 / 27.2545 / 0.7473 & 0.1744 / 25.0816 / 0.7821 \\[1mm] \hline ESRGAN (Official) & 0.0597 / 28.4362 / 0.8145 & 0.1129 / 23.4729 / 0.6276 & 0.1285 / 23.3657 / 0.6108 & 0.1025 / 22.7912 / 0.7058 \\[1mm] \hline +Blur detection (c) & 0.0538 / 27.9285 / 0.7968 & 0.1117 / 24.5264 / 0.6602 & 0.1256 / 24.6554 / 0.6447 & 0.1026 / 23.2829 / 0.7137 \\[1mm] +refGAN (d) & 0.0536 / 27.9871 / 0.8014 & 0.1157 / 24.4505 / 0.6611 & 0.1275 / 24.5896 / 0.6470 & 0.1027 / 23.0496 / 0.7103 \\[1mm] +Add noise (e) & \textbf{0.04998} / 28.23 / 0.8081 & 0.1104 / 24.48 / 0.6626 & \textbf{0.1209} / 24.8439 / 0.6577 & \textbf{0.1007} / 23.2204 / 0.7203 \\[1mm] +Cycle loss (f) & 0.0524 / 28.1322 / 0.8033 & \textbf{0.1082} / 24.5802 / 0.6634 & 0.1264 / 24.6180 / 0.6468 & 0.1015 / 23.1363 / 0.7103 \\[1mm] \textit{-Perceptual loss} (g) & 0.2690 / 23.4608 / 0.6312 & 0.2727 / 22.2703 / 0.5685 & 0.2985 / 24.1648 / 0.5859 & 0.2411 / 20.8169 / 0.6244 \\[1mm] \hline \end{tabular} } \end{table} \subsubsection{Ablation study} To study the effects of our proposed methods, we perform an ablation study, enabling our proposed methods one by one and listing the resulting scores in Table 1. Each training configuration was fully trained with the original training schedule. We provide the saved models and configuration files to reproduce our results in our project repository. We also list the results of the official ESRGAN for fair comparison. The differences between the official results and those of configuration(c) arise because the $\eta$ value differs from the official model.
\subsection{$\times$4 super-resolution} \subsubsection{Training details} We employ the ESRGAN network architecture with 23 RRDB blocks and most of its training configuration as the baseline of our experiments on $\times$4 super-resolution. The training process is divided into two stages: we first pretrain the PSNR-oriented models, then train the ESRGAN-based models. The PSNR-oriented models are trained with the L1 loss with a batch size of 16 for 500\textit{k} iterations, with an initial learning rate of $2\times10^{-4}$ decayed by a factor of 2 every 200\textit{k} iterations. We initialize the GAN-based model with the PSNR-oriented model. We initialize the learning rate at $1\times10^{-4}$ for both $G_{1}$ and $D_{1}$, decaying it by a factor of 2 at [50\textit{k}, 100\textit{k}, 200\textit{k}, 300\textit{k}] iterations; this decay schedule corresponds to the one proposed by ESRGAN. For optimization, we use the Adam optimizer for both the pretrained networks and the GAN-based models, with $\beta_{1} = 0.9$ and $\beta_{2} = 0.99$. We implement our models and methods in the TensorFlow framework. The loss function is scaled with $\eta=10$ and $\lambda=5\times10^{-3}$, which is equivalent to the training configuration of ESRGAN used in the PIRM-SR challenge; this differs slightly from the released model, which was trained with $\eta=10^{-2}$. All of our networks are trained exclusively on the DIV2K dataset [14], while the original ESRGAN was trained with the DIV2K, Flickr2K, and OST datasets combined. We obtained the LR images by downsampling the HR images with MATLAB bicubic interpolation. We compare the effects of our methods on the LPIPS, PSNR, and SSIM scores on the Set5, Set14, BSD100, and Urban100 datasets. Scores evaluated on the Set5 and Set14 datasets are obtained by averaging the final 5 checkpoints, recorded at [480\textit{k}, 485\textit{k}, 490\textit{k}, 495\textit{k}, 500\textit{k}] iterations. \begin{table} \centering \caption{LPIPS, PSNR, and SSIM scores of various configurations for $\times$4 super-resolution.} \label{tab:x4results} \resizebox{\textwidth}{!} {% \begin{tabular}{|c|c|c|c|c|} \hline Methods & Set5 & Set14 & BSD100 & Urban100 \\ & \multicolumn{4}{c|}{(LPIPS / PSNR / SSIM)} \\ \hline Pretrained (a) & 0.1341 / 30.3603 / 0.8679 & 0.2223 / 26.7608 / 0.7525 & 0.2705 / 27.2264 / 0.7461 & 0.1761 / 24.8770 / 0.7764 \\[1mm] +Blur detection (b) & 0.1327 / 30.4582 / 0.7525 & 0.2229 / 26.8448 / 0.7547 & 0.2684 / 27.2545 / 0.7473 & 0.1744 / 25.0816 / 0.7821 \\[1mm] \hline ESRGAN (official) & 0.0597 / 28.4362 / 0.8145 & 0.1129 / 23.4729 / 0.6276 & 0.1285 / 23.3657 / 0.6108 & 0.1025 / 22.7912 / 0.7058 \\[1mm] \hline +Blur detection (c) & 0.0538 / 27.9285 / 0.7968 & 0.1117 / 24.5264 / 0.6602 & 0.1256 / 24.6554 / 0.6447 & 0.1026 / 23.2829 / 0.7137 \\[1mm] +refGAN (d) & 0.0536 / 27.9871 / 0.8014 & 0.1157 / 24.4505 / 0.6611 & 0.1275 / 24.5896 / 0.6470 & 0.1027 / 23.0496 / 0.7103 \\[1mm] +Add noise (e) & \textbf{0.04998} / 28.23 / 0.8081 & 0.1104 / 24.48 / 0.6626 & \textbf{0.1209} / 24.8439 / 0.6577 & \textbf{0.1007} / 23.2204 / 0.7203 \\[1mm] +Cycle loss (f) & 0.0524 / 28.1322 / 0.8033 & \textbf{0.1082} / 24.5802 / 0.6634 & 0.1264 / 24.6180 / 0.6468 & 0.1015 / 23.1363 / 0.7103 \\[1mm] \textit{-Perceptual loss} (g) & 0.2690 / 23.4608 / 0.6312 & 0.2727 / 22.2703 / 0.5685 & 0.2985 / 24.1648 / 0.5859 & 0.2411 / 20.8169 / 0.6244 \\[1mm] \hline \end{tabular} } \end{table} \subsubsection{Ablation study} To study the effects of our proposed methods, we perform an ablation study: we enable our proposed methods one by one and list the resulting scores in Table 1. Each configuration was fully trained with the original training schedule. We provide the saved models and configuration files needed to reproduce our results in our project repository. We also list the results of the official ESRGAN for a fair comparison. The differences between the official results and those of configuration (c) arise because our $\eta$ value differs from that of the official model. First, blur detection is examined in configuration (b) and improves the LPIPS score on nearly all benchmarks. We train our baseline ESRGAN in configuration (c) and obtain reasonable results. Applying the technique of Section 3.3 in configuration (d) slightly harms the network in terms of the LPIPS score. However, providing conditional information to the discriminator is crucial for learning such a supervised problem with adversarial learning; our method of directly concatenating the reference image to the input may simply not be optimal. The low-resolution image could instead be injected through SPADE [20] or alternative spatial transformation methods for further improvement. Applying scaled noise shows large improvements, as demonstrated in configuration (e). The cycle consistency loss applied in configuration (f) shows neutral to slightly negative effects on the LPIPS score. We attribute this mainly to the GAN framework, which lacks the training techniques of the modern GAN literature. This is supported by the failure of configuration (g), where the GAN framework alone is responsible for learning the super-resolution process. The GAN framework of ESRGAN is incapable of leading the training process, and thus the image quality was not improved when we gave more responsibility to the adversarial loss in configuration (f). However, coupled with improved GAN techniques in future research, the cycle consistency content loss could further enhance image quality. \subsection{$\times$16 super-resolution} \subsubsection{Training details} We employ the RFB-ESRGAN of [21] as the baseline for our experiments on $\times16$ super-resolution. RFB-ESRGAN proposes an architecture using Receptive Field Blocks (RFB) and Residual of Receptive Field Dense Blocks (RRFDB) as alternatives to convolution and RRDB blocks, respectively. RFB-ESRGAN uses less memory than methods that manipulate the image at the intermediate $\times4$ resolution [22], which allowed a larger batch size in our environment. We employ the RFB-ESRGAN network architecture with 16 RRDB blocks and 8 RRFDB blocks. The model is first trained with the L1 loss for 100\textit{k} iterations with an initial learning rate of $2\times10^{-4}$, decayed by a factor of two every $2.5\times10^{5}$ iterations. The GAN-based model is initialized with the pretrained model and is trained for 200\textit{k} iterations, which is shorter than the original 400\textit{k} iterations. Additionally, the batch size is decreased from 16 to 4, and we therefore scale the initial learning rate from $10^{-4}$ down to $2\times10^{-5}$, i.e.~by a factor of 5. The learning rate is decayed at [50\textit{k}, 100\textit{k}] iterations. We do not use a model ensemble to further stabilize the network. All other model and hyperparameter configurations are kept the same. We train the network on the DIV8K dataset [15], while the original network was trained with additional datasets, including the DIV2K, Flickr2K, and OST datasets. The first 1,400 images of DIV8K are used as training data and the remaining 100 validation images are used for evaluation.
\begin{table} \centering \caption{LPIPS and PSNR scores of various configurations for $\times16$ super-resolution.} \label{tab:x16results} {% \begin{tabular}{|l||l|} \hline Methods & DIV8K validation (LPIPS / PSNR) \\ \hline Pretrained (a) & 0.4664 / 30.3603 \\[1mm] +Blur detection (b) & 0.4603 / 25.53 \\[1mm] \hline RFB-ESRGAN (official) & 0.345 / 24.03 \\[1mm] \hline Baseline RFB-ESRGAN (c) & 0.356 / 24.78 \\[1mm] Ours w/o cycle loss (d) & \textbf{0.321} / 23.95 \\[1mm] Ours w/ cycle loss (e) & 0.323 / 23.49 \\[1mm] \hline \end{tabular} } \end{table} \subsubsection{Ablation study} The PSNR-oriented method is improved by blur detection in configuration (b). Our GAN-based model of configuration (c) achieves worse performance than the results reported in [21] because of our lighter training configuration. Using our proposed methods in configuration (d), which applies everything except the cycle consistency loss, we achieve significant improvements in the LPIPS score over the baseline RFB-ESRGAN. Training the model with the cycle consistency loss in configuration (e) gives similar results. We obtain these improvements with a much lighter training configuration: half the iteration steps, a $\times4$ smaller batch size, and no model ensemble. The results are reported in Table 2. \section{Conclusion} We proposed a one-to-many approach for super-resolution that achieves improved perceptual quality and a better LPIPS score than the baseline ESRGAN configuration, and achieves a state-of-the-art LPIPS score in $\times$16 perceptual super-resolution. We provide scaled pixel-wise noise to the generator to allow stochastic variation in the reconstructed image, implementing a generator capable of a one-to-many pipeline. We also address the limitations of mixing the strict content loss with perceptual losses and propose an alternative based on the cycle loss. Our modified loss ensures the consistency of the content without penalizing high-frequency detail. Additionally, we propose further techniques, such as blur detection using the Laplacian activation, and redesign the discriminator input by providing a reference image, to further improve the perceptual quality of $\times$4 and $\times$16 super-resolution. However, the GAN framework of ESRGAN was unable to guide the training on its own. Modern GAN training techniques could be applied to further improve the GAN framework used in super-resolution. Our proposed loss function will become more effective as a content loss when coupled with a robust GAN framework, since it reduces the constraints on generating high-frequency detail. Such improvements are left for future work. \begin{figure} \centering \includegraphics[width=160mm]{images/result_comparison.png} \caption{Qualitative comparison of our methods with the official ESRGAN on $\times 4$ super-resolution. We compare the poorly reconstructed outputs of ESRGAN on the BSD100 and Urban100 datasets with our proposed model trained with configuration (f). Our method produces sharper textures and more realistic structures than the baseline ESRGAN, although it also fails to accurately reconstruct human faces.} \end{figure} \section*{References} \medskip \small

[1] Wang, Xintao, et al. "ESRGAN: Enhanced super-resolution generative adversarial networks." Proceedings of the European Conference on Computer Vision (ECCV) Workshops. 2018.

[2] Zhang, Kai, Shuhang Gu, and Radu Timofte.
"Ntire 2020 challenge on perceptual extreme super-resolution: Methods and results." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops. 2020. [3] Karras, Tero, Samuli Laine, and Timo Aila. "A style-based generator architecture for generative adversarial networks." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2019. [4] Zhang, Richard, et al. "The unreasonable effectiveness of deep features as a perceptual metric." Proceedings of the IEEE conference on computer vision and pattern recognition. 2018. [5] Jo, Younghyun, Sejong Yang, and Seon Joo Kim. "Investigating loss functions for extreme super-resolution." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops. 2020. [6] Ledig, Christian, et al. "Photo-realistic single image super-resolution using a generative adversarial network." Proceedings of the IEEE conference on computer vision and pattern recognition. 2017. [7] Sajjadi, Mehdi SM, Bernhard Scholkopf, and Michael Hirsch. "Enhancenet: Single image super-resolution through automated texture synthesis." Proceedings of the IEEE International Conference on Computer Vision. 2017. [8] Zhu, Jun-Yan, et al. "Unpaired image-to-image translation using cycle-consistent adversarial networks." Proceedings of the IEEE international conference on computer vision. 2017. [9] Dong, Chao, et al. "Image super-resolution using deep convolutional networks." IEEE transactions on pattern analysis and machine intelligence 38.2 (2015): 295-307. [10] Bansal, Raghav, Gaurav Raj, and Tanupriya Choudhury. "Blur image detection using Laplacian operator and Open-CV." 2016 International Conference System Modeling \& Advancement in Research Trends (SMART). IEEE, 2016. [11] Tang, Chang, et al. "Defusionnet: Defocus blur detection via recurrently fusing and refining multi-scale deep features." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2019. [12] Wang, Xuewei, et al. "Accurate and fast blur detection using a pyramid M-Shaped deep neural network." IEEE Access 7 (2019): 86611-86624. [13] Yuan, Yuan, et al. "Unsupervised image super-resolution using cycle-in-cycle generative adversarial networks." Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops. 2018. [14] Agustsson, Eirikur, and Radu Timofte. "Ntire 2017 challenge on single image super-resolution: Dataset and study." Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops. 2017. [15] Gu, Shuhang, et al. "Div8k: Diverse 8k resolution image dataset." 2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW). IEEE, 2019. [16] Jolicoeur-Martineau, Alexia. "The relativistic discriminator: a key element missing from standard GAN." arXiv preprint arXiv:1807.00734 (2018). [17] Kim, Jiwon, Jung Kwon Lee, and Kyoung Mu Lee. "Accurate image super-resolution using very deep convolutional networks." Proceedings of the IEEE conference on computer vision and pattern recognition. 2016. [18] Zhang, Yulun, et al. "Image super-resolution using very deep residual channel attention networks." Proceedings of the European conference on computer vision (ECCV). 2018. [19] Lim, Bee, et al. "Enhanced deep residual networks for single image super-resolution." Proceedings of the IEEE conference on computer vision and pattern recognition workshops. 2017. [20] Park, Taesung, et al. "Semantic image synthesis with spatially-adaptive normalization." 
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2019.

[21] Shang, Taizhang, et al. "Perceptual extreme super-resolution network with receptive field block." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops. 2020.

[22] Jo, Younghyun, Sejong Yang, and Seon Joo Kim. "Investigating loss functions for extreme super-resolution." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops. 2020.

[23] Lugmayr, Andreas, et al. "SRFlow: Learning the super-resolution space with normalizing flow." European Conference on Computer Vision. Springer, Cham, 2020.

\end{document}
\subsubsection{Time evolution and histogram} To display the discontinuous character of the transition in the model with $z=100$ equivalent neighbors, we have recorded the behavior of the energy of an $L=200$ system as a function of Monte Carlo time. The system displays a sort of flip-flop behavior between two states with different energies, at random intervals typically on the order of $10^5$ Wolff cluster steps. But the fluctuations of the higher-energy state are still considerable, which suggests that we should also bring the dependence on the system size into consideration. Histograms of the energy are shown in Fig.~\ref{histo} for several system sizes, taken at couplings chosen such that both maxima have the same height. Minor reweighting was applied for this purpose. These results show that the peaks become narrower and the minima between them deeper when the system size increases. This is in accordance with first-order behavior \cite{LK}. \begin{figure} \begin{center} \leavevmode \epsfxsize 10.0cm \epsfbox{prob1.eps} \end{center} \caption{Histograms of the energy distributions $p$ of finite $q=3$ Potts models with $z=100$ equivalent neighbors. The values of the finite sizes $L$ are shown in the figure. The couplings are chosen such that the two peaks are equally high. These data represent $5 \times 10^5$ samples separated by $L/4$ single-cluster steps per system size, except for $L=600$ where the latter number is $L/3$.} \label{histo} \end{figure}
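The reweighting mentioned above is standard single-histogram reweighting. A minimal sketch is given below, in our own notation: $S$ denotes the quantity multiplying $-K$ in the reduced Hamiltonian, so that a sample acquires relative weight $e^{(K'-K)S}$ at a shifted coupling $K'$; the function name and binning are illustrative choices.
\begin{verbatim}
import numpy as np

def reweight_histogram(S_samples, K, K_new, bins=200):
    # Reweight Monte Carlo samples of S (reduced Hamiltonian -K*S)
    # from coupling K to a nearby coupling K_new.
    S = np.asarray(S_samples, dtype=float)
    log_w = (K_new - K) * S
    log_w -= log_w.max()            # guard against overflow
    p, edges = np.histogram(S, bins=bins,
                            weights=np.exp(log_w), density=True)
    return p, edges
\end{verbatim}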
\subsubsection{Hysteresis loops} We have recorded the behavior of the energy and the magnetization of an $L=600$ system with $z=120$ equivalent neighbors, while the coupling was stepped up or down in small intervals. The results for the energy and the magnetization are displayed in Fig.~\ref{em120n}. The energy-like quantity $E$ is defined as the reduced Hamiltonian (\ref{model}) divided by $-L^2K$. \begin{figure} \begin{center} \leavevmode \epsfxsize 8.0cm \epsfbox{hys3e.eps} \epsfxsize 8.0cm \epsfbox{hys3m.eps} \end{center} \caption{Hysteresis loops of the energy (a) and the squared magnetization (b) of the $q=3$ Potts model with 120 equivalent neighbors for system size $L=600$. Data points are separated by $2 \times 10^5$ single-cluster steps. A data point at the end of the observed metastability could be obtained from intermediate results taken at smaller intervals.} \label{em120n} \end{figure} These data display clear hysteresis loops. The first-order transition is located near $K_{\rm c}\approx 0.0234$; this is rather close to the mean-field prediction \cite{MS,Wu} $K_{\rm c}=0.02310 \ldots$ for $z=120$. The ranges of overlap of the two branches in Fig.~\ref{em120n} are narrow, roughly $10^{-5}$ in $K$. While this is much smaller than the range of metastability according to the mean-field prediction for $q=3$, it naturally depends on the system size and the simulation length per data point. \subsubsection{Dynamic behavior} Figure \ref{ztau} displays the dynamic behavior of the $q=3$ model with $100$ equivalent neighbors at the phase transition point, under single-cluster steps. The figure shows the autocorrelation time $\tau$ versus the system size $L$. The autocorrelation time unit is chosen as the number of Wolff-type single-cluster steps equal to the inverse single-cluster size. In the case of a critical point, one expects $\tau \propto L^{z_d}$, which on the logarithmic scales of Fig.~\ref{ztau} would appear as a straight line with slope $z_d$. The upward curvature of the line indicates that the average cluster size does not scale algebraically with $L$, confirming the weakly first-order character of the transition. \begin{figure} \begin{center} \leavevmode \epsfxsize 8.4cm \epsfbox{ztau.eps} \end{center} \caption{Dynamic properties of the cluster simulation of the $q=3$ model with $z=100$ equivalent neighbors, in terms of the autocorrelation time $\tau$ versus the system size $L$. The use of logarithmic scales would lead to a straight line with slope ${z_d}$ if $\tau \propto L^{z_d}$. The upward curvature of the data is in agreement with a weakly first-order transition. The slope of the straight line corresponds to a dynamic exponent $z_d=2.3258$. The line is shown for visual aid only.} \label{ztau} \end{figure} \section{Results for four-state Potts models} \label{sec4} \subsection{Auxiliary results} \label{sec4aux} As for the three-state model, one may attempt to determine the universal ratio $Q_0$ from simulations of the nearest-neighbor Potts model at the exactly known critical point. However, the logarithmic corrections for $q=4$ lead to anomalously slow finite-size convergence and inhibit an accurate numerical analysis. Instead, we chose the Baxter-Wu model \cite{BW}, a model of Ising spins on the triangular lattice, with three-spin interactions $K s_i s_j s_k$ in each triangle. It is solved exactly \cite{BW} and belongs to the 4-state Potts universality class, but without logarithmic corrections. In view of its triangular geometry, caution is needed to obtain the universal result for $Q_0$ for models defined on a square periodic box with the proper boundary conditions. The invariance of boundary conditions under renormalization indicates that the value of $Q_0$ is universal, but still dependent on the type of boundary conditions. In the case of periodic boundary conditions, the periodic images may, for instance, form a square or a triangular lattice. It is thus not surprising that the value of $Q_0$ was found to be different in these two cases \cite{KBQ}; see also a confirmation by Selke \cite{WS} and a discussion by Dohm \cite{VD}. Furthermore, in the case that the periodic images form a rectangular pattern, $Q_0$ is a universal function of the aspect ratio \cite{KBQ}. In the case of a model with anisotropic couplings, this universal function can be used to determine the equivalent geometric anisotropy ratio \cite{KBQ,BD}. In order to account for these effects, we chose the following numerical approach. We simulated Baxter-Wu systems of $L_x \times L_y$ spins, with $L_x$ a multiple of 3, $L_y$ a multiple of 2, and $L_y/L_x\approx 2/\sqrt 3$. The $x$ direction is parallel to one set of edges of the lattice. The proper positioning of the periodic box with respect to its periodic images was guaranteed by choosing a square-lattice representation of the triangular lattice, with diagonal bonds added in the $(1,1)$ direction in the elementary faces labeled with even $y$, and in the $(-1,1)$ direction in the faces labeled with odd $y$. We employed the Wolff-like variant of an algorithm \cite{NE} that freezes one of three sublattices, and grows a single Ising cluster on the remaining two sublattices. Simulations, performed at the critical point $K_{\rm c}=\frac{1}{2} \ln(1+\sqrt 2)$, involved 45 system sizes with $3\leq L_x\leq 240$, with a number of samples on the order of $10^9$ for $L \leq 72$ and $10^8$ for $L > 72$. The periodic boxes defined above are rectangular, with aspect ratios that are only approximately equal to 1.
Therefore the aspect ratio was included in the fit formula Eq.~(\ref{drq}) for the finite-size data as follows: \begin{equation} Q(L_x,L_y)=Q_0+b_1 L^{-1}+b_2 L^{-7/4}+b_3 L^{-2}+c_1 a^2(L_x,L_y)+ \ldots \label{QBW} \end{equation} where $L\equiv \sqrt{L_x A_y}$, with $A_y\equiv\sqrt{3/4}\,L_y$ the actual size of the rectangular periodic box in the $y$ direction. The aspect ratio is parametrized by $a\equiv (L_x-A_y)/\sqrt{L_x^2+A_y^2}$. The correction exponent $y_1=-1$ was strongly suggested by the finite-size data, and is equal to the difference between the first two magnetic exponents, $y_{h2}-y_{h}$ \cite{BN}. The exponent $y_2=-7/4$ is equal to $2-2y_h$ and may arise from the analytic part of the susceptibility. Also the term with $y_3=-2$ helped to reduce the fit residuals, enabling satisfactory fits for $Q$ down to a minimum system size of $L_x=6$.
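Such a fit is straightforward to set up with standard least-squares routines; the sketch below (using \texttt{scipy}, with function and variable names of our own choosing, and omitting the aspect-ratio term for brevity) illustrates the procedure.
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def q_ansatz(L, Q0, b1, b2, b3):
    # Q(L) = Q0 + b1/L + b2*L^(-7/4) + b3/L^2, i.e. the fit
    # formula above without the aspect-ratio correction.
    return Q0 + b1 / L + b2 * L**-1.75 + b3 / L**2

# L_data, Q_data, Q_err: system sizes, Monte Carlo estimates
# of Q, and their statistical errors (assumed given).
popt, pcov = curve_fit(q_ansatz, L_data, Q_data,
                       sigma=Q_err, absolute_sigma=True)
Q0_est, Q0_err = popt[0], np.sqrt(pcov[0, 0])
\end{verbatim}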
We included an independent determination of $Q_0$ from simulations of the dilute $q=4$ Potts model on the square lattice at the estimated fixed point $K_f=1.45790$, $D_f=2.478438$, which is very close to the value reported in Ref.~\onlinecite{QDB}. We simulated $L \times L$ systems for a number of finite sizes $L=4$, 5, \ldots, 80. Since logarithmic corrections are absent at the fixed point, we used Eq.~(\ref{QBW}) to fit the finite-size data, but without the term accounting for the aspect ratio. The fits behave very similarly to those for the Baxter-Wu model, and the results for $Q_0$ of both models agreed satisfactorily. We performed several other fits, by including a correction with an exponent $y_2=-2.5$ instead of $-1.75$, and with various subsets of fixed correction exponents. After discarding the fits with a too large residual $\chi^2$, the results are consistent with our final estimate $Q_0=0.81505 \, (15)$, where the error estimate is twice the statistical margin of the average of the preferred fits for the two models. This value of $Q_0$ will be helpful in the analysis of the results for the $q=4$ equivalent-neighbor models. In addition to the universal ratio $Q_0$, we also investigate the universal ratio $c_1/b$ mentioned in Sec.~\ref{sec2}, by means of simulations of a modified Baxter-Wu model. The model remains self-dual when the couplings $K_{\rm up}$ and $K_{\rm down}$ in the up- and down-triangles are made different. The self-dual line is located at $\sinh 2K_{\rm up}\sinh 2K_{\rm down}=1$. For $K_{\rm up} \ne K_{\rm down}$ the model shifts away from the $q=4$ fixed point, and thus acquires logarithmic corrections \cite{genbw}. The direction of its shift is away from the nearest-neighbor model, into the first-order range. Since the ratio $K_{\rm up}/K_{\rm down}$ can be chosen arbitrarily, we can arrange it such that for our range of $L$ values the finite-size-scaling behavior of the model is determined by the renormalization flow in the vicinity of the fixed point. Thus the value of the marginal field $v$ in Eq.~(\ref{fq4}), as well as that of the parameter $c$ in Eq.~(\ref{Qq4c}), remains small. We may then assume that the higher-order terms with $c_2$, $c_3$, etc.~in that equation can be neglected. Under this assumption we attempt to determine the universal ratio $c_1/b$ from a fit of Eq.~(\ref{Qq4c}) to the Monte Carlo results for $Q$, taken at the self-dual point for a suitable value of $K_{\rm up}/K_{\rm down}$. We simulated the model with $K_{\rm up}/K_{\rm down}=2$ at the self-dual point, for 27 system sizes $6 \leq L \leq 120$. Most of the simulations were rather short, with a few times $10^7$ samples, but we also included long runs with about $10^9$ samples for $L=24$, 48 and 72. Good statistics are necessary, because the differences between the finite-size data for $Q$ and those at the fixed point, which we wish to analyze, are still quite small. Satisfactory least-squares fits could be obtained on the basis of Eq.~(\ref{Qq4c}) for system sizes down to $L_x=6$. We obtain $c_1=-0.0049~(2)$ and $b=0.102~(6)$, from which we estimate the universal ratio $c_1/b=-0.048$. \subsection{Critical points} \label{sec4kc} We estimated the critical points and the temperature exponent $y_t$, as well as $Q_0$, from the Monte Carlo data for the Binder ratio for several values of $z$ in the $q=4$ medium-range Potts model. As a preliminary analysis, we fitted Eq.~(\ref{drq}) to the finite-size data for $Q$, with the values of $Q_0$ and $y_t$ left free. The correction exponents were fixed at $y_1=-1$ and $y_2=-7/4$. The fit results are shown in Table \ref{pot4}. While these results are inaccurate as a measure of the universal quantities, they provide information on how the nature of the phase transition depends on $z$. For $z \lesssim 12$, the estimates of $y_t$ are smaller than the exact value $y_t=3/2$, as is usually the case for $q=4$ Potts-like models with short-range interactions \cite{RS,BN82,ATTTI}. The estimates of the Binder ratio are clearly too large in comparison with the universal value $Q_0=0.81505 \, (15)$ listed in Sec.~\ref{sec4aux}. These discrepancies are explained by logarithmic factors, such as in Eq.~(\ref{Qq4c}), which are not taken into account in these fits. These differences decrease when $z$ increases, signaling a decrease of the marginal field $v$. The results for $z \gtrsim 20$ indicate that the model resides in the first-order range. This is probably also the case for $z=20$, since the $y_t$ estimates exceed 3/2, with an increasing trend for large $L$, suggesting crossover to the discontinuity fixed-point value $y_t=2$. Since the fixed-point value of $z$ seems to lie between 12 and 20, we have included a model with $z=16$ equivalent neighbors. We realized this by including only four of the eight neighbors at a distance $R=\sqrt 5$, with coordinates $(x,y)=(2,1),(-1,2),(-2,-1),(1,-2)$. This preserves the fourfold rotational symmetry of the local interacting environment. \begin{table}[h] \centering \caption{\label{pot4} Binder ratio $Q$ and thermal exponent $y_t$ as estimated from simulations of the medium-range $q=4$ Potts model. These results suggest that the tricritical point between the critical and first-order ranges occurs between $z=12$ and 20. The error margins, quoted as 2 times the standard deviation of the statistical analysis, are not realistic because logarithmic correction factors are omitted in this analysis. Moreover, the errors for $z=60$ may be underestimated because of slow dynamics in the first-order range. The third column shows the number of millions of samples taken for each data point as specified by $K,L$. A number of $K$ values near $K_{\rm c}$ was chosen for each $L$, typically varying from 6 for $L<20$ down to 1 for the largest values of $L$.
} \vskip 5mm \begin{tabular}{|c|c|c|l|l|l|l|l|} \hline $z$& $L$ & Ms &$K_{\rm c}$ & $Q$ &$y_t$ & $y_1$ & $y_2$ \\ \hline 4 &12-240& 8 &1.09862 (1)&0.840 (2)& 1.418 (5)&$-$1 &$-$7/4 \\ 8 &12-224& 8 &0.49098 (2)&0.836 (2)& 1.431 (4)&$-$1 &$-$7/4 \\ 12 &12-224& 30 &0.30625 (2)&0.828 (3)& 1.490 (10)&$-$1 &$-$7/4 \\ 16 &12-224& 30 &0.222856 (2)&0.814 (1)& 1.529 (10)&$-$1 &$-$7/4 \\ 20 &12-224& 10 &0.175842 (2)&0.805 (1)& 1.610 (10)&$-$1 &$-$7/4 \\ 24 & 8-120& 12 &0.144523 (2)&0.795 (1)& 1.70 (4)&$-$2 &$-$3 \\ 28 & 8-96 & 12 &0.122812 (2)&0.788 (1)& 1.82 (6)&$-$2 &$-$3 \\ 36 & 8-84 & 15 &0.094528 (2)&0.780 (1)& 1.91 (5)&$-$2 &$-$3 \\ 44 & 8-48 & 25 &0.076826 (4)&0.781 (6)& 2.02 (5)&$-$2 &$-$3 \\ 60 &12-44 & 20 &0.055921 (2)&0.804 (8)& 2.04 (5)&$-$2 &$-$3 \\ \hline \end{tabular} \end{table} We have also determined the first derivative of $Q$ with respect to the coupling $K$ at criticality, similarly as for $q=3$. In the critical range one thus expects, in principle, convergence of $\ln({\rm d}Q/{\rm d}K)/\ln L$ to $y_t=3/2$, but the presence of a marginal field leads to corrections behaving as an inverse logarithm of $L$, so that the available range of system sizes is insufficient for an accurate result. \begin{figure} \begin{center} \leavevmode \epsfxsize 15.0cm \epsfbox{dqdk4.eps} \end{center} \caption{Finite-size dependence of the derivative at $K_{\rm c}$ of the Binder ratio $Q$ of the 4-state Potts model with respect to $K$. The data points for each value of $z$ are connected by lines which are intended for visual aid, and for display of possible extrapolations to $L=\infty$. The values of $z$ are shown in the figure for each curve. This way of plotting should lead, for critical models with sufficiently large $L$, to linear behavior and convergence to the temperature exponent $y_t=3/2$. However, due to the presence of logarithmic corrections, such behavior may only be observed in practice when the marginal field vanishes. The data in this figure suggest that this is the case near $z=16$. } \label{dQ4} \end{figure} Nevertheless, the data for $\ln({\rm d}Q/{\rm d}K)/\ln L$ versus $1/\ln L$, shown in Fig.~\ref{dQ4}, are sufficiently clear to demonstrate that the $z=16$ model lies close to the $q=4$ fixed point, and they signal the boundary between the short-range behavior for $z < 16$ and first-order behavior for $z > 16$. In an attempt to obtain more accurate estimates of the critical points, $Q_0$ and $y_t$ were fixed at their expected values, and a fit formula based on Eq.~(\ref{drq4}) was used for $z \leq 20$. The $z=20$ model still seems to be rather close to the fixed point. However, except for $z=16$, it appears that the accuracy of the fits is limited, because the higher-order logarithmic terms do not converge satisfactorily. As a result, the parameters $b$ and $c_1$, purportedly describing the marginal scaling field, are poorly determined. Instead, we define, on the basis of Eqs.~(\ref{Qq4c}) and (\ref{drq4}), a measure of the marginal field as the sum of the logarithmic terms at $K_{\rm c}$: \begin{equation} \Delta Q(L) \equiv \sum_k c_k/(1-b\ln L)^k=Q(K_{\rm c},L)- \sum_j b_j L^{y_j} -Q_0 \, . \label{delq} \end{equation} At a constant finite size $L$, this quantity represents the scaling function $Q(u)$ as a function of the distance $u$ to the fixed point. Unlike the individual amplitudes, the sum of the logarithmic terms is well determined by the fit, at least within the range of $L$ covered by the data.
We chose $L=80$, where the power-law corrections, and their error margins, are relatively small. The results for $\Delta Q(80)$ are included in Table \ref{pot42}. For $z \geq 24$ we used a fit formula with a different value $Q_0=0.8$ and without logarithmic terms. Power-law corrections are included with exponents as shown in Table \ref{pot42}. The distance to the discontinuity fixed point is purportedly approximated by $\Delta Q(L) \equiv Q(K_{\rm c},L) -Q_0$ at a sufficiently large size $L=80$. \begin{table}[htbp] \centering \caption{\label{pot42} Critical points $K_{\rm c}$ for $q=4$ as derived from fits with the Binder ratio $Q_0$ and thermal exponent $y_t$ fixed as shown in the table. Error margins are quoted as twice the standard deviation of the statistical analysis. The exponents of the power-law corrections were fixed at the values shown in the table, except for $y_1$ in the range $24 \leq z \leq 44$, where this exponent was left free in the fit. } \vskip 5mm \begin{tabular}{|r|r|l|l|l|l|l|l|} \hline $z$ &$L_{min}$&$K_{\rm c}$&$Q_0$ &$y_t$ & $y_1$ & $y_2$&$\Delta Q(80)$ \\ \hline 4 & 6 &ln\,3 (exact) & 0.81505 &3/2 &$-$1 &$-$7/4 &~~0.0346 \\ 8 &12 &0.49097 (1)& 0.81505 &3/2 &$-$1 &$-$7/4 &~~0.0245 \\ 12 & 8 &0.306252 (1)& 0.81505 &3/2 &$-$1 &$-$7/4 &~~0.0136 \\ 16 &12 &0.222856 (1)& 0.81505 &3/2 &$-$1 &$-$7/4 &$-0.0010 $ \\ 20 &12 &0.175843 (1)& 0.81505 &3/2 &$-$1 &$-$7/4 &$-0.0072 $ \\ 24 & 8 &0.144523 (1)& 4/5 &2 &$-$0.3 (1)&$-$2 &$-$0.005 \\ 28 & 8 &0.122812 (1)& 4/5 &2 &$-$0.6 (1)&$-$2 &$-$0.010 \\ 36 & 8 &0.094531 (2)& 4/5 &2 &$-$0.6 (1)&$-$2 &$-$0.018 \\ 44 & 8 &0.076829 (2)& 4/5 &2 &$-$0.8 (1)&$-$2 &$-$0.012 \\ 60 &12 &0.055919 (2)& 4/5 &2 &$-$1 &$-$2 &$-$0.004 \\ \hline \end{tabular} \end{table} The accuracy of the values of $\Delta Q(80)$, shown in the last column of Table \ref{pot42}, is estimated as about 0.001. The results in the range $4 \leq z \leq 20$ clearly display a change of sign of the marginal field near $z=16$. The results in the range $24 \leq z \leq 60$ indicate that the finite-size scaling function of $Q$ describing the crossover from the merged fixed point to the discontinuity fixed point goes through an extremum before approaching the limit $Q=4/5$. \subsection{Hysteresis loops} For $q=4$ we have determined the behavior of the energy and the magnetization of an $L=120$ system with $z=60$ equivalent neighbors, while the coupling was stepped up or down in small intervals. We find very clear hysteresis loops, which are displayed in Fig.~\ref{q460nem}. The first-order transition takes place near $K_{\rm c}=0.0559$, not far from the mean-field prediction $K_{\rm c}=0.0593$ for the $q=4$ model with $z=60$ interacting neighbors. \begin{figure} \begin{center} \leavevmode \epsfxsize 8.0cm \epsfbox{hys4e.eps} \epsfxsize 8.0cm \epsfbox{hys4m.eps} \end{center} \caption{Hysteresis loops of the energy (a) and the squared magnetization (b) of the $q=4$ model with 60 equivalent neighbors and finite size $L=120$. The energy-like quantity $E$ is equal to the expectation value of the reduced Hamiltonian (\ref{model}) divided by $-L^2K$. Data points are separated by $2 \times 10^5$ single-cluster steps.
Two data points at the end of the observed metastability could be obtained from intermediate results taken at smaller intervals.} \label{q460nem} \end{figure} \section{Discussion and conclusion} \label{sec5} The results in Sec.~\ref{sec3} indicate that, for the $q=3$ Potts model, the renormalization flow is as shown in Fig.~\ref{rgq34}(a), i.e., the role of the interaction range is similar to that of the fugacity of the vacancies in the dilute Potts model \cite{NBRS}. Our results indicate that the $q=3$ Potts model with $z=80$ lies close to the tricritical fixed point in Fig.~\ref{rgq34}, and that the critical fixed point corresponds to a value of $z$ between 8 and 12. \begin{figure} \begin{center} \leavevmode \epsfxsize 7.9cm \epsfbox{q3rg.eps} \epsfxsize 7.5cm \epsfbox{q4rg.eps} \end{center} \caption{Renormalization flow of the three-state (a) and of the four-state (b) Potts model along the line of ordering transitions.} \label{rgq34} \end{figure} Also the results for the $q=4$ model in Sec.~\ref{sec4} display this analogy: increasing the range of the interactions has a similar effect as dilution in the nearest-neighbor $q=4$ Potts model \cite{NBRS,QDB}. Thus our results are well described by the renormalization flow diagram that follows when the critical and tricritical fixed points in Fig.~\ref{rgq34} merge into a single fixed point that is marginally irrelevant on the short-range side and marginally relevant on the long-range side \cite{NBRS}, as shown in Fig.~\ref{rgq34}. Since the marginal field is absent in the Baxter-Wu model \cite{BW}, that model faithfully reproduces the expected scaling behavior at the merged fixed point. Our results in Sec.~\ref{sec4kc} indicate that the $q=4$ Potts model with $z=16$ also lies close to the merged fixed point in Fig.~\ref{rgq34}. These results for the $q=3$ and 4 Potts models stand in strong contrast with the Ising case $q=2$, where the effects of vacancies and of an increased interaction range are different. The Ising tricritical point, such as present in the dilute Ising model, is absent in the $q=2$ model with medium-range interactions \cite{LBB}, as illustrated in Fig.~\ref{ising}. There is only a gradual crossover, with mean-field behavior only in the limit $R \to \infty$; Ising universality applies for all finite $R$. An obvious question not answered here is how the present results can be generalized to non-integral values of $q$, i.e., to the medium-range random-cluster model \cite{KF}. A self-consistent solution in the mean-field limit $z \to \infty$ shows that, for $q<2$, the critical behavior of this model is the same as that of the mean-field percolation model, with critical exponents $\beta=1$, $\gamma=1$ and $\delta=2$. For this range of $q$, we conjecture that the mean-field fixed point is unstable with respect to finite values of $z$. We thus expect that, for $q<2$, the universal behavior is that of the short-range $q$-state random-cluster model, for all finite ranges $R$ of interaction. \acknowledgments We acknowledge useful discussions with A.~D. Sokal about the value of $Q_0$ in the first-order range. H.~B. is grateful for the hospitality extended to him by the Faculty of Physics of the Beijing Normal University. This research was supported by the National Natural Science Foundation of China under Grants No.~11275185 (YD) and No.~11175018 (YL, WG), and by the Fundamental Research Funds for the Central Universities of the Ministry of Education (China).
\section{Introduction and discussion} Recent work by various authors has provided evidence that in QCD, a magnetic field leads to several interesting phenomena, for values of the magnetic field which are substantially smaller than anticipated in earlier studies. The origin of these effects lies in the axial anomaly and the additional couplings it provides. For instance, it has been shown \cite{Son:2007ny} that the axial anomaly is responsible for the appearance of \emph{pion domain walls}, which carry baryon charge. These are stable for sufficiently large baryon chemical potentials. Importantly, they form for relatively small critical values of the magnetic field, as this critical value scales with the pion mass. In the presence of an axial chemical potential \cite{Kharzeev:2007jp,Fukushima:2008xe}, an external magnetic field can trigger the appearance of a vectorial current in the direction of this field, \begin{equation} \vec{j}_V \sim e \mu_A \vec{B}\,. \end{equation} The origin of this so-called \emph{chiral magnetic effect} (CME) is the asymmetry between left- and right-handed fermions (parametrised by the axial potential $\mu_A$), which leads to a separation of electric charge. Related to this is the \emph{chiral separation effect} (CSE) \cite{Son:2004tq,Metlitski:2005pr,Son:2009tf}, where instead of an axial potential, a baryonic quark potential is introduced. The magnetic field now leads to a separation of chiral charge, and an \emph{axial} current is generated, \begin{equation} \vec{j}_A \sim e \mu_B \vec{B}\,. \end{equation} Combining these two effects leads to evidence for the existence of a so-called \emph{chiral magnetic wave} \cite{Kharzeev:2010gd}. All these effects suggest that the consequences of a magnetic field may be much more important (and potentially easier to see in experiments) than previously thought. These effects were originally derived using a quasi-particle picture of chiral charge carriers, which is only valid at weak coupling. They have meanwhile also been confirmed in a number of holographic descriptions of the strong-coupling regime. The pion gradient walls were found in the Sakai-Sugimoto model in \cite{Bergman:2008qv,Thompson:2008qw} (see also~\cite{Bergman:2012na}), and there are even abelian analogues of this solution \cite{Kim:2010pu} which involve an $\eta'$-meson gradient (and of course no baryon number in this case). Whether or not holographic models exhibit the CME or the CSE is the subject of more controversy, as it requires a careful definition of the holographic currents. An analysis of the D3/D7 system was given in \cite{Hoyos:2011us,Gynther:2010ed}, and we will comment extensively on the Sakai-Sugimoto model later in this paper. A major question is what happens when the chemical potentials are large enough to trigger a condensation of bound states. Even without a magnetic field and without an axial anomaly, various models predict the formation of a non-isotropic (although homogeneous) vector meson condensate, once the axial chemical potential becomes of the order of the meson mass (see \cite{Aharony:2007uu} for an analysis in the Sakai-Sugimoto model relevant here, and a list of references to other works). When the axial anomaly is present, the condensate is typically no longer even homogeneous, but forms a spiral structure \cite{Bayona:2011ab}. A lot of work in this direction focuses on high-temperature models, but in fact such condensation already happens at low and zero temperatures~\cite{Aharony:2007uu,Bayona:2011ab}.
\medskip In the present paper, we therefore examine the Sakai-Sugimoto model in the presence of a magnetic field and an abelian chiral chemical potential, and set out to determine whether there is an instability against decay to a non-homogeneous chiral spiral also in this case. We will also analyse whether the ground state exhibits a chiral magnetic effect, and whether there is an $\eta'$-gradient in this case. We will only consider the confined, chirally broken phase of this model, i.e.~the low-temperature behaviour. Our main results are as follows. First, we determine the location of the phase boundary between the homogeneous solution of \cite{Bergman:2008qv,Thompson:2008qw} and a new chiral-spiral-like solution\footnote{The stability analysis of the homogeneous solution presented in \cite{Kim:2010pu} is incorrect in some crucial aspects; we will comment on the details and the consequences later.}. We construct the non-homogeneous chiral-spiral-like solution explicitly.\footnote{The chiral magnetic wave mentioned earlier is different from the chiral magnetic spiral, because the chiral magnetic wave involves the density and the component of the current parallel to the magnetic field, while the chiral magnetic spiral involves the components of the current transverse to the magnetic field.} We establish that the latter still does not exhibit the CME, but point out that this conclusion relies crucially on subtleties related to the precise way in which the chemical potential is introduced. Finally, we show that a magnetic field tends to stabilise the homogeneous solution, and there is a critical value beyond which the chiral spiral ceases to exist. The parameter space in which we need to scan for non-linear spiral solutions is rather large and the numerical analysis is consequently sometimes tricky; we comment on the details of this procedure in the appendix. An important issue in this analysis is the precise way in which the chemical potential is included in the holographic setup. It was recently pointed out \cite{Landsteiner:2011tf,Gynther:2010ed} that the usual approach, which roughly amounts to identifying the chemical potential with the asymptotic value of the gauge field, may be incorrect when the associated charge is not conserved (like, in our case, the axial charge). However, we can define $\mu_A$ as the integrated radial electric flux between the two boundaries of the brane. Following the field theory arguments of~\cite{Landsteiner:2011tf} we will argue that there are then two natural formalisms for introducing the chemical potential. In formalism A we introduce $\mu_A$ through boundary conditions on the component ${\cal A}_0$ of the gauge field. In formalism B the chemical potential instead sits in ${\cal A}_z$. In the presence of the axial anomaly these two formalisms are inequivalent. In formalism A, we find that the non-homogeneous phase, characterised by a wave number $k$, is dominant, and there is a preferred value of this wave number, $k = k_{\text{g.s.}}$. When the magnetic field is small compared to the chiral chemical potential ($B \ll \mu_A$), $k_{\text{g.s.}}$ depends only very weakly on the value of $\mu_A$. This result is consistent with our previous work~\cite{Bayona:2011ab}. For sufficiently large magnetic fields we find that the non-homogeneous ground state is suppressed. In formalism B, on the other hand, the homogeneous state always has lower energy than the non-homogeneous one, and there is hence no chiral magnetic spiral.
As far as the chiral magnetic effect is concerned, we find that in formalism A the effect is absent. This result is consistent with previous calculations in the Sakai-Sugimoto model \cite{Rebhan:2009vc}. In formalism B, on the other hand, there exists a non-zero chiral magnetic effect. This seems to be more in line with recent lattice calculations~\cite{Yamamoto:2012bi}, chiral model calculations~\cite{Fukushima:2012fg} and holographic bottom-up models~\cite{Gorsky:2010xu,Rubakov:2010qi} for the confined, chirally broken phase of QCD. However, a main shortcoming of formalism~B is that the inhomogeneous phase, whose existence is indicated by our perturbative analysis, is absent in the full nonlinear theory. Since the perturbative analysis is blind to the subtle issues of how one introduces the chemical potential in the holographic setup, we are inclined to think that, as it stands, formalism B is incorrect. Whether formalism A is correct, on the other hand, or also needs to be altered, is an open question at the moment, and needs a separate study, which we leave for future work. \section{Chemical potentials, currents and anomalies} \subsection{Effective five-dimensional action} In order to set the scene and to introduce our conventions, let us start by giving a brief review of the basics of the Sakai-Sugimoto model~\cite{Sakai:2004cn,Sakai:2005yt}. For a more detailed description of the features of this model which are relevant here, we refer the reader to the original papers or to~\cite{Bayona:2011ab}. The Sakai-Sugimoto model at low temperature consists of $N_f$ flavour D8-branes and $N_f$ anti-D8-branes which are placed in the gravitational background sourced by a large number $N_c$ of D4-branes compactified on a circle of radius~$R$. In the simplest setup, the probes are asymptotically positioned at antipodal points of the circle, while in the interior of the geometry they merge together into one U-shaped object. The gauge theory on the world-volume of the probe brane is nine-dimensional, containing a four-dimensional sphere, the holographic direction and four directions parallel to the boundary. By focusing on the sector of the theory that does not include excitations over the $S^4$, one can integrate the probe brane action over this sub-manifold and end up with an effective five-dimensional DBI action on the probe brane world-volume. By expanding this action to leading order in the string tension, one ends up with a five-dimensional Yang-Mills theory with a Chern-Simons term. For $N_f = 1$ we can write the action as \begin{equation} \label{fullaction} S= S_{\text{YM}} + S_{\text{CS}} = - \frac{\kappa}{2} \int\! {\rm d}^4 x {\rm d}z \sqrt{-g} \, {\cal F}^{mn} {\cal F}_{mn} + \frac{\alpha}{4} \, \epsilon^{\ell m n p q} \int\! {\rm d}^4x {\rm d}z {\cal A}_{\ell} {\cal F}_{mn} {\cal F}_{pq} \, , \end{equation} where the indices are raised and lowered using the effective five-dimensional metric $g_{mn}$. This metric is defined as \begin{equation} {\rm d}s_{(5)}^2 = g_{mn} {\rm d}x^m {\rm d}x^n = M_{\text{KK}}^2 K_z^{2/3} \, \eta_{\mu \nu} {\rm d}x^\mu {\rm d}x^\nu \,+\, K_z^{-2/3} {\rm d}z^2 \, , \label{gmn} \end{equation} where $K_z \equiv (1 + z^2)$. The $x^\mu$ directions are parallel to the boundary and $z$ is the holographic direction orthogonal to the boundary.
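For later use we note that the determinant of this effective metric follows directly from \eqref{gmn}: \begin{equation} \sqrt{-g} \,=\, \sqrt{\big(M_{\text{KK}}^2 K_z^{2/3}\big)^4 \, K_z^{-2/3}} \,=\, M_{\text{KK}}^4 \, K_z \, , \end{equation} which is the factor appearing in the Yang-Mills term of \eqref{fullaction}.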
The coupling constants $\kappa$ and $\alpha$ are given in terms of the number of colours $N_c$, the compactification mass scale $M_{\text{KK}}$ and the 't~Hooft coupling $\lambda$ by \begin{equation} \kappa= \frac{\sqrt{\alpha '} g_s N_c^2 M_{\text{KK}}}{108 \pi^2} = \frac{\lambda N_c}{216 \pi^3} \,, \qquad \alpha = \frac{N_c}{24 \pi^2} \, . \end{equation} The Chern-Simons term written in~\eqref{fullaction} is valid only for a single D8 probe, i.e.~for a $U(1)$ gauge theory on the brane world-volume. \subsection{Symmetries and chemical potentials} Holographic models encode \emph{global} symmetries of the dual gauge theory in the form of \emph{gauge} symmetries in the bulk theory, and these relations hold both for the closed and the open sectors of the string theory. The same is true for the Sakai-Sugimoto model at hand, where there are two independent gauge theories living near the two boundaries of the flavour D8-$\overline{{\rm D}8}$ brane system. These $U(N_f)_L$ and $U(N_f)_R$ gauge symmetries correspond to the global $U(N_f)_L\times U(N_f)_R$ flavour symmetries of the dual gauge theory. In the low-temperature phase of the Sakai-Sugimoto model that we are interested in, the two branes are connected in the interior of the bulk space, and thus the gauge fields ${\cal A}_{M}^L$ and ${\cal A}_{M}^R$ are limits of a single gauge field living on the two connected branes. Therefore, one cannot independently perform gauge transformations on these two gauge fields, but is constrained to gauge transformations which, as $z\rightarrow \pm \infty$, act in a related way. Specifically, near the boundary a large bulk gauge transformation acts as ${\cal A}_{L/R} \rightarrow g_{L/R} {\cal A}_{L/R} g^{-1}_{L/R}$, so clearly any state (and in particular the trivial vacuum ${\cal A}=0$) is invariant under the vectorial transformations, i.e.~those transformations for which $g_{L} = g_{R}$. This means that the vector-like symmetry is unbroken in this model. On the other hand, the fact that the branes are joined into one U-shaped object means that the axial symmetry is broken, which corresponds to the \emph{spontaneous} breaking of the axial symmetry in the dual gauge theory. The corresponding Goldstone bosons can be seen explicitly in the spectrum of the fluctuations on the brane world-volume. The relationship between the bulk gauge field and the source and global symmetry current of the dual gauge theory is encoded in the asymptotic behaviour of the former. More precisely, the bulk gauge field ${\cal A}_M(z,x^\mu)$ behaves, near the boundary and in the ${\cal A}_z=0$ gauge, as \begin{equation} \label{asymptotic} {\cal A}_\nu(x^\mu,z) \rightarrow a_{\nu}(x^\mu)\Big(1 + \mathcal{O}(z^{-2/3})\Big) + \rho_\nu(x^\mu) \frac{1}{z}\Big(1 + \mathcal{O}(z^{-2/3}) \Big)\,. \end{equation} Here $\rho_\nu$ parametrises the normalisable mode, while $a_\nu(x^\mu)$ describes the non-normalisable behaviour of the field. The latter is interpreted as a source in the dual field theory action, where it appears as \begin{equation} \label{e:boundaryaJ} \int\!{\rm d}^4x\, a_\nu(x^\mu) J^\nu(x^\mu)\, . \end{equation} Hence the expectation value of the current corresponding to the global symmetry in the gauge theory is given by \begin{equation} \label{currentsholography} J^\mu_{\pm}(x) = \frac{\delta S}{\delta A_{\mu}(x,z \rightarrow \pm \infty)} \, .
\end{equation} When the bulk action is just the ordinary Yang-Mills action (in curved space), this expectation value is the same as the coefficient $\rho_\nu$ in the expansion \eqref{asymptotic}. However, in the presence of the Chern-Simons term, the coefficient $\rho_\nu$ differs from the current, as we will explicitly demonstrate for the system at hand in section~\ref{s:anomalies}. This difference has been a source of some confusion in the literature. Its importance has recently been emphasised in the context of the chiral magnetic effect in the Sakai-Sugimoto model in~\cite{Rebhan:2010ax}. From~\eqref{e:boundaryaJ} we also see that adding a chemical potential to the field theory corresponds to adding a source for $J^0$, which implies the boundary condition ${\cal A}_\nu(x)= \mu \delta_{\nu 0}$ for the holographic gauge field. For the Sakai-Sugimoto model, the bulk field ${\cal A}_m$ living on the D8-branes has \emph{two} asymptotic regions, corresponding to each brane, and hence there are two independent chemical potentials $\mu_L$ and $\mu_R$ which can be separately turned on. Instead of left and right chemical potentials one often introduces vectorial and axial potentials, defined respectively as $\mu_B=\frac{1}{2}(\mu_R +\mu_L)$ and $\mu_A=\frac{1}{2}(\mu_R-\mu_L)$. For the $N_f=2$ case, the vectorial and axial chemical potentials for the $U(1)$ subgroup of the $U(2)$ gauge group on the two D8-branes correspond to the baryonic and axial chemical potentials in the dual gauge theory, while the non-abelian $SU(2)$ chemical potentials are mapped to the vectorial and axial isospin potentials. In what follows we will also be interested in studying the system in the presence of an \emph{external} (non-dynamical) magnetic source, which will be introduced by turning on a nontrivial profile for the non-normalisable component $a_\nu(x)$ of the bulk field.
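Explicitly, with the (purely conventional) choice that $z \to +\infty$ corresponds to the right-handed boundary, these definitions amount to the boundary conditions \begin{equation} {\cal A}_0(x, z \to \pm \infty) \,=\, \mu_{R/L} \,=\, \mu_B \pm \mu_A \, , \end{equation} which is how the chemical potentials enter in formalism A discussed in the introduction; in formalism B the same information is instead carried by ${\cal A}_z$.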
\subsection{The Chern-Simons term, anomalies and the Bardeen counter term} \label{s:anomalies} The symmetries and currents of the model discussed in the previous section are, however, valid only at leading order in $\lambda^{-1}$. At the next order in $\lambda^{-1}$, the Yang-Mills action on the brane world-volume receives many corrections, among which is the Chern-Simons term. This term turns out to be crucial for the existence of a nontrivial ground state of the system in the presence of the external magnetic field, a situation which we will study in the following sections. On manifolds with boundaries, however, the Chern-Simons term is manifestly gauge non-invariant and it also spoils the conservation of the vectorial and axial currents. Both of these features reflect the fact that the Chern-Simons term is the holographic manifestation of the vector and axial gauge anomalies in the dual theory. Apart from the Chern-Simons term there are of course also various other corrections. One important class of terms comes from the expansion of the DBI action. In previous work~\cite{Bayona:2011ab} we have seen that the qualitative picture of chiral spiral formation at vanishing magnetic field does not change when these corrections are taken into account. We will assume that this holds true here as well. For the other higher-derivative corrections, it is important to note that we will find that our ground state has a momentum scale of order $M_{\text{KK}}$, not $1/l_s$, so higher derivatives can typically be ignored as long as $M_{\text{KK}} l_s$ is small. Returning to the role of the Chern-Simons term in describing anomalies, let us start by decomposing the gauge field in terms of the axial and vectorial components, as \begin{equation} {\cal A}_m (x,z)= {\cal A}_m^V (x,z) + {\cal A}_m^A (x,z) \, . \label{VA} \end{equation} These two components transform under an inversion of the holographic coordinate $z \to - z$ as \begin{equation} {\cal A}_\mu^{V/A}(-z,x) = \pm {\cal A}_\mu^{V/A}(z,x) \,, \quad {\cal A}_z^{V/A}(-z,x) = \mp {\cal A}_z^{V/A}(z,x) \, . \label{mirror} \end{equation} Furthermore, the $\mu$ components of these fields are related to the dual gauge fields as \begin{equation} {\cal A}^V_\mu ( z = \pm \infty , x ) = A^V_\mu (x) \quad , \quad {\cal A}^A_\mu (z = \pm \infty , x) = \mp A^A_\mu (x) \, , \label{bdyfields} \end{equation} where the non-calligraphic $A^{V/A}_\mu(x)$ are the boundary vector and axial-vector gauge fields. When written in terms of these vectorial and axial potentials the action reads \begin{multline} \label{fullactionAV} S = \int\! {\rm d}^4 x {\rm d}z \Big \{ - \frac{\kappa}{2} \sqrt{-g} \left [ {\cal F}^{mn}_V {\cal F}_{mn}^V + {\cal F}^{mn}_A {\cal F}_{mn}^A \right ] \\[1ex] + \frac{\alpha}{4} \epsilon^{\ell m n p q} \left [ 2 {\cal A}_\ell^V {\cal F}_{mn}^A {\cal F}_{pq}^V + {\cal A}_\ell^A ( {\cal F}_{mn}^V {\cal F}_{pq}^V + {\cal F}_{mn}^A {\cal F}_{pq}^A ) \right ] \Big \} \,. \end{multline} We now want to compute the currents following the prescription \eqref{currentsholography}. The variation of the action can be written as \begin{multline} \delta S = \int\! {\rm d}^4 x {\rm d}z \Big \{ \left [ \frac{ \partial {\cal L}}{ \partial {\cal A}^V_\ell} - \partial_m {\cal P}^{m \ell}_V \right ] \delta {\cal A}^V_\ell + \left [ \frac{ \partial {\cal L}}{ \partial {\cal A}^A_\ell} - \partial_m {\cal P}^{m \ell}_A \right ] \delta {\cal A}^A_\ell \\[1ex] + \partial_m \left[ {\cal P}^{m \ell}_V \delta {\cal A}^V_\ell + {\cal P}^{m \ell}_A \delta {\cal A}^A_\ell \right ] \Big \} \, , \label{varYMCS} \end{multline} where the derivatives of the Lagrangian are given by \begin{equation} \begin{aligned} \frac{ \partial {\cal L}}{ \partial {\cal A}^V_\ell} &= \frac{\alpha}{2} \epsilon^{\ell m n p q} {\cal F}_{mn}^A {\cal F}_{pq}^V \, , \\[1ex] \frac{ \partial {\cal L}}{ \partial {\cal A}^A_\ell} &= \frac{\alpha}{4} \epsilon^{\ell m n p q} ( {\cal F}_{mn}^V {\cal F}_{pq}^V + {\cal F}_{mn}^A {\cal F}_{pq}^A )\, , \\[1ex] {\cal P}^{m \ell}_V &\equiv \frac{ \partial {\cal L}}{ \partial (\partial_m {\cal A}^V_\ell) } = - 2 \kappa \sqrt{-g} {\cal F}^{m \ell}_V + \alpha \epsilon^{m \ell n p q} ({\cal A}_n^V {\cal F}_{pq}^A + {\cal A}_n^A {\cal F}_{pq}^V ) \, , \\[1ex] {\cal P}^{m \ell}_A &\equiv \frac{ \partial {\cal L}}{ \partial (\partial_m {\cal A}^A_\ell) } = - 2 \kappa \sqrt{-g} {\cal F}^{m \ell}_A + \alpha \epsilon^{m \ell n p q} ({\cal A}_n^V {\cal F}_{pq}^V + {\cal A}_n^A {\cal F}_{pq}^A ) \,. \label{genmom} \end{aligned} \end{equation} Note that ${\cal P}^{m \ell}_{V/A}$ are antisymmetric in $m \leftrightarrow \ell$. Imposing that the bulk term in the variation (the first line in \eqref{varYMCS}) vanishes gives the equations of motion, \begin{equation} \begin{aligned} 2 \kappa \partial_m ( \sqrt{-g} {\cal F}^{m \ell}_V ) + \frac32 \alpha \epsilon^{\ell m n p q} {\cal F}_{mn}^A {\cal F}_{pq}^V &= 0 \, , \\[1ex] 2 \kappa \partial_m ( \sqrt{-g} {\cal F}^{m \ell}_A ) + \frac34 \alpha \epsilon^{\ell m n p q} ({\cal F}_{mn}^V {\cal F}_{pq}^V + {\cal F}_{mn}^A {\cal F}_{pq}^A ) &= 0 \, .
\label{eomYMCS} \end{aligned} \end{equation} We will need these shortly to show that the currents are not conserved. The boundary term in the action variation can be written as \begin{equation} \delta S_{\text{bdy}} = \int\! {\rm d}^4x {\rm d}z \, \Big \{ \partial_z \left [ {\cal P}^{z \mu}_V \delta {\cal A}^V_\mu + {\cal P}^{z \mu}_A \delta {\cal A}^A_\mu \right ] + \partial_\mu \left [ {\cal P}^{\mu \ell}_V \delta {\cal A}^V_\ell + {\cal P}^{\mu \ell}_A \delta {\cal A}^A_\ell \right ] \Big \} \,. \label{decompbdyterm} \end{equation} This implies that the holographic vector and axial currents are given by the expressions \begin{equation} \begin{aligned} \label{currentss} J^\mu_V(x) &= - 4 \kappa \lim_{z \to \infty} \left [ \sqrt{-g} {\cal F}^{z \mu}_V \right ] - 2 \alpha \epsilon^{\mu \nu \rho \sigma} ( A_\nu^V F_{\rho \sigma}^A + A_\nu^A F_{\rho \sigma}^V ) \, , \\[1ex] J^\mu_A(x) &= 4 \kappa \lim_{z \to \infty} \left [ \sqrt{-g} {\cal F}^{z \mu}_A \right ] - 2 \alpha \epsilon^{\mu \nu \rho \sigma} ( A_\nu^V F_{\rho \sigma}^V + A_\nu^A F_{\rho \sigma}^A ) \, , \end{aligned} \end{equation} where we have used the boundary conditions (\ref{bdyfields}) and $F^{V/A}_{\mu \nu} = \partial_\mu A_\nu^{V/A} - \partial_\nu A_\mu^{V/A}$ are the boundary field strengths. Using the equations of motion \eqref{eomYMCS} we can explicitly show that these two currents are not conserved due to the presence of the Chern-Simons term. One finds \begin{equation} \begin{aligned} \partial_\mu J^\mu_V (x) &= 2 \lim_{z \to \infty} \partial_\mu \left [ \sqrt{-g} {\cal P}^{z \mu}_V \right ] \equiv - 2 \lim_{z \to \infty} \left [ \sqrt{-g} \frac{ \partial {\cal L} }{\partial {\cal A}^V_z} \right ] \\[1ex] &= - \alpha \, \epsilon^{\mu \nu \rho \sigma} F_{\mu \nu}^A F_{\rho \sigma}^V \,,\\[1ex] \partial_\mu J^\mu_A (x) &= - 2 \lim_{z \to \infty} \partial_\mu \left [ \sqrt{-g} {\cal P}^{z \mu}_A \right ] \equiv 2 \lim_{z \to \infty} \left [ \sqrt{-g} \frac{ \partial {\cal L} }{\partial {\cal A}^A_z} \right ] \\[1ex] &= \frac{\alpha}{2} \, \epsilon^{\mu \nu \rho \sigma} ( F_{\mu \nu}^V F_{\rho \sigma}^V + F_{\mu \nu}^A F_{\rho \sigma}^A )\,. \end{aligned} \end{equation} We see two things here. Firstly, the anomaly (i.e.~the right-hand side of the above equations) is indeed sourced by the Chern-Simons term. Secondly, the anomaly is present only if the \emph{boundary} value of the bulk gauge \emph{field strength} is non-vanishing. In other words, only in the presence of an \emph{external} field does the anomaly show up. While the axial anomaly is not problematic (it just reflects the fact that this symmetry is no longer present in the dual quantum field theory), this is not the case for the vectorial symmetry. In QED coupled to chiral fermions one has to require that the vector current is strictly conserved, since its non-conservation would imply a gauge anomaly. It is possible to make the vectorial current conserved by adding extra boundary terms to the action, as was first shown by Bardeen in~\cite{Bardeen:1969md}. In the holographic setup, one expects that such a Bardeen-type counter term will appear, but this type of term should come from the requirement that the full theory in the bulk \emph{in the presence of the boundary} is gauge invariant under vectorial gauge transformations.
Let us therefore consider a generic \emph{vectorial} gauge transformation in the bulk \begin{equation} \delta {\cal A}_\ell^V = \partial_\ell \Lambda_V (x,z) \,, \quad \delta {\cal A}_\ell^A = 0 \, , \label{vecgauge} \end{equation} where $\Lambda_V(x,z)$ is a function even in $z$. Under this transformation, the action \eqref{fullactionAV} is invariant up to a boundary term, \begin{equation} \begin{aligned} \delta S^{(\Lambda_V)}_{\text{bdy}} &= \int\! {\rm d}^4x {\rm d}z (\partial_\ell \Lambda_V) \partial_m {\cal P}^{m \ell}_V \\[1ex] &= \frac{\alpha}{2} \epsilon^{\ell m n p q} \int\! {\rm d}^4x {\rm d}z (\partial_\ell \Lambda_V) {\cal F}_{mn}^A {\cal F}_{pq}^V \\[1ex] &= \alpha \epsilon^{\ell m n p q} \int\! {\rm d}^4x {\rm d}z \partial_m \left [ (\partial_\ell \Lambda_V) {\cal A}^A_n {\cal F}_{pq}^V \right ] \,. \end{aligned} \end{equation} Therefore, if we want to impose invariance of the action under \eqref{vecgauge}, we need to add the anomaly counter term correction \begin{equation} \begin{aligned} S_{\text{an.}} &= - \alpha \epsilon^{\ell m n p q} \int\! {\rm d}^4x {\rm d}z \partial_m \left [ {\cal A}^V_\ell {\cal A}^A_n {\cal F}_{pq}^V \right ] \\[1ex] &= \frac{\alpha}{2} \epsilon^{\ell m n p q} \int\! {\rm d}^4x {\rm d}z \left [ - {\cal A}^V_\ell {\cal F}_{mn}^A {\cal F}_{pq}^V + {\cal A}_\ell^A {\cal F}_{mn}^V {\cal F}_{pq}^V \right ] \,. \end{aligned} \end{equation} We see that there are two contributions from this surface term: at the ``holographic'' boundaries $z\rightarrow \pm \infty$ and at the boundary at spatial infinity $|\vec{x}| \rightarrow \infty$. Its contribution at holographic infinity is indeed the Bardeen counter term as derived in quantum field theory~\cite{Bardeen:1969md}, which was, in the holographic setup, initially postulated (added by hand) in \cite{Rebhan:2010ax}. We see that its presence automatically follows from the requirement of classical gauge invariance of the bulk theory in the presence of the boundary. The contribution at spatial infinity would typically vanish, as all physical states in the system are localised in the interior. However, in the presence of external sources which generate an external magnetic field that fills out the whole four-dimensional space, like the one we will be considering, it is not a priori clear if this is true, and one has to be careful about possible extra contributions to the action and currents. It will, however, turn out that for our non-homogeneous ansatz these extra terms are irrelevant and that only the Bardeen counter term is non-vanishing. Let us continue by showing the effect of adding the $S_{\text{an.}}$ term on the currents. The total action now reads \begin{multline} \tilde S = S + S_{\text{an.}} = \int\! {\rm d}^4 x {\rm d}z \Big \{ - \frac{\kappa}{2} \sqrt{-g} \left [ {\cal F}^{mn}_V {\cal F}_{mn}^V + {\cal F}^{mn}_A {\cal F}_{mn}^A \right ] \\[1ex] + \frac{\alpha}{4} \epsilon^{\ell m n p q} {\cal A}_\ell^A ( \, 3 \, {\cal F}_{mn}^V {\cal F}_{pq}^V + {\cal F}_{mn}^A {\cal F}_{pq}^A ) \Big \} \equiv \int\! {\rm d}^4 x {\rm d}z {\cal \tilde L} \,. \label{newYMCS} \end{multline} The variation of this new action can again be written in the form \eqref{varYMCS}, with $\tilde {\cal L}$ and $\tilde {\cal P}$ instead of ${\cal L}$ and ${\cal P}$, i.e.
\begin{equation} \begin{aligned} \frac{\partial {\cal \tilde L}}{\partial {\cal A}_\ell^V} &= 0 \, , \\[1ex] \frac{\partial {\cal \tilde L}}{\partial {\cal A}_\ell^A} &= \frac{\alpha}{4} \epsilon^{\ell m n p q} ( \, 3 \, {\cal F}_{mn}^V {\cal F}_{pq}^V + {\cal F}_{mn}^A {\cal F}_{pq}^A ) \, , \\[1ex] \tilde {\cal P}_V^{m \ell} &\equiv \frac{ \partial {\cal \tilde L}}{ \partial (\partial_m {\cal A}^V_\ell) } = - 2 \kappa \sqrt{-g} {\cal F}^{m \ell}_V + 3 \alpha \epsilon^{m \ell n p q} {\cal A}_n^A {\cal F}_{pq}^V \, , \\[1ex] \tilde {\cal P}_A^{m \ell} &\equiv \frac{ \partial {\cal \tilde L}}{ \partial (\partial_m {\cal A}^A_\ell) } = - 2 \kappa \sqrt{-g} {\cal F}^{m \ell}_A + \alpha \epsilon^{m \ell n p q} {\cal A}_n^A {\cal F}_{pq}^A \, . \end{aligned} \end{equation} The equations of motion are of course unchanged (as the Bardeen counter term is only a surface term), while the new currents are obtained as \begin{equation} \tilde J^\mu_{V/A}(x) = \frac{ \delta \tilde S}{ \delta A^{V/A}_\mu (x)} = \pm \frac{ \delta \tilde S}{ \delta {\cal A}^{V/A}_\mu (x, z = \infty)} = \pm 2 \lim_{z \to \infty} \tilde {\cal P}^{z \mu}_{V/A} \,. \end{equation} Explicitly, the expressions read \begin{equation} \label{extracurrents} \begin{aligned} \tilde J^\mu_V(x) &= - 4 \kappa \lim_{z \to \infty} \left [ \sqrt{-g} {\cal F}^{z \mu}_V \right ] - 6 \alpha \epsilon^{\mu \nu \rho \sigma} A_\nu^A F_{\rho \sigma}^V \, , \\[1ex] \tilde J^\mu_A(x) &= 4 \kappa \lim_{z \to \infty} \left [ \sqrt{-g} {\cal F}^{z \mu}_A \right ] - 2 \alpha \epsilon^{\mu \nu \rho \sigma} A_\nu^A F_{\rho \sigma}^A \, . \end{aligned} \end{equation} The divergences of these currents are \begin{equation} \begin{aligned} \partial_\mu \tilde J^\mu_V (x) &= 0 \, , \\[1ex] \partial_\mu \tilde J^\mu_A (x) &= \frac{\alpha}{2} \epsilon^{\mu \nu \rho \sigma} \left [ 3 F_{\mu \nu}^V F_{\rho \sigma}^V + F_{\mu \nu}^A F_{\rho \sigma}^A \right ] \,, \end{aligned} \end{equation} where we have again used the equations of motion \eqref{eomYMCS}. This clearly shows that the vector current is now conserved, while the anomaly is seen only in the axial sector. When the coefficient $\alpha$ is taken from string theory, the non-conservation of the axial current is exactly the same (including the numerical factor) as in QED coupled to chiral fermions~\cite{Peskin:1995a}. In what follows we will work with these renormalised currents and action. We should also emphasise that when one considers a chemical potential for a symmetry which is anomalous (as is the case here), just knowing the corrected action \eqref{newYMCS} may not be enough. One also has to be careful about the boundary conditions imposed on the states, as not all boundary conditions are allowed. We discuss these subtle issues in section~\ref{s:chempots_nonconserved}. \subsection{The corrected Hamiltonian} The Lagrangian density of the action after inclusion of the anomaly counter term can be written as \begin{multline} \tilde {\cal L} = - \kappa \sqrt{-g} \left [ {\cal F}^{0a}_V {\cal F}_{0a}^V + \frac12 {\cal F}^{ab}_V {\cal F}_{ab}^V + {\cal F}^{0a}_A {\cal F}_{0a}^A + \frac12 {\cal F}^{ab}_A {\cal F}_{ab}^A \right ] \\[1ex] + \frac{\alpha}{4} \epsilon^{0abcd} \Big [ {\cal A}_0^A ( 3 {\cal F}_{ab}^V {\cal F}_{cd}^V + {\cal F}_{ab}^A {\cal F}_{cd}^A ) + 4 {\cal A}_b^A ( 3 {\cal F}_{0a}^V {\cal F}_{cd}^V + {\cal F}_{0a}^A {\cal F}_{cd}^A ) \Big ] \,.
\end{multline} The conjugate momenta associated with the vector and axial gauge fields thus take the form \begin{equation} \begin{aligned} \tilde \Pi^a_V &= \frac{ \partial \tilde {\cal L}}{ \partial (\partial_0 {\cal A}_a^V )} = \tilde {\cal P}^{0a}_V = - 2 \kappa \sqrt{-g} {\cal F}^{0a}_V + 3 \alpha \epsilon^{0abcd} {\cal A}_b^A {\cal F}_{cd}^V \,, \\[1ex] \tilde \Pi^a_A &= \frac{ \partial \tilde {\cal L}}{ \partial (\partial_0 {\cal A}_a^A )} = \tilde {\cal P}^{0a}_A = - 2 \kappa \sqrt{-g} {\cal F}^{0a}_A + \alpha \epsilon^{0abcd} {\cal A}_b^A {\cal F}_{cd}^A \,. \end{aligned} \end{equation} We then obtain the on-shell Hamiltonian as $\tilde H = \tilde H_{\text{Bulk}} + \tilde H_{\text{Bdy}}$, where the two contributions read \begin{equation} \begin{aligned} \tilde H_{\text{Bulk}} &= \kappa \int {\rm d}^3x {\rm d}z \sqrt{-g} \left [ - {\cal F}^{0a}_V {\cal F}_{0a}^V + \frac12 {\cal F}^{ab}_V {\cal F}_{ab}^V - {\cal F}^{0a}_A {\cal F}_{0a}^A + \frac12 {\cal F}^{ab}_A {\cal F}_{ab}^A \right ] \\[1ex] &= \kappa \int\! {\rm d}^3x {\rm d}z \sqrt{-g} \left [ - {\cal F}^{0a} {\cal F}_{0a} + \frac12 {\cal F}^{ab} {\cal F}_{ab} \right ] \, , \label{tHBulk} \end{aligned} \end{equation} \begin{equation} \tilde H_{\text{Bdy}} = \int {\rm d}^3 x {\rm d}z \, \partial_a \left [ \tilde \Pi^a_V {\cal A}_0^V + \tilde \Pi^a_A {\cal A}_0^A \right ] \label{tHBdy} \, . \end{equation} Here we have used the gauge field equations (\ref{eomYMCS}) for the time component $\ell = 0$ (generalised Gauss law). \section{The spatially modulated phase} Having settled the issue of how to deal with the Chern-Simons term in the presence of external fields for the Sakai-Sugimoto model, we now want to find the ground state of the system, at strong coupling, in the presence of an external magnetic field and non-vanishing axial and baryon chemical potentials. Based on the weak-coupling, partonic arguments mentioned in the introduction, we expect the ground state to be a chiral-magnetic-spiral-like configuration. In particular, we expect it to be \emph{non-homogeneous}. So far, for the Sakai-Sugimoto model, a non-homogeneous ground state was constructed in the presence of a large enough axial chemical potential but with no external fields in \cite{Bayona:2011ab} (at low temperature; see \cite{Ooguri:2010xs} for a high-temperature analysis). The main reason such a state appeared was the nontrivial Chern-Simons term. We now want to see if this state persists and how it is modified once the external magnetic field is introduced. \subsection{Magnetic ground state ansatz in the presence of chemical potentials} We are interested in studying the ground state of the system at non-zero axial chemical potential $\mu_A$ and non-zero baryon chemical potential $\mu_B$. In addition, we will turn on a constant magnetic field in the $x^1$ direction, $\vec{B}=B \hat x_1$. The boundary conditions associated with this physical scenario are \begin{equation} \label{boundarystuff5d} \vec{{\cal A}}( z = \pm \infty ) \,=\, \frac12 \vec{B} \times \vec{x} , \quad \quad {\cal A}_0( z = \pm \infty ) = \mp \mu_A + \mu_B \,. \end{equation} We will see that in fact our ansatz is insensitive to the baryon chemical potential, but we will keep it in the formulas for a little while longer. Let us now consider our particular ansatz, in the gauge ${\cal A}_z =0$. First, we want to introduce the vectorial and axial chemical potentials, hence we turn on ${\cal A}_0= f(z)$ with the above boundary conditions.
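In terms of the decomposition \eqref{VA}, the even part of $f(z)$ carries the baryon chemical potential and the odd part the axial one: writing $f = f_V + f_A$ with \begin{equation} f_V(z) = \frac12 \left [ f(z) + f(-z) \right ] \, , \qquad f_A(z) = \frac12 \left [ f(z) - f(-z) \right ] \, , \end{equation} the boundary conditions \eqref{boundarystuff5d} translate into $f_V(z \to \pm \infty) = \mu_B$ and $f_A(z \to \pm \infty) = \mp \mu_A$, which is the form in which these functions will appear below.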
Second, we need to introduce the (constant) magnetic field on the boundary, hence we turn on a component of ${\cal A}$ transverse to the direction of $\vec{B}$: $\vec{\cal A}_B^T(x_2,x_3,z)$. In principle this function could depend on $z$, but the Bianchi identity tells us that in fact the magnetic field is \emph{independent} of~$z$. Therefore, we have for $\vec{\cal A}_B^T$ \begin{equation} \vec{{\cal A}}_B^T ( x_2,x_3 ) \,=\, \frac12 \vec{B} \times \vec{x} = \frac{B}{2} \left [ - x_3 \hat x_2 + x_2 \hat x_3 \right ]\,. \label{newansatzABT} \end{equation} Next, by looking at the equations of motion for the $\vec{\cal A}_B^L$ component parallel to the magnetic field, we see that this component also has to be turned on, and is only a function of $z$, \begin{equation} \vec{\cal A}_B^L= a(z) \hat x_1 \, . \label{newansatzABL} \end{equation} Finally, we expect that a chiral magnetic spiral will appear in the direction transverse to the external magnetic field, \begin{equation} \vec{\cal A}_W^T (x_1,z) = h (z) \left [ \cos(k x_1) \hat x_2 -\sin(k x_1) \hat x_3 \right ] \, , \label{newansatzAWT} \end{equation} i.e.~it represents the chiral wave transverse to the boundary magnetic field satisfying \begin{equation} \nabla \times \vec{\cal A}_W^T \,=\,k \, \vec{\cal A}_W^T \, , \end{equation} where $k=\pm |\vec{k}|$ and $\vec{k}$ is the spatial momentum. So, in summary, our ansatz is given by \begin{equation} \begin{aligned} {\cal A}_0^V &= f_V(z) \quad , \quad {\cal A}_0^A = f_A (z) \, , \\[1ex] \vec{\cal A}_V &= \frac{B}{2} \left [ - x_3 \hat x_2 + x_2 \hat x_3 \right ] + h_V(z) \left [ \cos (k x_1) \hat x_2 - \sin (k x_1) \hat x_3 \right ] + a_V(z) \hat x_1 \, ,\\[1ex] \vec{ \cal A}_A &= a_A (z) \hat x_1 + h_A(z) \left [ \cos (k x_1) \hat x_2 - \sin (k x_1) \hat x_3 \right ] \, , \label{newansatz} \end{aligned} \end{equation} where the fields satisfy the boundary conditions \begin{equation} \begin{aligned} f_V (z \to \pm \infty) &= \mu_B \, , \quad& f_A (z \to \pm \infty) &= \mp \mu_A \, , \\[1ex] h_V( z \to \pm \infty) &= 0 \, , \quad& h_A( z \to \pm \infty) &= 0 \, , \\[1ex] a_V (z \to \pm \infty) &= 0 \, , \quad& a_A( z \to \pm \infty ) &= \mp j \,. \label{boundarystuff} \end{aligned} \end{equation} The Gauss law, i.e.~the zeroth component of the equation of motion~\eqref{eomYMCS}, is automatically satisfied for our ansatz. The remaining equations reduce, after integrating one of them with integration constant $\tilde\rho$, to \begin{align} & \sqrt{- g} \, g^{zz} g^{00} \partial_z f = 3 \frac{\alpha}{\kappa} \left [ B a + \frac{k}{2} h^2 \right ] - \tilde \rho \, , \label{feq} \\[1ex] & \partial_z \left [ \sqrt{- g} \, g^{zz} g^{xx} \partial_z a \right ] + 3 \frac{\alpha}{\kappa} B \partial_z f = 0 \,, \label{aeq}\\[1ex] & \partial_z \left [\sqrt{- g} \, g^{zz} g^{xx} (\partial_z h) \right ] - \sqrt{-g} \, (g^{xx})^2 k^2 h + 3 \frac{\alpha}{\kappa} k \partial_z f h = 0 \label{heq} \,.
\end{align} Restricting now to the metric~\eqref{gmn}, and substituting eq.~\eqref{feq} into~\eqref{aeq} and \eqref{heq}, we obtain our master equations, \begin{align} & K_z \partial_z \hat f = - \hat b - \frac{1}{2} \hat k \hat h^2 \, , \label{hatfequation} \\[1ex] & K_z \partial_z \left [ K_z \partial_z \hat b \right ] - \hat B^2 \left [ \hat b + \frac{1}{2} \hat k \hat h^2 \right ] = 0 \, , \label{hataequation} \\[1ex] & K_z \partial_z \left [ K_z \partial_z \hat h \right ] - K_z^{2/3} \hat k^2 \hat h - \hat k \hat h \left [ \hat b + \frac{1}{2} \hat k \hat h^2 \right ] = 0 \, , \label{hathequation} \end{align} where \begin{equation} \label{e:bversusa} \hat b \equiv \hat B \hat a - \hat \rho \, , \end{equation} and we have also introduced a set of dimensionless variables $\hat f$, $\hat h$, $\hat a$, $\hat k$, $\hat \rho$ and $\hat B$ defined by \begin{align} & f = \bar \lambda \, M_{\text{KK}} \hat f \quad , \quad h = \bar \lambda \, M_{\text{KK}} \, \hat h \quad , \quad a = \bar \lambda \, M_{\text{KK}} \hat a \, ,\\[1ex] & k= M_{\text{KK}} \hat k \quad , \quad \tilde \rho = \bar \lambda \, M^3_{\text{KK}} \, \hat \rho \quad , \quad B = \bar \lambda M^2_{\text{KK}} \hat B \,, \label{hatdimless} \end{align} with $\bar\lambda = \lambda/(27\pi)$. These coupled equations are in general not solvable analytically, except in the special case $k=0$, which we will review next. \subsection{Review of the homogeneous solution} Before embarking on the full task of finding the non-homogeneous solutions to the equations of motion, we will in this section first review the homogeneous solution (i.e.~the solution for which $k=0$) in the presence of a constant magnetic field. This is an abelian version of the solution first constructed in~\cite{Bergman:2008qv,Thompson:2008qw}, see also~\cite{Rebhan:2009vc}. In the homogeneous case there is no transverse spiral, i.e.~$h=0$, and the equations of motion simplify to \begin{equation} \begin{aligned} \partial_{\tilde{z}} \hat{f} &= \hat\rho - \hat B \hat{a} \, , \\[1ex] \left ( \partial_{\tilde z}^2 - \hat B^2 \right ) \hat{a} &= - \hat B \hat{\rho}\,, \end{aligned} \end{equation} with $\tilde z = \arctan z$. These equations can be integrated exactly, and the solution takes the form \begin{equation} \begin{aligned} \label{homogenious} \hat{a}(z)&= \frac{\hat{C}_A}{\hat B} \cosh (\hat B \arctan z) + \frac{\hat{C}_B}{\hat B} \sinh (\hat B \arctan z) + \frac{\hat \rho}{\hat B} \, ,\\ \hat{f}(z) &= - \frac{\hat{C}_A}{\hat B} \sinh ( \hat B \arctan z) - \frac{\hat{C}_B}{\hat B} \cosh (\hat B \arctan z) + \hat{f}_0 \,, \end{aligned} \end{equation} where $\hat{C}_A$, $\hat{C}_B$, $\hat{\rho}$ and $\hat{f}_0$ are the four integration constants of the system. The corresponding field strengths take the form \begin{equation} \begin{aligned} \label{fieldstr} {\cal F}_{z1} &= \frac{1}{1 + z^2} \left [ C_A \sinh ( \hat B \arctan z) + C_B \cosh (\hat B \arctan z) \right ]\,, \\ {\cal F}_{z0} &= - \frac{1}{1 + z^2} \left [ C_A \cosh ( \hat B \arctan z) + C_B \sinh (\hat B \arctan z) \right ] \,, \end{aligned} \end{equation} where $C_{A/B} = \bar \lambda M_{\text{KK}} \hat C_{A/B}$. In terms of the baryonic and axial chemical potentials the boundary condition on $f(z)$ reads \begin{equation} f (z \to \pm \infty) = \mu_{L/R} = \mu_B \mp \mu_A \, .
\end{equation} From the expression for $f(z)$ these potentials are related to $\hat{C}_A$ and $\hat{C}_B$ by \begin{equation} \hat{C}_A = \frac{\hat B }{\sinh \left ( \frac{\pi}{2} \hat B \right )} \hat{\mu}_A \, , \quad \quad \hat{C}_B = - \frac{\hat B } { \cosh \left ( \frac{\pi}{2} \hat B \right )} ( \hat{\mu}_B - \hat{f}_0) \,, \end{equation} where $\mu_{A/B} = \bar \lambda M_{\text{KK}} \hat \mu_{A/B}$. In analogy with the analysis of \cite{Son:2007ny}, we are here interested in configurations in which there is a non-vanishing pion (or rather, $\eta'$) gradient in the direction of the external field. Since we are working in the ${\cal A}_z=0$ gauge, a pion field will appear as part of the axial, non-normalisable component of all ${\cal A}_\mu$'s (see \cite{Sakai:2004cn,Sakai:2005yt}). Hence, we impose the boundary condition\footnote{We stick to this notation, introduced in~\cite{Rebhan:2009vc}, but want to emphasise that even though using the symbol $j$ suggests that the asymptotic value of $a$ is a current, it is not.} \begin{equation} a (z \to \pm \infty) = \mp j \, . \end{equation} We should note here that this is not the most general boundary condition for the field $a(z)$, since we have set the even part to zero. Having thus reduced the parameter space to the set $\{\mu_A,\mu_B, j\}$, we have the relations \begin{equation} \begin{aligned} \hat{C}_A &= \frac{\hat B }{\sinh \left ( \frac{\pi}{2} \hat B \right )} \hat{\mu}_A\,, \quad& \hat \rho &= - \hat B \coth \left ( \frac{\pi}{2} \hat B \right ) \hat{\mu}_A \,, \\[1ex] \hat{C}_B &= - \frac{\hat B} { \sinh \left ( \frac{\pi}{2} \hat B \right )} \hat{j}\,,\quad & \hat{f}_0 &= \hat{\mu}_B - \coth \left ( \frac{\pi}{2} \hat B \right ) \hat{j} \,, \end{aligned} \end{equation} where $j = \bar \lambda M_{\text{KK}} \hat j$. Using these relations we can rewrite $\hat f(z)$ and $\hat a(z)$ as \begin{equation} \begin{aligned} \label{homosolu} \hat{f}(z) &=\hat\mu_B - \hat\mu_A \frac{ \sinh (\hat B \arctan z)}{\sinh \left ( \frac{\pi}{2} \hat B \right )} + \hat j \left [ \frac{ \cosh ( \hat B \arctan z) }{ \sinh \left ( \frac{\pi}{2} \hat B \right )} - \coth \left ( \frac{\pi}{2} \hat B \right ) \right ] \, , \\[1ex] \hat{a}(z) &= \hat\mu_A \left [ \frac{ \cosh ( \hat B \arctan z) }{ \sinh \left ( \frac{\pi}{2} \hat B \right )} - \coth \left ( \frac{\pi}{2} \hat B \right ) \right ] - \hat j \frac{ \sinh (\hat B \arctan z)}{ \sinh \left ( \frac{\pi}{2} \hat B \right ) } \, . \end{aligned} \end{equation} Since the constant $\mu_B$ does not appear in the Hamiltonian, it is effectively a free parameter, which we are free to set to zero. As expected, we find that the baryon chemical potential has no effect on this abelian system. In contrast, minimising the Hamiltonian will impose a relation between the axial chemical potential and $j$, as expected for physical systems. In summary, we have two physical boundary values, $\mu_A$ and $j$, which are implicitly expressed in terms of the two parameters $C_A$ and $C_B$. For any fixed values of $\mu_A$ and $j$, we will want to compare the homogeneous solution given above to possible non-homogeneous condensates and determine which of the two has lower energy. \subsection{Currents for the homogeneous and non-homogeneous ansatz} The work of \cite{Rebhan:2010ax,Rebhan:2009vc}, which studied the homogeneous solution discussed above, resulted in the interesting conclusion that there is \emph{no} chiral magnetic effect present in the holographic Sakai-Sugimoto model.
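For orientation, let us recall what a chiral magnetic effect would look like. At weak coupling, for $N_c$ colours of a single massless Dirac fermion of unit charge, one expects a vector current along the magnetic field of the form (see e.g.~\cite{Fukushima:2008xe}) \begin{equation} \vec{J}_V = \frac{N_c}{2 \pi^2} \, \mu_A \, \vec{B} \, , \end{equation} so the holographic statement is that the current component dual to this one vanishes identically.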
There are some subtleties with this which were pointed out in \cite{Landsteiner:2011tf}, to which we will return shortly. However, their result also leaves open the question of whether there is a chiral magnetic effect for more general solutions to the equations of motion, for instance the non-homogeneous ones which we consider here. So even before we find the full non-homogeneous solution, an important lesson might be learnt from an evaluation of the corrected holographic currents \eqref{extracurrents} for the non-homogeneous ansatz. For our ansatz (\ref{newansatz}), the corrected currents become \begin{equation} \label{allcurrents} \begin{aligned} \tilde J^0_V &= - 4 \kappa \lim_{z \to \infty} \left [ \sqrt{-g} g^{zz} g^{00} \partial_z f_V \right ] - 12 \alpha B j \\[1ex] &= - 12 \alpha \lim_{z \to \infty} \left [ B a_A + k h_V h_A \right ] - 12 \alpha B j = 0 \, , \\[1ex] \tilde J^1_V &= - 4 \kappa \lim_{z \to \infty} \left [ \sqrt{-g} g^{zz} g^{xx} \partial_z a_V \right ] + 12 \alpha B \mu_A \\[1ex] &= 12 \alpha B \lim_{z \to \infty} f_A + 12 \alpha B \mu_A = 0 \, , \\[1ex] \tilde J^2_V &= - 4 \kappa \lim_{z \to \infty} \left [ \sqrt{-g} g^{zz} g^{xx} \partial_z {\cal A}_2^V \right ] = - 4 \kappa M_{\text{KK}}^2 \lim_{z \to \infty} \left [ K_z \partial_z h_V \right ] \cos(kx_1) \, , \\[1ex] \tilde J^3_V &= - 4 \kappa \lim_{z \to \infty} \left [ \sqrt{-g} g^{zz} g^{xx} \partial_z {\cal A}_3^V \right ] = 4 \kappa M_{\text{KK}}^2 \lim_{z \to \infty} \left [ K_z \partial_z h_V \right ] \sin(k x_1) \, , \\[1ex] \tilde J^0_A &= 4 \kappa \lim_{z \to \infty} \left [ \sqrt{-g} g^{zz} g^{00} \partial_z f_A \right ] \\[1ex] &= \lim_{z \to \infty} \left [ 12 \alpha \left ( B a_V + \frac{k}{2} h_V^2 + \frac{k}{2} h_A^2 \right ) - 4 \kappa \tilde \rho \right ] = - 4 \kappa \tilde \rho \, , \\[1ex] \tilde J^1_A &= 4 \kappa \lim_{z \to \infty} \left [ \sqrt{-g} g^{zz} g^{xx} \partial_z a_A \right ] = 4 \kappa M_{\text{KK}}^2 \lim_{z \to \infty} \left [ K_z \partial_z a_A \right ] \, , \\[1ex] \tilde J^2_A &= 4 \kappa \lim_{z \to \infty} \left [ \sqrt{-g} g^{zz} g^{xx} \partial_z {\cal A}_2^A \right ] = 4 \kappa M_{\text{KK}}^2 \lim_{z \to \infty} \left [ K_z \partial_z h_A \right ] \cos(kx_1) \, , \\[1ex] \tilde J^3_A &= 4 \kappa \lim_{z \to \infty} \left [ \sqrt{-g} g^{zz} g^{xx} \partial_z {\cal A}_3^A \right ] = - 4 \kappa M_{\text{KK}}^2 \lim_{z \to \infty} \left [ K_z \partial_z h_A \right ] \sin(k x_1) \, , \end{aligned} \end{equation} where we have used the gauge field equations for the components $\ell = 0$ and $\ell =1$. From \eqref{allcurrents} we see two important facts. First, the density of particles carrying baryonic charge is zero. This confirms once more that there is nothing baryonic in the solutions under consideration, in agreement with the fact that the baryon chemical potential $\mu_B$ decouples completely. The second observation is that the component of the vector current \emph{in the direction of the external magnetic field} is zero. This is in sharp contrast to what one would expect if there were a chiral magnetic effect present. We should, however, emphasise that it is possible to define corrected currents which are different from those above, if one decides to deal with the anomalous symmetry in a different way, see section~\ref{s:chempots_nonconserved}. However, this alternative method, although it produces the chiral magnetic effect, suffers from other shortcomings, as we will explain.
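One structural feature of \eqref{allcurrents} is worth making explicit: the non-vanishing transverse components combine into a current of constant magnitude that winds around the direction of $\vec{B}$ with wave number $k$, since \begin{equation} \left ( \tilde J^2_V \right )^2 + \left ( \tilde J^3_V \right )^2 = \Big ( 4 \kappa M_{\text{KK}}^2 \lim_{z \to \infty} \left [ K_z \partial_z h_V \right ] \Big )^2 \end{equation} is independent of $x_1$, and similarly for the axial components.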
The expressions (\ref{allcurrents}) simplify further when we restrict to the homogeneous ansatz, for which one obtains \begin{equation} \begin{aligned} \label{homogeniouscurrents} \tilde J^0_V &= \tilde J^1_V = \tilde J^2_V = \tilde J^3_V = 0 \, , \\[1ex] \tilde J^0_A &= 4 \kappa M_{\text{KK}}^2 \hat B \mu_A \coth( \frac{\pi}{2} \hat B ) = 12 \alpha B \mu_A \coth( \frac{\pi}{2} \hat B ) \, , \\[1ex] \tilde J^1_A &= - 4 \kappa M_{\text{KK}}^2 \hat B j \coth( \frac{\pi}{2} \hat B ) = - 12 \alpha B ( \mu_B - f_0 ) \, , \\[1ex] \tilde J^2_A &= \tilde J^3_A = 0 \,. \end{aligned} \end{equation} We should emphasise once more that the vector (baryonic) currents vanish \emph{only} when the contribution from the Bardeen term in the action is properly taken into account. One could in principle evaluate an ``abelianised'' version of the baryon number (second Chern class), $\sim \int F_{3z}F_{12}$. However, this expression is strictly speaking valid only in a \emph{nonabelian} system, and in this situation one expects that the charge computed using the corrected conserved current $\tilde{J}_V^0$ should coincide with this topological number. Our analysis shows that the fact that there is no chiral magnetic effect in the Sakai-Sugimoto model is not due to the simplified homogeneous ansatz, and that it is also not a consequence of the details of any numerical solution which we will present later; rather, the chiral magnetic effect is absent for the entire class of solutions captured by the ansatz~\eqref{newansatz}. The contribution of the Bardeen terms required to make the vector currents conserved is crucial for the absence of the chiral magnetic effect. We finally see that, in contrast to the homogeneous solution, where no vector currents are present at all, the non-homogeneous system exhibits transverse vector currents. This shows some resemblance to the chiral magnetic spiral of \cite{Basar:2010zd}. \subsection{Chemical potentials for non-conserved charges} \label{s:chempots_nonconserved} Given the rather convincing quasi-particle picture of the origin of the chiral magnetic effect at weak coupling \cite{Kharzeev:2007jp,Fukushima:2008xe}, it is somewhat surprising that it is not present in the model at hand at strong coupling. A reason for this discrepancy has been suggested in \cite{Landsteiner:2011tf}, in which it was emphasised that one should be careful in computing the effects of a chemical potential in theories for which the associated charge is not conserved. The main observation made in \cite{Landsteiner:2011tf} is that there are two ways to introduce a chemical potential into a thermal quantum system. One is to twist the fermions along the thermal circle, i.e.~to impose \begin{equation} \label{e:Bformalism} \Psi_{L,R}(\tau) = - e^{\pm\beta\mu_A} \Psi_{L,R}(\tau-\beta)\,, \end{equation} instead of the usual anti-periodic boundary condition. This is what one would do at weak coupling. It was called the ``B-formalism'' in \cite{Landsteiner:2011tf}. The other way is to keep anti-periodic boundary conditions, but instead use a shifted Hamiltonian, \begin{equation} \label{e:Aformalism} \tilde H = H - \mu_A Q_L + \mu_A Q_R\,. \end{equation} This coupling can be viewed as a coupling to a gauge field for which $\tilde A_0=\mu_A$. We will refer to this as the ``A-formalism'' (in which we will temporarily put tildes on all objects, as above, for clarity).
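To see the mechanics of this equivalence in the anomaly-free case, consider a single chirality. The standard thermal field theory redefinition \begin{equation} \tilde \Psi_L (\tau) \equiv e^{- \mu_A \tau} \Psi_L (\tau) \quad \Rightarrow \quad \tilde \Psi_L (\tau) = - \tilde \Psi_L (\tau - \beta) \, , \end{equation} (with $\bar \Psi_L$ twisted oppositely) turns the twisted boundary condition \eqref{e:Bformalism} into an anti-periodic one, at the price of shifting $\partial_\tau \to \partial_\tau + \mu_A$ in the kinetic term, i.e.~generating precisely the constant-$\tilde A_0$ coupling of \eqref{e:Aformalism}.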
For a non-anomalous symmetry, these two formalisms are equivalent, and one can go from the B-formalism to the A-formalism using a gauge transformation involving the external gauge field, with parameter $\theta_A = -\mu_A t$, relating the gauge fields in the two formalisms according to \begin{equation} A_\mu = \tilde{A}_\mu + \partial_\mu \theta_A\,. \end{equation} In terms of our holographic picture, this gauge transformation acts directly on the 5d gauge field, with a $z$-dependent parameter $\Theta_A(z)$, \begin{equation} \label{e:ThetaAdef} \Theta_A(z) = t g_A(z)\,,\qquad g_A(z\rightarrow\pm\infty) = \pm\mu_A\,, \end{equation} which again acts on the gauge field as\footnote{The boundary gauge transformation parameter $\theta_A(x)$ is related to the 5d parameter by \begin{equation} \theta_A(x) \equiv \mp \Theta_A(x, z \to \pm \infty) = -\mu_A t \,, \label{thetadef} \end{equation} with the perhaps somewhat inconvenient signs following from our relation between the bulk and boundary gauge fields~\eqref{bdyfields}.} \begin{equation} {\cal A}_m = \tilde{\cal A}_m + \partial_m \Theta_A\,. \end{equation} The difference between the two formalisms can thus be formulated in a clear holographic language as well: in the A-formalism the ansatz has $\tilde{\cal A}_0$ asymptoting to the chemical potential, whereas in the B-formalism the ansatz instead has this chemical potential stored in the ${\cal A}_z$ component. The chemical potential is then best written as \begin{equation} \mu_A = - \frac12 \int_{-\infty}^{\infty}\!{\rm d}z\, {\cal F}_{z0} \, , \label{chiralmu} \end{equation} which is nicely gauge invariant and independent of the formalism used. This is all clear and unambiguous when the symmetry is non-anomalous. However, in the presence of an anomaly, one cannot pass from the formalism defined by \eqref{e:Bformalism} to that defined by \eqref{e:Aformalism}. This is what happens in our system: as we discussed above, the 5d action after the anomaly correction \eqref{newYMCS} is not invariant under a chiral gauge transformation, \begin{multline} \tilde S [\tilde{\cal A }_\ell + \partial_\ell \Theta_A] = \tilde S[\tilde{\cal A}_\ell] + \frac{\alpha}{4} \epsilon^{\ell m n p q} \int\!{\rm d}^4 x {\rm d}z\, (\partial_\ell \Theta_A) \left ( 3 \tilde{\cal F}_{mn}^V \tilde{\cal F}_{pq}^V + \tilde{\cal F}_{mn}^A \tilde{\cal F}_{pq}^A \right ) \\[1ex] =: \tilde S[\tilde{\cal A}_\ell] + \tilde S^\Theta[\tilde{\cal A}_\ell]\,. \end{multline} The anomaly implies that one of the two formalisms is incorrect. The point of view of \cite{Landsteiner:2011tf} is that there are strong indications that the B-formalism is the correct one in field theory. If one insists on computing with untwisted fermions, one needs to perform a gauge transformation, which not only introduces the chemical potential into $A_0$, but also modifies the action and Hamiltonian to correct for the fact that the action is not gauge invariant. To be precise, when one uses untwisted fermions, the action that one should use is the gauge-transformed action, which differs from the original one by the anomaly. This was called the ``A'-formalism'' (note the prime) in \cite{Landsteiner:2011tf}. Let us see how this logic works for the Sakai-Sugimoto system under consideration here. The idea is thus that if we want to identify the chemical potential with the asymptotic value of $\tilde{\cal A}_0$, we should be working not with $\tilde S[\tilde {\cal A}_\ell]$ but rather with $\tilde S[\tilde {\cal A}_\ell] + \tilde S^\Theta[\tilde{\cal A}_\ell]$.
In order to compute the currents, we need the variation of the $\Theta_A$ term, which takes the form \begin{equation} \delta S^{(\Theta_A)} = \alpha \epsilon^{m \ell n p q} \int\!{\rm d}^4x {\rm d}z\, \partial_m \left [ 3 (\partial_n \Theta_A) \tilde{\cal F}_{pq}^V \delta \tilde{\cal A}_\ell^V + (\partial_n \Theta_A) \tilde{\cal F}_{pq}^A \delta \tilde{\cal A}_\ell^A \right ] \,. \end{equation} The contribution of the $\Theta_A$ term to the holographic currents can be obtained using the dictionary \begin{equation} \Delta \tilde J^\mu_{V/A} = \pm \frac{ \delta S^{(\Theta_A)}} {\delta \tilde{\cal A}_\mu^{V/A} (x, z = \infty)} \,. \end{equation} We then obtain \begin{equation} \begin{aligned} \Delta \tilde J^\mu_V &= - 6 \alpha \epsilon^{\mu \nu \rho \sigma} (\partial_\nu \theta_A) \tilde F_{\rho \sigma}^V = 6 \alpha \mu_A \epsilon^{\mu 0 \rho \sigma} \tilde F_{\rho \sigma}^V \, , \\[1ex] \Delta \tilde J^\mu_A &= - 2 \alpha \epsilon^{\mu \nu \rho \sigma} (\partial_\nu \theta_A) \tilde F_{\rho \sigma}^A = 2 \alpha \mu_A \epsilon^{\mu 0 \rho \sigma} \tilde F_{\rho \sigma}^A \,. \end{aligned} \end{equation} These expressions are independent of the particular function $g_A(z)$ which one chooses in~\eqref{e:ThetaAdef}. For our ansatz \eqref{newansatz} with the boundary conditions \eqref{boundarystuff} we get \begin{equation} \label{e:DeltaJ} \begin{aligned} \Delta \tilde J^0_V &= 0 \, , \\[1ex] \Delta \tilde J^1_V &= - 12 \alpha B \mu_A \, , \\[1ex] \Delta \tilde J^2_V &= \Delta \tilde J^3_V = 0 \, , \\[1ex] \Delta \tilde J^0_A &= \Delta \tilde J^1_A = \Delta \tilde J^2_A = \Delta \tilde J^3_A = 0 \, . \end{aligned} \end{equation} This shows a very promising feature: any solution in this class will now exhibit the chiral magnetic effect, as there is a non-vanishing $\tilde J^1_V$ component. Unfortunately, we will see later that things are more subtle at the level of the Hamiltonian. There are two main problems when writing down the A'-formalism for things more complicated than the currents. Firstly, bulk quantities such as the Hamiltonian will typically depend on $g_A(z)$, not just on its asymptotic values. Secondly, it will turn out that even for a `natural' choice of $g_A(z)$, for instance $g_A(z)=-f_A(z)$, the new Hamiltonian has the property that it does not lead to a minimum for non-homogeneous configurations. Seeing this requires some more details about the condensate, but let us already present here the expression for the corrected Hamiltonian.
It can be written as \begin{equation} H_{\text{Tot}}({\cal A}_\ell,\Theta_A) = \tilde H_{\text{Bulk}} ({\cal A}_\ell) + \tilde H_{\text{Bdy}} ({\cal A}_\ell) + H^{(\Theta_A)} ({\cal A}_\ell,\Theta_A) \label{Htot} \end{equation} where $\tilde H_{\text{Bulk}} ({\cal A}_\ell)$ and $\tilde H_{\text{Bdy}} ({\cal A}_\ell)$ are given by (\ref{tHBulk}), (\ref{tHBdy}) and the $\Theta_A$ term is \begin{multline} H^{(\Theta_A)}({\cal A}_\ell,\Theta_A) = \alpha \int {\rm d}^3 x {\rm d}z \Big \{ \epsilon^{0abcd} (\partial_b \Theta_A) \left [ 3 {\cal F}_{cd}^V (\partial_0 {\cal A}_a^V) + {\cal F}_{cd}^A (\partial_0 {\cal A}_a^A) \right ] \\[1ex] - \frac14 \epsilon^{\ell m n p q} (\partial_\ell \Theta_A) ( 3 {\cal F}_{mn}^V {\cal F}_{pq}^V + {\cal F}_{mn}^A {\cal F}_{pq}^A ) \Big \} \,. \end{multline} For our ansatz \eqref{newansatz} with the boundary conditions \eqref{boundarystuff} the $\Theta_A$ term in the Hamiltonian takes the form \begin{equation} \begin{aligned} H^{(\Theta_A)}({\cal A}_\ell,\Theta_A) &= 2 \alpha \int {\rm d}^3 x {\rm d}z\, (\partial_0 \Theta_A) \left [ 3 B \partial_z a_V + \frac{k}{2} \partial_z (3 h_V^2 + h_A^2) \right ] \\[1ex] &= - \frac23 {\cal H}_0 \int {\rm d}z \, \partial_z \hat g_A \left [ 3 \hat B \hat a_V + \frac{\hat k}{2} (3 \hat h_V^2 + \hat h_A^2) \right ] \\[1ex] &= - \frac23 {\cal H}_0 \int {\rm d}z \, \partial_z \hat g_A \left [ 3 \Big ( \hat b_V + \frac{\hat k}{2} \hat h_V^2 \Big ) + \frac{\hat k}{2} \hat h_A^2\right ] - 4 {\cal H}_0 \hat \rho \hat \mu_A \,. \end{aligned} \end{equation} Specialising to $g_A(z) = - f_A(z)$ we get \begin{multline} \label{e:HTheta} H^{(\Theta_A)}({\cal A}_\ell,\Theta_A) = - \frac23 {\cal H}_0 \int {\rm d}z \, \frac{1}{K_z} \left [ \hat b_V + \frac{ \hat k}{2} \hat h_V^2 + \frac{ \hat k}{2} \hat h_A^2 \right ] \left [ 3 \Big ( \hat b_V + \frac{\hat k}{2} \hat h_V^2 \Big ) + \frac{\hat k}{2} \hat h_A^2\right ] \\[1ex] - 4 {\cal H}_0 \hat \rho \hat \mu_A \,. \end{multline} In the homogeneous case we get \begin{equation} \label{e:HTheta_hom} H^{(\Theta_A)}({\cal A}_\ell,\Theta_A) = - {\cal H}_0 \hat C_A^2 \left [ \frac{\sinh( \pi \hat B)}{\hat B} + \pi \right ] - 4 {\cal H}_0 \hat \rho \hat \mu_A \,. \end{equation} This boundary term can have drastic consequences for the phase structure of the theory: we will see in section~\ref{s:inhomogeneous} that it disfavours non-homogeneous configurations. We should emphasise that the procedure for introducing the A'-formalism in the holographic context is, unfortunately, rather ambiguous. This is essentially because the required boundary condition on the gauge transformation parameter $\Theta_A(z)$, given in \eqref{thetadef}, does not uniquely specify the behaviour of $\Theta_A(z)$ in the bulk. In contrast, this kind of ambiguity does not appear in field theory, as the gauge transformation which untwists the fermions, and ``moves'' the chemical potential into the temporal component of the gauge potential, is unique. Instead of using the A'-formalism one could consider doing the holographic computation directly in the B-formalism. This would require writing the solution in the ${\cal A}_0=0$ gauge (the fermions are not directly accessible, so it is unclear whether additional changes are required to implement the twisting). This does lead to a chiral magnetic effect, but the Hamiltonian again turns out to have no minimum for non-homogeneous configurations. Some details are given in appendix~\ref{a:Bformalism}.
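We close by noting that, whichever formalism one adopts, the gauge-invariant definition \eqref{chiralmu} is unambiguous for our ansatz. Spelling it out: in the ${\cal A}_z = 0$ gauge one has ${\cal F}_{z0} = \partial_z f$, so that the boundary values \eqref{boundarystuff5d} give \begin{equation} - \frac12 \int_{-\infty}^{\infty}\! {\rm d}z \, {\cal F}_{z0} = - \frac12 \left [ f(\infty) - f(-\infty) \right ] = - \frac12 \left [ ( \mu_B - \mu_A ) - ( \mu_B + \mu_A ) \right ] = \mu_A \, , \end{equation} independently of the profile of $f(z)$ in the interior.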
\subsection{Perturbative stability analysis of the homogeneous solution} \label{s:perturbative_stability} In this section we will perturbatively analyse the stability of the homogeneous solution \eqref{homosolu}, in order to show that this configuration is unstable and wants to ``decay'' to a non-homogeneous solution~\eqref{newansatz} (whose explicit form will be found later). Our analysis here is a revision of the work done in \cite{Kim:2010pu}, but our findings differ from theirs in an important way which is crucial for the remainder of our paper. Our starting point is given by the equations of motion linearised around the configuration \eqref{homosolu}. Following \cite{Kim:2010pu}, we will look for fluctuations of the modes transverse to the direction of the external field, i.e.~along $(A_2,A_3)$, since these should lead to the formation of a chiral spiral. These fluctuations are given by \begin{equation} \delta A_{i} = \delta A_{i}(\omega,k) e^{-i \omega t + i k x_1} \quad \quad (i=2,3) \, . \end{equation} The equations of motion for the fluctuations $(\delta A_2,\delta A_3)$ are coupled, but they diagonalise in the complex basis \begin{equation} \delta A^{(\pm)}\equiv \delta A_2 \pm i \delta A_3 \, , \end{equation} where the equations become \begin{equation} \label{perturba} K_z^{-1/3}(\omega^2 - k^2) \delta A^{(\pm)} + M_{\text{KK}}^2 \partial_z(K_z \partial_z \delta A^{(\pm)}) \pm \frac{N_c}{8\pi^2 \kappa} (k \mathcal{F}_{0z} + \omega \mathcal{F}_{1z}) \delta A^{(\pm)} =0 \, . \end{equation} Here $\mathcal{F}_{1z},\mathcal{F}_{0z}$ are the field strengths of the background homogeneous solution \eqref{fieldstr}, and we will express all results in terms of the constants $C_A$ and $C_B$ (instead of~$\mu_A$ and~$j$) in order to enable a simpler comparison with the results of \cite{Kim:2010pu}. Given values of $C_A$ and $C_B$, we numerically solve equation~\eqref{perturba}. As usual in perturbation theory, solutions with real $\omega$ represent fluctuations that are stable, while those for which $\omega$ has a positive imaginary part correspond to instabilities, since they grow exponentially in time. Fluctuations for which $\omega=0$ are marginal and have to be analysed in the full nonlinear theory in order to see if they correspond to unstable directions in configuration space. We will argue now that, while \cite{Kim:2010pu} has correctly identified the perturbatively unstable solutions with complex $\omega$, they have missed the marginally unstable modes, which are actually unstable in the full theory. In the following section we will then explicitly construct the new vacua corresponding to these marginal modes. Before presenting solutions to the equation \eqref{perturba}, observe that this equation exhibits the symmetry $t\rightarrow -t,\omega\rightarrow -\omega$, which means that all solutions will come in pairs $(\omega, -\omega)$. Additionally, when solving this equation, one looks for \emph{normalisable} solutions, i.e.~solutions that behave as $\delta A^{(\pm)} \sim 1/z$ near the boundaries. \begin{figure}[t] \includegraphics[width=.31\textwidth]{dispersion_CB0_1}~~ \includegraphics[width=.31\textwidth]{dispersion_CB0_2}~~ \includegraphics[width=.31\textwidth]{dispersion_CB0_3} \caption{\label{f:dispersion_relations_CB0} Dispersion relation for small fluctuations at vanishing $C_B$ and increasing values of $C_A$.
In the last plot, the region between the two $\omega=0$ modes corresponds to modes with positive imaginary $\omega$, hence signalling a genuine instability. This analysis follows Kim et al.~\cite{Kim:2010pu}.} \end{figure} Let us start by discussing the solutions with only $C_A$ non-vanishing. At $B=0$ these correspond to solutions with $j=0$. Samples of these solutions are shown in figure~\ref{f:dispersion_relations_CB0}. We see that as $|C_A|$ is increased, the two branches of solutions come closer to the $\omega=0$ axis and then touch (the middle plot in figure \ref{f:dispersion_relations_CB0}). For all these values of $|C_A|$ and any given momentum $k$, there is always a real-$\omega$ solution. However, as $|C_A|$ is increased even more, the two branches of solutions separate along the $k$-axis, so that there is a region of momenta for which there are no real $\omega$ solutions (see the third plot of figure \ref{f:dispersion_relations_CB0}). For these ``forbidden'' values of the momenta, one can explicitly find solutions with complex $\omega$, which clearly signal a genuine instability of the solution. These modes have previously been found in \cite{Kim:2010pu}. \begin{figure}[t] \includegraphics[width=.31\textwidth]{dispersion_CBn0_1}~~ \includegraphics[width=.31\textwidth]{dispersion_CBn0_2}~~ \includegraphics[width=.31\textwidth]{dispersion_CBn0_3} \caption{\label{f:dispersion_relations_CBnon0} Dispersion relation at fixed value of $C_A$ and increasing values of $C_B$. Modes with $\omega=0$ occur for sufficiently large $C_B$, but in the region on the $\omega=0$ axis between these two roots, there are no strictly unstable modes. Nevertheless, these $\omega=0$ modes could signal the appearance of a new ground state.} \end{figure} Let us now turn on $|C_B|$, while keeping $|C_A|$ fixed. Samples of these solutions are shown in figure \ref{f:dispersion_relations_CBnon0}. We see that as $|C_B|$ is increased, the two branches of solutions shift in the vertical direction, while the distance between them remains non-vanishing (which is the reason why the other branch is not visible in the second and third plots). For some value of $|C_B|$, the upper branch crosses the $\omega=0$ axis, and continues towards negative $\omega$. We thus see that for large enough $|C_B|$, marginal $\omega=0$ modes are always present in the spectrum. In the next section, we will show that these modes, which were previously missed in the literature, are actually unstable once non-linearities are taken into account. \begin{figure}[t] \begin{center} \includegraphics[width=.7\textwidth]{root_movements} \caption{\label{f:physical_modes} The physical picture (not to scale) that arises from the analysis of the dispersion relation. The plot displays the location of the physical modes at the extrema of the curves in figures~\protect\ref{f:dispersion_relations_CB0} and \protect\ref{f:dispersion_relations_CBnon0}, but now depicted in the complex $\omega$ plane.} \end{center} \end{figure} These findings are summarised in figure~\ref{f:physical_modes}. Depicted there is the behaviour of the frequencies $\omega$ of the fluctuation modes at the `extrema' of the dispersion relation plots, as a function of the two parameters $C_A$ and $C_B$. For a generic value of $C_B$ and $C_A=0$, one root is always positive. As $C_A$ is increased from zero, the positive root (red dot) moves towards the left, until it becomes zero. This defines the lower, solid curve of marginal modes in figure~\ref{f:physical_modes}.
Above this curve there is always a marginally stable mode in the spectrum, and thus potentially an instability. As $C_A$ is increased further, both roots eventually develop a positive imaginary part, and we enter the region of strictly unstable modes (the blue shaded region). This area was previously discussed in \cite{Kim:2010pu}.\footnote{As already mentioned, we disagree with their interpretation of $C_B$ being related to the baryon chemical potential; see also below.} \vfill\eject \subsection{The inhomogeneous solutions to the equations of motion} \label{s:inhomogeneous} Having established the location of the unstable and marginally stable modes in the fluctuation spectrum of the homogeneous solution, we would now like to explicitly construct the non-homogeneous solutions to which they are expected to decay, and study their properties. In our previous work~\cite{Bayona:2011ab} we explicitly constructed a non-homogeneous solution in the presence of a non-vanishing \emph{axial} chemical potential, and in the absence of a magnetic field. We will therefore first discuss solutions with non-zero $\mu_A$ and non-zero $B$, which are a natural generalisation of those considered before. We will then introduce a non-zero potential $j$ as well and discuss how the solutions behave as a function of this parameter. The upshot of our previous analysis \cite{Bayona:2011ab} was that for an axial chemical potential larger than some critical value, a non-homogeneous solution is formed. The relation between the chemical potential and the axial particle density was almost linear, similar to the homogeneous case. However, a given particle density was achieved for a smaller value of the chemical potential than in the homogeneous configuration. The wave number characterising the period of the non-homogeneous solution turned out to be very weakly sensitive to the actual value of the particle charge density in the canonical ensemble (or to the value of the chemical potential in the case of the grand canonical ensemble). The first thing we would like to know is whether any of these characteristics change in the presence of an external magnetic field. Our starting point is the nonlinear system of equations \eqref{hatfequation}, \eqref{hataequation} and \eqref{hathequation}. We first observe that the function $\hat{f}(z)$ appears only in the first equation \eqref{hatfequation}, and this equation can be directly integrated once we determine the functions $\hat{b}(z)$ and $\hat{h}(z)$ from the other two equations. Hence, we first need to solve equations \eqref{hataequation} and \eqref{hathequation}. The parameters $\hat{B}$ and $\hat{k}$ in these equations correspond to the external magnetic field and the momentum of the transverse spiral, and are fixed at this stage. There are then four undetermined constants corresponding to non-normalisable and normalisable modes for each of the functions $\hat{b}$ and $\hat{h}$. However, we solve equations (\ref{hataequation}) and (\ref{hathequation}) requiring that the transverse spiral describes a normalisable mode, \begin{equation} \hat{h}(z) = \frac{h_0}{z} + \cdots \, , \end{equation} i.e.~we impose that the only external field is the magnetic field, and that there are no transverse external fields. \begin{figure}[t] \begin{center} \includegraphics[width=.45\textwidth]{nonlinear_CA_CB_Bfixed_new}~~ \caption{Value of $h_0$ versus $k$ at fixed $B=1.0$, for increasing values of $|C_B|$.
Increasing $|C_B|$ makes the curve go up, indicating that a new, fully non-linear ground state exists for increasing $|C_B|$ and decreasing $|C_A|$. That is to say, the $\omega=0$ fluctuation modes are the indicators of the transition to a new ground state, not the ${\rm Im}(\omega)>0$ modes.\label{f:varyCB}} \end{center} \end{figure} In contrast, the function $\hat{b}(z)$ (or alternatively, $\hat{a}(z)$, see \eqref{e:bversusa}), which describes the longitudinal field, does have a non-normalisable component. For example, on the positive side of the U-shaped brane it generically behaves as \begin{equation} \label{bexpansions} \hat{b}(z) = b_0 + \frac{b_1}{z} + \cdots \quad (z\rightarrow \infty)\,, \end{equation} where the coefficients $b_0$ and $b_1$ are given by \begin{equation} \begin{aligned} b_0 &= C_A \cosh\left(\frac{\pi B}{2}\right) + C_B \sinh\left(\frac{\pi B}{2}\right)\,, \\[1ex] b_1 &= B C_B \cosh\left(\frac{\pi B}{2}\right) -B C_A \sinh\left(\frac{\pi B}{2} \right) \,. \end{aligned} \end{equation} However, just like for the homogeneous solution, this non-normalisable component of the solution corresponds to a gradient of the $\eta'$ field and not to an external field (we will provide more evidence for this below). Its unusual appearance as a non-normalisable mode is a consequence of our choice of the ${\cal A}_z=0$ gauge. We should note that at this stage we do not impose that $\hat{b}$ or $\hat{h}$ are of definite parity. We numerically solve the equations of motion using the shooting method; to do this we only need to specify the three undetermined constants $h_0$, $C_A$ and $C_B$ on one side of the U-shaped brane, as well as the parameters $B$ and $k$ in the equations (a minimal sketch of a single shooting step is given below). We solve the equations for various values of the constants and parameters, keeping only those solutions for which $\hat{h}$ is normalisable. \begin{figure}[t] \begin{center} \vspace{1cm} \includegraphics[width=.43\textwidth]{cA_versus_k_cB0_B1}\quad \includegraphics[width=.43\textwidth]{j_versus_k_cB0_B1} \end{center} \caption{Data for solutions at $C_B=0$, for the (arbitrary) value $\mu_A=-4.83$. The second plot clearly shows that when $B\not=0$, constant $j$ solutions require a scan through the full parameter space $\{h_0,k,C_A,C_B\}$.\label{f:cB0}} \end{figure} A first indication of the behaviour of the solutions can be obtained by looking at the effect of increasing $C_B$ at a non-zero value of $B$. Remember from section~\ref{s:perturbative_stability} that there exists a marginally stable mode in the spectrum, which follows the downward bending curve in figure~\ref{f:physical_modes}. Along this curve there might be a decay to a new ground state. That this indeed happens is easily confirmed by looking for nonlinear solutions at non-zero $C_A$ and $C_B$. A number of physical solutions are depicted in figure~\ref{f:varyCB}. They confirm that the marginal modes found in the previous section in fact correspond to true instabilities and new ground states. However, these figures say little about the actual physics, as neither $C_A$ nor $C_B$ is a physical parameter. It is tempting to associate $C_B$ with a baryon chemical potential, but as we have already mentioned, this is incorrect, as the parameter $\mu_B$ completely decouples and does not influence the value of the Hamiltonian. Instead, the correct interpretation is that the physical parameters in our problem are $\mu_A$ and $j$, which happen to be related in a non-trivial way to $C_A$ and $C_B$.
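To make the procedure concrete, the following is a minimal sketch of a single shooting step, written in Python/SciPy for readability rather than in the C\raisebox{.4ex}{\scriptsize ++} used for our production runs; the near-boundary data are truncated at leading order, the bisection is performed on $h_0$ at fixed $(C_A, C_B, B, k)$, and all parameter values are purely illustrative. In the coordinate $u = \arctan z$, where $K_z \partial_z = \partial_u$ and $K_z = 1 + z^2 = \sec^2 u$ (as implied by the substitution $\tilde z = \arctan z$ used above), the master equations \eqref{hataequation} and \eqref{hathequation} become $\hat b'' = \hat B^2 ( \hat b + \tfrac12 \hat k \hat h^2 )$ and $\hat h'' = K_z^{2/3} \hat k^2 \hat h + \hat k \hat h ( \hat b + \tfrac12 \hat k \hat h^2 )$.
\begin{verbatim}
# Schematic shooting step for the coupled master equations, in u = arctan(z).
# Illustrative sketch only, not the production C++/odeint code.
import numpy as np
from scipy.integrate import solve_ivp

Bhat, khat = 1.0, 0.8           # illustrative values of \hat B and \hat k
UMAX = np.pi/2 - 1e-3           # cut off the boundaries u = +-pi/2

def rhs(u, y):
    b, db, h, dh = y
    Kz  = 1.0/np.cos(u)**2      # K_z = 1 + z^2 = sec^2(u)
    src = b + 0.5*khat*h**2
    return [db, Bhat**2*src, dh, Kz**(2.0/3.0)*khat**2*h + khat*h*src]

def h_far(h0, CA, CB):
    """Integrate from u = -UMAX and return h at u = +UMAX."""
    u0 = -UMAX
    # leading near-boundary data: b solves b'' = B^2 b close to the cutoff,
    # while h vanishes linearly in the cutoff distance (i.e. like 1/|z|);
    # the parametrisation of (CA, CB, h0) here is our own convention
    b0  = CA*np.cosh(Bhat*u0) + CB*np.sinh(Bhat*u0)
    db0 = Bhat*(CA*np.sinh(Bhat*u0) + CB*np.cosh(Bhat*u0))
    y0  = [b0, db0, h0*(np.pi/2 + u0), h0]
    sol = solve_ivp(rhs, [u0, UMAX], y0, rtol=1e-10, atol=1e-12)
    return sol.y[2, -1]

def shoot_h0(CA, CB, lo=0.1, hi=5.0):
    """Bisect on h0 until h(+UMAX) ~ 0, i.e. the spiral is normalisable;
    returns None when no sign change is bracketed by [lo, hi]."""
    flo = h_far(lo, CA, CB)
    if flo*h_far(hi, CA, CB) > 0:
        return None
    for _ in range(60):
        mid  = 0.5*(lo + hi)
        fmid = h_far(mid, CA, CB)
        if flo*fmid > 0:
            lo, flo = mid, fmid
        else:
            hi = mid
    return 0.5*(lo + hi)

print(shoot_h0(CA=-2.4, CB=0.0))  # one candidate point on an h0-versus-k curve
\end{verbatim}
The actual scan wraps such a step in further root-finding loops, adjusting $C_A$ and $C_B$ as functions of $k$ so as to hold the physical parameters $\mu_A$ and $j$ fixed, which is what makes the problem effectively four-dimensional.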
Families of solutions (parametrised by~$k$) at fixed $\mu_A$ and $j$ lie on curves in the space spanned by $C_A$ and $C_B$, which only become straight lines at $C_B=0$ when $B=0$. This is once more clearly visible in figure~\ref{f:cB0}, which depicts a family of solutions at constant $\mu_A$, parametrised by $k$, at $C_B=0$; these clearly do not have a constant value of $j$. In order to find solutions with both $\mu_A$ and $j$ fixed, we need to allow for a variation of both $C_A$ and $C_B$ as a function of $k$. Even with the magnetic field fixed to a particular value, this still means that the normalisable solutions lie on a curve in a four-dimensional parameter space spanned by $\{h_0, k, C_A, C_B\}$. This makes a brute-force scan computationally infeasible. Independently of the large dimensionality of this problem, we also found that the larger $\mu_A$ cases require substantial computational time, because the asymptotic value $h(-\infty)$ varies rather strongly as a function of $k$ and the other parameters. In other words, the valley of solutions is rather steep and the bottom of the valley, where $h(z)$ is normalisable at $z\rightarrow-\infty$, is difficult to trace. In this respect, it is useful to note that a solver written in C\raisebox{.4ex}{\scriptsize ++} using odeint~\cite{Ahnert:2011a} and GSL~\cite{Galassi:2009a} outperformed our Mathematica implementations by two to three orders of magnitude (!) in these computations. The details of the procedure which we followed can be found in appendix~\ref{a:scanning}. We first focus on solutions at the fixed and rather arbitrary value $B=1$; the dependence on $B$ will be discussed in the next section. We will mainly discuss $j=0$ solutions. Our numerical investigations of $j\neq 0$ solutions show that all of these actually have higher free energy, and are thus not true ground states of the system (see below). In figure \ref{f:energy_overview} we display a set of configurations at constant $\mu_A$ and vanishing~$j$. Also depicted is the difference between the Hamiltonian of the homogeneous solution for this pair of $\mu_A, j$ values and the Hamiltonian of the non-homogeneous solution. \begin{figure}[t] \begin{center} \vspace{-2ex} \hbox{\hspace{-.6cm}\includegraphics[width=.6\textwidth]{energy_overview}\hspace{-1.5cm}\includegraphics[width=.6\textwidth]{h0_k_overview}} \vspace{-2cm} \end{center} \caption{Left: Comparison of the energy of the non-homogeneous ground state (solid curve) with the energy of the homogeneous state (dashed curves) for $j=0, B=1$ and four values of $\mu_A$; from top to bottom $\mu_A = -5.5, -6.0, -6.5$ and $-7.5$. Right: $h_0$ versus $k$, for the same values of the parameters (from bottom to top).\label{f:energy_overview}} \end{figure} As in earlier work without magnetic field~\cite{Ooguri:2010xs,Bayona:2011ab}, there is a family of solutions parametrised by the wave momentum $k$, and the physical solution is the one for which the Hamiltonian is minimised. Perhaps somewhat surprisingly, for smaller values of the chemical potential (we have done computations up to about $\mu_A=-8$) the momentum $k_{\text{g.s.}}$ at which the non-homoge\-neous ground state attains its minimum energy is only very weakly dependent on $\mu_A$. Our numerics for the range of $\mu_A$ up to $-7.5$ show that the ground state momentum for all these cases equals $0.83$ to within less than one percent.
Furthermore, this ground state momentum is also the same (to within numerical accuracy) as the ground state momentum for the $B=0$ case analysed in \cite{Bayona:2011ab}. This is despite the fact that the actual solutions are quite different. It would be interesting to understand this behaviour better. If one evaluates the Hamiltonian of the A'-formalism~\eqref{e:HTheta}, one finds that all non-homogeneous configurations always have higher energy than the corresponding homogeneous one at the same values of $\mu_A$ and $j$. We take this as a strong sign that there is still something not yet understood about the A'-formalism, as the analysis of section~\ref{s:perturbative_stability} clearly indicates a perturbative instability. It is in principle possible that we are not looking at the correct ansatz for the ground state, but we consider this unlikely. A similar statement holds for the B-formalism. The dependence of these solutions on $j$ can also be computed, and is depicted in figure~\ref{f:vary_j}. From those plots one observes two things. Firstly, a minimisation of the Hamiltonian with respect to $j$ drives the system to $j=0$. Secondly, a non-zero $j$ leads to a non-vanishing $J^1_A$ current, which can be interpreted as an $\eta'$-gradient. Together, these observations show that the preferred non-homogeneous state is one with a vanishing $\eta'$-gradient. \begin{figure}[t] \begin{center} \hbox{\hspace{-.8cm}\includegraphics[width=.6\textwidth]{vary_j}\hspace{-1cm}\includegraphics[width=.6\textwidth]{j1a}}\vspace{-1cm} \end{center} \caption{The dependence of the Hamiltonian and the current $J^1_A$ on the parameter $j$ (showing $j=0$, $0.1$, $0.3$ and $0.5$ bottom to top on the left, top to bottom on the right). This shows that minimisation with respect to~$j$ drives the system to $j=0$, both for the homogeneous and for the non-homogeneous state, and that this also drives the $\eta'$-gradient sitting in $J^1_A$ to zero. Under $j\rightarrow -j$ the Hamiltonian is invariant while $J^1_A$ changes sign. \label{f:vary_j}} \end{figure} \begin{figure}[p!] \begin{center} \hbox{\hspace{-1.6cm}\includegraphics[width=.45\textwidth]{tricky_numerics_B19}\hspace{-1.2cm} \includegraphics[width=.45\textwidth]{tricky_numerics_B21}\hspace{-1.2cm} \includegraphics[width=.45\textwidth]{tricky_numerics_B23}} \vspace{-1cm} \end{center} \caption{The sign of $h(z\rightarrow-\infty)$, where red and orange regions denote minus and plus respectively, and grey regions correspond to solutions that diverge when $z\rightarrow-\infty$. Normalisable non-homogeneous solutions sit at the boundary between red and orange regions. The branch that starts near $k=0.2$ is the relevant one, as it has the lowest energy. It becomes increasingly difficult to find numerically as it is sandwiched in a steep valley as $k$ increases. All plots are at $C_A=-2.4$, with $B=1.9$, $2.1$ and $2.3$ respectively. Note the different axis scales on the first plot.\label{f:tricky_numerics}} \vspace{1cm} \begin{center} \includegraphics[width=.49\textwidth]{increase_B_naive_effect_h0} \includegraphics[width=.49\textwidth]{increase_B_naive_effect_mu} \caption{Left: The increase of $B$ from $0$ to $1.0$ to $1.5$ (bottom black to top orange curves) at $C_B=0$ and fixed $C_A=-2.4$ has the effect of raising the curves of normalisable solutions. This implies that a larger magnetic field requires a smaller value of $|C_A|$ to trigger an instability.
Right: However, one cannot conclude anything about the effect at constant $\mu_A$ from this, since this physical parameter varies over the curves in the figure on the left. \label{f:naiveBincrease}} \end{center} \end{figure} \subsection{Critical magnetic field} \begin{figure}[t] \begin{center} \hbox{\hspace{-.5cm}\includegraphics[width=.52\textwidth]{B_suppresses_instability} \includegraphics[width=.52\textwidth]{B_suppresses_instability_60}} \vspace{-1cm} \end{center} \caption{Solutions at constant $\mu_A=-5.0$ (left) and $\mu_A=-6.0$ (right) and $j=0$, for various values of the magnetic field. This shows that a magnetic field \emph{suppresses} the instability to a non-homogeneous ground state, and there is a critical magnetic field $B_{\text{crit}}(\mu_A)$ above which the non-homogeneous solution ceases to exist.\label{f:B_suppresses_instability}} \vspace{.5cm} \begin{center} \includegraphics[width=.45\textwidth]{muAcvsBc}\hspace{1cm} \includegraphics[width=.45\textwidth]{CAcvsBc} \end{center} \caption{Results from a linearised analysis close to the critical magnetic field. Left: critical chemical potential (i.e.~the potential for which a non-homogeneous ground state first develops) versus critical magnetic field (i.e.~the field for which the solution disappears). Right: as determined earlier for the non-linear case, the parameter $C_A$ goes to zero (asymptotically) as $B$ increases. \label{f:close_to_critical_mu_CA}} \end{figure} \begin{figure}[t] \begin{center} \includegraphics[width=.45\textwidth]{kcvsBc}\hspace{1cm} \includegraphics[width=.45\textwidth]{kcvsmuAc} \end{center} \caption{Dependence of the ground state momentum $k_{\text{g.s.}}$ of the critical solution on the magnetic field or the chemical potential, computed in the linear approximation. The non-linear solutions which we computed cover the regime up to about $B_c=2$. One should keep in mind that the linear approximation will break down for sufficiently large values of $B_c$.\label{f:close_to_critical_k}} \end{figure} For larger magnitudes of the external magnetic field, the numerics become increasingly expensive as the valleys of solutions to the equations of motion rapidly become steeper and are more and more closely approached by regions in parameter space in which no regular solutions exist (i.e.~regions in which $h(z\rightarrow-\infty)$ diverges). See figure~\ref{f:tricky_numerics} for an impression. A first analysis which one can make is to simply scan for normalisable solutions at fixed $C_A$ and $C_B$, for increasing values of $B$. This leads to plots like those in figure~\ref{f:naiveBincrease}. One observes that a larger magnetic field requires a smaller $|C_A|$ for the chiral spiral solution to exist. In \cite{Kim:2010pu} this effect was seen at the level of the perturbative instability analysis, and led the authors to conclude that a magnetic field enhances the instability. However, this conclusion is premature and actually incorrect. What one needs to do is to analyse the effect of the magnetic field on solutions at fixed values of $\mu_A$ and $j$, not at fixed values of the unphysical parameters $C_A$ and $C_B$. If one does this more elaborate analysis, the conclusion is actually the opposite: the magnetic field tends to stabilise the homogeneous solution, and there exists a critical $B_{\text{crit}}(\mu_A)$ above which the non-homogeneous solution ceases to exist. This can be seen in figure~\ref{f:B_suppresses_instability}, where solutions at fixed $\mu_A$ and $j$ are depicted.
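For orientation, the critical field just referred to can be characterised as
\[
B_{\text{crit}}(\mu_A) \,=\, \sup\bigl\{\,B \;\big|\; \text{a normalisable solution with } h_0\neq 0 \text{ exists at } (\mu_A,\,j=0)\,\bigr\}\,,
\]
a formalisation we state here purely for convenience. The behaviour visible in figure~\ref{f:B_suppresses_instability} is consistent with $h_0\rightarrow0$ as $B\rightarrow B_{\text{crit}}$ from below, which is what motivates the linearised treatment used below.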
By increasing $B$ sufficiently slowly while keeping the other physical parameters fixed, we can determine this $B_{\text{crit}}$ in numerical form. It is difficult to get a good picture of $B_{\text{crit}}$ versus $\mu_A$, as the numerics become expensive, for reasons we have mentioned earlier. However, we can make use of the fact that near the critical magnetic field, the parameter $h_0$ is small. Assuming that this implies that the entire function $h(z)$ is small, we can then use a linear approximation to the equations of motion, which is much easier to solve. The result of this linear analysis is depicted in figures~\ref{f:close_to_critical_mu_CA} and~\ref{f:close_to_critical_k}. The latter shows that the ground state momentum $k_{\text{g.s.}}$ is actually not as flat as the non-linear computations suggest. For larger $B$, one should keep in mind that the results may be invalidated for a variety of reasons. Firstly, the linear approximation may break down, as smallness of $h_0$ does not necessarily imply smallness of the full function $h(z)$. Secondly, we have seen that the valley near the non-linear solutions becomes very steep for large values of $B$, and hence the linear solution may deviate quite strongly from the non-linear one for large $B$. Thirdly, when $B$ is large, DBI corrections to the equations of motion may become relevant, as higher powers of the field strength are no longer necessarily small. For these reasons, one should be careful when interpreting the large $B_c$ regime of figure~\ref{f:close_to_critical_k}. \section{Conclusions and open questions} We have analysed in detail the instability of the Sakai-Sugimoto model in the presence of a chiral chemical potential $\mu_A$, an $\eta'$-gradient $j$ and a magnetic field $B$. We have shown that the presence of marginally stable modes (overlooked in~\cite{Kim:2010pu}) is a signal for decay towards a non-homogeneous, chiral-spiral-like ground state. Minimising the Hamiltonian on a constant $\mu_A$, constant $j$ curve leads to a unique ground state momentum $k_{\text{g.s.}}$ (see figure~\ref{f:energy_overview}), which for small $B$ is only weakly dependent on $\mu_A$. Increasing the magnetic field to sufficiently large values suppresses the instability and drives the system back to the homogeneous phase (figure~\ref{f:close_to_critical_k}). This result may be related to the effective reduction of the Landau levels in a strong magnetic field (see \cite{Basar:2012gm}). A linear analysis suggests that the ground state momentum may have a non-trivial dependence on the chemical potential, and might be compatible with a linear scaling at large $\mu_A$ (as in the Deryagin-Grigoriev-Rubakov non-homogeneous \mbox{large-$N_c$} QCD ground state~\cite{Deryagin:1992rw}). However, substantial additional work is necessary to determine whether this is indeed happening in the full non-linear theory. We have found that a recent proposal for the correction of the currents~\cite{Landsteiner:2011tf,Gynther:2010ed}, while capable of producing the chiral magnetic effect both in the homogeneous and non-homogeneous ground states, is incomplete. It (a) is not unique and (b) leads to a Hamiltonian which does not prefer the non-homogeneous ground state. We emphasise that this is a problem with the currents and boundary terms in the Hamiltonian, as the perturbative instability analysis is not affected by these corrections, and neither are the non-linear solutions.
\acknowledgments We thank Karl Landsteiner for discussions about~\cite{Gynther:2010ed,Landsteiner:2011tf}. This work was partially supported by STFC grant ST/G000433/1. The work of MZ was supported by an EPSRC Leadership Fellowship. \vfill\eject
\subsection{Data sources.} AR~Sco's location in the ecliptic plane, not far from the Galactic centre and only $2.5^\circ$ North-West of the centre of the Ophiuchus molecular cloud, means that it appears in many archival observations. It is detected in the FIRST $\SI{21}{\cm}$ radio survey$^{30}$, the Two Micron All Sky Survey (2MASS)$^{31}$, the Catalina Sky Survey (CSS)$^{7}$, and in the \textit{Herschel}, WISE\ and \textit{Spitzer}\ infrared satellite archives$^{32,33,34}$. Useful upper limits come from non-detections in the Australia Telescope $\SI{20}{\giga\Hz}$ (AT20G) survey$^{35}$ and the WISH survey$^{36}$. Flux measurements, ranges (when time-resolved data are available) and upper limits from these sources are listed in Extended Data Table~\ref{t:archive}. We supplemented these data with our own intensive observations on a variety of telescopes and instruments, namely: the $\SI{8.2}{\m}$ Very Large Telescope (VLT) with the FORS and X-SHOOTER optical/NIR spectrographs and the HAWK-I NIR imager; the $\SI{4.2}{\m}$ William Herschel Telescope (WHT) with the ISIS spectrograph and the ULTRACAM high-speed camera$^{6}$; the $\SI{2.5}{\m}$ Isaac Newton Telescope (INT) with the Intermediate Dispersion Spectrograph (IDS); the $\SI{2.4}{\m}$ Thai National Telescope with the ULTRASPEC high-speed camera$^{37}$; the UV/optical and X-ray instruments UVOT and XRT on the \textit{Swift}\ satellite; the COS UV spectrograph on the Hubble Space Telescope, \textit{HST}; radio observations on the Australia Telescope Compact Array (ATCA). Optical monitoring data came from a number of small telescopes. We include here data taken with a $\SI{406}{\mm}$ telescope at the Remote Observatory Atacama Desert (ROAD) in San Pedro de Atacama$^{38}$. Extended Data Table~\ref{t:obslog} summarises these observations. \subsection{The orbital, spin and beat frequencies.} The orbital, spin and beat frequencies were best measured from the small-telescope data because of their large time-base. For example, see the amplitude spectrum around the spin/beat components of the clear filter data from 19--28 July 2015 shown in Extended Data Fig.~\ref{f:ClearAmps}. The final frequencies, which give the dashed lines of Extended Data Fig.~\ref{f:ClearAmps}, were obtained from the CSS data. These consisted of 305 exposures, each of 30 seconds duration, spanning the interval from 30 May 2006 until 8 July 2013. We rejected 6 points which lay more than $4\sigma$ from the multi-sinusoid fits that we now describe. To search for signals in these sparsely sampled data, we first transformed the UTC times of the CSS data to a uniform timescale (TDB) and then corrected these for light-travel delays to the solar system barycentre. The periodogram of these data is dominated by the strong orbital modulation, which leaks so much power across the spectrum owing to the sparse sampling that the spin/beat component can only be seen after the orbital signal is removed. Once this was done, beat and spin components matching those of Extended Data Fig.~\ref{f:ClearAmps} could be identified (Extended Data Fig.~\ref{f:CRTSamps}). We carried out bootstrap multi-sinusoid fits to compute the distributions of the orbital, beat and spin frequencies. The orbital frequency closely follows a Gaussian distribution; the beat and spin distributions are somewhat non-Gaussian in their high and low frequency wings respectively, but are nevertheless well-defined. Statistics computed from these distributions are listed in Extended Data Table~\ref{t:freqs}.
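The following Python fragment sketches the two preparatory steps just described, time-scale conversion and pre-whitening by the orbital signal. It is a minimal illustration under stated assumptions (hypothetical array names, a stand-in observatory site, network-based lookups), not the analysis code itself.
\begin{verbatim}
# Schematic of the timing analysis (illustrative only): convert UTC
# MJDs to barycentric TDB, remove the dominant orbital signal, then
# inspect the residual periodogram for the beat/spin pair.
import numpy as np
from astropy.time import Time
from astropy.coordinates import SkyCoord, EarthLocation
from astropy.timeseries import LombScargle

def to_barycentric_tdb(mjd_utc):
    # Target and site lookups are illustrative and need network
    # access; explicit coordinates could be supplied instead.
    target = SkyCoord.from_name("AR Sco")
    site = EarthLocation.of_site("kitt peak")  # stand-in for CSS site
    t = Time(mjd_utc, format="mjd", scale="utc", location=site)
    return (t.tdb + t.light_travel_time(target, kind="barycentric")).value

def residual_power(t, y, nu_orb, freq_grid):
    # Least-squares fit and subtraction of a sinusoid at the orbital
    # frequency (cycles/day if t is in days), then a Lomb-Scargle
    # periodogram of the residuals.
    X = np.column_stack([np.ones_like(t),
                         np.cos(2 * np.pi * nu_orb * t),
                         np.sin(2 * np.pi * nu_orb * t)])
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return LombScargle(t, y - X @ coeffs).power(freq_grid)
\end{verbatim}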
Having established that the two pulsation frequencies are separated by the orbital frequency, we carried out a final set of fits in which we enforced the relation $\nu_{\mathrm{S}}-\nu_{\mathrm{B}}=\nu_{\mathrm{O}}$, but also allowed for a linear drift of the pulsation frequency in order to be sensitive to any secular change. This led to a significant improvement in $\chi^2$ ($> 99.99$\% significance on an $F$-test), which dropped from $326$ to $289$ for the 299 fitted points relative to a model in which the frequencies did not vary (after scaling uncertainties to yield $\chi^2/N \approx 1$ for the final fit). Bootstrap fits gave a near-Gaussian distribution for the frequency derivative with $\dot{\nu}=-\SI{2.86(36)e-17}{\hertz\per\s}$. Pulsations are detected at all wavelengths with suitable data other than X-rays, where limited signal ($\approx 630$ source photons in $\SI{10.2}{\kilo\s}$) leads to the upper limit of a $\SI{30}{\percent}$ pulse fraction quoted in the main text. The \textit{Swift}\ X-ray observations were taken in $\SI{1000}{\s}$ chunks over the course of more than one month and we searched for the pulsations by folding the data into 20 bins and fitting a sinusoid to the result. There were no significant signals on either the beat or spin periods or their harmonics. We used a power-law fit to the X-ray spectrum to deduce the slopes shown in Fig.~\ref{f:sed}. \subsection{The M star's spectral type and distance.} The CSS data establish the orbital period $P = \SI[separate-uncertainty=false]{0.14853528(8)}{\day}$, but not the absolute phase of the binary. This we derived from observations of the M star, which also led to a useful constraint upon the distance to the system. The VLT+FORS data were taken shortly before the photometric minimum, allowing a clear view of the M star's contribution. We used M star spectral-type templates developed from SDSS spectra$^{39}$ to fit AR~Sco's spectrum, applying a flux scaling factor $\alpha$ to the selected template and adding a smooth continuum to represent any extra flux in addition to the M star. The smooth spectrum was parameterised by $\exp(a_1+a_2\lambda)$ to ensure positivity. The coefficients $a_1$, $a_2$ and $\alpha$ were optimised for each template, with emission lines masked since they are not modelled by the smooth spectrum. Out of the templates available (M0--9 in unit steps), the M5 spectrum gave by far the best match with $\chi^2=24$,$029$ for 1165 points fitted compared to $>100$,$000$ for the M4 and M6 templates on either side (Extended Data Fig.~\ref{f:spectype}). The templates used were normalised such that the scaling factor $\alpha=(R_2/d)^2$. We found $\alpha=3.02\times10^{-21}$, so $R_2/d=5.5\times10^{-11}$. Assuming that the M star is close to its Roche lobe (there is evidence supporting this assumption in the form of ellipsoidal modulations of the minima between pulsations in the HAWK-I data, Fig.~\ref{f:lc}), its mean density is fixed by the orbital period, which means that its radius is fixed by its mass. Assuming $M_2=\SI{0.3}{\msun}$, for reasons outlined in the main text, we find that $R_2=\SI{0.36}{\rsun}$, and hence $d=\SI{149}{\parsec}$. This is an overestimate as the FORS spectrum was taken through a narrow slit. We estimated a correction factor by calculating the $i'$-band flux of the spectrum ($\SI{2.50}{\mjy}$) and comparing it to the mean $i'$-band flux ($\SI{4.11}{\mjy}$) of the ULTRACAM photometry over the same range of orbital phase.
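In other words, the implied slit throughput is $2.50/4.11\approx0.61$; since the template scaling is $\alpha=(R_2/d)^2$ at fixed $R_2$, correcting the spectroscopic flux upwards by this factor reduces the inferred distance by $\sqrt{0.61}$,
\[
d \simeq \SI{149}{\parsec}\times\sqrt{2.50/4.11}\approx\SI{116}{\parsec}.
\]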
This is approximate, given that the ULTRACAM data were not taken simultaneously with the FORS data and there may be stochastic variations in brightness from orbit to orbit; however, the implied $\SI{61}{\percent}$ throughput is plausible given the slit width of $0.7''$ and seeing of $\sim 1''$. The final result is the distance quoted in the main text, $d=\SI{116\pm16}{\parsec}$, whose uncertainty allows for uncertainties in the calibration of the surface brightness of the templates and in the slit-loss correction. We used the radius, spectral type and distance to estimate the $K_S$ flux density from the donor as $f_{Ks}=\SI{9.4}{\mjy}$. The minimum observed flux density from the HAWK-I data is $\SI{9.1}{\mjy}$. Uncertainties in the extrapolation required to estimate the $K_S$ flux, and in the ellipsoidal modulations, allow the two numbers to be compatible, but they suggest that the estimated distance is as low as it can be and that the M star dominates the $K_S$ flux at minimum light. The estimated M star fluxes for $i'$ and $g'$, $f_{i'} = \SI{1.79}{\mjy}$ and $f_{g'}=\SI{0.07}{\mjy}$, are comfortably less than the minimum observed fluxes of $\SI{2.57}{\mjy}$ and $\SI{0.624}{\mjy}$ in the same bands. We do not detect the white dwarf. The strongest constraint comes from the \textit{HST}\ far ultraviolet data, which at their lowest require $T_1<\SI{9750}{\K}$. A white dwarf model atmosphere of $T=\SI{9750}{\K}$, $\log g = 8$, corrected for slit losses, is plotted in Fig.~\ref{f:spectype}, and also (without slit losses) in Fig.~\ref{f:hstspec}, which shows the average \textit{HST}\ spectrum. Given the small maximum contribution of the white dwarf seen in these figures, the absence of absorption features from the white dwarf is unsurprising. \subsection{The M star's radial velocity.} We used spectra taken with the ISIS spectrograph on the WHT and X-SHOOTER on the VLT to measure radial velocities of the M star using the NaI 8200 doublet lines. These vary sinusoidally on the same $\SI{3.56}{\hour}$ period as the slowest photometric variation (Fig.~\ref{f:rvs}), hence our identification of this period as the orbital period. We fitted the velocities with \[V_R=\gamma+K_2\sin\left(2\pi(t-T_0)/P\right),\] with the period fixed at the value obtained from the CSS data, $P=\SI{0.14853528}{\day}$, and the systemic offset $\gamma$ left free for each distinct subset of the data, to accommodate variable offsets between datasets. We found $K_2=\SI{295\pm4}{\km\per\s}$ and $T_0=\SI[separate-uncertainty=false]{57264.09615(33)}{\day}$, thus the orbital ephemeris of AR~Sco is \[\mathrm{BMJD(TDB)}=57264.09615(33)+0.14853528(8)E,\] where $E$ is the cycle number, and the time scale is TDB, corrected to the barycentre of the solar system, expressed as a Modified Julian Day number ($\mathrm{MJD} = \mathrm{JD} -2400000.5$). This ephemeris is important in establishing the origin of the emission lines, as will be shown below. The radial velocity amplitude and orbital period along with Kepler's third law define the ``mass function'' \[\frac{M_1^3 \sin^3i}{(M_1+M_2)^2}=\frac{PK_2^3}{2\pi G}=\SI{0.395\pm0.016}{\msun},\] where $i$ is the orbital inclination. This is a hard lower limit to the mass of the compact object, $M_1$, which is only met for $i = 90^\circ$ and $M_2=0$. There is however a caveat to this statement: it is sometimes observed that irradiation can weaken the absorption lines on the side of the cool star facing the compact object, causing the observed radial velocity amplitude to be an over-estimate of the true amplitude$^{40,41}$.
If this effect applies here, as we suspect it might, both $K$ and the mass function limit would need to be reduced. Given the large intrinsic variability of AR~Sco, and the lack of flux-calibrated spectra, it was not possible to measure the absolute strength of NaI. We therefore searched for the influence of irradiation through another side effect, namely that it causes the radial velocity to vary non-sinusoidally$^{42}$. We failed to detect any obvious influence of irradiation through this method, but its effectiveness may be limited by the heterogeneous nature of our data, which required multiple systemic velocity offsets. Despite our failure to detect clear signs of the effect of irradiation upon the M star's radial velocities, we would not be surprised if the true value of $K$ were anything up to $\sim \SI{20}{\km\per\s}$ lower than we measure. However, with no clear evidence for the effect, in this paper we proceed on the basis that we have measured the true value of the M star's centre-of-mass radial velocity amplitude. This is conservative in the sense that any reduction in $K$ would move the mass limits we deduce to lower values, which would tilt the balance even more heavily towards a white dwarf as the compact star. The UV and optical emission lines come from the irradiated face of the M star and their amplitude compared to $K_2$ sets a lower limit on the relative size of the M star and hence, through Roche geometry, on the mass ratio $q = M_2/M_1$. Extended Data Fig.~\ref{f:roche} shows how the emission-line measurements lead to the quoted limit of $q > 0.35$, which in turn gives the lower limits $M_1>\SI{0.81}{\msun}$ and $M_2>\SI{0.28}{\msun}$ quoted in the main text. The orbital period of a binary star sets a lower limit on the mean densities of its component stars$^{43}$. Since the mean densities of main-sequence stars decrease with increasing mass, this implies that we can set an upper limit to the mass of any main-sequence component. In the case of AR~Sco we find that $\left<\rho_2\right> > \SI{8900}{\kg\per\m\cubed}$, which leads to $M_2<\SI{0.42}{\msun}$; the slightly larger value of $\SI{0.45}{\msun}$ quoted in the text allows for uncertainty in the models. The limit becomes an equality when the M star fills its Roche lobe, which we believe to be the case, or very nearly so, for AR~Sco. However, we expect that even in this case the number deduced still functions as an upper limit, because the mass-losing stars in close binaries are generally oversized and therefore less dense than main-sequence stars of the same mass$^{44}$. Indeed, systems with similar orbital periods to that of AR~Sco have donor star masses in the range $\SI{0.2}{\msun}$ to $\SI{0.3}{\msun}$$^{44}$. This, and the M5 spectral type, are why we favour a mass of $M_2 \approx\SI{0.3}{\msun}$, close to the lower limit on $M_2$. \subsection{Brightness temperature at radio wavelengths.} The pulsations in radio flux are a remarkable feature of AR~Sco, unique amongst known white dwarfs and white dwarf binaries. If we assume that, as at other wavelengths, and as suggested by the alignment of the second harmonic power with $2\nu_{\mathrm{B}}$ (Extended Data Fig.~\ref{f:pgram}), they arise largely from the M star, then we can deduce brightness temperatures from the relation \[ T_b = \frac{\lambda^2}{2\pi k} \left(\frac{d}{R_2}\right)^2 f_\nu .\] These work out to be $\approx \SI{e12}{\K}$ and $\approx \SI{e13}{\K}$ for the observations at $\nu = \SI{5.5}{\GHz}$ and $\nu = \SI{1.4}{\GHz}$ respectively.
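As an indicative cross-check, the $\SI{1.4}{\GHz}$ figure follows from the FIRST flux density of $\SI{8}{\mjy}$ (a survey mean for this variable source; Extended Data Table~\ref{t:archive}) together with the distance and radius derived above:
\begin{verbatim}
# Indicative brightness-temperature check at 1.4 GHz, using the FIRST
# flux density together with d = 116 pc and R_2 = 0.36 R_sun from the
# text; the flux is a survey mean, so the result is approximate.
import numpy as np
from astropy import units as u
from astropy import constants as const

nu   = 1.4 * u.GHz
f_nu = 8.0 * u.mJy
d    = 116 * u.pc
R2   = 0.36 * u.R_sun

lam = const.c / nu
T_b = (lam**2 / (2 * np.pi * const.k_B) * (d / R2)**2 * f_nu).to(u.K)
print(T_b)   # ~9e12 K, i.e. the ~1e13 K quoted above
\end{verbatim}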
Although the value at the lowest frequency exceeds the $\sim \SI{e12}{\K}$ limit at which severe cooling of the electrons due to inverse Compton scattering is thought to occur$^{45}$, this is not necessarily a serious issue given the short-term variability exhibited by the source. The limits can be lowered by appealing to a larger emission region, since the radio data in hand are not sufficient to be certain that the emission arises solely on the M star. Even so, the $\SI{0.98}{\minute}$ second-harmonic pulsations that are seen in the radio flux suggest an upper limit on the size of the emission region of $\SI{25}{\rsun}$ from light-travel time alone. This implies a minimum brightness temperature of $\SI{e9}{\K}$ at $\SI{1.4}{\GHz}$, showing clearly that the radio emission is non-thermal in origin. We assume that synchrotron emission dominates; while there may be thermal and cyclotron components at shorter wavelengths, there is no clear evidence for either. \subsection{Code availability.} The \textit{HST}, VLT and \textit{Swift}\ data were reduced with the standard instrument pipelines. The WHT and INT data were reduced with STARLINK software. Scripts for creating the figures are available from the first author apart from the code for computing the white dwarf model atmosphere, which is a legacy F77 code and complex to export. The atmosphere model itself however is available on request. \end{methods} \noindent \begin{enumerate} \setcounter{enumi}{29} \item Becker, R.~H., White, R.~L. \& Helfand, D.~J. The FIRST Survey: Faint Images of the Radio Sky at Twenty Centimeters. \textit{Astrophys.\ J.\ } \textbf{450,} 559--577 (1995). \item Skrutskie, M.~F. \textit{et al.} The Two Micron All Sky Survey (2MASS). \textit{Astron.\ J.\ } \textbf{131,} 1163--1183 (2006). \item Pilbratt, G.~L. \textit{et al.} Herschel Space Observatory. An ESA facility for far-infrared and submillimetre astronomy. \textit{Astron.\ \& Astrophys.\ } \textbf{518,} L1--L6 (2010). \item Wright, E.~L. \textit{et al.} The Wide-field Infrared Survey Explorer (WISE): Mission Description and Initial On-orbit Performance. \textit{Astron.\ J.\ } \textbf{140,} 1868--1881 (2010). \item Werner, M.~W. \textit{et al.} The Spitzer Space Telescope Mission. \textit{Astrophys.\ J.\ Supp.\ } \textbf{154,} 1--9 (2004). \item Murphy, T. \textit{et al.} The Australia Telescope 20 GHz Survey: the source catalogue. \textit{Mon.\ Not.\ R.\ Astron.\ Soc.\ } \textbf{402,} 2403--2423 (2010). \item De Breuck, C. \textit{et al.} A sample of ultra steep spectrum sources selected from the Westerbork In the Southern Hemisphere (WISH) survey. \textit{Astron.\ \& Astrophys.\ } \textbf{394,} 59--69 (2002). \item Dhillon, V.~S. \textit{et al.} ULTRASPEC: a high-speed imaging photometer on the 2.4-m Thai National Telescope. \textit{Mon.\ Not.\ R.\ Astron.\ Soc.\ } \textbf{444,} 4009--4021 (2014). \item Hambsch, F.-J. ROAD (Remote Observatory Atacama Desert): Intensive Observations of Variable Stars. \textit{Journal of the American Association of Variable Star Observers (JAAVSO)} \textbf{40,} 1003--1009 (2012). \item Rebassa-Mansergas, A. \textit{et al.} Post-common-envelope binaries from SDSS - I. 101 white dwarf main-sequence binaries with multiple Sloan Digital Sky Survey spectroscopy. \textit{Mon.\ Not.\ R.\ Astron.\ Soc.\ } \textbf{382,} 1377--1393 (2007). \item Hessman, F.~V. \textit{et al.}
Time-resolved spectroscopy of SS Cygni at minimum and maximum light. \textit{Astrophys.\ J.\ } \textbf{286,} 747--759 (1984). \item Wade, R.~A. \& Horne, K. The radial velocity curve and peculiar TiO distribution of the red secondary star in Z Chamaeleontis. \textit{Astrophys.\ J.\ } \textbf{324,} 411--430 (1988). \item Marsh, T.~R. A spectroscopic study of the deeply eclipsing dwarf nova IP Peg. \textit{Mon.\ Not.\ R.\ Astron.\ Soc.\ } \textbf{231,} 1117--1138 (1988). \item Faulkner, J., Flannery, B.~P. \& Warner, B. Ultrashort-Period Binaries. II. HZ 29 (=AM CVn): a Double-White Semidetached Postcataclysmic Nova? \textit{Astrophys.\ J.\ Lett.\ } \textbf{175,} L79--L83 (1972). \item Knigge, C., Baraffe, I. \& Patterson, J. The Evolution of Cataclysmic Variables as Revealed by Their Donor Stars. \textit{Astrophys.\ J.\ Supp.\ } \textbf{194,} 28--75 (2011). \item Kellermann, K.~I. \& Pauliny-Toth, I.~I.~K. The Spectra of Opaque Radio Sources. \textit{Astrophys.\ J.\ Lett.\ } \textbf{155,} L71--L78 (1969). \end{enumerate} \end{refsegment} \begin{efigure} \begin{center} \includegraphics[width=0.9\textwidth]{spectral_type.pdf} \end{center} \caption{\textbf{The optical spectrum of the white dwarf's M star companion.} A 10~minute exposure of AR~Sco taken with FORS on the VLT between orbital phases 0.848 and 0.895 (black). Other spectra: an optimally-scaled M5 template (green); the sum of the template plus a fitted smooth spectrum (red); AR~Sco minus the template, i.e. the extra light (magenta); a white dwarf model atmosphere of $T = \SI{9750}{\K}$, $\log g = 8.0$, the maximum possible consistent with the \textit{HST}\ data (blue). A slit-loss factor of $0.61$ has been applied to the models. The strong emission lines come from the irradiated face of the M star. \label{f:spectype}} \end{efigure} \begin{efigure} \begin{center} \includegraphics[width=0.9\textwidth]{avespec.pdf} \end{center} \caption{\textbf{\textit{HST}\ ultraviolet spectrum of AR~Sco.} This shows the mean \textit{HST}\ spectrum with geocoronal emission plotted in grey. The blue line close to the $x$-axis is a white dwarf model atmosphere of $T = \SI{9750}{\K}$, $\log g = 8.0$, representing the maximal contribution of the white dwarf consistent with the light curves. The radial velocities of the emission lines (Extended Data Fig.~\protect\ref{f:roche}) show that, like the optical lines, the ultraviolet lines mainly come from the irradiated face of the M star.\label{f:hstspec}} \end{efigure} \begin{efigure} \begin{center} \includegraphics[width=0.99\textwidth]{trail.pdf} \end{center} \caption{\textbf{Velocity variations of atomic emission lines compared to those of the M star.} \textbf{a}, \textbf{b} and \textbf{d} show emission lines from a sequence of spectra from the VLT+X-SHOOTER data; \textbf{c} shows the NaI~8200 absorption doublet from the M star. The dashed line shows the motion of the centre of mass of the M star deduced from the NaI measurements, while the dotted lines show the maximum range of radial velocities from the M star for $q = M_2/M_1 =0.35$. The emission lines move in phase with the NaI doublet but at lower amplitude, showing that they come from the inner face of the M star.\label{f:trail}} \end{efigure} \begin{efigure} \begin{center} \includegraphics[width=0.8\textwidth]{roche.pdf} \end{center} \caption{\textbf{The emission lines' origin relative to the M star.} Velocities of the lines were fitted with $V_R = -V_X \cos 2\pi \phi + V_Y \sin 2\pi \phi$. The points show the values of $(V_X,V_Y)$.
Red: the M star from NaI (by definition this lies at $V_X=0$). Blue: SiIV and HeII lines from the \textit{HST}\ FUV data. Green: H$\alpha$, $\beta$ and $\gamma$ from optical spectroscopy. The black and green plus signs mark the centres of mass of the binary and white dwarf respectively. The red line shows the Roche lobe of the M star for a mass ratio $q = 0.35$.\label{f:roche}} \end{efigure} \begin{efigure} \begin{center} \includegraphics[width=0.9\textwidth]{monit_pgram.pdf} \end{center} \caption{\textbf{Amplitude spectra from 9 days of monitoring with a small telescope.} \textbf{a}, Amplitude as a function of frequency around the $\SI{1.97}{\minute}$ signal from data taken with a $\SI{40}{\cm}$ telescope. \textbf{b}, The same at the second harmonic. \textbf{c} and \textbf{d}, The same as \textbf{a} and \textbf{b} after subtracting the beat frequency signals at $\nu_{\mathrm{B}}$ and $2\nu_{\mathrm{B}}$. Signals at $\nu_{\mathrm{S}} + \nu_{\mathrm{O}}$ and $2\nu_{\mathrm{S}} - \nu_{\mathrm{O}}$ are also apparent. \textbf{e}, The window function, computed from a pure sinusoid of frequency $\nu_{\mathrm{B}}$ and amplitude $0.18$ magnitudes (cf.\ \textbf{a}).\label{f:ClearAmps}} \end{efigure} \begin{efigure} \begin{center} \includegraphics[width=0.9\textwidth]{crts_pgram.pdf} \end{center} \caption{\textbf{Amplitude spectra from 7 years of sparsely sampled CSS data.} \textbf{a}-\textbf{c}, The amplitude as a function of frequency relative to the mean orbital (\textbf{a}), beat (\textbf{b}) and spin (\textbf{c}) frequencies listed in Extended Data Table~\protect\ref{t:freqs}. The grey line is the spectrum without any processing; the blue line is the spectrum after subtraction of the orbital signal. \label{f:CRTSamps}} \end{efigure} \newpage \begin{table} \begin{center} \begin{tabular}{lllp{1in}l} \hline Tel./Inst.
& Type & Wavelength & Date & Exposure \\ & & & & $T[\mathrm{s}]\times N$ \\ \hline VLT+FORS & Spectra & $420$ -- $\SI{900}{\nm}$ & 2015-06-03 & 600$\times$1 \\ WHT+ULTRACAM& Photometry & $u'$, $g'$, $r'$ & 2015-06-23 & 2.9$\times$768 \\ WHT+ULTRACAM& Photometry & $u'$, $g'$, $i'$ & 2015-06-24 & 1.3$\times$7634 \\ \textit{Swift}+UVOT/XRT & UV, X-rays& $\SI{260}{\nm}$, $0.2$ -- $\SI{10}{\keV}$ & 2015-06-23 -- 2015-08-03 & 1000$\times$10\\ VLT+HAWKI & Photometry & $K_S$ & 2015-07-06 & 2.0$\times$7020 \\ WHT+ISIS & Spectra & $354$ -- $539$, $617$ -- $\SI{884}{\nm}$ & 2015-07-16 & 20$\times$94\\ WHT+ISIS & Spectra & $354$ -- $539$, $617$ -- $\SI{884}{\nm}$ & 2015-07-17 & 300$\times$4\\ WHT+ISIS & Spectra & $356$ -- $520$, $540$ -- $\SI{697}{\nm}$ & 2015-07-19 & 30$\times$130\\ ROAD $\SI{40}{\cm}$ & Photometry & White light & 2015-07-19 -- 2015-07-28 & 30$\times$1932\\ WHT+ISIS & Spectra & $356$ -- $520$, $540$ -- $\SI{697}{\nm}$ & 2015-07-20 & 30$\times$210\\ INT+IDS & Spectra & $440$ -- $\SI{685}{\nm}$ & 2015-07-22 & 27$\times$300\\ INT+IDS & Spectra & $440$ -- $\SI{685}{\nm}$ & 2015-07-23 & 34$\times$300\\ ATCA & Radio & $5.5$, $\SI{9.0}{\GHz}$ & 2015-08-13 & 271$\times$10\\ WHT+ISIS & Spectra & $320$ -- $535$, $738$ -- $\SI{906}{\nm}$ & 2015-08-26 & 600$\times$8\\ WHT+ISIS & Spectra & $320$ -- $535$, $738$ -- $\SI{906}{\nm}$ & 2015-09-01 & 600$\times$8\\ VLT+XSHOOTER& Spectra & $302$ -- $\SI{2479}{\nm}$ & 2015-09-23 & 11$\times$300\\ \emph{HST}+COS & Spectra & $110$ -- $\SI{220}{\nm}$ & 2016-01-19 & 5 orbits\\ TNT+ULTRASPEC & Photometry & $g'$ & 2016-01-19 & 3.8$\times$1061\\ \hline \end{tabular} \end{center} \caption{\textbf{Observation log.} \label{t:obslog}} \end{table} \begin{table} \begin{center} \begin{tabular}{lccccc} \hline Frequency & $\SI{5}{\percent}$-ile & $\SI{95}{\percent}$-ile & Median & Mean & RMS\\ & mHz & mHz & mHz & mHz & mHz \\ \hline $\nu_{\mathrm{O}}$ & 0.077921311 & 0.077921449 & 0.077921380 & 0.077921380 & 0.000000042\\ $\nu_{\mathrm{B}}$ & 8.4603102 & 8.4603140 & 8.4603112 & 8.4603114 & 0.0000011\\ $\nu_{\mathrm{S}}$ & 8.5382332 & 8.5382356 & 8.5382348 & 8.5382346 & 0.0000008\\ \hline \end{tabular} \end{center} \caption{\textbf{Statistics of the orbital, beat and spin frequencies from bootstrap fits.} \label{t:freqs}} \end{table} \begin{table} \begin{center} \begin{tabular}{llllll} \hline Source & Wavelength, & Flux & Source & Wavelength, & Flux \\ & Frequency & mJy & & Frequency & mJy \\ \hline WISH & $\SI{352}{\mega\Hz}$ & $< 18$ & WISE\ & $\SI{22.0}{\micro\m}$ & $45.2$ -- $105.4$ \\ FIRST & $\SI{1.4}{\giga\Hz}$ & $8.0\pm0.3$ & WISE\ & $\SI{12}{\micro\m}$ & $18.0$ -- $48.3$ \\ AT20G & $\SI{20}{\giga\Hz}$ & $< 50$ & \textit{Spitzer}\ & $\SI{5.73}{\micro\m}$ & $11.9$ -- $23.5$ \\ \textit{Herschel}\ & $\SI{500}{\micro\m}$ & $92 \pm 25$ & WISE\ & $\SI{4.60}{\micro\m}$ & $11.8$ -- $20.5$ \\ \textit{Herschel}\ & $\SI{350}{\micro\m}$ & $76 \pm 21$ & \textit{Spitzer}\ & $\SI{3.6}{\micro\m}$ & $13.0 \pm 0.7$ \\ \textit{Herschel}\ & $\SI{250}{\micro\m}$ & $55 \pm 23$ & WISE\ & $\SI{3.4}{\micro\m}$ & $13.2$ -- $13.8$ \\ \textit{Herschel}\ & $\SI{160}{\micro\m}$ & $118 \pm 38$ & 2MASS & $\SI{2.1}{\micro\m}$ & $13.5 \pm 0.3$ \\ \textit{Herschel}\ & $\SI{70}{\micro\m}$ & $196 \pm 63$ & 2MASS & $\SI{1.7}{\micro\m}$ & $15.0 \pm 0.3$ \\ \textit{Spitzer}\ & $\SI{24}{\micro\m}$ & $59.9 \pm 6.0$ & 2MASS & $\SI{1.2}{\micro\m}$ & $13.3 \pm 0.3$ \\ \hline \end{tabular} \end{center} \caption{\textbf{Archival data sources and flux values.}\label{t:archive}} \end{table} \end{document}
\section{Introduction} In this paper we compute the additive structure of the Hochschild (co)homology and cyclic homology of preprojective algebras of ADE quivers over a field of characteristic zero. That is, we compute the (co)homology spaces together with the grading induced by the natural grading on the preprojective algebra (in which all edges have degree 1). We also use the result (for second cohomology) to find the universal deformation of the preprojective algebra. This generalizes the results of the papers \cite{ES1}, \cite{ES2}, where the dimensions of the Hochschild cohomology groups were found for type A and partially for type D. Our computation is based on the same method that was used by the second author in the paper \cite{Eu}, where the same problem was solved for centrally extended preprojective algebras, introduced by E. Rains and the first author. Namely, we use the periodic (with period 6) Schofield resolution of the algebra, and consider the corresponding complex computing the Hochschild homology. Using this complex, we find the possible range of degrees in which each particular Hochschild homology space can sit. Then we use this information, as well as the Connes complex for cyclic homology and the formula for the Euler characteristic of cyclic homology, to find the exact dimensions of the homogeneous components of the homology groups. Then we show that the same computation actually yields the Hochschild cohomology spaces as well. We note that for connected non-Dynkin quivers, the Hochschild (co)homology and the cyclic homology of the preprojective algebra were calculated in \cite{CBEG,EG}; in this case, unlike the ADE case, the homological dimension of the preprojective algebra is 2, so the situation is simpler. {\bf Acknowledgments.} P.E. is grateful to V. Ostrik for a useful discussion and to K. Erdmann for references. The work of the authors was partially supported by the NSF grant DMS-0504847. \section{Preliminaries} \subsection{Quivers and path algebras} Let $Q$ be a quiver of ADE type with vertex set $I$ and $|I|=r$. We write $a\in Q$ to say that $a$ is an arrow in $Q$. We define $Q^*$ to be the quiver obtained from $Q$ by reversing all of its arrows. We call $\bar Q=Q\cup Q^*$ the \emph{double} of $Q$. Let $C$ be the adjacency matrix corresponding to the quiver $\bar Q$. Concatenations of arrows generate the \emph{nontrivial paths} inside the quiver $\bar Q$. We define $e_i$, $i\in I$, to be the \emph{trivial path} which starts and ends at $i$. The \emph{path algebra} $P_{\bar Q}=\mathbb{C}\bar Q$ of $\bar Q$ over $\mathbb{C}$ is the $\mathbb{C}$-algebra with basis the paths in $\bar Q$, the product $xy$ of two paths $x$ and $y$ being their concatenation if they are compatible and $0$ if not. We define the \emph{Lie bracket} $[x,y]=xy-yx$. Let $R=\oplus\mathbb{C}e_i$. Then $R$ is a commutative semisimple algebra, and $P_{\bar Q}$ is naturally an $R$-bimodule. \subsection{Frobenius algebras} Let $A$ be a finite dimensional unital $\mathbb{C}$-algebra. We call it Frobenius if there is a linear function $f:A\rightarrow\mathbb{C}$, such that the form $(x,y):=f(xy)$ is nondegenerate, or, equivalently, if there exists an isomorphism $\phi:A\stackrel{\simeq}{\rightarrow}A^*$ of left $A$-modules: given $f$, we can define $\phi(a)(b)=f(ba)$, and given $\phi$, we define $f=\phi(1)$. If $\tilde f$ is another linear function satisfying the same properties as $f$ from above, then $\tilde f(x)=f(xa)$ for some invertible $a\in A$.
Indeed, we define the form $\{x,y\}=\tilde f(xy)$. Then $\{-,1\}\in A^*$, so there is an $a\in A$ such that $\phi(a)=\{-,1\}$. Then $\tilde f(x)=\{x,1\}=\phi(a)(x)=f(xa)$. \subsection{The Nakayama automorphism} Given a Frobenius algebra $A$ (with a function $f$ inducing a bilinear form $(-,-)$ from above), the automorphism $\eta:A\rightarrow A$ defined by the equation $(x,y)=(y,\eta(x))$ is called the \emph{Nakayama automorphism} (corresponding to $f$). We note that the freedom in choosing $f$ implies that $\eta$ is uniquely determined up to an inner automorphism. Indeed, let $\tilde f(x)=f(xa)$ and define the bilinear form $\{x,y\}=\tilde f(xy)$. Then \begin{align*} \{x,y\}&=\tilde f(xy)=f(xya)=(x,ya)=(ya,\eta(x))=f(ya\eta(x)a^{-1}a)\\ &=\{y,a\eta(x)a^{-1}\}, \end{align*} so the Nakayama automorphism attached to $\tilde f$ is $x\mapsto a\eta(x)a^{-1}$. \subsection{The preprojective algebra} Given an ADE quiver $Q$, we define the \emph{preprojective algebra} $\Pi_Q$ to be the quotient of the path algebra $P_{\bar Q}$ by the relation $\sum\limits_{a\in Q}[a,a^*]=0$. It is known that $\Pi_Q$ is a Frobenius algebra (see e.g. \cite{ES2}, \cite{MOV}). From now on, we write $A=\Pi_Q$. \subsection{Graded spaces and Hilbert series} Let $W=\oplus_{d\geq0}W(d)$ be a $\mathbb Z_+$-graded vector space, with finite dimensional homogeneous subspaces. We denote by $W[n]$ the same space with grading shifted by $n$. The graded dual space $W^*$ is defined by the formula $W^*(n)=W(-n)^*$. \begin{definition} \textnormal{(The Hilbert series of vector spaces)}\\ We define the \emph{Hilbert series} $h_W(t)$ to be the series \begin{displaymath} h_W(t)=\sum\limits_{d=0}^{\infty}\dim W(d)t^d. \end{displaymath} \end{definition} \begin{definition} \textnormal{(The Hilbert series of bimodules)}\\ Let $W=\oplus_{d\geq0}W(d)$ be a $\mathbb{Z_+}$-graded bimodule over the ring $R$, so we can write $W=\oplus W_{i,j}$. We define the \emph{Hilbert series} $H_W(t)$ to be a matrix valued series with the entries \begin{displaymath} H_W(t)_{i,j}=\sum\limits_{d=0}^{\infty}\dim\ W(d)_{i,j}t^d. \end{displaymath} \end{definition} \subsection{Root system parameters} Let $w_0$ be the longest element of the Weyl group $W$ of $Q$. Then we define $\nu$ to be the involution of $I$, such that $w_0(\alpha_i)=-\alpha_{\nu(i)}$ (where $\alpha_i$ is the simple root corresponding to $i\in I$). It turns out that $\eta(e_i)= e_{\nu(i)}$ (\cite{S}; see \cite{ES2}). Let $m_i$, $i=1,...,r$, be the exponents of the root system attached to $Q$, enumerated in increasing order. Let $h=m_r+1$ be the Coxeter number of $Q$. Let $P$ be the permutation matrix corresponding to the involution $\nu$. Let $r_+=\dim\ker(P-1)$ and $r_-=\dim\ker(P+1)$. Thus, $r_-$ is half the number of vertices which are not fixed by $\nu$, and $r_+=r-r_-$. \section{The main results} Let $U$ be a positively graded vector space with Hilbert series $h_U(t)=\sum\limits_{i,\,m_i<\frac{h}{2}}t^{2m_i}$. Let $Y$ be a vector space with $\dim Y= r_+-r_--\#\{i:m_i=\frac{h}{2}\}$, and let $K=\ker(P+1),\,L=\langle e_i|\nu(i)=i\rangle$, so that $\dim K=r_-$, $\dim L=r_+-r_-$ (we agree that the spaces $K,L,Y$ sit in degree zero). The main results of this paper are the following theorems. \begin{theorem}\label{t1} The Hochschild cohomology spaces of $A$, as graded spaces, are as follows: \begin{align*} {HH^0}(A)&=U[-2]\oplus L[h-2],\\ {HH^1}(A)&=U[-2],\\ {HH^2}(A)&=K[-2],\\ {HH^3}(A)&=K[-2],\\ {HH^4}(A)&=U^*[-2],\\ {HH^5}(A)&=U^*[-2]\oplus Y^*[-h-2],\\ {HH^6}(A)&=U[-2h-2]\oplus Y[-h-2], \end{align*} and ${HH^{6n+i}}(A)={HH^i}(A)[-2nh]\,\forall i\geq1$.
\end{theorem} \begin{corollary}\label{center} The center $Z=HH^0(A)$ of $A$ has Hilbert series $$ h_Z(t)=\sum\limits_{i,\,m_i<\frac{h}{2}}t^{2m_i-2}+(r_+-r_-)t^{h-2}. $$ \end{corollary} \begin{theorem}\label{t2} The Hochschild homology spaces of $A$, as graded spaces, are as follows: \begin{align*} {HH_0}(A)=R,\\ {HH_1}(A)=U,\\ {HH_2}(A)=U\oplus Y[h],\\ {HH_3}(A)=U^*[2h]\oplus Y^*[h],\\ {HH_4}(A)=U^*[2h],\\ {HH_5}(A)=K[2h],\\ {HH_6}(A)=K[2h],\\ \end{align*} and ${HH_{6n+i}}(A)={HH_i}(A)[2nh]\,\forall i\geq1$. \end{theorem} (Note that the equality $HH_0(A)=R$ was established in \cite{MOV}). \begin{theorem}\label{t3} The cyclic homology spaces of $A$, as graded spaces, are as follows: \begin{align*} {HC_0}(A)=R,\\ {HC_1}(A)=U,\\ {HC_2}(A)=Y^*[h],\\ {HC_3}(A)=U^*[2h],\\ {HC_4}(A)=0,\\ {HC_5}(A)=K[2h],\\ {HC_6}(A)=0,\\ \end{align*} and ${HC_{6n+i}}(A)={HC_i}(A)[2nh]\,\forall i\geq1$. \end{theorem} The rest of the paper is devoted to the proof of Theorems \ref{t1}, \ref{t2} and \ref{t3}. \section{Hochschild (co)homology and cyclic homology of A} \subsection{The Schofield resolution of A} We want to compute the Hochschild (co)homology of $A$ by using the Schofield resolution, described in \cite{S}. Define the $A$-bimodule $\mathfrak{N}$ obtained from $A$ by twisting the right action by $\eta$, i.e., $\mathfrak{N}=A$ as a vector space, and $\forall a,b\in A,x\in\mathfrak{N}:a\cdot x\cdot b=ax\eta(b).$ Introduce the notation $\epsilon_a=1$ if $a\in Q$, $\epsilon_a=-1$ if $a\in Q^*$. Let $x_i$ be a homogeneous basis of $A$ and $x_i^*$ the dual basis under the form attached to the Frobenius algebra $A$. Let $V$ be the bimodule spanned by the edges of $\bar Q$. We start with the following exact sequence: \[ 0\rightarrow\mathfrak{N}[h]\stackrel{i}{\rightarrow}A\otimes_R A[2]\stackrel{d_2}{\rightarrow}A\otimes_RV\otimes_RA\stackrel{d_1}{\rightarrow}A\otimes_RA\stackrel{d_0}{\rightarrow}A\rightarrow 0, \] where \begin{align*} d_0(x\otimes y)&=xy,\\ d_1(x\otimes v\otimes y)&=xv\otimes y-x\otimes vy,\\ d_2(z\otimes t)&=\sum\limits_{a\in\bar Q}\epsilon_aza\otimes a^*\otimes t+\sum\limits_{a\in\bar Q}\epsilon_az\otimes a\otimes a^*t,\\ i(a)&=a\sum x_i\otimes x_i^*. \end{align*} Since $\eta^2=1,$ we can make a canonical identification $A=\mathfrak{N}\otimes_A\mathfrak{N}$ (via $x\mapsto x\otimes 1$), so by tensoring the above exact sequence with $\mathfrak{N}$, we obtain the exact sequence \[ 0\rightarrow A[2h]\stackrel{d_6}{\rightarrow}A\otimes_R \mathfrak{N}[h+2]\stackrel{d_5}{\rightarrow}A\otimes_RV\otimes_R\mathfrak{N}[h]\stackrel{d_4}{\rightarrow}A\otimes_R\mathfrak{N}[h]\stackrel{j}{\rightarrow}\mathfrak{N}[h]\rightarrow 0, \] and by connecting both sequences with $d_3=ij$ and repeating this process, we obtain the Schofield resolution, which is periodic with period $6$: \begin{align*} \ldots&\rightarrow A\otimes_R A[2h]\stackrel{d_6}{\rightarrow}A\otimes_R \mathfrak{N}[h+2]\stackrel{d_5}{\rightarrow}A\otimes_RV\otimes_R\mathfrak{N}[h]\stackrel{d_4}{\rightarrow}A\otimes_R\mathfrak{N}[h]\\ &\stackrel{d_3}{\rightarrow}A\otimes_RA[2]\stackrel{d_2}{\rightarrow}A\otimes_RV\otimes_RA\stackrel{d_1}{\rightarrow}A\otimes_RA\stackrel{d_0}{\rightarrow}A\rightarrow0. \end{align*} This implies that the Hochschild homology and cohomology of $A$ are periodic with period $6$, in the sense that a shift of the (co)homological degree by $6$ results in a shift of the internal degree by $2h$ (respectively $-2h$). \subsection{The Hochschild homology complex} Let $A^{op}$ be the algebra $A$ with opposite multiplication.
We define $A^e=A\otimes_R A^{op}$. Then any $A$-bimodule naturally becomes a left $A^e$-module (and vice versa). We make the following identifications (for all integers $m\geq0$):\\ $(A\otimes_RA)\otimes_{A^e}A[2mh]=A^R[2mh]:\,(a\otimes b)\otimes c=bca$,\\ $(A\otimes_RV\otimes_RA)\otimes_{A^e}A[2mh]=(V\otimes_R A)^R[2mh]:\,(a\otimes x\otimes b)\otimes c=-x\otimes bca$,\\ $(A\otimes_RA)\otimes_{A^e}A[2mh+2]=A^R[2mh+2]:\,(a\otimes b)\otimes c=-bca$,\\ $(A\otimes_R\mathfrak{N})\otimes_{A^e}A[(2m+1)h]=\mathfrak{N}^R[(2m+1)h]:\,(a\otimes b)\otimes c=-b\eta(ca)$,\\ $(A\otimes_RV\otimes_R\mathfrak{N})\otimes_{A^e}A[(2m+1)h]=(V\otimes_R A)^R[(2m+1)h]:$\\$(a\otimes x\otimes b)\otimes c=x\otimes b\eta(ca)$,\\ $(A\otimes_R\mathfrak{N})\otimes_{A^e}A[(2m+1)h+2]=\mathfrak{N}^R[(2m+1)h+2]:\,(a\otimes b)\otimes c=b\eta(ca)$.\\ Now, we apply to the Schofield resolution the functor $-\otimes_{A^e}A$ to calculate the Hochschild homology: \begin{align*} \ldots&\rightarrow \underbrace{A^R[2h]}_{=C_6}\stackrel{d_6'}{\rightarrow}\underbrace{\mathfrak{N}^R[h+2]}_{=C_5}\stackrel{d_5'}{\rightarrow}\underbrace{(V\otimes_R\mathfrak{N})^R[h]}_{=C_4}\stackrel{d_4'}{\rightarrow}\\ &\stackrel{d_4'}{\rightarrow}\underbrace{\mathfrak{N}^R[h]}_{=C_3}\stackrel{d_3'}{\rightarrow}\underbrace{A^R[2]}_{=C_2}\stackrel{d_2'}{\rightarrow}\underbrace{(V\otimes_RA)^R}_{=C_1}\stackrel{d_1'}{\rightarrow}\underbrace{A^R}_{=C_0}\rightarrow0. \end{align*} We compute the differentials: \[ d_1'(a\otimes b)=d_1(-1\otimes a\otimes 1)\otimes_{A^e} b=(-a\otimes 1+1\otimes a)\otimes_{A^e} b=[a,b], \] \begin{align*} d_2'(x)&=d_2(-1\otimes1)\otimes_{A^e} x=-(\sum\limits_{a\in\bar Q}\epsilon_aa\otimes a^*\otimes 1+\sum\limits_{a\in\bar Q}\epsilon_a1\otimes a\otimes a^*)\otimes_{A^e} x\\ &=-\sum\limits_{a\in\bar Q}\epsilon_aa^*\otimes [a,x], \end{align*} \begin{eqnarray*} d_3'(x)&=&d_3(-1\otimes1)\otimes_{A^e}\eta(x)=-\sum (x_i\otimes x_i^*)\otimes_{A^e}\eta(x)=\sum x_i^*\eta(x)x_i\\ &=&\sum x_i^*xx_i=\sum x_ix\eta(x_i^*), \end{eqnarray*} where the second-to-last equality holds since we can assume that each $x_i$ lies in a subspace $e_kAe_{\nu(k)}$, and then we see that\\ $x_i^*\eta(x)x_i=x_i^*x_i=x_i^*xx_i$ if $x=e_k$, $k=\nu(k)$,\\and $x_i^*\eta(x)x_i=0=x_i^*xx_i$ if $k\neq\nu(k)$ or $x=e_j$, $j\neq k$ or $\deg x>0$,\\ and the last equality holds because if $(x_i^*)$ is a dual basis of $(x_i)$, then $(x_i)$ is a dual basis of $(\eta(x_i^*))$. \[ d_4'(a\otimes b)=d_4(1\otimes a\otimes 1)\otimes_{A^e} \eta(b)=(a\otimes 1-1\otimes a)\otimes_{A^e} \eta(b)=ab-b\eta(a), \] \begin{align*} d_5'(x)&=d_5(1\otimes1)\otimes_{A^e}x=(\sum\limits_{a\in\bar Q}\epsilon_aa\otimes a^*\otimes 1+\sum\limits_{a\in\bar Q}\epsilon_a1\otimes a\otimes a^*)\otimes_{A^e} x\\ &=\sum\limits_{a\in\bar Q}\epsilon_aa^*\otimes (x\eta(a)-ax), \end{align*} \begin{align*} d_6'(x)&=d_6(1\otimes1)\otimes_{A^e}x=\sum (x_i\otimes x_i^*)\otimes_{A^e}x= \sum x_i^*\eta(x)\eta(x_i)\\ &=\sum x_i\eta(x)x_i^*=\sum x_ixx_i^*, \end{align*} where the second-to-last equality holds because if $(x_i^*)$ is a dual basis of $(x_i)$, then $(x_i)$ is a dual basis of $(\eta(x_i^*))$, and the last equality holds because for each $j\in I$, $\sum x_ie_jx_i^*=\sum\dim(e_kAe_j)\omega_j$, where we call $\omega_j$ the dual of $e_j$, and $\dim(e_kAe_j)=\dim(e_kAe_{\nu(j)})$ (given a basis in $e_kAe_j$, the involution which reverses all arrows gives us a basis in $e_jAe_k$, whose dual basis lies in $e_kAe_{\nu(j)}$). Since $A=[A,A]+R$ (see \cite{MOV}), $HH_0(A)=R$, and $HH_6(A)$ sits in degree $2h$.
Let us define $\overline{HH_i}(A)=HH_i(A)$ for $i>0$ and $\overline{HH_i}(A)=HH_i(A)/R$ for $i=0$. Then $\overline{HH_0}(A)=0$. The top degree of $A$ is $h-2$ (since $H_A(t)=(1+Pt^h)(1-Ct+t^2)^{-1}$ by \cite[2.3.]{MOV}, and $A$ is finite dimensional). Thus we see immediately from the homology complex that ${HH_1}(A)$ lives in degrees between $1$ and $h-1$, ${HH_2}(A)$ between $2$ and $h$, ${HH_3}(A)$ between $h$ and $2h-2$, ${HH_4}(A)$ between $h+1$ and $2h-1$, ${HH_5}(A)$ between $h+2$ and $2h$ and ${HH_6}(A)$ in degree $2h$. \subsection{Self-duality of the homology complex} The nondegenerate form allows us to make identifications $A=\mathfrak{N}^*[h-2]$ and $\mathfrak{N}=A^*[h-2]$ via $x\mapsto(-,x)$. We can define a nondegenerate form on $V\otimes A$ and $V\otimes\mathfrak{N}$ by \begin{equation} (a\otimes x_a,b\otimes x_b)=\delta_{a,b^*}\epsilon_a(x_a,x_b) \end{equation} where $a,b\in\bar Q$, and $\delta_{x,y}$ is $1$ if $x=y$ and $0$ else. This allows us to make identifications $V\otimes_RA=(V\otimes_R\mathfrak{N})^*[h]$ and $V\otimes_R\mathfrak{N}=(V\otimes_RA)^*[h]$. Let us take the first period of the Hochschild homology complex, i.e. the part involving the first $6$ bimodules: \[ \underbrace{\mathfrak{N}^R[h+2]}_{=C_5}\stackrel{d_5'}{\rightarrow}\underbrace{(V\otimes_R\mathfrak{N})^R[h]}_{=C_4}\stackrel{d_4'}{\rightarrow}\underbrace{\mathfrak{N}^R[h]}_{=C_3}\stackrel{d_3'}{\rightarrow}\underbrace{A^R[2]}_{=C_2}\stackrel{d_2'}{\rightarrow}\underbrace{(V\otimes_RA)^R}_{=C_1}\stackrel{d_1'}{\rightarrow}\underbrace{A^R}_{=C_0}\rightarrow0. \] By dualizing and using the above identifications, we get the dual complex: \begin{align*} \stackrel{(d_3')^*}{\leftarrow}\underbrace{(A^R[2])^*}_{=C_3[-2h]}\stackrel{(d_2')^*}{\leftarrow}\underbrace{((V\otimes_RA)^R)^*}_{=C_4[-2h]}\stackrel{(d_1')^*}{\leftarrow}\underbrace{(A^R)^*}_{=C_5[-2h]}&\leftarrow0.\\ \underbrace{(\mathfrak{N}^R[h+2])^*}_{=C_0[-2h]}\stackrel{(d_5')^*}{\leftarrow}\underbrace{((V\otimes_R\mathfrak{N})^R[h])^*}_{=C_1[-2h]}\stackrel{(d_4')^*}{\leftarrow}\underbrace{(\mathfrak{N}^R[h])^*}_{=C_2[-2h]}&\stackrel{(d_3')^*}{\leftarrow}\\ \end{align*} We see that $C_i^*=C_{5-i}[-2h]$. We will now prove that, moreover, $d_i'=\pm(d_{6-i}')^*$, i.e. the homology complex has a self-duality property. \begin{proposition} One has $d_i'=\pm(d_{6-i}')^*$. \end{proposition} \begin{proof} \underline{$(d_1')^*=d_5'$}: We have \begin{align*} (\sum\limits_{a\in\bar Q}(a\otimes x_a),d'_5(y))&=(\sum\limits_{a\in\bar Q}(a\otimes x_a),\sum\limits_{a\in\bar Q}\epsilon_aa^*\otimes(y\eta(a)-ay))\\ &=\sum\limits_{a\in\bar Q}(x_a,y\eta(a)-ay)=(\sum\limits_{a\in\bar Q}[a,x_a],y)\\ &=(d'_1(\sum\limits_{a\in\bar Q}a\otimes x_a),y) \end{align*} \underline{$(d_2')^*=-d_4'$}: We have \begin{align*} (x,d'_4(\sum\limits_{a\in\bar Q}a\otimes x_a))&=(x,\sum\limits_{a\in\bar Q}ax_a-x_a\eta(a))=\sum\limits_{a\in\bar Q}(-[a,x],x_a)\\ &=(\sum\limits_{a\in\bar Q}\epsilon_aa^*\otimes[a,x],\sum\limits_{a\in \bar Q}a\otimes x_a) =(-d'_2(x),\sum\limits_{a\in\bar Q}a\otimes x_a) \end{align*} \underline{$(d_3')^*=d_3'$}: We have \[ (x,d'_3(y))=(x,\sum x_iy\eta(x_i^*))=(\sum x_i^*xx_i,y)=(d'_3(x),y). \] \end{proof} \subsection{Cyclic homology} Now we want to introduce the cyclic homology, which will help us in computing the Hochschild cohomology of $A$.
We have the Connes exact sequence \[ 0\rightarrow \overline{HH_0}(A)\stackrel{B_0}{\rightarrow}\overline{HH_1}(A)\stackrel{B_1}{\rightarrow}\overline{HH_2}(A)\stackrel{B_2}{\rightarrow}\overline{HH_3}(A)\stackrel{B_3}{\rightarrow}\overline{HH_4}(A)\rightarrow\ldots \] where the $B_i$ are the Connes differentials (see \cite[2.1.7.]{Lo}), all of which are degree-preserving. We define the \emph{reduced cyclic homology} (see \cite[2.2.13.]{Lo}) \begin{align*} \overline{HC_i}(A)&=\ker(B_{i+1}:\overline{HH_{i+1}}(A)\rightarrow\overline{HH_{i+2}}(A))\\ &=\mbox{Im}(B_i:\overline{HH_i}(A)\rightarrow\overline{HH_{i+1}}(A)). \end{align*} The usual cyclic homology $HC_i(A)$ is related to the reduced one by the equality $\overline{HC}_i(A)=HC_i(A)$ for $i>0$, and $\overline{HC}_0(A)=HC_0(A)/R$. Let $U={HH_1}(A)$. Then by the degree argument and the injectivity of $B_1$ (which follows from the fact that $\overline{HH}_0(A)=0$), we have ${HH_2}(A)=U\oplus Y[h]$ where $Y={HH_2}(A)(h)$ (the degree-$h$ component). Using the duality of the Hochschild homology complex, we find ${HH_4}(A)=U^*[2h]$ and ${HH_3}(A)=U^*[2h]\oplus Y^*[h]$. Let us set $K={HH_5}(A)[-2h]$. So we can rewrite the Connes exact sequence as follows: $$ \CD \text{degree}\\ @. 0 \\ @. @VVV\\ 1\leq\deg\leq h-1 @. {HH_1}(A)@= U @.@.@. \overline{HC}_0(A)=0\\ @. @VB_1VV @V\sim VV\\ 1\leq\deg\leq h @. {HH_2}(A)@= U @. \oplus @. Y[h] @. HC_1(A)=U\\ @. @VB_2VV @. @. @V\sim VV\\ h+1\leq\deg\leq 2h-1 @. {HH_3}(A)@= U^*[2h] @. \oplus @. Y^*[h] @. \, HC_2(A)=Y^*[h] \\ @. @VB_3VV @V\sim VV\\ h+1\leq\deg\leq2h-1 @. {HH_4}(A)@= U^*[2h] @.@.@. HC_3(A)=U^*[2h]\\ @. @VB_4VV @V0VV \\ 2h @. {HH_5}(A)@= K[2h] @.@.@. HC_4(A)=0\\ @. @VB_5VV @V\sim VV\\ 2h @. {HH_6}(A)@= K[2h] @.@.@. HC_5(A)=K[2h]\\ @. @VB_6VV @V0VV \\ 2h+1\leq\deg\leq3h-1\, @. {HH_7}(A)@=U[2h]\\ @. @VB_7VV\\ @. \vdots \endCD $$ From the exactness of the sequence it is clear that $B_2$ and $B_3$ restrict to an isomorphism on $Y[h]$ and $U^*[2h]$ respectively and that $B_4=0$. $B_6=0$ because it preserves degrees ($HH_6(A)$ sits in degree $2h$, while $HH_7(A)$ lives in degrees $\geq 2h+1$), so $B_5$ is an isomorphism. An analogous argument applies to the portion of the Connes sequence from homological degree $6n+1$ to $6n+6$ for $n>0$. Thus we see that the cyclic homology groups $HC_i(A)$ live in different degrees: $HC_{6n+1}(A)$ between $2hn+1$ and $2hn+h-1$, $HC_{6n+2}(A)$ in degree $2hn+h$, $HC_{6n+3}(A)$ between $2hn+h+1$ and $2hn+2h-1$, and $HC_{6n+5}(A)$ in degree $2hn+2h$. So to prove the main results, it is sufficient to determine the Hilbert series of the cyclic homology spaces. This is done with the help of the following lemma. \begin{lemma} The Euler characteristic of the reduced cyclic homology $\chi_{\overline{HC}(A)}(t)=\sum\ (-1)^ih_{\overline{HC}_i(A)}(t)$ is \[ \sum\limits_{k=0}^\infty a_kt^k=\frac{1}{1-t^{2h}}(-\sum t^{2m_i}-r_-t^{2h}+(r_+-r_-)t^h). \] \end{lemma} \begin{proof} To compute the Euler characteristic, we use the theorem from \cite{EG} that \[ \prod\limits_{k=1}^\infty(1-t^k)^{-a_k}=\prod\limits_{s=1}^\infty\det H_A(t^s). \] From \cite[Theorem 2.3.]{MOV} we know that \[H_A(t)=(1+Pt^h)(1-Ct+t^2)^{-1}.\] Since $r=r_++r_-$, \[ \det(1+Pt^h)=(1+t^h)^{r_+}(1-t^h)^{r_-}. \] From \cite[Proof of Theorem 4.1.2.]{Eu} we know that \[ \prod_{s=1}^\infty \det(1-Ct^s+t^{2s})=\prod\limits_{k=1}^\infty(1-t^{2k})^{-\#\{i:m_i\equiv k\mod h\}}.
\] So \begin{align*} \prod\limits_{k=1}^\infty(1-t^k)^{-a_k}&=\prod\limits_{s=1}^\infty\det H_A(t^s)\\ &=\prod\limits_{s=1}^\infty(1+t^{hs})^{r_+}(1-t^{hs})^{r_-}\det(1-Ct^s+t^{2s})^{-1}\\ &=\frac{\prod\limits_{s\,even}(1-t^{hs})^{r_-}}{\prod\limits_{s\,odd}(1-t^{hs})^{r_+-r_-}}\prod\limits_{k=1}^\infty(1-t^{2k})^{\#\{i:m_i\equiv k\mod h\}}. \end{align*} It follows that \[ \chi_{\overline{HC}(A)}(t)=\sum\limits_{k=0}^\infty a_kt^k=(1+t^{2h}+t^{4h}+\ldots)(-\sum t^{2m_i}-r_-t^{2h}+(r_+-r_-)t^h). \] This implies the lemma. \end{proof} Since all $HC_i(A)$ live in different degrees, we can immediately derive their Hilbert series from the Euler characteristic: \begin{align*} h_{HC_1(A)}(t)&=\sum\limits_{i,\,m_i<\frac{h}{2}}t^{2m_i},\\ h_{HC_2(A)}(t)&=(r_+-r_--\#\{i:m_i=\frac{h}{2}\})t^h,\\ h_{HC_3(A)}(t)&=\sum\limits_{i,\,m_i>\frac{h}{2}}t^{2m_i},\\ h_{HC_5(A)}(t)&=r_-t^{2h}.\\ \end{align*} It follows that $h_U(t)=\sum\limits_{i,\,m_i<\frac{h}{2}}t^{2m_i}$, $\dim Y=r_+-r_--\#\{i:m_i=\frac{h}{2}\}$, $\dim K=r_-$, and $Y,K$ sit in degree zero. This completes the proof of Theorems \ref{t2} and \ref{t3}. \subsection{The Hochschild cohomology complex} Now we would like to prove Theorem \ref{t1}. We make the following identifications: $Hom_{A^e}(A\otimes_R A,A)=A^R$ and $Hom_{A^e}(A\otimes_R \mathfrak{N},A)=\mathfrak{N}^R$, by identifying $\phi$ with the image $\phi(1\otimes1)=a$ (we write $\phi=a\circ -$), and $Hom_{A^e}(A\otimes_R V\otimes_R A,A)=(V\otimes_R A)^R[-2]$ and $Hom_{A^e}(A\otimes_R V\otimes_R \mathfrak{N},A)=(V\otimes_R \mathfrak{N})^R[-2],$ by identifying $\phi$ which maps $1\otimes a\otimes1\mapsto x_a$ ($a\in\bar Q$) with the element $\sum\limits_{a\in\bar Q}\epsilon_{a^*}a^*\otimes x_a$ (we write $\phi=\sum\limits_{a\in\bar Q}\epsilon_{a^*}a^*\otimes x_a\circ -$). Now, apply the functor $Hom_{A^e}(-,A)$ to the Schofield resolution to obtain the Hochschild cohomology complex \begin{align*} \stackrel{d_4^*}{\leftarrow}\mathfrak{N}^R[-h]\stackrel{d_3^*}{\leftarrow}A^R[-2]\stackrel{d_2^*}{\leftarrow}(V\otimes A)^R[-2]\stackrel{d_1^*}{\leftarrow}A^R&\leftarrow0 \\ \ldots\leftarrow A^R[-2h]\stackrel{d_6^*}{\leftarrow}\mathfrak{N}^R[-h-2]\stackrel{d_5^*}{\leftarrow}(V\otimes\mathfrak{N})^R[-h-2]&\stackrel{d_4^*}{\leftarrow} \end{align*} \begin{proposition} Using the differentials $d_i'$ from the Hochschild homology complex, we can rewrite the Hochschild cohomology complex in the following way: \begin{align*} \stackrel{d_5'[-2h-2]}{\leftarrow}\mathfrak{N}^R[-h]\stackrel{d_6'[-2h-2]}{\leftarrow}A^R[-2]\stackrel{d_1'[-2]}{\leftarrow}(V\otimes A)^R[-2]\stackrel{d_2'[-2]}{\leftarrow}A^R&\leftarrow0\\ \ldots\leftarrow A^R[-2h]\stackrel{d_3'[-2h-2]}{\leftarrow}\mathfrak{N}^R[-h-2]\stackrel{d_4'[-2h-2]}{\leftarrow}(V\otimes\mathfrak{N})^R[-h-2]&\stackrel{d_5'[-2h-2]}{\leftarrow}. \end{align*} \end{proposition} \begin{proof} \[ d_1^*(x)(1\otimes a\otimes1)=x\circ d_1(1\otimes a\otimes1)=x\circ(a\otimes1-1\otimes a)=[a,x], \] so \[d_1^*(x)=\sum\limits_{a\in\bar Q}\epsilon_{a^*}a^*\otimes[a,x]=d_2'(x).\] \begin{align*} d_2^*(\sum\limits_{a\in\bar Q}a\otimes x_a)(1\otimes1)&=(\sum\limits_{a\in\bar Q}a\otimes x_a)\circ(\sum\limits_{b\in\bar Q}\epsilon_bb\otimes b^*\otimes1+\sum\limits_{b\in\bar Q}\epsilon_b1\otimes b\otimes b^*)\\ &=\sum\limits_{a\in\bar Q}(ax_a-x_aa)=\sum\limits_{a\in\bar Q}[a,x_a], \end{align*} so \[ d_2^*(\sum\limits_{a\in\bar Q}a\otimes x_a)=\sum\limits_{a\in\bar Q}[a,x_a]=d_1'(\sum\limits_{a\in\bar Q}a\otimes x_a).
\] \[ d_3^*(x)(1\otimes1)=x\circ d_3(1\otimes1)=x\circ(\sum x_i\otimes x_i^*)=\sum x_ix x_i^*, \] so \[ d_3^*(x)=\sum x_ixx_i^*=d_6'(x). \] \[ d_4^*(x)(1\otimes a\otimes1)=x\circ d_1(1\otimes a\otimes1)=x\circ(a\otimes1-1\otimes a)=ax-x\eta(a), \] so \[d_4^*(x)=\sum\limits_{a\in\bar Q}\epsilon_{a^*}a^*\otimes (ax-x\eta(a))=d_5'(x).\] \begin{align*} d_5^*(\sum\limits_{a\in\bar Q}a\otimes x_a)(1\otimes1)&=(\sum\limits_{a\in\bar Q}a\otimes x_a)\circ(\sum\limits_{b\in\bar Q}\epsilon_bb\otimes b^*\otimes1+\sum\limits_{b\in\bar Q}\epsilon_b1\otimes b\otimes b^*)\\ &=\sum\limits_{a\in\bar Q}(ax_a-x_a\eta(a)), \end{align*} so \[ d_5^*(\sum\limits_{a\in\bar Q}a\otimes x_a)=\sum\limits_{a\in\bar Q}(ax_a-x_a\eta(a))=d_4'(\sum\limits_{a\in\bar Q}a\otimes x_a). \] \[ d_6^*(x)(1\otimes1)=x\circ d_6(1\otimes1)=x\circ(\sum x_i\otimes x_i^*)=\sum x_ix\eta(x_i^*), \] so \[ d_6^*(x)=\sum x_ix\eta(x_i^*)=d_3'(x). \] \end{proof} Thus we see that each 3-term portion of the cohomology complex can be identified, up to shift in degree, with an appropriate portion of the homology complex. This fact, together with Theorem \ref{t2}, implies Theorem \ref{t1}. \section{The deformed preprojective algebra} In this subsection we would like to consider the universal deformation of the preprojective algebra $A$. If $\nu=1$, then $P=1$ and hence by Theorem \ref{t1} $HH^2(A)=0$ and thus $A$ is rigid. On the other hand, if $\nu\ne 1$ (i.e. for types $A_n$, $n\ge 2$, $D_{2n+1}$, and $E_6$), then $HH^2(A)$ is the space $K$ of $\nu$-antiinvariant functions on $I$, sitting in degree $-2$. \begin{proposition}\label{uni} Let $\lambda$ be a weight (i.e. a complex function on $I$) such that $\nu\lambda=-\lambda$. Let $A_\lambda$ be the quotient of $P_Q$ by the relation $$ \sum_{a\in Q}[a,a^*]=\sum \lambda_i e_i. $$ Then ${\rm gr}A_\lambda=A$ (under the filtration by length of paths). Moreover, $A_\lambda$, with $\lambda$ a formal parameter in $K$, is a universal deformation of $A$. \end{proposition} \begin{proof} To prove the first statement, it is sufficient to show that for generic $\lambda$ such that $\nu(\lambda)=-\lambda$, the dimension of the algebra $A_\lambda$ is the same as the dimension of $A$, i.e. $rh(h+1)/6$. But by Theorem 7.3 of \cite{CBH}, $A_\lambda$ is Morita equivalent to the preprojective algebra of a subquiver $Q'$ of $Q$, and the dimension vectors of simple modules over $A_\lambda$ are known (also from \cite{CBH}). This allows one to compute the dimension of $A_\lambda$ for any $\lambda$, and after a somewhat tedious case-by-case computation one finds that indeed $\dim A_\lambda=\dim A$ for a generic $\lambda\in K$. The second statement boils down to the fact that the induced map $\phi: K\to HH^2(A)$ defined by the above deformation is an isomorphism (in fact, the identity). This is proved similarly to the case of centrally extended preprojective algebras, which is considered in \cite{Eu}. \end{proof} {\bf Remark.} For type $A_n$ (but not $D$ and $E$) the algebra $A_\lambda$ for generic $\lambda\in K$ is actually semisimple, with simple modules of dimensions $n,n-2,n-4...$.
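For illustration, recall that in type $A_n$ one has $r=n$ and $h=n+1$. The dimension count is then consistent with the remark: for $A_3$ the predicted simple modules of dimensions $3$ and $1$ give
\[
3^2+1^2=10=\frac{3\cdot4\cdot5}{6}=\frac{rh(h+1)}{6}=\dim A_\lambda,
\]
and for $A_4$ the dimensions $4$ and $2$ give $4^2+2^2=20=\frac{4\cdot5\cdot6}{6}$.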
\section{Introduction}\label{s:introduction} Let $f:\mathbb{N} \to \mathbb{R}$ be a multiplicative arithmetic function. Daboussi showed (see Daboussi and Delange \cite{dab}) that if $|f|$ is bounded by $1$, then \begin{equation}\label{eq:fourier-bound} \frac{1}{x}\sum_{n\leq x} f(n) e^{2\pi i \alpha n} = o(1) \end{equation} holds for every irrational $\alpha$. A detailed proof of the following slightly strengthened version may be found in Daboussi and Delange \cite{DD82}: if $f$ satisfies \begin{equation}\label{eq:moment-cond} \sum_{n\leq x} |f(n)|^2 = O(x), \end{equation} then \eqref{eq:fourier-bound} holds for every irrational $\alpha$. Montgomery and Vaughan \cite{mv77} give explicit error terms for the decay in \eqref{eq:fourier-bound} for multiplicative functions that satisfy, in addition to \eqref{eq:moment-cond}, a uniform bound at all primes, in the sense that $|f(p)| \leq H$ holds for some constant $H \geq 1$ and all primes $p$.

In this paper we will study the closely related question of bounding correlations of multiplicative functions with polynomial nilsequences in place of the exponential function $n \mapsto e^{2 \pi i \alpha n}$. A chief concern in this work is to include unbounded multiplicative functions in the analysis. To this end we shall significantly weaken the moment condition \eqref{eq:moment-cond} by decomposing $f$ into a suitable Dirichlet convolution $f=f_1 * \dots * f_t$ and analysing the correlations of the individual factors with exponentials, or rather nilsequences. The benefit of such a decomposition is that we merely require control on the second moments of the individual factors of the Dirichlet convolution and not of $f$ itself. This essentially allows us to replace \eqref{eq:moment-cond} by the condition that there exists $\theta_f \in (0,1]$ such that \begin{align} \label{eq:new-moment} \sqrt{\frac{1}{x}\sum_{ n \leq x } |f_i(n)|^2} \ll (\log x)^{1- \theta_f} \frac{1}{x} \sum_{n \leq x} |f_i(n)| \end{align} for all $i \in \{1, \dots, t\}$. Before we continue, we provide an example of a function that satisfies \eqref{eq:new-moment}, but neither \eqref{eq:moment-cond} nor \begin{align}\label{eq:moment-2} \sum_{n\leq x} |f(n)|^2 \ll \sum_{n\leq x} |f(n)|. \end{align}

\begin{example} \label{ex:moments} Let $t>1$ be an integer and consider the general divisor function $d_t(n) = \mathbf{1} * \dots * \mathbf{1} (n)$, which is defined to be the $t$-fold convolution of $\mathbf{1}$ with itself. This is a function which satisfies \eqref{eq:new-moment}, but neither \eqref{eq:moment-cond} nor \eqref{eq:moment-2}. Indeed, we have $$ \frac{1}{x} \sum_{n \leq x} d_t(n) \asymp_t (\log x)^{t-1}, $$ while the second moment satisfies $$ \frac{1}{x} \sum_{n \leq x} d^2_t(n) \asymp_t (\log x)^{t^2-1}. $$ Thus, the second moment is not controlled by the first. On the other hand, choosing $f_i=\mathbf{1}$ for each $1\leq i \leq t$, it is clear that \eqref{eq:new-moment} holds with $\theta_f = 1$. \end{example}

To define the class of functions we shall work with, we introduce the following notation. For integers $q,r \in \mathbb{N}$ let $$ S_f(x;q,r) = \frac{q}{x} \sum_{\substack{n\leq x \\ n \equiv r \Mod{q}}} f(n) $$ denote the average value of $f$ on the progression $n \equiv r \Mod{q}$. Let $w:\mathbb{N} \to \mathbb{R}$ be an increasing function such that $\frac{\log \log x}{\log \log \log x} < w(x) \leq \log \log x$ for all sufficiently large $x$ and set $$ W(x) = \prod_{p\leq w(x)} p.
$$ \begin{definition} \label{def:M} Given positive integers $E$ and $H \geq 1$, we let $\mathcal{M}_H(E)$ denote the class of multiplicative arithmetic functions $f:\mathbb{N} \to \mathbb{R}$ with the following property: There is a decomposition $f=f_1 * \dots * f_t$ for some $t \leq H$ such that each $f_i$ satisfies: \begin{enumerate} \item[(1)] $|f_i(p^k)| \leq H^k$ for all prime powers $p^k$. \item[(2)] There is $\theta_f \in (0,1]$ such that for all $x>1$, for all positive integers $q \leq (\log x)^{E}$ and for all $A \in (\mathbb{Z}/Wq\mathbb{Z})^*$ with $S_f(x;Wq,A) > 0$ we have \begin{align*} &\sum_{i=1}^t \sum_{\substack{d_1,\dots,\hat{d_i},\dots,d_t \\ (D_i,W)=1 \\ D_i \leq x^{1-1/t}}} \bigg( \prod_{j \not= i} \frac{|f_j(d_j)|}{d_j} \bigg) \sqrt{\frac{D_iWq}{x}\sum_{\substack{n \leq x/D_i \\ n D_i \equiv A \Mod{Wq}}} |f_i(n)|^2}\\ &\ll (\log x)^{1- \theta_f} S_{|f|}(x;Wq,A), \end{align*} as $x \to \infty$, where $D_i= \prod_{j \not= i} d_j$ and $D_1=1$ if $t=1$, and $W=W(x)$. \item[(3)] $|f_i|$ has a stable mean value in the following sense: whenever $x \ll {x'}^{C_1} \ll x^{C_2}$ for positive constants $C_1$ and $C_2$, we have $$ S_{|f_i|}(x;W(x)q,A) \asymp_{C_1,C_2} S_{|f_i|}(x';W(x)q,A) $$ for all positive integers $q \leq (\log x)^{E}$ and for all $A \in (\mathbb{Z}/W(x)q\mathbb{Z})^*$ such that $S_f(x;W(x)q,A) > 0$. \item[(4)] For all $x>1$, for all positive integers $q \leq (\log x)^{E}$ and all residues $A \in (\mathbb{Z}/W(x)q\mathbb{Z})^*$ such that $S_f(x;W(x)q,A) > 0$, we have $$ \frac{1}{\log x} \sum_{ \substack{j \in \mathbb{N}: \\ \exp((\log \log x)^2) < 2^j \leq x}} S_{|f_i|}(2^j; W(x)q,A) \ll S_{|f_i|}(x; W(x)q,A). $$ In other words, if $P$ denotes the arithmetic progression $\{n \equiv A \Mod{W(x)q}\}$, then the average of $|f_i|$-mean values on all sufficiently long subprogressions of the form $P \cap [1,2^j] \subseteq P \cap [1,x]$ does not exceed, up to a constant factor, the mean value with respect to $P \cap [1,x]$. \end{enumerate} \end{definition}

In view of this rather long definition we discuss in the sequel three simple examples of functions that belong to $\mathcal{M}_H(E)$ and provide a few general remarks on the individual conditions (1)-(4) before presenting in more detail the actual aim of this paper. The examples we consider are the general divisor function $d_H$ (that is, the $H$-fold convolution of $\mathbf{1}$ with itself), the characteristic function $\mathbf{1}_S$ of the set of sums of two squares, and the representation function $r$ of sums of two squares. Given any positive constant $E$, we show below that $d_H \in \mathcal{M}_H(E)$, $\mathbf{1}_S \in \mathcal{M}_1(E)$, and $r \in \mathcal{M}_2(E)$.

\begin{example} \label{ex:d} Let $f=d_H$, and set $f_i = \mathbf{1}$ for $1 \leq i \leq H$. We check that $f \in \mathcal{M}_H(E)$ for any $E=O(1)$. Indeed, (1) holds trivially. Since $S_{f_i}(x;q,r) \asymp 1$, conditions (3) and (4) are also easily checked. Condition (2) can be checked in a similar way to the special case considered in Example \ref{ex:moments}. \end{example}

\begin{example} \label{ex:1_S} Let $S = \{ x^2 + y^2 \mid x,y \in \mathbb{Z} \}$ denote the set of integers that are representable as a sum of two squares and let $f = \mathbf{1}_S$ be its characteristic function. Then $f \in \mathcal{M}_1(E)$ for all $E = O(1)$. Indeed, $f$ is multiplicative and clearly satisfies (1).
Prachar \cite{prachar} showed that for $a\equiv 1 \Mod{\gcd(4,q)}$ we have \begin{align}\label{eq:prachar} \sum_{\substack{n \leq x \\ n \equiv a \Mod{q}}} \mathbf{1}_S(n) \sim B_q x(\log x)^{-1/2}, \end{align} where $$B_q = B q^{-1} \frac{(4,q)}{(2,q)} \prod_{\substack{p\equiv 3 \Mod{4} \\ p|q}} (1+ p^{-1}) \qquad \text{and} \qquad B= \frac{1}{\sqrt{2}} \prod_{p\equiv 3 \Mod{4}}(1-p^{-2})^{-1/2}.$$ Thus, (3) holds since $(\log x)^{-1/2} \asymp (\log x')^{-1/2}$. Condition (4) is satisfied since \begin{align*} \sum_{j= \lfloor (\log \log x)^2/\log 2 \rfloor}^{\log x/\log 2} (\log 2^j)^{-1/2} &= \sum_{j= \lfloor (\log \log x)^2/\log 2 \rfloor}^{\log x/\log 2} (j\log 2)^{-1/2}\\ &\ll (\log x)^{1-1/2} - (\log \log x) \ll (\log x)^{1-1/2}. \end{align*} Indeed, dividing by $\log x$ and invoking \eqref{eq:prachar}, the left hand side of (4) is seen to be $\ll (\log x)^{-1/2}\,Wq\,B_{Wq} \asymp S_{|f|}(x;Wq,A)$, as required. To check (2), suppose that $A \in (\mathbb{Z}/Wq\mathbb{Z})^*$ is representable as a sum of two squares and observe that this implies $$ 1 \ll \phi(Wq)~\frac{D_i}{x} \hspace{-.3cm} \sum_{\substack{p \leq x/D_i \\ p \equiv A \Mod{Wq}}} \hspace{-.3cm} \mathbf{1}_S(p) \log p \ll (\log x) ~\frac{WqD_i}{x} \hspace{-.3cm} \sum_{\substack{n \leq x/D_i \\ n \equiv A \Mod{Wq}}} \hspace{-.3cm} \mathbf{1}_S(n). $$ Thus, \begin{align*} \frac{WqD_i}{x} \sum_{\substack{n \leq x/D_i \\ n \equiv A \Mod{Wq}}} \mathbf{1}_S^2(n) &= \frac{WqD_i}{x} \sum_{\substack{n \leq x/D_i \\ n \equiv A \Mod{Wq}}} \mathbf{1}_S(n) \\ &\ll \log x \Bigg( \frac{WqD_i}{x} \sum_{\substack{n \leq x/D_i \\ n \equiv A \Mod{Wq}}} \mathbf{1}_S(n) \Bigg)^2. \end{align*} Inserting this into the left hand side of (2) with $D_1=1$, we see that (2) is satisfied with $\theta_f=1/2$. \end{example}

\begin{example} \label{ex:r} As a final example, we consider the function $$f(n)=r(n):=\frac{1}{4}~\#\Big\{(x,y): x^2+y^2=n\Big\}$$ and show that $f \in \mathcal{M}_2(E)$ for all $E=O(1)$. We decompose $r$ as $r=\mathbf{1}'_S * \mathbf{1}''_{S}$, where $\mathbf{1}'_{S}(n) = \mu^2(n) \mathbf{1}_S(n) \mathbf{1}_{2\nmid n}$ for the function $\mathbf{1}_S$ from the previous example, and $\mathbf{1}''_{S}$ is chosen in such a way that the convolution identity holds. In particular, $\mathbf{1}''_{S}(n) \mu^2(n) = \mathbf{1}_S(n) \mu^2(n)$. It is not difficult to check that both functions $\mathbf{1}'_S$ and $\mathbf{1}''_S$ satisfy condition (1). In both cases, $f_i=\mathbf{1}'_S$ and $f_i=\mathbf{1}''_S$, we have $$ \frac{\phi(Wq)D_i}{x} \sum_{\substack{p \leq x/D_i \\ p \equiv A' \Mod{Wq}}} f_i(p) \log p \gg 1 $$ whenever $A' \in (\mathbb{Z}/Wq\mathbb{Z})^*$ is a sum of two squares. Furthermore, either of the conditions $\mathbf{1}'_{S}(n) > 0$ and $\mathbf{1}''_{S}(n) > 0$ implies $\mathbf{1}_{S}(n) > 0$. Thus, if $A \in (\mathbb{Z}/Wq\mathbb{Z})^*$ is a sum of two squares and $\mathbf{1}'_S(D_1)>0$ or $\mathbf{1}''_S(D_1)>0$ for some $D_1$ with $D_1 \bar{D_1} \equiv 1 \Mod{Wq}$, then $\bar{D_1}A \in (\mathbb{Z}/Wq\mathbb{Z})^*$ is a sum of two squares. Thus, let $A' \in (\mathbb{Z}/Wq\mathbb{Z})^*$ be a sum of two squares. Then \begin{align*} \frac{WqD_i}{x} \sum_{\substack{n \leq x/D_i \\ n \equiv A' \Mod{Wq}}} f_i(n)^2 &= \frac{WqD_i}{x} \sum_{\substack{n \leq x/D_i \\ n \equiv A' \Mod{Wq}}} \tilde{f}_i(n) \\ &\ll \log x \Bigg( \frac{WqD_i}{x} \sum_{\substack{n \leq x/D_i \\ n \equiv A' \Mod{Wq}}} \tilde{f}_i(n) \Bigg)^2, \end{align*} where $\tilde{f}_i$ is multiplicatively defined via $\tilde{f}_i(p)=f_i(p)$ at primes and $\tilde{f}_i(p^k)=f_i(p^k)^2$ when $k \geq 2$; note that $f_i(p) \in \{0,1\}$, so that $f_i^2=\tilde{f}_i$. Condition (2) can now be deduced with the help of Shiu's lemma (cf.\ Lemma \ref{l:shiu}).
We refer to the proof of Lemma \ref{l:general-f} for more details. Properties (3) and (4) follow in a similar way as in the previous example. The trick is that $$S_{\mathbf{1}_S}(x;Wq,A) \asymp S_{h}(x;Wq,A)$$ for $h(n)=\mu^2(n)\mathbf{1}_S(n)$, which allows us to obtain lower bounds on $S_{f_i}(x;Wq,A)$ that match the upper bounds provided by Shiu's lemma. \end{example}

\addtocontents{toc}{\SkipTocEntry} \subsection*{Remarks on the individual conditions in Definition \ref{def:M}}

$(i)~$ If $f$ is a function that enjoys good second moment estimates, then the parameter $t$ in Definition \ref{def:M} can be chosen to equal $1$ in order to ensure that condition (2) holds. It is not difficult to show that any bounded multiplicative function $f$ satisfies condition (2) for $t=1$ if it satisfies density conditions of the form $$ \sum_{\substack{p\leq x \\ p \equiv A \Mod{Wq}}} |f(p)| \log p \gg \frac{x}{\phi(Wq)}. $$ More generally, we shall show in Lemma \ref{l:general-f} that any non-negative $f:\mathbb{N} \to \mathbb{R}_{\geq 0}$ that satisfies $f(p^k) \leq H^k$ at prime powers admits a Dirichlet decomposition $f=f_1 * \dots * f_H$ with properties (1) and (2) if it satisfies density conditions on the set of primes like the one above and if it has Shiu-type average values in arithmetic progressions. The decomposition we work with in Lemma \ref{l:general-f} is given by $$ f_i(p^k) = \begin{cases} f(p)/H & \mbox{if $k=1$}\\ 0 & \mbox{if $k>1$} \end{cases} $$ for $1 \leq i < H$, and $f_H$ is defined in such a way that $f=f_1 * \dots * f_H$ holds.

$(ii)~$ Observe that condition (3) is not a strong restriction, since the bound $|f_i(p^k)| \leq (CH)^k$ implies (cf.\ Lemma \ref{l:shiu}) that $$ S_{|f_i|}(x;Wq,r) \ll (\log x)^{O(H)} $$ for any positive integer $q \leq x^{1/2}$ and $r \in (\mathbb{Z}/Wq\mathbb{Z})^*$.

$(iii)~$ Setting $R(t)=S_{|f_i|}(2^t; Wq,r)$, condition (4) may be written as $$ \int_{(\log y)^2}^y R(t) ~dt \ll y R(y). $$ If $R$ were differentiable, this would translate to $R(t) \ll R(t) + tR'(t)$. That is, there is $c>0$ such that $- R'(t) < (1-c) (R(t)/t)$, or, equivalently, $\frac{R'(t)}{R(t)} > (-1 + c) \frac{1}{t}$. Integrating yields that for each solution there are constants $C_1>0$ and $C_2$ such that $R(t) > C_1t^{-1+c} + C_2$, which essentially means that $|f_i|$ is strictly denser than the set of primes.

\vspace*{1 \baselineskip} In view of the final observation from above, we may, without imposing further restrictions, add the following assumption to the list from Definition \ref{def:M}. This will lead to a slight simplification of the error terms we obtain in Section \ref{s:non-corr} (cf. Section \ref{ss:D-contribution}). \begin{definition}\label{def:M*} Let $\mathcal{M}^*_H(E)$ denote the class of multiplicative functions $f \in \mathcal{M}_H(E)$ for which the decomposition $f = f_1 * \dots * f_t$ from Definition \ref{def:M} has the additional property that there is a constant $\alpha_f > 0$ such that for all $x>1$ and all positive integers $q \leq (\log x)^E$ the following estimate holds: \begin{equation}\label{eq:mean-fi-lowerbound} \max_{1\leq i \leq t} \max_{A \in (\mathbb{Z}/\widetilde{W}q\mathbb{Z})^*} S_{|f_i|}(x;\widetilde{W}q,A) \gg (\log x)^{-1 + \alpha_f}. \end{equation} Here, $\widetilde{W}=\widetilde{W}(x)$ denotes the modulus that will be defined in Section \ref{s:W}. \end{definition}

\addtocontents{toc}{\SkipTocEntry} \subsection*{Aim and motivation} As mentioned before, the purpose of this paper is to study correlations of multiplicative functions, more specifically of functions from $\mathcal{M}^*_H(E)$, with polynomial nilsequences.
In general, such correlations can only be shown to be small if either the nilsequence is highly equidistributed or the multiplicative function is equidistributed in progressions with short common difference. We will consider both cases, the former in Proposition \ref{p:equid-non-corr} and the latter in Theorem \ref{t:non-corr}. Taking account of the above-mentioned restriction, the latter result applies to the subset $\mathcal{F}_H(E) \subset \mathcal{M}^*_H(E)$ of functions that satisfy the \emph{major arc condition} that we will introduce as \eqref{eq:major-arc} in Section \ref{s:W}. In short, there is for every $f \in \mathcal{F}_H(E)$ a product $\widetilde W = \widetilde W(x)$ of small prime powers such that $f$ has a constant average value on certain subprogressions of the progression $\{n \equiv A \Mod{\widetilde W}\}$ for any fixed residue $A \in (\mathbb{Z}/\widetilde W \mathbb{Z})^*$.

Instead of the bound \eqref{eq:fourier-bound} on Fourier coefficients of $f$, we aim to show that every $f \in \mathcal{F}_H(E)$ satisfies \begin{align}\label{eq:general-fourier} \nonumber &\frac{\widetilde{W}}{x} \sum_{n \leq x/\widetilde{W}} \Big(f(\widetilde{W}n+A) - S_f(x;\widetilde{W},A)\Big) F(g(n)\Gamma)\\ &= o_{G/\Gamma}\bigg( \frac{1}{\log x} \exp\bigg( \sum_{w(x) < p \leq x} \frac{|f_1(p)|+\dots+|f_t(p)|}{p} \bigg) \bigg) \end{align} for all $1$-bounded polynomial nilsequences $F(g(n)\Gamma)$ of bounded degree and bounded Lipschitz constant that are defined with respect to a nilmanifold $G/\Gamma$ of bounded step and bounded dimension. The precise statement will be given in Section \ref{s:non-corr}.

The interest in estimates of the form \eqref{eq:general-fourier} lies in the fact that the Green--Tao--Ziegler inverse theorem \cite{GTZ} allows one to deduce that $f(\widetilde Wn+A) - S_f(x;\widetilde W,A)$ has small $U^k$-norms of all orders, where `small' may depend on $k$. Employing the nilpotent Hardy--Littlewood method of Green and Tao \cite{GT-linearprimes}, this in turn allows one to deduce asymptotic formulae for expressions of the form \begin{align} \label{eq:star} \sum_{\mathbf{x} \in K \cap \mathbb{Z}^s} f(\varphi_1(\mathbf{x})+a_1) \dots f(\varphi_r(\mathbf{x})+a_r), \end{align} where $a_1, \dots, a_r \in \mathbb{Z}$, where $\varphi_1, \dots, \varphi_r : \mathbb{Z}^s \to \mathbb{Z}$ are pairwise non-proportional linear forms, and where $K \subset \mathbb{R}^s$ is convex, \emph{provided} that $f$ has a sufficiently pseudorandom majorant. Such majorants were constructed for a general class of positive multiplicative functions with divisor-function-like growth conditions in \cite[\S7]{bm}.

We note in passing that in the special case where the nilsequence is replaced by $e^{2\pi i n \alpha}$ for irrational $\alpha$ the approach of this paper yields for $f \in \mathcal{M}_{H}(E)$ with $E=O(1)$ that $$ \frac{1}{x} \sum_{n \leq x} f(n) e^{2\pi i n \alpha} = o\Big( \frac{1}{x} \sum_{n \leq x} |f_1| * \dots * |f_t|(n) \Big). $$ (Here, no $W$-trick is required, since the sequence $n \mapsto e^{2\pi i n \alpha}$ is equidistributed and we have $\int_{\mathbb{R}/\mathbb{Z}} e^{2\pi i t} ~dt = 0$.) Restricting to non-negative functions $f$ for which $\sum_{n \leq x} f(n) \asymp \sum_{n \leq x} |f_1| * \dots * |f_t|(n)$, this provides a new family of examples of multiplicative functions $f$ that satisfy the condition of a question due to Dupain, Hall and Tenenbaum~\cite[p.410]{DHT}.
We refer to Bachman \cite{Bachman} for further results and a more detailed discussion of the Dupain--Hall--Tenenbaum question.

\addtocontents{toc}{\SkipTocEntry} \subsection*{Strategy and related work} Our overall strategy is to decompose the given multiplicative function via Dirichlet decomposition in such a way that we can employ the Montgomery--Vaughan approach to the individual factors. This approach reduces matters to bounding correlations of sequences defined in terms of primes. We apply Green and Tao's result \cite[Prop.\ 10.2]{GT-linearprimes} on the correlation of the `$W$-tricked von Mangoldt function' with nilsequences to bound one type of correlation that appears. Carrying out the Montgomery--Vaughan approach in the nilsequence setting furthermore necessitates an understanding of the equidistribution properties of certain families of product nilsequences that arise after an application of the Cauchy--Schwarz inequality. These products are studied in Section \ref{s:products}, refining techniques introduced in \cite{GT-nilmobius}. More precisely, we show that most of these products are equidistributed when the original sequence from which the products are derived is equidistributed. The latter can be achieved by the Green--Tao factorisation theorem for nilsequences from \cite{GT-polyorbits}.

The question studied in this paper is in spirit related to that of Bourgain--Sarnak--Ziegler~\cite{BSZ}, who use an orthogonality criterion that can be proved by employing ideas that go back to Daboussi--Delange \cite{dab} (cf.\ Harper \cite{Harper} and Tao \cite{Tao}). Invoking the orthogonality criterion in the form in which it is presented in K\'atai \cite{katai}, recent and very substantial work of Frantzikinakis and Host \cite{FH} shows that every bounded multiplicative function can be decomposed into the sum of a Gowers-uniform function, a structured part and an error term. Here, the error term is small in the sense that the integral of the error term over the space of all $1$-bounded multiplicative functions is small. While this result provides no information on the quality of the error term of individual functions, it allows one to study all bounded multiplicative functions simultaneously.

The point of view taken in the present work is a different one: we have applications to explicit multiplicative functions in mind that appear naturally in number theoretic contexts. Such functions will often have a mean value $\frac{1}{x}\sum_{n \leq x} f(n)$ that is given by a reasonably nice function in $x$. We assume here that for our explicit function $f$ the conditions from Definition \ref{def:M} can be verified. In order to deduce asymptotic formulae for expressions of the form \eqref{eq:star}, it is indispensable to understand the error terms in the non-correlation result well enough to see that they improve on the trivial bound given by the average value of $|f|$. The non-correlation result (Theorem \ref{t:non-corr}) we establish comes with explicit error terms which preserve at least some information on the average value of the multiplicative function. An important feature of this work is that it applies to a large class of unbounded functions. The results of this paper have several natural applications involving, for instance, the functions considered in Examples \ref{ex:d}--\ref{ex:r} but also more interesting functions like the absolute value of eigenvalues of holomorphic cusp forms. We will discuss these applications in forthcoming work.
\subsection*{Notation} \addtocontents{toc}{\SkipTocEntry} The following, perhaps unusual, piece of notation will be used throughout the paper: if $\delta \in (0,1)$, then we write $x=\delta^{-O(1)}$ instead of $x=(1/\delta)^{O(1)}$ to indicate that there is a constant $0 \leq C \ll 1$ such that $x = (1/\delta)^{C}$.

\subsection*{Convention} \addtocontents{toc}{\SkipTocEntry} If the statement of a result contains Vinogradov or $O$-notation in the assumptions, then the implied constants in the conclusion may depend on all implied constants from the assumptions.

\begin{ack} I would like to thank Hedi Daboussi, R\'egis de la Bret\`eche, Nikos Frantzikinakis, Andrew Granville, Ben Green, Dimitris Koukoulopoulos and Terry Tao for very helpful discussions and Nikos Frantzikinakis for valuable comments on a previous version of this paper. \end{ack}

\section{Brief outline of some ideas} In this section we give a very rough outline of the ideas behind the application of the Montgomery--Vaughan approach. This section is only intended as guidance. In particular, we make a number of oversimplifications, so that statements will not be quite true as stated here. The main idea of Montgomery--Vaughan \cite{mv77} is to introduce a $\log$ factor into the Fourier coefficient that we wish to analyse. Let $f:\mathbb{N} \to \mathbb{R}$ be a multiplicative function that satisfies $|f(p)| \leq H$ for some constant $H \geq 1$ and all primes $p$ and suppose \eqref{eq:moment-2} holds. Then we have \begin{align*} \Big|\sum_{n \leq N} f(n) e(n \alpha) \log(N/n)\Big| \leq \Big( \sum_{n\leq N} (\log (N/n))^2 \Big)^{1/2} \Big( \sum_{n\leq N} |f(n)|^2 \Big)^{1/2} \ll N^{1/2} \Big( \sum_{n\leq N} |f(n)|^2 \Big)^{1/2}, \end{align*} and thus \begin{align*} \log N \Big|\frac{1}{N}&\sum_{n \leq N} f(n) e(n \alpha) \Big| \ll \Big(\frac{1}{N} \sum_{n\leq N} |f(n)|^2 \Big)^{1/2} + \Big|\frac{1}{N} \sum_{n \leq N} f(n) e(n \alpha) \log n \Big|. \end{align*} The first term in the bound is handled by the assumptions on $f$, that is, by assuming that \eqref{eq:moment-2} holds. To bound the second term, one invokes the identity $\log n = \sum_{d|n} \Lambda(d)$, which reduces the task to bounding the expression $$ \sum_{nm \leq N} f(nm) \Lambda(m) e(nm \alpha). $$ This in turn may be reduced to the task of bounding $$ \sum_{np \leq N} f(n)f(p) \Lambda(p) e(pn \alpha), $$ where $p$ runs over primes. Applying the Cauchy--Schwarz inequality and smoothing, it furthermore suffices to estimate expressions of the form $$ \sum_{p,p'} f(p)f(p') \log(p) \log(p') \sum_{n} w(n) e((p-p')n \alpha), $$ where $p$ and $p'$ run over primes and where $w$ is a smooth weight function. One employs a standard sieve estimate to bound $\#\{(p,p'):p-p'=h\}$ for fixed $h$. Standard exponential sum estimates and a delicate decomposition of the summation ranges for $n, p, p'$ yield an explicit bound on $\frac{1}{N}\sum_{n \leq N} f(n) e(n \alpha)$.

It will be our aim to employ the above approach to correlations of the form $$\frac{1}{N} \sum_{n \leq N} \Big(f(n) - \frac{1}{N}\sum_{m\leq N}f(m)\Big) F(g(n)\Gamma)$$ for multiplicative $f$. One problem we face is that the above approach makes substantial use of the strong equidistribution properties of the exponential functions $e((p-p') n \alpha)$ for distinct primes $p,p'$. A general nilsequence $F(g(n)\Gamma)$ on the other hand is not even equidistributed.
This problem is resolved by an application of the factorisation theorem for polynomial sequences from Green--Tao \cite{GT-polyorbits}, which allows us to assume that $(g(n)\Gamma)_{n \leq N}$ is equidistributed in $G/\Gamma$ if $f$ is equidistributed in progressions to small moduli. The latter will be arranged for by employing a $W$-trick. As above, we then consider the following expression which we split into the case of large, resp.\ small primes with respect to a suitable cut-off parameter $X$: \begin{align*} \frac{1}{N} \sum_{mp \leq N} f(m) f(p) \Lambda(p) F(g(mp) \Gamma) & = \frac{1}{N} \sum_{m \leq X} \sum_{p \leq N/m} f(m) f(p) \Lambda(p) F(g(mp)\Gamma) \\ &\quad + \frac{1}{N} \sum_{m > X} \sum_{p \leq N/m} f(m) f(p) \Lambda(p) F(g(mp)\Gamma). \end{align*} Applying Cauchy-Schwarz to both terms shows that it suffices to understand correlations of the form \begin{align*} \sum_{m,m'} f(m) f(m') \sum_{p} \Lambda(p) F(g(mp) \Gamma) \overline{F(g(m'p) \Gamma)} \end{align*} and \begin{align*} \sum_{p,p'} f(p) f(p') \Lambda(p) \Lambda(p') \sum_{m} F(g(pm) \Gamma) \overline{F(g(p'm) \Gamma)}. \end{align*} Choosing $X$ suitably, only the first of these correlations matters. We shall bound this correlation by employing Green and Tao's result that the $W$-tricked von Mangoldt function is orthogonal to nilsequences. The necessary equidistribution properties of the sequences $n \mapsto F(g(mn) \Gamma) \overline{F(g(m'n) \Gamma)}$ will be established in Sections \ref{s:linear-subsecs} and \ref{s:products}. We explain at the beginning of Section \ref{s:proof-of-prop} how the Dirichlet decomposition $f = f_1 * \dots * f_t$ from Definition \ref{def:M} can be employed to use this method for functions $f$ that enjoy property (2) of Definition \ref{def:M} in place of the moment condition \eqref{eq:moment-2}. \section{Digression on the Dirichlet decomposition of non-negative $f$} \label{s:decomposition} The Montgomery--Vaughan approach outlined in the previous section requires good control of the $L^2$-norm of the multiplicative function $f$ in terms of its mean value. Let $N$ and $T$ be positive integer parameters that satisfy $N^{1-o(1)} \ll T \ll N$, and write $W=W(N)$. A function $f: \mathbb{N} \to \mathbb{R}$ will exhibit a sufficient amount of control if it satisfies the bound \begin{align}\label{eq:second-moment} \sqrt{\frac{Wq}{T}\sum_{\substack{n \leq T \\ n \equiv A' \Mod{Wq}}} |f(n)|^2} ~ \ll (\log N)^{1-\theta}~ \frac{Wq}{T}\sum_{\substack{n\leq T \\ n \equiv A' \Mod{Wq} }} |f(n)| \end{align} for all positive integers $q \leq (\log N)^{O(1)}$, for all $A' \in (\mathbb{Z}/Wq\mathbb{Z})^*$, and for some $\theta \in (0,1]$. However, the condition \eqref{eq:second-moment} is rather strong and excludes many interesting examples of multiplicative functions such as the divisor function or the function that counts representations as sums of two squares. In order to extend the class of multiplicative functions that we can handle, we consider decompositions of $f$ as the Dirichlet convolution $f=f_1* \dots * f_t$ of a bounded number of multiplicative functions $f_i:\mathbb{N} \to \mathbb{R}$ such that the $L^2$-norms of the $f_i$ are controlled on average by the mean value of $f$. For given integers $d_1,\dots, d_t$ and $1 \leq i \leq t$, we write $D_i:= \prod_{j \not= i} d_j$. 
Then the following condition forms a suitable generalisation of \eqref{eq:second-moment}: \begin{align}\label{eq:expectation-condition*} \nonumber \sum_{i=1}^t \sum_{\substack{d_1,\dots,\hat{d_i},\dots,d_t \\ (D_i,W)=1 \\ D_i \leq T^{1-1/t}}} &\bigg( \prod_{j \not= i} \frac{|f_j(d_j)|}{d_j} \bigg) \sqrt{\frac{D_iWq}{T}\sum_{\substack{n \leq T/D_i \\ n D_i \equiv A' \Mod{Wq}}} |f_i(n)|^2}\\ &\ll (\log T)^{1-\theta_f} ~ \frac{Wq}{T}\sum_{\substack{n\leq T \\ n \equiv A' \Mod{Wq} }} |f(n)| \end{align} for some quantity $\theta_f \in (0,1]$. Observe that the bound on $D_i$ guarantees that every second moment that occurs on the left hand side has a long summation range. The special case $t=1$ corresponds to the bound \eqref{eq:second-moment}.

Since the condition \eqref{eq:expectation-condition*} might look rather restrictive, the purpose of this section is to give some evidence for the applicability of this approach. The following lemma provides sufficient conditions for the existence of Dirichlet decompositions $f=f_1 * \dots * f_t$ with the property that \eqref{eq:expectation-condition*} holds.

\begin{lemma} \label{l:general-f} Let $E$ and $H$ be positive integers. Let $f$ be a multiplicative function that satisfies $|f(p^k)| \leq H^k$ at all prime powers $p^k$. Suppose that $N$, $T$ and $W$ are as before, and suppose as well that for all positive integers $q \leq (\log N)^{E}$ and for all $A' \in (\mathbb{Z}/Wq\mathbb{Z})^*$ such that $S_{|f|}(x;Wq,A')>0$ the following two conditions are satisfied: \begin{equation} \label{eq:fp_logp} \sum_{\substack{p \leq T \\ p \equiv A' \Mod{Wq}}} |f(p)| \log p \gg T/\phi(Wq) \end{equation} and \begin{equation} \label{eq:shiu-assumption} S_{|f|}(x;Wq,A') \asymp \frac{1}{\phi(Wq)} \frac{1}{\log x} \exp\bigg( \sum_{\substack{p\leq x\\p\nmid Wq}} \frac{|f(p)|}{p}\bigg). \end{equation} Then $f$ admits a Dirichlet decomposition $f = f_1 * \dots * f_H$ of real-valued multiplicative functions $f_i$ such that the condition \eqref{eq:expectation-condition*} holds for $t=H$ and $\theta_f= \frac{1}{2}$. \end{lemma}

\begin{rem} Note that \eqref{eq:shiu-assumption} agrees with the bound predicted by Shiu's lemma. Assuming all the conditions of the lemma except for \eqref{eq:shiu-assumption}, Wirsing's result \cite[Satz 1.1]{wirsing} implies that $$\sum_{n \leq x} |f(n)| \asymp \frac{x}{\log x} \exp\bigg( \sum_{\substack{p\leq x}} \frac{|f(p)|}{p}\bigg). $$ Thus, condition \eqref{eq:shiu-assumption} is satisfied on a positive proportion of residues $A' \in (\mathbb{Z}/W(N)q\mathbb{Z})^*$. \end{rem}

Before we prove the lemma, we record a quick consequence of Shiu \cite[Theorem~1]{shiu} that we shall make use of. \begin{lemma}[Shiu] \label{l:shiu} Let $H$ be a positive integer and suppose $f: \mathbb{N} \to \mathbb{R}$ is a non-negative multiplicative function satisfying $f(p^k) \leq H^k$ at all prime powers $p^k$. Let $W=W(x)$ be as before, let $q>0$ be an integer and let $A' \in (\mathbb{Z}/Wq \mathbb{Z})^*$. Then, as $x \to \infty$, \begin{align}\label{eq:shiu} \sum_{\substack{0< n \leq x \\ n \equiv A' \Mod{Wq}}} f(n) \ll \frac{x}{\phi(Wq)} \frac{1}{\log x} \exp\bigg(\sum_{\substack{w(x) < p \leq x \\ p \nmid q}} \frac{f(p)}{p}\bigg)~, \end{align} uniformly in $A'$ and $q$, provided that $q\leq x^{1/2}$. \end{lemma} \begin{proof} This lemma differs from \cite[Theorem~1]{shiu} in that it does not concern short intervals but at the same time it does not require $f$ to satisfy $f(n) \ll_{\varepsilon} n^{\varepsilon}$.
Shiu's result works with a summation range $x-y < n \leq x$ where $x^{\beta} < y \leq x$, $\beta \in (0,\frac{1}{2})$. Here, we are only interested in the case $x=y$. Thus, the parameter $\beta$ can be regarded as fixed. As observed in \cite{nair-tenenbaum}, the proof of \cite[Theorem~1]{shiu} only requires the condition $f(n) \ll_{\varepsilon} n^{\varepsilon}$ to hold for one fixed value of $\varepsilon$ once $\beta$ is fixed. Note that any integer $n \equiv A' \Mod{Wq}$ is free from prime divisors $p < w(x)$. Thus, $f(n) \leq H^{\Omega(n)} \leq n^{\log H / \log w(x)}$. Given any $\varepsilon > 0$, we deduce that $n \equiv A' \Mod{Wq}$ implies $f(n) \leq n^{\varepsilon}$ provided $x$ is sufficiently large. \end{proof}

\begin{proof}[Proof of Lemma \ref{l:general-f}] We define $f_i$ for $1\leq i \leq H - 1$ multiplicatively by setting $$ f_i(p^k) = \begin{cases} f(p)/H & \mbox{if $k=1$}\\ 0 & \mbox{if $k>1$} \end{cases}. $$ The remaining function $f_H$ is defined in such a way that $f=f_1*\dots*f_H$ holds true. Thus, we have $f_H(p)= f(p)/H$ as well and the values at higher prime powers $p^k$ are inductively defined via $f(p^k) = f_1 * \dots * f_H(p^k)$. This yields \begin{align*} f_H(p^k) = f(p^k)- & \left( f_H(p^{k-1}) \frac{f(p)}{H} (H-1) + f_H(p^{k-2}) \left(\frac{f(p)}{H}\right)^2 \binom{H-1}{2} + \dots \right. \\ &\quad \left. + f_H(p^{k-H+1}) \left(\frac{f(p)}{H}\right)^{H-1} \binom{H-1}{H-1} \right). \end{align*} We claim that $|f_H(p^k)| \ll (3H)^k$. Indeed, for $k=1$ we have $|f(p)|/H \leq 1$. An induction hypothesis of the form $|f_H(p^j)| \ll (3H)^j$ for all $j<k$ leads to the conclusion that $$ |f_H(p^k)| \ll H^k + 3^{k-1}H^k \left(1+ \frac{1}{2!} + \dots + \frac{1}{(H-1)!}\right) \ll (3H)^k. $$ Note that the functions $|f_1|, \dots, |f_{H-1}|$ take values in $[0,1]$. Hence, $$ \frac{Wq}{T} \sum_{\substack{n \leq T \\ n \equiv A' \Mod{Wq}}} f_i^2(n) \leq \frac{Wq}{T} \sum_{\substack{n \leq T \\ n \equiv A'\Mod{Wq}}} |f_i(n)|. \qquad(1 \leq i \leq H-1) $$ Furthermore, we have \begin{align*} \frac{Wq}{T} \sum_{\substack{n \leq T \\ n \equiv A'\Mod{Wq}}} |f_i(n)| \ll \frac{\log T}{\log w(N)} \Bigg( \frac{Wq}{T} \sum_{\substack{n \leq T \\ n \equiv A'\Mod{Wq}}} |f_i(n)| \Bigg)^2, \end{align*} since $$ \frac{\log T}{T}\frac{Wq}{\log w(N)} \sum_{\substack{n \leq T \\ n \equiv A'\Mod{Wq}}} |f_i(n)| \geq \frac{\phi(Wq)}{T} \sum_{\substack{p \leq T \\ p \equiv A'\Mod{Wq}}} |f_i(p)| \log p \gg_H 1, $$ by the assumptions on $f$. In the case of $f_H$, we consider the auxiliary function $\tilde f_H:\mathbb{N} \to \mathbb{R}_{\geq 0}$, defined multiplicatively by $$ \tilde f_H(p^k) = \begin{cases} |f_H(p^k)| & \mbox{if $p<w(N)$},\\ |f_H(p)| & \mbox{if $k=1$, $p>w(N)$},\\ f_H(p^k)^2 & \mbox{if $k>1$, $p>w(N)$}. \end{cases} $$ Then, similarly as before, $$ \frac{Wq}{T} \sum_{\substack{n \leq T \\ n \equiv A' \Mod{Wq}}} f_H^2(n) \leq \frac{Wq}{T} \sum_{\substack{n \leq T \\ n \equiv A'\Mod{Wq}}} \tilde f_H(n) \ll \log T \Bigg( \frac{Wq}{T} \sum_{\substack{n \leq T \\ n \equiv A'\Mod{Wq}}} \tilde f_H(n) \Bigg)^2.
$$ Thus, setting $\tilde f_i = |f_i|$ for $1 \leq i < H$, we obtain an upper bound for the left hand side of \eqref{eq:expectation-condition*} of the form \begin{align*} &\ll \frac{1}{(\log T)^{1/2}} \sum_{i=1}^H \sum_{\substack{d_1,\dots,\hat{d_i},\dots,d_H \\ (D_i,W)=1}} \bigg( \prod_{j \not= i} \frac{\tilde f_j(d_j)}{d_j} \bigg) \frac{D_iW q}{T}\sum_{\substack{n \leq T/D_i \\ n D_i \equiv A' \Mod{Wq}}} \tilde f_i(n) \\ &\ll_H \frac{Wq}{T(\log T)^{1/2}} \sum_{\substack{n\leq T \\ n \equiv A' \Mod{Wq}}} \tilde f_1 * \dots *\tilde f_H(n). \end{align*} The functions $\tilde f_1 * \dots *\tilde f_H$ and $|f|$ agree at square-free integers $n$ and the remaining difficulty is to bound the contribution of values at integers that are not square-free. Note that $\tilde f_1 * \dots *\tilde f_H(p^k) \leq (CH)^k$. Thus, Shiu's bound \eqref{eq:shiu} applied to the progression $n \equiv A' \Mod{Wq}$ yields \begin{align*} \frac{Wq}{T} \sum_{\substack{n \leq T \\ n \equiv A' \Mod{Wq}}} \tilde f_1 * \dots *\tilde f_H (n) &\ll \frac{W}{\phi(W)} \frac{1}{\log T} \exp\left( \sum_{w(N)< p< T} \frac{|f(p)|}{p} \right). \end{align*} The latter expression is $\asymp S_{|f|}(N;Wq,A')$ by assumption, which completes the proof. \end{proof}

\section{$W$-trick} \label{s:W} In the same way as the bounds mentioned in the introduction only apply to Fourier coefficients $\frac{1}{x}\sum_{n\leq x} f(n) e(\alpha n)$ at an irrational phase $\alpha$, it is the case that an arbitrary multiplicative function $f$ will most likely \emph{correlate} with a given nilsequence, unless this sequence itself is sufficiently equidistributed. Thus, statements of the form $$\frac{1}{N}\sum_{n\leq N} h(n)F(g(n)\Gamma)=o_{G/\Gamma}(1)$$ with $h=f$ or $h = f - S_f(N;1,1)$ cannot be expected to hold in general. On the other hand, it turns out to be sufficient to ensure that $h$ is equidistributed in progressions to small moduli in order to resolve this problem. For arithmetic applications such as establishing a result of the form \eqref{eq:star}, this can be achieved with the help of the $W$-trick from \cite{GT-longprimeaps}. The basic idea is to decompose $f$ into a sum of functions that are equidistributed in progressions to small moduli. This is achieved by decomposing the range $\{1,\dots, N\}$ into subprogressions modulo a product $W(N)$ of small primes, which has the effect of fixing or eliminating the contribution from small primes on each of the subprogressions.

For multiplicative functions some minor modifications are necessary. Our aim is to decompose the interval $\{1, \dots, N\}$ into subprogressions $r \Mod{q}$ in such a way that $$S_f(N;q,r)=(1+o(1))S_f(N;qq',r + qr')$$ for small $q'$. Thus, $f$ should essentially have a constant average value when decomposing one of the given subprogressions into further subprogressions of small moduli $q'$. The example of the characteristic function of sums of two squares shows that we cannot in general choose $q$ to be a product of small primes (consider the case where $r \equiv 1 \Mod{2}$, $q'=2$ and $r+qr' \equiv 3 \Mod{4}$), but rather need to allow $q$ to be a product of small prime powers. Recall that an integer is said to be $k$-smooth if it is free from prime factors exceeding $k$. Furthermore, recall from Section \ref{s:introduction} that $w(x) \leq \log \log x$ for all sufficiently large $x$, and that $W(x) = \prod_{p\leq w(x)} p$. Thus, $\log W(x) = \sum_{p \leq w(x)} \log p \sim w(x)$ and hence $$ W(x) \leq (\log x)^{1+o(1)}.
$$ Let $q^*: \mathbb{N} \to \mathbb{N}$ be a function that satisfies $q^*(x) \leq (\log x)^{H}$ for all sufficiently large integers $x>1$ and such that $q^*(x)$ is a $w(x)$-smooth integer for every $x \in \mathbb{N}$. Let $$\widetilde{W}(x)=q^*(x)W(x).$$ The factor $q^*$ will, for instance, allow us to consider the representation function of sums of two squares or the representation function of a more general norm form with this $W$-trick. We decompose the range $[1,N]$ into subprogressions of the form \begin{align*} \Big\{ 1 \leq m \leq N : m \equiv w_1A \Mod{w_1\widetilde{W}(N)} \Big\}, \end{align*} where $A \in (\mathbb{Z}/\widetilde{W}(N)\mathbb{Z})^*$ and where $w_1 \geq 1$ is $w(N)$-smooth. Abbreviating $\widetilde{W}=\widetilde{W}(N)$, we have $\gcd(w_1, \widetilde{W}n+A)= 1$ and hence $f(w_1(\widetilde{W}n+A)) = f(w_1) f(\widetilde{W}n+A)$. Thus, it suffices to study the family of functions $$ \left\{n \mapsto f(\widetilde{W}n+A) : \begin{array}{l} 0<A<\widetilde{W}(N) \\ \gcd(A, \widetilde W)=1 \end{array} \right\}.$$ Since large values of $w_1$ form a sparse set, it is possible to bound their contribution to the sum $\sum_{n \leq N} f(n)F(g(n)\Gamma)$ separately. In the remaining case, $w_1$ is bounded above, which ensures that the range on which each function $n \mapsto f(\widetilde{W}n+A)$ needs to be considered is always large. More precisely, combining the Cauchy--Schwarz inequality and a bound on the second moment of $f$, it is possible to discard the contribution from values of $w_1$ such that $v_p(w_1) > C_1 \log_p \log N$ for some prime $p \leq w(N)$ and a sufficiently large constant $C_1>1$; see \cite[\S3]{lmd} for details. Thus for the purpose of arithmetic applications, it suffices to consider $n \mapsto f(\widetilde{W}n+A)$ for $n \in \{1, \dots, T\}$ with $$T = \frac{N-A w_1}{w_1\widetilde{W}(N)} \gg N (\log N)^{-2H}\prod_{p<w(N)} (\log N)^{-C_1 -1} \gg N (\log N)^{-2H - 2C_1 \log \log N} \gg N^{1 - o(1)}.$$

\begin{definition}\label{d:F} Let $\mathcal{F}_H(E)$ denote the class of multiplicative functions from $\mathcal{M}^*_H(E)$ with the additional property that for each $f \in \mathcal{M}^*_H(E)$, there is \begin{enumerate} \item[(1)]~ a function $\varphi:\mathbb{N} \to \mathbb{R}$ such that $\varphi(x) = o(1)$ as $x\to\infty$, and \item[(2)]~ a function $q^*:\mathbb{N} \to \mathbb{N}$ such that $q^*(x)$ is $w(x)$-smooth and $q^* (x) \leq (\log x)^{H}$ for all sufficiently large $x$, \end{enumerate} such that for all intervals $I \subseteq \{1, \dots, x\}$ of length $|I|> x(\log x)^{-E}$, the estimate \begin{align}\label{eq:major-arc} \frac{q_0 \widetilde{W}(x)}{|I|} \sum_{\substack{ m \in I \\ m \equiv A \Mod{q_0 \widetilde{W}(x)}}} f(m) = S_{f}(x;\widetilde{W}(x),A) + O\bigg(\varphi(x)S_{|f|}(x;\widetilde{W}(x),A)\bigg) \end{align} holds uniformly for all integers $q_0$ such that $0 < q_0 \leq (\log x)^{E}$ and for all representatives $A \in \{1, \dots, q_0 \widetilde{W}(x)\}$ of invertible residue classes from $(\mathbb{Z}/q_0 \widetilde{W}(x)\mathbb{Z})^*$. Here $\widetilde{W}(x)=q^*(x)W(x)$, as before. \end{definition}

The estimate \eqref{eq:major-arc} will be referred to as `the major arc estimate'. We point out that a recent preprint of Irving \cite{Irving} obtains strong results on the error term in \eqref{eq:major-arc} in the case where $f$ is the divisor function.
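To illustrate the shape of \eqref{eq:major-arc} in the simplest possible case, take $f=\mathbf{1}$ and $q^*=1$. For any interval $I \subseteq \{1,\dots,x\}$ with $|I| > x(\log x)^{-E}$, any $0<q_0\leq(\log x)^E$ and any representative $A$ of a class in $(\mathbb{Z}/q_0\widetilde{W}(x)\mathbb{Z})^*$, counting the elements of the progression in $I$ gives
\[
\frac{q_0 \widetilde{W}(x)}{|I|} \sum_{\substack{m \in I \\ m \equiv A \Mod{q_0 \widetilde{W}(x)}}} 1
= 1 + O\Big(\frac{q_0 \widetilde{W}(x)}{|I|}\Big)
= S_{\mathbf{1}}(x;\widetilde{W}(x),A) + O\big(x^{-1}(\log x)^{2E+2}\big),
\]
where we used $W(x) \leq (\log x)^{1+o(1)}$. Thus \eqref{eq:major-arc} holds in this case with $\varphi(x)=x^{-1}(\log x)^{2E+2}$, say, which is certainly $o(1)$.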
We will show in Section \ref{ss:reduction} that despite the restriction to invertible residues $A \in (\mathbb{Z}/q_0 \widetilde{W}\mathbb{Z})^*$, the estimate \eqref{eq:major-arc} implies that $f(\widetilde{W}n+A) - S_f(x;\widetilde{W},A)$ does not correlate with periodic sequences of period at most $(\log x)^E$. This information will be used in combination with a factorisation theorem to reduce the task of proving non-correlation for $(f(\widetilde{W}n+A) - S_f(x;\widetilde{W},A))$ with general nilsequences to the case where the nilsequence enjoys certain equidistribution properties and the Lipschitz function satisfies, in particular, $\int_{G/\Gamma} F = 0$. \section{The non-correlation result} \label{s:non-corr} This section contains a precise statement of the main result, which, informally speaking, shows the following: for every multiplicative function $f \in \mathcal{F}_H(E)$, for every invertible residue $A \in (\mathbb{Z}/\widetilde{W}(N)\mathbb{Z})^*$ and for parameters $N$ and $T$ such that $N^{1-o(1)} \ll T \ll N$, the sequence $(f(\widetilde{W}n+A) - S_f(N;\widetilde{W},A))_{n \leq T}$ is orthogonal to polynomial nilsequences provided $E$ is sufficiently large with respect to $H$. In Section \ref{ss:reduction} we carry out a standard reduction of the main result to an equidistributed version, modelled on \cite[\S2]{GT-nilmobius}. \addtocontents{toc}{\SkipTocEntry} \subsection{Statement of the main result} \label{ss:statement} Let $G$ be a connected, simply connected, $s$-step nilpotent Lie group and let $g:\mathbb{Z} \to G$ be a polynomial sequence on $G$ as defined in \cite[Definition 1.8]{GT-polyorbits}. Let $\Gamma < G$ be a discrete cocompact subgroup. Then the compact quotient $G/\Gamma$ is called a nilmanifold. Any Mal'cev basis $\mathcal{X}$ (see \cite[\S2]{GT-polyorbits} for a definition) for $G/\Gamma$ gives rise to a metric $d_{\mathcal{X}}$ on $G/\Gamma$ as described in \cite[Definition 2.2]{GT-polyorbits}. This metric allows us to define Lipschitz functions on $G/\Gamma$ as the set of functions $F:G/\Gamma \to \mathbb{C}$ for which the Lipschitz norm (cf. \cite[Definition 1.2]{GT-polyorbits}) $$ \|F\|_{\mathrm{Lip}} = \|F\|_{\infty} + \sup_{x,y \in G/\Gamma} \frac{|F(x)-F(y)|}{d_{\mathcal{X}}(x,y)} $$ is finite. If $F$ is a $1$-bounded Lipschitz function, then $(F(g(n)\Gamma))_{n \in \mathbb{Z}}$ is called a (polynomial) nilsequence. We are now ready to state the main result: \begin{theorem}\label{t:non-corr} Let $E, H$ be positive integers and suppose that $f \in \mathcal{F}_H(E)$. Let $N$ and $A$ be positive integer parameters with the property that $0 < A < \widetilde{W}(N)$ and $\gcd(W(N),A)=1$. Suppose further that $T$ satisfies $N^{1-o(1)} \ll T \ll N$ and that $T,N > e^e$. Let $G/\Gamma$ be an $s$-step nilmanifold of positive dimension, let $G_{\bullet}$ be a filtration of $G$ and $g \in \mathrm{poly}(\mathbb{Z},G_{\bullet})$ a polynomial sequence. Suppose that $G/\Gamma$ has a $Q_0$-rational Mal'cev basis adapted to $G_{\bullet}$ for some $Q_0 \geq 2$ and let $G/\Gamma$ be endowed with the metric defined by this basis. Let $F:G/\Gamma \to \mathbb{C}$ be a $1$-bounded Lipschitz function. 
Then, provided $E \geq 1$ is sufficiently large with respect to $d$, $m_G$, $H$ and $1/\theta_f$, where $d$ denotes the degree of the filtration $G_{\bullet}$ and $m_G$ the dimension of $G$, we have \begin{align}\label{eq:main} &\left| \frac{\widetilde{W}}{T} \sum_{n\leq T/\widetilde{W}} (f(\widetilde{W}n+A)-S_f(N;\widetilde{W},A)) F(g(n)\Gamma) \right| \ll (1+ \|F\|_{\mathrm{Lip}}) \Bigg\{ \\ \nonumber &\quad \quad Q_0^{O_{d,m_G}(1)} \frac{1}{(\log T)(\log \log T)^{1/(2^{s+4} \dim G)}} \exp\bigg( \sum_{w(N)<p\leq N} \frac{|f_1(p)|+ \dots + |f_t(p)|}{p} \bigg)\\ \nonumber &\quad \quad+ \frac{(\log \log T)^{2H+1}}{\log T} \max_{1\leq i \leq t} \exp \bigg( \sum_{w(N)<p<T} \frac{|f_1(p) + \dots + f_t(p) - f_i(p)|}{p} \bigg) \\ \nonumber &\quad \quad+ \mathbf{1}_{t>1}~ \frac{(\log \log T)^{2H+1}}{\log T} \max_{1\leq i \leq t} \exp\bigg( \sum_{w(N)<p\leq N} \frac{|f_i(p)|}{p} \bigg)\\ \nonumber & \quad \quad+ \frac{1}{\log w(N)} \max_{A'\in(\mathbb{Z}/\widetilde{W}\mathbb{Z})^*} S_{|f|}(N;\widetilde{W},A') ~+~ \varphi(N) S_{|f|}(N;\widetilde{W},A)\Bigg\}, \end{align} where $\varphi$ is given by \eqref{eq:major-arc}. The implied constants may depend on $H$ as well as on the step and dimension of $G$ and on the degree of the filtration $G_{\bullet}$.

\addtocontents{toc}{\SkipTocEntry} \subsection{Reduction of Theorem \ref{t:non-corr} to the equidistributed case} \label{ss:reduction} Proceeding similarly to \S2 of \cite{GT-nilmobius}, we will reduce Theorem \ref{t:non-corr} to a special case that involves only equidistributed polynomial sequences. The tool that makes this reduction work is the following factorisation theorem \cite[Thm 1.19]{GT-polyorbits} due to Green and Tao:

\begin{lemma}[Factorisation lemma, Green--Tao \cite{GT-polyorbits}] \label{l:factorisation} Let $m$ and $d$ be positive integers, and let $M_0, N, B \geq 1$ be real numbers. Let $G/\Gamma$ be an $m$-dimensional nilmanifold together with a filtration $G_{\bullet}$ of degree $d$ and suppose that $\mathcal{X}$ is an $M_0$-rational Mal'cev basis adapted to $G_{\bullet}$. Let $g \in \mathrm{poly}(\mathbb{Z}, G_{\bullet})$ be a polynomial sequence. Then there is an integer $M$ such that $M_0 \ll M \ll M_0^{O_{B,m,d}(1)}$, a rational subgroup $G' \subseteq G$, a Mal'cev basis $\mathcal{X'}$ for $G'/\Gamma'$, where $\Gamma' = \Gamma \cap G'$, in which each element is an $M$-rational combination of the elements of $\mathcal{X}$, and a decomposition $g = \varepsilon g' \gamma$ into polynomial sequences $\varepsilon, g', \gamma \in \mathrm{poly}(\mathbb{Z}, G_{\bullet})$ with the following properties: \begin{enumerate} \item $\varepsilon: \mathbb{Z} \to G$ is $(M,N)$-smooth\footnote{The notion of smoothness was defined in \cite[Def.\ 1.18]{GT-polyorbits}. A sequence $(\varepsilon (n))_{n\in \mathbb{Z}}$ is said to be \emph{$(M,N)$-smooth} if both $d_{\mathcal{X}}(\varepsilon(n),\rm{id}_G) \leq N$ and $d_{\mathcal{X}}(\varepsilon(n),\varepsilon(n-1)) \leq M/N$ hold for all $1 \leq n \leq N$.}; \item $g': \mathbb{Z} \to G'$ takes values in $G'$ and the finite sequence $(g'(n)\Gamma')_{n \leq N}$ is totally $M^{-B}$-equidistributed in $(G'\Gamma/\Gamma,d_{\mathcal{X'}})$; \item $\gamma: \mathbb{Z} \to G$ is $M$-rational\footnote{A sequence $\gamma: \mathbb{Z} \to G$ is said to be \emph{$M$-rational} if for each $n$ there is $0<r_n\leq M$ such that $(\gamma(n))^{r_n} \in \Gamma$; see \cite[Def.\ 1.17]{GT-polyorbits}.} and the sequence $(\gamma(n)\Gamma)_{n \in \mathbb{Z}}$ is periodic with period at most $M$.
\end{enumerate} \end{lemma}

The following proposition handles the special case of Theorem \ref{t:non-corr} where the nilsequence is equidistributed.

\begin{proposition}[Non-correlation, equidistributed case] \label{p:equid-non-corr} Let $E, H\geq 1$ be integers and suppose that $f \in \mathcal{M}_H^*(E)$. Let $N$ and $T$ be integer parameters satisfying $N^{1-o(1)} \ll T \ll N$. Let $m_G,d > 0$ and let $\delta= \delta(N) \in (0,1/2)$ depend on $N$ in such a way that $$\log N \leq \delta(N)^{-1} \leq (\log N)^{E}.$$ Let $G/\Gamma$ be an $s$-step nilmanifold of dimension $m_G$ together with a filtration $G_{\bullet}$ of degree $d$, and suppose that $\mathcal X$ is a $\frac{1}{\delta(N)}$-rational Mal'cev basis adapted to $G_{\bullet}$. This basis gives rise to the metric $d_{\mathcal X}$. Let $q$, $r$ and $A$ be integers such that $0\leq r < q \leq (\log N)^E$ and $0 \leq A < \widetilde{W}$, where $\widetilde{W}=\widetilde{W}(N)$, and suppose that $\widetilde{W} r + A \in (\mathbb{Z}/\widetilde{W}q\mathbb{Z})^*$. Then there is $E_0 \geq 1$, depending on $d$, $m_G$, $H$ and on $\theta_f$ from Definition \ref{def:M}, such that the following holds for every $E \geq 1$ that is sufficiently large with respect to $d$, $m_G$ and $H$. Let $g \in \mathrm{poly}(\mathbb{Z},G_{\bullet})$ be any polynomial sequence such that the finite sequence $$(g(n)\Gamma)_{n\leq T/(\widetilde{W}q)}$$ is totally $\delta(N)^{E_0}$-equidistributed. Let $F:G/\Gamma \to \mathbb{C}$ be any $1$-bounded Lipschitz function such that $\int_{G/\Gamma}F=0$, and let $I \subset \{1, \dots, T/(q\widetilde{W})\}$ be any discrete interval of length at least $T/(q\widetilde{W} (\log N)^E)$. Then: \begin{align} \label{eq:prop-bound} &\bigg| \frac{\widetilde{W}q}{T} \sum_{n \in I} f(\widetilde{W}(qn+r)+A) F(g(n)\Gamma) \bigg| \ll_{d,m_G,s,E,H,\theta_f} (1 +\|F\|_{\mathrm{Lip}}) \mathcal{N}', \end{align} where \begin{align*} \mathcal{N}'= & \bigg( (\log \log T)^{-1/(2^{s+2} \dim G)} + \frac{Q^{10^s \dim G}}{(\log \log T)^{1/2}} \bigg) \max_{A' \in (\mathbb{Z}/\widetilde{W}q\mathbb{Z})^*} S_{|f_1|* \dots *|f_t|}(N;\widetilde{W}q,A') \\ &+\mathbf{1}_{t>1}~ (\log \log T)^{2H} \max_{1\leq i \leq t} \max_{A' \in (\mathbb{Z}/\widetilde{W}q\mathbb{Z})^*} S_{|f_i|}(T;\widetilde{W}q,A') \\ &+ (\log \log T)^{2} \max_{1\leq i \leq t} \frac{Wq}{\phi(Wq)} \frac{1}{\log T} \exp \bigg( \sum_{w(N)<p<T} \frac{|f_1(p) + \dots + f_t(p) - f_i(p)|}{p} \bigg). \end{align*} \end{proposition}

\begin{proof}[Proof of Theorem \ref{t:non-corr} assuming Proposition \ref{p:equid-non-corr}] We loosely follow the strategy of \cite[\S 2]{GT-nilmobius}. In view of the first error term from Theorem \ref{t:non-corr}, we may assume that $Q_0 \leq \log N$, as the theorem holds trivially otherwise. This implies that $\mathcal{X}$ is a $(\log N)$-rational Mal'cev basis. Applying the factorisation lemma from above with $N$ there taken to be $T/\widetilde{W}$, with $M_0=\log N$, and with a parameter $B$ that will be determined in the course of the proof (as the parameter $E_0$ in an application of Proposition \ref{p:equid-non-corr}), we obtain a factorisation of $g$ as $\varepsilon g' \gamma$ with properties (1)--(3) from Lemma \ref{l:factorisation}. In particular, there is $Q$ such that $\log N \leq Q \leq (\log N)^{O_{B,m_G,d}(1)}$ and such that $g'$ takes values in a $Q$-rational subgroup $G'$ of $G$ and is $Q^{-B}$-equidistributed in $G'\Gamma/\Gamma$.
Since $\gamma$ is periodic with some period $a \leq Q$, the function $n \mapsto \gamma(an + b)$ is constant for every $b$, that is, $\gamma$ is constant on every progression $$P_{a,b}:=\{ n \in [1, T/\widetilde{W}] : n \equiv b \Mod{a}\}$$ where $0\leq b < a$. Let $\gamma_b$ denote the value $\gamma$ takes on $P_{a,b}$ and note that $$|P_{a,b}| \geq T/(2a\widetilde{W}) \geq T/(2Q\widetilde{W}).$$ Let $g'_{a,b} : \mathbb{Z} \to G'$ be defined via $$g'_{a,b}(n) = g'(an + b).$$ Since $(g'(n)\Gamma)_{n \leq T/\widetilde{W}}$ is totally $Q^{-B}$-equidistributed in $G'\Gamma/\Gamma$, it is clear that every finite subsequence $(g'_{a,b}(n)\Gamma)_{n \leq T/(Ca\widetilde{W})}$ is $Q^{-B/2}$-equidistributed if $a$, $b$ and $C$ are such that both $0 \leq b < a \leq Q$ and $C>0$ and, furthermore, $Q^{B/2}>Ca$ hold.

Let $M \geq 1$ be an integer that will be chosen later depending on $s$ and $\dim G$. By splitting each progression $P_{a,b}$ into $\ll Q (\log \log N)^{1/M}$ pieces $P_{a,b}^{(j)}$ of diameter bounded by $\ll T/(Q\widetilde{W} (\log \log N)^{1/M}),$ we may also arrange for $\varepsilon$ to be almost constant. More precisely, the fact that $\varepsilon$ is $(Q,T/\widetilde{W})$-smooth implies that $$ d_{\mathcal{X}}(\varepsilon(n),\varepsilon(n')) \leq |n-n'| Q \widetilde{W} T^{-1} \ll (\log \log N)^{-1/M} $$ for all $n,n' \leq T/\widetilde{W}$ with $|n-n'| \ll T/(Q \widetilde{W} (\log \log N)^{1/M})$. By choosing $B$ sufficiently large, we may ensure that $Q^{B/2} \geq Q \log \log N$ and, hence, that the equidistribution properties of $g'_{a,b}$ are preserved on the new bounded diameter pieces of $P_{a,b}$. Let $\mathcal{P}$ denote the collection of all progressions $P_{a,b}^{(j)}$ in our decomposition.

Since $F$ is a Lipschitz function and since $d_{\mathcal{X}}$ is right-invariant (cf.\ \cite[Appendix A]{GT-polyorbits}), we deduce that \begin{align}\label{eq:Lipschitz-application} \nonumber |F(\varepsilon(n) g'(n) \gamma(n)\Gamma) -F(\varepsilon(n') g'(n) \gamma(n)\Gamma)| &\leq (1 + \|F\|) d_{\mathcal{X}}(\varepsilon(n),\varepsilon(n')) \\ &\ll (1 + \|F\|) (\log \log N)^{-1/M} \end{align} for all $n,n' \in P_{a,b}$ with $|n-n'| \ll T/(Q\widetilde{W} (\log \log N)^{1/M} )$. Thus, this bound holds in particular for any $n,n' \in P_{a,b}^{(j)}$. Let us fix one element $n_{b,j}$ for each progression $P_{a,b}^{(j)}$. As we will show next, it will be sufficient to bound the correlation \begin{align}\label{eq:P-abj-corr} \nonumber &\bigg| \sum_{n \in P_{a,b}^{(j)}} \Big( f(\widetilde{W}n +A) - S_f(N;\widetilde{W},A) \Big) F(\varepsilon(n_{b,j}) g'(n) \gamma_{b}\Gamma) \bigg| \\ &= \bigg| \sum_{\substack{n: (an+b) \in P_{a,b}^{(j)}}} \Big( f(\widetilde{W}(an+b)+A) - S_f(N;\widetilde{W},A) \Big) F(\varepsilon(n_{b,j}) g'_{a,b}(n) \gamma_{b}\Gamma) \bigg| \end{align} for each bounded diameter piece $P_{a,b}^{(j)}$. Indeed, the estimate \eqref{eq:Lipschitz-application}, applied with $n'= n_{b,j}$ to each bounded diameter piece, shows that the error term incurred by this reduction satisfies \begin{align}\label{eq:eps-error} \nonumber &\sum_{P_{a,b}^{(j)} \in \mathcal{P}} \sum_{n \in P_{a,b}^{(j)}} |f(\widetilde{W}n +A)| \Big|F(\varepsilon(n) g'(n) \gamma(n)\Gamma) -F(\varepsilon(n_{b,j}) g'(n) \gamma_b\Gamma)\Big|\\ &\qquad\ll \frac{(1 + \|F\|)S_{|f|}(N;\widetilde{W},A)}{(\log \log N)^{1/M}}.
\end{align} Since $\frac{\log \log N}{\log \log \log N} < w(N) \leq \log \log N$, we have $$ \frac{1}{(\log \log N)^{1/M}} \ll_{s,\dim G} \frac{1}{\log w(N)}, $$ which implies that the error term \eqref{eq:eps-error} is acceptable in view of \eqref{eq:main}. We aim to estimate the correlation \eqref{eq:P-abj-corr} with the help of Proposition \ref{p:equid-non-corr}. This task will be carried out in four steps, the first of which will be to bound the contribution from non-invertible residues $\widetilde{W}b + A \Mod{\widetilde{W}a}$ to which Proposition \ref{p:equid-non-corr} does not apply. The following two steps deal with checking the various assumptions of Proposition \ref{p:equid-non-corr}, while the fourth contains the actual application of the proposition. \paragraph*{\em Step 1: Non-invertible residues} We seek to bound the contribution to \eqref{eq:main} of all progressions $P_{a,b}^{(j)} \in \mathcal{P}$ such that $\gcd(\widetilde{W}b + A,\widetilde{W}a)>1$. Let $a' = \prod_{p>w(N)} p^{v_p(a)}$. Since $\gcd(A,\widetilde{W})=1$, it suffices to check whether $b$ satisfies $\gcd(\widetilde{W}b + A,a')>1$. Thus, the contribution we seek to bound satisfies \begin{align*} &\frac{\widetilde{W}}{T} \sum_{d|a',\,d>1} \sum_{\substack{b < a:\\ \gcd(\widetilde{W}b + A,a')=d}} \sum_{\substack{n < T/\widetilde{W} \\ n\equiv b \Mod{a}}} |f(\widetilde{W}n+A)| \\ & \leq \sum_{d|a',\,d>1} \sum_{\substack{b < a:\\ \gcd(\widetilde{W}b + A,a')=d}} \frac{|f(d)|}{a} ~S_{|f|} \left(\frac{T}{d};\frac{\widetilde{W}a}{d},\frac{\widetilde{W}b+A}{d}\right) \\ & \leq \sum_{d|a',\,d>1} \frac{|f(d)|}{a} \phi\Big(\frac{a}{d}\Big) ~S_{|f|} \left(\frac{T}{d};\frac{\widetilde{W}a}{d},\frac{\widetilde{W}b+A}{d}\right) \\ & \leq \sum_{d|a',\,d>1} \frac{|f(d)|}{d} \max_{A' \in (\mathbb{Z}/\widetilde{W}\mathbb{Z})^*} S_{|f|} (N;\widetilde{W},A') (1 + \varphi(N)), \end{align*} where we made use of \eqref{eq:major-arc} again. Note that $a'\leq (\log N)^E$ and that its total number of prime factors satisfies $\Omega(a')\leq \frac{E\log\log N}{\log w(N)}$. Thus: \begin{align*} \sum_{d|a',\,d>1} \frac{|f(d)|}{d} &\leq \prod_{p|a'}\left(1+\frac{H}{p} + \frac{H^2}{p^2} + \dots \right)-1\\ &\leq \prod_{p|a'}\left(1 + \frac{H}{p}\right) \left(1 + \frac{H^2}{p^2(1 - \frac{H}{p})}\right)-1\\ &\leq \left(1+\frac{1}{w(N)}\right) \prod_{p|a'}\left(1 + \frac{H}{p}\right) - 1\\ &\leq \left(1+\frac{1}{w(N)}\right) \prod_{w(N)< p < w(N)+ \Omega(a')}\left(1+\frac{1}{p}\right)^H-1 \\ &\ll \left(1+\frac{1}{w(N)}\right) \left(\frac{\log (w(N)+ \Omega(a'))}{\log w(N)}\right)^H-1 \\ &\ll \left(1+\frac{1}{w(N)}\right) \frac{(\log (w(N)+ \Omega(a')))^H-(\log w(N))^H}{(\log w(N) )^H} + \frac{1}{w(N)}. \end{align*} Using the elementary identity $A^H-B^H= (A-B)(A^{H-1} + A^{H-2}B + \dots + B^{H-1})$, the above is bounded by \begin{align*} &\ll \log \left(\frac{w(N)+ \Omega(a')}{w(N)}\right) \frac{(\log (w(N)+ \Omega(a')))^{H-1}}{(\log w(N) )^H} + \frac{1}{w(N)}, \end{align*} which in turn is bounded by \begin{align*} &\ll \log \left(1 + \frac{\Omega(a')}{w(N)}\right) (\log w(N))^{-1} + \frac{1}{w(N)} \ll \frac{1}{\log w(N)}, \end{align*} since $\Omega(a') \leq \frac{E\log\log N}{\log w(N)} \leq w(N)$ by assumption on the function $w$. Thus, the total contribution of non-invertible residues $\widetilde{W}b + A \Mod{\widetilde{W}a}$ is at most $$ O\left(\frac{1}{\log w(N)} \max_{A'\in (\mathbb{Z}/\widetilde{W}\mathbb{Z})^*} S_{|f|}(N;\widetilde{W},A')\right), $$ which has been taken care of in \eqref{eq:main}.
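To make the preceding chain of inequalities more transparent, it may help to note that in the special case $H=1$ the elementary identity above is not needed and the bound collapses to $$ \sum_{d|a',\,d>1} \frac{|f(d)|}{d} \ll \frac{\log (w(N)+ \Omega(a')) - \log w(N)}{\log w(N)} + \frac{1}{w(N)} = \frac{\log \big(1 + \Omega(a')/w(N)\big)}{\log w(N)} + \frac{1}{w(N)} \ll \frac{1}{\log w(N)}, $$ again because $\Omega(a') \leq w(N)$; this unwinding of the argument is included only for the reader's convenience and is not needed later.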
It remains to consider the case where the value of $b$ does not impose an obstruction to applying Proposition \ref{p:equid-non-corr}. \paragraph*{\em Step 2: Checking the initial conditions of Proposition \ref{p:equid-non-corr}} The central assumption of Proposition \ref{p:equid-non-corr} concerns the equidistribution of the nilsequence it is applied to. To verify this assumption in the case of \eqref{eq:P-abj-corr}, it is necessary to show that the conjugated polynomial sequence $h:\mathbb{Z} \to G$ defined via $h(n)=\gamma_{b}^{-1} g'_{a,b}(n) \gamma_{b}$ is in fact a polynomial sequence and that it inherits the equidistribution properties of $g'_{a,b}(n)$. Both these questions have been addressed in \cite[\S2]{GT-nilmobius} in a form that we can directly refer back to. Let $H=\gamma_{b}^{-1} G' \gamma_{b}$ and define $H_{\bullet} = \gamma_{b}^{-1} (G')_{\bullet} \gamma_{b}$. Let $\Lambda = \Gamma \cap H$ and define $F_{b,j}: H/\Lambda \to \mathbb{R}$ via $$ F_{b,j}(x \Lambda) = F(\varepsilon(n_{b,j}) \gamma_{b} x \Gamma). $$ Then $h \in \mathrm{poly}(\mathbb{Z},H_{\bullet})$ and the correlation \eqref{eq:P-abj-corr} that we seek to bound takes the form \begin{align} \label{eq:F-b-j} \bigg| \sum_{\substack{n: (an+b) \in P_{a,b}^{(j)}}} \Big( f(\widetilde{W}(an+b)+A) - S_f(N;\widetilde{W},A) \Big) F_{b,j}(h(n)\Lambda) \bigg|. \end{align} The `Claim' from the end of \cite[\S2]{GT-nilmobius} guarantees the existence of a Mal'cev basis $\mathcal{Y}$ for $H/\Lambda$ adapted to $H_{\bullet}$ such that each basis element $Y_i$ is a $Q^{O(1)}$-rational combination of the elements of $\mathcal{X}$. Thus there is $C'=O(1)$ such that $\mathcal{Y}$ is $Q^{C'}$-rational. Furthermore, it implies that there is $c'>0$, depending only on the dimension of $G$ and the degree of $G_{\bullet}$, such that whenever $B$ is sufficiently large the sequence \begin{equation}\label{eq:h-seq} (h(n) \Lambda)_{n \leq T/(a\widetilde{W})} \end{equation} is totally $Q^{-c'B/2 + O(1)}$-equidistributed in $H/\Lambda$, equipped with the metric $d_{\mathcal{Y}}$ induced by $\mathcal{Y}$. Taking $B$ sufficiently large, we may assume that the sequence \eqref{eq:h-seq} is totally $Q^{-c'B/4}$-equidistributed. Finally, the `Claim' also provides the bound $\|F_{b,j}\| \leq Q^{C''} \|F\|$ for some $C''=O(1)$. This shows that all conditions of Proposition \ref{p:equid-non-corr} are satisfied except for $\int_{H/\Lambda}F_{b,j}=0$. \paragraph*{\em Step 3: The final condition} The final condition that needs to be satisfied in order for Proposition \ref{p:equid-non-corr} to apply is $\int_{H/\Lambda}F_{b,j}=0$. This is where the major arc condition \eqref{eq:major-arc} comes into play, which notably requires that $\gcd(\widetilde{W}b+A,\widetilde{W}a)=1$. In fact, one can subtract off the mean value $\int_{H/\Lambda}F_{b,j}$ of $F_{b,j}$ at the expense of an additional error term, provided one can show that $f(\widetilde{W}n+A)-S_f(N;\widetilde{W},A)$ does not correlate with the characteristic function $\mathbf{1}_{P_{a,b}^{(j)}}$ of the corresponding progression $P_{a,b}^{(j)}$. Note that the common difference of $P_{a,b}^{(j)}$ satisfies $a \leq Q \ll (\log N)^{O_{d,m_G,B}(1)}$, which is bounded by $(\log N)^E$, provided $E$ is sufficiently large in terms of $d$, $m_G$ and $B$. Similarly, the length of $P_{a,b}^{(j)}$ satisfies $$|P_{a,b}^{(j)}| \geq T/(2aQ\widetilde{W} (\log \log N)^{1/M}) \gg T/(a\widetilde{W} (\log N)^{E}),$$ provided $E$ is sufficiently large in terms of $d$, $m_G$ and $B$.
Condition \eqref{eq:major-arc} guarantees for every interval $I \subset \{1,\dots,T/\widetilde{W}\}$ of length $|I|\gg T/(\log T)^E$ the uniform estimate $$ \frac{q' \widetilde{W}}{|I|} \sum_{\substack{ m \in I \\ m \equiv \widetilde{W}r' + A \Mod{q'\widetilde{W}}}} f(m) = S_{f}(x;\widetilde{W},A) + O\bigg(\varphi(x)S_{|f|}(x;\widetilde{W},A)\bigg), $$ valid for all positive integers $q' < (\log T)^{E}$ and for all $r'$ such that $0\leq r' < q'$ and such that $\gcd(\widetilde{W} r' + A, q')=1$. Thus, assuming $E$ is sufficiently large to satisfy both conditions mentioned above, \eqref{eq:major-arc} applies. Let $\mu_{b,j}:=\int_{H/\Lambda}F_{b,j}$ and note that $\mu_{b,j} \ll 1$. Then the error term incurred by replacing, for each $P_{a,b}^{(j)}$ with $\gcd(\widetilde{W}b+A,\widetilde{W}a)=1$, the factor $F_{b,j}(h(n)\Lambda)$ in \eqref{eq:F-b-j} by $(F_{b,j}(h(n)\Lambda) - \mu_{b,j})$ is bounded as follows: \begin{align*} &\bigg|\frac{\widetilde{W}}{T} \sum_{\substack{{P}_{a,b}^{(j)} \in \mathcal{P} \\ \gcd(\widetilde{W} b + A,a)=1 }} \mu_{b,j} \sum_{n\in {P}_{a,b}^{(j)}} \Big(f(\widetilde{W}n+A)-S_f(N;\widetilde{W},A)\Big) \bigg|\\ &\ll \frac{\widetilde{W}}{T} \sum_{{P}_{a,b}^{(j)} \in \mathcal{P}} |P_{a,b}^{(j)}| \varphi(N) S_{|f|} (N;\widetilde{W},A) \ll \varphi(N) S_{|f|} (N;\widetilde{W},A). \end{align*} This error term has been taken care of in \eqref{eq:main}. \paragraph*{\em Step 4: Application of Proposition \ref{p:equid-non-corr}} In view of the work carried out in Steps 1--3, we may now assume that $\gcd(\widetilde{W}b+A,\widetilde{W}a)=1$ and that $\int_{H/\Lambda}F_{b,j} = 0$ holds, and apply Proposition \ref{p:equid-non-corr} with: \begin{itemize} \item $g=h$, $q=a$, $I=\{n: an+b \in P_{a,b}^{(j)}\}$, \item with a function $\delta:\mathbb{N} \to \mathbb{R}$ such that $\delta(N)=Q^{-C'} (= Q_0^{O_{d,m_G,B}(1)})$, which ensures that $\mathcal{Y}$ is $\frac{1}{\delta(N)}$-rational, \item with $E$ sufficiently large to ensure that $Q^{C'} < (\log N)^E$, which in particular means that $E$ depends on $B$, and \item with $E_0=c'B/(4C')=O_{d, m_G}(B)$, where $B$ is chosen sufficiently large for $E_0$ to meet the requirements of Proposition \ref{p:equid-non-corr}. \end{itemize} Since there are $\ll aQ(\log \log N)^{1/M}$ intervals $P_{a,b}^{(j)}$ in the decomposition $\mathcal{P}$, this yields the bound \begin{align*} \sum_{P_{a,b}^{(j)} \in \mathcal{P}} &\bigg| \sum_{\substack{n: (an+b) \in P_{a,b}^{(j)}}} \Big( f(\widetilde{W}(an+b)+A) - S_f(N;\widetilde{W},A) \Big) F_{b,j}(h(n)\Lambda) \bigg|\\ &\ll aQ(\log \log N)^{1/M} (1 + Q^{O(1)} \|F\|) \frac{T}{\widetilde{W}a} \mathcal{N}'' \\ &\ll (1 + \|F\|) Q^{O(1)} (\log \log N)^{1/M} \frac{T}{\widetilde{W}}\mathcal{N}'', \end{align*} where \begin{align*} \mathcal{N}''=& \bigg( (\log \log T)^{-1/(2^{s+2} \dim G)} + \frac{Q^{O_{s,\dim G}(1)}}{(\log \log T)^{1/2}} \bigg) \max_{A' \in (\mathbb{Z}/\widetilde{W}a\mathbb{Z})^*} S_{|f_1|* \dots *|f_t|}(N;\widetilde{W}a,A') \\ &+\mathbf{1}_{t>1}~ (\log \log T)^{2H} \max_{1\leq i \leq t} \max_{A' \in (\mathbb{Z}/\widetilde{W}a\mathbb{Z})^*} S_{|f_i|}(T;\widetilde{W}a,A') \\ &+ (\log \log T)^{2} \max_{1\leq i \leq t} \frac{\widetilde{W}a}{\phi(\widetilde{W}a)} \frac{1}{\log T} \exp \bigg( \sum_{w(N)<p<T} \frac{|f_1(p) + \dots + f_t(p) - f_i(p)|}{p} \bigg). \end{align*} To finish the proof of Theorem \ref{t:non-corr} it remains to remove the dependence on $a$ in this bound.
To this end, recall (cf.\ Step 1) that $a' = \prod_{p>w(N)}p^{v_p(a)}$ satisfies $\Omega(a') \leq E \log \log N$, which implies \begin{align*} \frac{\widetilde{W}a}{\phi(\widetilde{W}a)} \ll \log (w(N) + E \log \log N) \ll_E \log \log \log N. \end{align*} Invoking Shiu's lemma (Lemma \ref{l:shiu}) we furthermore obtain $$ S_{|f_1|* \dots *|f_t|}(N;\widetilde{W}a,A') \ll_E \frac{\log \log \log N}{\log N} \exp \bigg( \sum_{w(N)<p<T} \frac{|f_1(p)|+\dots +|f_t(p)|}{p} \bigg) $$ and $$ S_{|f_i|}(T;\widetilde{W}a,A') \ll_E \frac{\log \log \log N}{\log N} \exp \bigg( \sum_{w(N)<p<T} \frac{|f_i(p)|}{p} \bigg). $$ Inserting these bounds into the expression for $\mathcal{N}''$ above and setting $M = 2^{s+3} \dim G$ completes, in view of \eqref{eq:main}, the deduction of Theorem \ref{t:non-corr}. \end{proof} It remains to establish Proposition \ref{p:equid-non-corr}. \section{Linear subsequences of equidistributed nilsequences} \label{s:linear-subsecs} Our aim in this section is to study the equidistribution properties of families $$ \Big\{ (g(Dn+D')\Gamma)_{n\leq T/D} : D \in [K,2K) \Big\} $$ of linear subsequences of an equidistributed sequence $(g(n)\Gamma)_{n \leq T}$, where $D$ runs through dyadic intervals $[K,2K)$ for $K \leq T^{1-1/t}$. We begin by recalling some essential definitions and notation. Let $P: \mathbb{Z} \to \mathbb{R}/\mathbb{Z}$ be a polynomial of degree at most $d$ and let $\alpha_0, \dots, \alpha_d \in \mathbb{R}/\mathbb{Z}$ be defined via $$ P(n) = \alpha_0 + \alpha_1 \binom{n}{1} + \dots + \alpha_d \binom{n}{d}.$$ Then the \emph{smoothness norm} of $P$ with respect to $T$ is defined (cf.\ Green--Tao \cite[Def.\ 2.7]{GT-polyorbits}) as $$ \|P\|_{C^{\infty}[T]} = \sup_{1 \leq j \leq d} T^j \|\alpha_j\|_{\mathbb{R}/\mathbb{Z}}. $$ If $\beta_0, \dots, \beta_d \in \mathbb{R}/\mathbb{Z}$ are defined via $$ P(n) = \beta_dn^d + \dots + \beta_1 n + \beta_0, $$ then (cf.\ \cite[equation (14.3)]{lmr}) the smoothness norm is bounded above by a similar expression in terms of the $\beta_i$, namely \begin{equation} \label{eq:alpha-beta} \|P\|_{C^{\infty}[T]} \ll_d \sup_{1 \leq j \leq d} T^j \|j!\beta_j\|_{\mathbb{R}/\mathbb{Z}} \ll_d \sup_{1 \leq j \leq d} T^j \|\beta_j\|_{\mathbb{R}/\mathbb{Z}}. \end{equation} On the other hand, \cite[Lemma 3.2]{GT-polyorbits} shows that there is a positive integer $q \ll_d 1$ such that $$ \|q \beta_j\|_{\mathbb{R}/\mathbb{Z}} \ll T^{-j} \|P\|_{C^{\infty}[T]}. $$ Apart from smoothness norms, we also require the notion of a horizontal character as defined in \cite[Definition 1.5]{GT-polyorbits}. A continuous additive homomorphism $\eta: G \to \mathbb{R}/\mathbb{Z}$ is called a \emph{horizontal character} if it annihilates $\Gamma$. In order to formulate quantitative results, one defines a height function $|\eta|$ for these characters. A definition of this height, called the \emph{modulus} of $\eta$, may be found in \cite[Definition 2.6]{GT-polyorbits}. All we need to know about these heights is that there are at most $M^{O(1)}$ horizontal characters $\eta:G\to \mathbb{R}/\mathbb{Z}$ of modulus $|\eta| \leq M$. The interest in smoothness norms and horizontal characters lies in Green and Tao's `quantitative Leibman Theorem': \begin{proposition}[Green--Tao, Theorem 2.9 of \cite{GT-polyorbits}] \label{p:leibman} Let $m_G$ and $d$ be non-negative integers, let $0 < \delta < 1/2$ and let $N \geq 1$.
Suppose that $G/\Gamma$ is an $m_G$-dimensional nilmanifold together with a filtration $G_{\bullet}$ of degree $d$ and that $\mathcal{X}$ is a $\frac{1}{\delta}$-rational Mal'cev basis adapted to $G_{\bullet}$. Suppose that $g \in \mathrm{poly}(\mathbb{Z}, G_{\bullet})$. If $(g(n)\Gamma)_{n \leq N}$ is not $\delta$-equidistributed, then there is a non-trivial horizontal character $\eta$ with $0 < |\eta| \ll \delta^{-O_{d,m_G}(1)}$ such that $$\|\eta \circ g \|_{C^{\infty}[N]} \ll \delta^{-O_{d,m_G}(1)} .$$ \end{proposition} The following lemma shows that for polynomial sequences the notions of equidistribution and total equidistribution are equivalent with a polynomial dependence in the equidistribution parameter. \begin{lemma} \label{l:equi/totally-equi} Let $N$ and $A$ be positive integers and let $\delta: \mathbb{N} \to [0,1]$ be a function that satisfies $\delta(x)^{-t} \ll_t x$ for all $t>0$. Suppose that $G$ has a $\frac{1}{\delta(N)}$-rational Mal'cev basis adapted to the filtration $G_{\bullet}$. Suppose that $g \in \mathrm{poly}(\mathbb{Z},G_{\bullet})$ is a polynomial sequence such that $(g(n)\Gamma)_{n\leq N}$ is $\delta(N)^A$-equidistributed. Then there is $1\leq B \ll_{d,m_G} 1$ such that $(g(n)\Gamma)_{n\leq N}$ is totally $\delta(N)^{A/B}$-equidistributed, provided $A/B>1$ and provided $N$ is sufficiently large. \end{lemma} \begin{rem} The Green--Tao factorisation theorem (cf.\ property (3) of Lemma \ref{l:factorisation}) usually allows one to arrange for $A > B$ to hold. \end{rem} \begin{proof} We allow all implied constants to depend on $d$ and $m_G$. Let $B\geq1$ and suppose that $(g(n)\Gamma)_{n\leq N}$ fails to be totally $\delta(N)^{A/B}$-equidistributed. Then there is a subprogression $P=\{ \ell n + b : 0 \leq n \leq m-1\}$ of $\{1, \dots, N\}$ of length $m > \delta(N)^{A/B}N$ such that the sequence $(\tilde g(n))_{0 \leq n < m}$, where $\tilde g(n)=g(\ell n + b)$, fails to be $\delta(N)^{A/B}$-equidistributed. Provided $A>B$, Proposition \ref{p:leibman} implies that there is a non-trivial horizontal character $\eta: G \to \mathbb{R}/\mathbb{Z}$ of modulus $|\eta| < \delta(N)^{-O({A/B})}$ such that $$ \|\eta \circ \tilde g\|_{C^{\infty}[m]} \ll \delta(N)^{-O({A/B})}. $$ The lower bound on $m$ implies that this is equivalent to the assertion $$ \|\eta \circ \tilde g\|_{C^{\infty}[N]} \ll \delta(N)^{-O({A/B})}, $$ where we recall that the implied constant may depend on $d$. Observing that $\eta \circ g$ is a polynomial of degree at most $d$, let $\eta \circ g(n)= \beta_d n^d + \dots + \beta_0$. Then $$ \eta \circ \tilde g(n) = \sum_{i=0}^d n^i \sum_{j=i}^d \beta_j \binom{j}{i}\ell^i b^{j-i}, $$ and, hence, $$ \sup_{1 \leq i \leq d} N^i \left \|\sum_{j=i}^d \beta_j \binom{j}{i}\ell^i b^{j-i} \right\| \ll \delta(N)^{-O({A/B})}. $$ This yields the bound \begin{equation} \label{eq:beta-downwards} \left \|\sum_{j=i}^d \beta_j \binom{j}{i}\ell^i b^{j-i} \right\| \ll N^{-i} \delta(N)^{-O({A/B})} \end{equation} for $1 \leq i \leq d$. Note that the lower bound on $m$ implies that $\ell < \delta(N)^{-A/B}$. Using a downwards induction argument, we aim to show that \begin{equation}\label{eq:beta-induction} \|\ell^d \beta_j \| \ll N^{-j} \delta(N)^{-O(A/B)} \end{equation} for all $1\leq j \leq d$. For $j=d$, this is clear from the above. Suppose \eqref{eq:beta-induction} holds for all $j>i$. 
For each $j>i$ we then, in particular, have that $$ \left\|\ell^{d} \beta_j \binom{j}{i} b^{j-i}\right\| \ll_d \|\ell^{d} \beta_j \| b^{j-i} \ll_d N^{-j} \delta(N)^{-O(A/B)} b^{j-i} \ll_d N^{-i} \delta(N)^{-O(A/B)}. $$ Using the fact that $\delta(N)^{-t} \ll_t N$ for all $t>0$, we deduce that \eqref{eq:beta-induction} holds for $j=i$ from the above bounds and from \eqref{eq:beta-downwards}. This shows that there is a non-trivial horizontal character, namely $\ell^d \eta$, of modulus at most $\delta(N)^{-O(A/B)}$, such that $$ \|\ell^d \eta \circ g \|_{C^{\infty}[N]} \ll \sup_{1 \leq i \leq d} N^{i} \|\ell^d \beta_i \|_{\mathbb{R}/\mathbb{Z}} \ll \delta(N)^{-O({A/B})}, $$ where we made use of \eqref{eq:alpha-beta}. Choosing $B$ sufficiently large in terms of $m_G$ and $d$, \cite[Proposition 14.2(b)]{lmr} implies that $g$ is not $\delta(N)^A$-equidistributed, which is a contradiction. \end{proof} We are now ready to address the equidistribution properties of linear subsequences. \begin{proposition} \label{p:linear-subsecs} Let $N$, $T$ and $\widetilde{W}(N)$ be as before and let $E_1 \geq 1$. Let $(A_D)_{D \in \mathbb{N}}$ be a sequence of integers such that $|A_D| \leq D$ for every $D \in \mathbb{N}$. Further, let $\delta: \mathbb{N} \to (0,1)$ be a function that satisfies $\delta(x)^{-t} \ll_t x$ for all $t>0$. Suppose $G/\Gamma$ has a $\frac{1}{\delta(N)}$-rational Mal'cev basis adapted to a filtration $G_{\bullet}$ of degree $d$. Let $g \in \mathrm{poly}(\mathbb{Z},G_{\bullet})$ be a polynomial sequence and suppose that the finite sequence $(g(n)\Gamma)_{n\leq T}$ is totally $\delta(T)^{E_1}$-equidistributed in $G/\Gamma$. Then there is a constant $c_1 \in (0,1)$, depending only on $d$ and $m_G$, such that the following assertion holds for all integers $$K \in [(\log T)^{\log \log T}, T^{1-1/t}],$$ provided $c_1E_1 \geq 1$. Write $g_D(n) = g(Dn + A_D)$ and let $\mathcal{B}_{K}$ denote the set of integers $D \in [K,2K)$ for which $$(g_D(n)\Gamma)_{n \leq T/D}$$ fails to be $\delta(T)^{c_1 E_1}$-equidistributed. Then $$ \# \mathcal{B}_{K} \ll K \delta(T)^{c_1 E_1}. $$ \end{proposition} \begin{proof} Let $K\in [(\log T)^{\log \log T}, T^{1-1/t}]$ be a fixed integer and let $c_1 > 0$ be a constant to be determined in the course of the proof. Suppose that $E_1>1/c_1$. Lemma \ref{l:equi/totally-equi} implies that for every $D \in \mathcal{B}_{K}$, the sequence $(g_D(n)\Gamma)_{n\leq T/ D}$ fails to be $\delta(T)^{c_1E_1B}$-equidistributed on $G/\Gamma$ for some $B>0$ only depending on $d$ and $m_G$. We continue to allow implied constants to depend on $d$ and $m_G$. By Proposition \ref{p:leibman}, there is a non-trivial horizontal character $\eta_D: G \to \mathbb{R}/\mathbb{Z}$ of modulus $|\eta_D| \ll \delta(T)^{-O(c_1E_1)}$ such that \begin{align} \label{eq:smoothness-D} \| \eta_D \circ g_D\|_{C^{\infty}[T/D]} \ll \delta(T)^{-O(c_1E_1)}. \end{align} For each non-trivial horizontal character $\eta: G \to \mathbb{R}/\mathbb{Z}$ we define the set $$ \mathcal{D}_{\eta} = \left\{ D \in \mathcal{B}_K : \eta_D = \eta \right\}. $$ Note that this set is empty unless $|\eta| \ll \delta(T)^{-O(c_1E_1)}$. Suppose that $$ \# \mathcal{B}_K \geq K \delta(T)^{c_1E_1}. $$ By the pigeonhole principle, there is some $\eta$ of modulus $|\eta| \ll \delta(T)^{-O(c_1E_1)}$ such that $$ \# \mathcal{D}_{\eta} \geq K \delta(T)^{O(c_1E_1)}. $$ Suppose $$ \eta \circ g(n) = \beta_d n^d + \dots + \beta_1 n + \beta_0 $$ and let $$ \eta \circ g_D(n) = \alpha_d^{(D)} n^d + \dots + \alpha_1^{(D)} n + \alpha_0^{(D)} $$ for any $D \in \mathcal{B}_K$.
The quantities $\alpha_j^{(D)}$ and $\beta_j$ are linked through the relation \begin{align} \label{eq:rec} \alpha_j^{(D)} = D^{j} \sum_{i=j}^d \binom{i}{j} A_D^{i-j} \beta_i \end{align} for each $1 \leq j \leq d$. Thus, the bound \eqref{eq:smoothness-D} on the smoothness norm asserts that \begin{align} \label{eq:smoothness-D-2} \sup_{1 \leq j \leq d} \frac{T^j}{K^j} \| \alpha_j^{(D)} \| \ll \delta(T)^{-O(c_1E_1)}. \end{align} With a downwards induction we deduce from \eqref{eq:smoothness-D-2} and \eqref{eq:rec} that \begin{align}\label{eq:pre-waring-D} \sup_{1 \leq j \leq d} \frac{T^j}{K^j} \left\| D^j \beta_j \right\| \ll \delta(T)^{-O(c_1E_1)}. \end{align} The bound \eqref{eq:pre-waring-D} provides information on rational approximations of $D^j \beta_j$ for many values of $D$. Our next aim is to use this information in order to deduce information on rational approximations of the $\beta_j$ themselves. To achieve this, we employ the Waring trick that appeared in the \emph{Type I} sums analysis in \cite[\S3]{GT-nilmobius}, and begin by recalling the two lemmas that this trick rests upon. The first one is a recurrence result, \cite[Lemma 3.2]{GT-polyorbits}. \begin{lemma}[Green--Tao \cite{GT-polyorbits}]\label{l:GT-rec} Let $\alpha \in \mathbb{R}$, $0< \delta <1/2$ and $0 < \sigma < \delta/2$, and let $I \subseteq \mathbb{R}/\mathbb{Z}$ be an interval of length $\sigma$ such that $\alpha n \in I$ for at least $\delta N$ values of $n$, $1 \leq n \leq N$. Then there is some $k \in \mathbb{Z}$ with $0 < |k| \ll \delta^{-O(1)}$ such that $\| k\alpha \| \ll \sigma \delta^{-O(1)}/N$. \end{lemma} The second, \cite[Lemma 3.3]{GT-nilmobius}, is a consequence of the asymptotic formula in Waring's problem. \begin{lemma}[Green--Tao \cite{GT-nilmobius}] \label{l:GT-waring} Let $K \geq 1$ be an integer, and suppose that $S \subseteq \{1, \dots, K\}$ is a set of size $\alpha K$. Suppose that $t\geq 2^j + 1$. Then $ \gg_{j,t} \alpha^{2t}K^j$ integers in the interval $[1,tK^j]$ can be written in the form $k_1^j + \dots + k_t^j$ with $k_1, \dots, k_t \in S$. \end{lemma} Returning to the proof of Proposition \ref{p:linear-subsecs}, let us consider the set \begin{align*} \overline{\mathcal{D}}_j = \left\{ m \leq s(2K)^j: \begin{array}{l} m = D_1^j + \dots + D_s^j \\ D_1, \dots, D_s \in \mathcal{D}_{\eta} \end{array} \right\} \end{align*} for some $s \geq 2^j+1$. Each element $m$ of this set satisfies \begin{align}\label{eq:bd-waring-D} \|\beta_j m \| \ll \delta(T)^{-O(c_1E_1)} (K/T)^j, \quad 1\leq j \leq d, \end{align} in view of \eqref{eq:pre-waring-D}. Thus, Lemma \ref{l:GT-waring} implies that this set contains $$ \# \overline{\mathcal{D}}_j \gg \delta(T)^{O(c_1E_1)} K^j $$ elements. In view of the restrictions on $K$ and the assumptions on the function $\delta(x)$, the conditions of Lemma \ref{l:GT-rec} (on $\sigma$ and $\delta$) are satisfied provided $T$ is sufficiently large. We conclude that there is an integer $k_j$ such that $$1 \leq k_j \ll \delta(T)^{-O(c_1E_1)} $$ and such that \begin{align*} \| k_j \beta_j \| \ll \delta(T)^{-O(c_1E_1)} T^{-j}. \end{align*} Thus \begin{align}\label{eq:alpha_j-kappa_j-D} \beta_j = \frac{a_j}{\kappa_j} + \tilde\beta_j, \end{align} where $\kappa_j| k_j $, $\gcd(a_j,\kappa_j)=1$ and $$0 \leq \tilde\beta_j \ll \delta(T)^{-O(c_1E_1)} T^{-j}.$$ Hence, \begin{align}\label{eq:kappa_j-D} \| \kappa_j \beta_j \| \ll \delta(T)^{-O(c_1E_1)} T^{-j}. \end{align} Let $\kappa=\lcm(\kappa_1, \dots, \kappa_d)$ and set $\tilde \eta = \kappa \eta$.
We proceed as in \cite[\S3]{GT-nilmobius}: The above implies that $$ \|\tilde \eta \circ g (n)\|_{\mathbb{R}/\mathbb{Z}} \ll \delta(T)^{-O(c_1E_1)} n/T, $$ which is small provided $n$ is not too large. Indeed, if $T'= \delta(T)^{c_1E_1C}T$ for some sufficiently large constant $C \geq 1$, only depending on $d$ and $m_G$, and if $n \in \{1, \dots , T'\}$, then $$ \|\tilde \eta \circ g (n)\|_{\mathbb{R}/\mathbb{Z}} \leq 1/10. $$ Let $\chi: \mathbb{R}/\mathbb{Z} \to [-1,1]$ be a function of bounded Lipschitz norm that equals $1$ on $[-\frac{1}{10},\frac{1}{10}]$ and satisfies $\int_{\mathbb{R}/\mathbb{Z}} \chi(t)\d t = 0$. Then, by setting $F := \chi \circ \tilde \eta$, we obtain a Lipschitz function $F:G/\Gamma \to [-1,1]$ that satisfies $\int_{G/\Gamma} F = 0$ and $\|F\|_{\mathrm{Lip}} \ll \delta(T)^{-O(c_1E_1)}$. Choosing, finally, $c_1$ sufficiently small, only depending on $d$ and $m_G$, we may ensure that $$\|F\|_{\mathrm{Lip}} < \delta(T)^{-E_1}$$ and, moreover, that $$ T'> \delta(T)^{E_1}T. $$ This choice of $T'$, $F$ and $c_1$ implies that $$ \Big| \frac{1}{T'} \sum_{1 \leq n \leq T'} F(g(n)\Gamma) \Big| = 1 > \delta(T)^{E_1} \|F\|_{\mathrm{Lip}}, $$ which contradicts the fact that $(g(n)\Gamma)_{n\leq T}$ is totally $\delta(T)^{E_1}$-equidistributed. This completes the proof of the proposition. \end{proof} \section{Equidistribution of product nilsequences} \label{s:products} In this section we prove, building on material and techniques from \cite[\S 3]{GT-nilmobius}, a result on the equidistribution of products of nilsequences which will allow us to perform applications of the Cauchy--Schwarz inequality in Section \ref{s:proof-of-prop}. The specific form of the result is adjusted to the requirements of Section \ref{s:proof-of-prop}. We begin by introducing the product sequences we shall be interested in. Suppose $g \in \mathrm{poly}(\mathbb{Z},G_{\bullet})$ is a polynomial sequence. This is equivalent to the assertion that there exists an integer $k$, elements $a_1,\dots, a_k$ of $G$, and integral polynomials $P_1 ,\dots, P_k \in \mathbb{Z}[X]$ such that $$g(n)=a_1^{P_1(n)}a_2^{P_2(n)} \dots a_k^{P_k(n)}.$$ Then, for any pair of integers $(m, m')$, the sequence $n \mapsto (g(mn),g(m'n)^{-1})$ is a polynomial sequence on $G\times G$ that may be represented by $$(g(mn),g(m'n)^{-1}) = \left(\prod_{i=1}^k (a_i,1)^{P_i(mn)}\right) \left(\prod_{i=1}^k(1,a_i)^{P_i(m'n)}\right)^{-1}.$$ The horizontal torus of $G \times G$ arises as the direct product $G/\Gamma[G,G]\times G/\Gamma[G,G]$ of horizontal tori for $G$. Let $\pi:G \to G/\Gamma[G,G]$ be the natural projection map. Any horizontal character on $G \times G$ restricts to a horizontal character on each of its factors. Thus, it takes the form $(\eta \oplus \eta')(g_1,g_2):= \eta(g_1) + \eta'(g_2)$ for horizontal characters $\eta,\eta'$ of $G$. The following proposition will be applied in the proof of Proposition \ref{p:equid-non-corr} to sequences $g=g_D$ for unexceptional $D$ in the sense of Proposition \ref{p:linear-subsecs}. \begin{proposition}\label{p:equid} Let $N$, $T$ and $\widetilde{W}(N)$ be as before and let $E_2 \geq 1$. Let $(\tilde D_m)_{m \in \mathbb{N}}$ be a sequence of integers satisfying $|\tilde D_m| < m$ for every $m \in \mathbb{N}$. Further, let $\delta: \mathbb{N} \to (0,1)$ be a function that satisfies $\delta(x)^{-t} \ll_t x$ for all $t>0$. Suppose $G/\Gamma$ has a $\frac{1}{\delta(T)}$-rational Mal'cev basis adapted to a filtration $G_{\bullet}$ of degree $d$.
Let $P \subset \{1,\dots, T \}$ be a discrete interval of length at least $\delta(T) T$. Suppose $F: G/\Gamma \to \mathbb{C}$ is a $1$-bounded function of bounded Lipschitz norm $\|F\|_{\mathrm{Lip}}$ and suppose that $\int_{G/\Gamma} F = 0$. Let $g \in \mathrm{poly}(\mathbb{Z},G_{\bullet})$ and suppose that the finite sequence $(g(n)\Gamma)_{n\leq T}$ is totally $\delta(T)^{E_2}$-equidistributed in $G/\Gamma$. Then there is a constant $c_2 \in (0,1)$, only depending on $d$ and $m_G:= \dim G$, such that the following assertion holds for all integers $$K \in \left[\exp \left( (\log \log T)^2 \right), \exp \left(\frac{\log T -(\log T)^{1/U}}{t} \right)\right],$$ where $1 \leq U \ll 1$, provided $c_2E_2\geq1$. Let $\mathcal{E}_{K}$ denote the set of integer pairs $(m,m') \in (K,2K]^2$ such that $$ \# I_{m,m'} =\#\left\{n \in \mathbb{N} : \begin{array}{c} nm + \tilde D_m \in P, \\ nm' + \tilde D_{m'} \in P \end{array} \right\} > \delta(T)^{c_2E_2} T/K $$ and such that \begin{align*} \bigg| \sum_{\substack{n \leq T/\max(m,m') \\ n \in I_{m,m'}}} F\Big(g\Big(mn + \tilde D_m \Big)\Gamma\Big) \overline{F\Big(g\Big(m'n+\tilde D_{m'}\Big)\Gamma\Big)} \bigg| > \frac{(1 + \|F\|_{\mathrm{Lip}}) \delta(T)^{c_2E_2} T}{K} \end{align*} holds. Then, $$ \# \mathcal{E}_{K} < K^2 \delta(T)^{O(c_2E_2)}, $$ uniformly for all $K$ as above. \end{proposition} \begin{rem}\label{r:primes} The above proposition essentially continues to hold when the variables $(m,m')$ are restricted to pairs of primes. Due to a suitable choice of a cut-off parameter, $X$, that appears in Section \ref{ss:MV}, we will not need this variant of the proposition (cf. Section \ref{ss:small-primes}) and only provide a very brief account of it at the very end of this section. \end{rem} \begin{proof} To begin with, we endow $G/\Gamma \times G/\Gamma$ with a metric by setting $$d((x,y),(x',y'))=d_{G/\Gamma}(x,x')+d_{G/\Gamma}(y,y').$$ Let $\tilde F: G/\Gamma \times G/\Gamma \to \mathbb{C}$ be defined via $\tilde F (\gamma,\gamma') = F(\gamma) \overline{F(\gamma')}$. This is a Lipschitz function. Indeed, the fact that $F$ and $\overline{F}$ are $1$-bounded Lipschitz functions allows us to deduce that $\|\tilde F\|_{\mathrm{Lip}} \leq \|F\|_{\mathrm{Lip}}$. Let $g_{m,m'}: \mathbb{N} \to G \times G$ be the polynomial sequence defined by $$g_{m,m'}(n)= (g(nm+\tilde D_m),g(nm'+\tilde D_{m'})).$$ Furthermore, we write $\Gamma'=\Gamma \times \Gamma$. Then $\tilde F$ satisfies $$ \int_{G/\Gamma \times G/\Gamma} \tilde F(\gamma,\gamma') \d(\gamma,\gamma') = \int_{G/\Gamma} F(\gamma) \int_{G/\Gamma} \overline{F(\gamma')} \d\gamma' \d\gamma = 0. $$ Now, suppose that $$K \in \left[\exp \left( (\log \log T)^2 \right), \exp \left(\frac{\log T -(\log T)^{1/U}}{t} \right)\right] $$ and that $$ \# \mathcal{E}_{K} \geq K^2 \delta(T)^{c_2E_2}. $$ For each pair $(m,m') \in \mathcal{E}_{K}$, we have \begin{align} \label{eq:tech-lowerbound} \nonumber &\sum_{\substack{n \leq T/\max(m,m')\\ n \in I_{m,m'}}} F(g(mn + \tilde D_m)\Gamma) \overline{F(g(m'n + \tilde D_{m'})\Gamma)} \\ &= \sum_{\substack{n \leq T/\max(m,m')\\ n \in I_{m,m'}}} \tilde{F} (g_{m,m'}(n) \Gamma') > \frac{(1 + \|F\|_{\mathrm{Lip}})\delta(T)^{c_2E_2} T}{K}. \end{align} Thus for every pair $(m,m') \in \mathcal{E}_K$ the corresponding sequence $$(g_{m,m'}(n) \Gamma')_{n \leq T/ \max(m,m')}$$ fails to be totally $\delta(T)^{c_2E_2}$-equidistributed.
Lemma \ref{l:equi/totally-equi} implies that this finite sequence also fails to be $\delta(T)^{c_2E_2B}$-equidistributed for some $B\geq1$ that only depends on $d$ and $m_G$. All implied constants in the sequel will be allowed to depend on $d$ and $m_G$, without explicit mentioning. By \cite[Theorem 2.9]{GT-polyorbits}\footnote{The $Q$-rational Mal'cev basis for $G/\Gamma$ induces one for $G/\Gamma \times G/\Gamma$. Thus \cite[Theorem 2.9]{GT-polyorbits} is applicable.}, there is for each pair $(m,m') \in \mathcal{E}_K$ a non-trivial horizontal character $$\tilde \eta_{m,m'} = \eta_{m,m'} \oplus \eta'_{m,m'}: G\times G \to \mathbb{R}/\mathbb{Z}$$ of modulus $\ll \delta(T)^{-O(c_2E_2)}$ such that \begin{equation}\label{eq:smoothness-bound} \| \tilde \eta_{m,m'} \circ g_{m,m'}\|_{ C^{\infty}[T/\max(m,m')]} \ll \delta(T)^{-O(c_2E_2)}. \end{equation} Given any non-trivial horizontal character $\tilde \eta : G \times G \to \mathbb{R}/\mathbb{Z}$, we define the set $$ \mathcal{M}_{\tilde \eta} = \left\{ (m,m') \in \mathcal{E}_{K} \mid \tilde \eta_{m,m'} = \tilde \eta \right\}. $$ This set is empty unless $|\tilde \eta| \ll \delta(T)^{-O(c_2E_2)}$. Pigeonholing over all non-trivial $\tilde \eta$ of modulus bounded by $\delta(T)^{-O(c_2E_2)}$, we find that there is some $\tilde \eta$ amongst them for which $$ \# \mathcal{M}_{\tilde \eta} > K^2 \delta(T)^{O(c_2E_2)}. $$ Let us fix such a character $\tilde \eta = \eta \oplus \eta'$ and suppose without loss of generality that the component $\eta$ is non-trivial. Suppose $$ \tilde \eta \circ (g(n),g(n')) = (\alpha_d n^d + \alpha'_d n'^d) + \dots + (\alpha_1 n + \alpha'_1 n') + (\alpha_0 + \alpha'_0 )$$ and define for $(m,m') \in \mathcal{E}_K$ the coefficients $\alpha_j(m,m')$, $1 \leq j \leq d$, via $$ \tilde \eta \circ g_{m,m'}(n) = \alpha_d(m,m') n^d + \dots + \alpha_1(m,m') n + \alpha_0(m,m'). $$ Then the bound \eqref{eq:smoothness-bound} on the smoothness norm asserts that \begin{align} \label{eq:smoothness-2} \sup_{1 \leq j \leq d} \frac{T^j}{K^j} \left\| \alpha_j (m,m') \right\| \ll \delta(T)^{-O(c_2E_2)}. \end{align} Observe that each $\alpha_j(m,m')$, $1 \leq j \leq d$, satisfies \begin{align} \label{eq:rec-1} \alpha_j(m,m') = \sum_{i=j}^d \binom{i}{j} \Big(\tilde D_m^{i-j} \alpha_i m^{j} + \tilde D_{m'}^{i-j} \alpha'_i m'^j \Big). \end{align} We now aim to show with a downwards induction starting from $j=d$ that \begin{align}\label{eq:alpha_j-kappa_j} \alpha_j = \frac{a_j}{\kappa_j} + \tilde\alpha_j, \end{align} where $1 \leq \kappa_j \ll \delta(T)^{-O(c_2E_2)}$, $\gcd(a_j,\kappa_j)=1$, and \begin{align}\label{eq:tilde-alpha-j} \tilde \alpha_j \ll \delta(T)^{-O(c_2E_2)} T^{-j}. \end{align} Suppose $j_0 \leq d$ and that the above holds for all $j>j_0$. Set $k_{j_0}=\lcm(\kappa_{j_0+1}, \dots, \kappa_d)$ if $j_0<d$, and $k_{j_0}=1$ when $j_0=d$. Note that $k_{j_0} \ll \delta(T)^{-O(c_2E_2)}$. Pigeonholing, we find that there is $\tilde m'$ such that $m'=\tilde m'$ for $\gg K\delta(T)^{O(c_2E_2)}$ pairs $(m,m') \in \mathcal{M}_{\tilde \eta}$. Amongst these there are furthermore $\gg K\delta(T)^{O(c_2E_2)}$ values of $m$ that belong to the same fixed residue class modulo $k_{j_0}$. Denote this set of integers $m$ by $\mathcal{M}'$. Suppose $m=k_{j_0}m_1 + m_0 \in \mathcal{M}'$.
Letting $\{x\}$ denote the fractional part of $x \in \mathbb{R}$, we then have \begin{align*} \{\tilde D_m^{i-j_0} \alpha_i m^{j_0}\} = \Big\{ \tilde D_m^{i-j_0} \tilde \alpha_i m^{j_0} + \frac{a_i m_0^{j_0}}{\kappa_i} \Big\}, \qquad (i \geq j_0), \end{align*} where, in view of \eqref{eq:tilde-alpha-j}, $$ \tilde D_m^{i-j_0} \tilde \alpha_i m^{j_0} \ll \delta(T)^{-O(c_2E_2)} K^{i} T^{-i}. $$ Since $m_0$ is fixed, it thus follows from \eqref{eq:smoothness-2}, \eqref{eq:rec-1}, \eqref{eq:alpha_j-kappa_j} and the above bound that as $m$ varies over $\mathcal{M}'$, the value of $$ \alpha_{j_0} m^{j_0} \bmod 1 $$ lies in a fixed interval of length $\ll \delta(T)^{-O(c_2E_2)} K^{j_0} T^{-j_0}$. We aim to make use of this information in combination with the Waring trick from \cite[\S 3]{GT-nilmobius} that was already employed in Section \ref{s:linear-subsecs}. For this purpose, we consider the set of integers \begin{align*} \mathcal{M}^*=\left\{ m \leq s(2K)^{j_0}: \begin{array}{l} m = m_1^{j_0} + \dots + m_s^{j_0} \\ m_1, \dots, m_s \in \mathcal{M}' \end{array} \right\} \end{align*} with $s \geq 2^{j_0}+1$. For each element $m \in \mathcal{M}^*$ of this set, $\alpha_{j_0} m \bmod 1$ lies in an interval of length $\ll_s \delta(T)^{-O(c_2E_2)} K^{j_0} T^{-j_0}$. Furthermore, Lemma \ref{l:GT-waring} implies that $\# \mathcal{M}^* \gg \delta(T)^{O(c_2E_2)} K^{j_0}$. The restrictions on the size of $K$ and the assumptions on the function $\delta$ imply that the conditions of Lemma \ref{l:GT-rec} are satisfied once $T$ is sufficiently large. Thus, assuming $T$ is sufficiently large, there is an integer $1 \leq \kappa'_{j_0} \ll \delta(T)^{-O(c_2E_2)}$ such that \begin{align*} \| \kappa'_{j_0} \alpha_{j_0} \| \ll \delta(T)^{-O(c_2E_2)} T^{-j_0}, \end{align*} i.e. \begin{align*} \alpha_{j_0} = \frac{a_{j_0}}{\kappa_{j_0}} + \tilde\alpha_{j_0}, \end{align*} where $\kappa_{j_0}| \kappa'_{j_0} $, $\gcd(a_{j_0},\kappa_{j_0})=1$ and $\tilde \alpha_{j_0} \ll \delta(T)^{-O(c_2E_2)} T^{-j_0}$, as claimed. In particular, we have \begin{align}\label{eq:kappa_j} \| \kappa_{j} \alpha_{j} \| \ll \delta(T)^{-O(c_2E_2)} T^{-j} \end{align} for $1\leq j \leq d$. Proceeding as in \cite[\S3]{GT-nilmobius}, let $\kappa=\lcm(\kappa_1, \dots, \kappa_d)$ and set $\eta^* = \kappa \eta$. Then \eqref{eq:kappa_j} together with \eqref{eq:alpha-beta} implies that $$ \|\eta^* \circ g\|_{C^{\infty}[T]} \ll \sup_{1\leq j \leq d} T^j \| \kappa \alpha_{j} \| \ll \delta(T)^{-O(c_2E_2)}, $$ which in turn shows that $$ \|\eta^* \circ g (n)\|_{\mathbb{R}/\mathbb{Z}} \ll \delta(T)^{-O(c_2E_2)}n/T $$ for every $n \in \{1, \dots, T\}$. Note that the latter bound can be controlled by restricting $n$ to a smaller range. For this, set $T'= \delta(T)^{c_2E_2C}T$ for some constant $C \geq 1$ depending only on $d$ and $m_G$, chosen sufficiently large to guarantee that $$ \|\eta^* \circ g (n)\|_{\mathbb{R}/\mathbb{Z}} \leq 1/10, $$ whenever $n \in \{1, \dots , T'\}$. Let $\chi: \mathbb{R}/\mathbb{Z} \to [-1,1]$ be a function of bounded Lipschitz norm that equals $1$ on $[-\frac{1}{10},\frac{1}{10}]$ and satisfies $\int_{\mathbb{R}/\mathbb{Z}} \chi(t)\d t = 0$. Then, by setting $F^* := \chi \circ \eta^*$, we obtain a function $F^*:G/\Gamma \to [-1,1]$ such that $\int_{G/\Gamma} F^* = 0$ and $\|F^*\|_{\mathrm{Lip}} \ll \delta(T)^{-O(c_2E_2)}$. Choosing $c_2$ sufficiently small, we may ensure that $$\|F^*\|_{\mathrm{Lip}} < \delta(T)^{-E_2}$$ and, moreover, that $$ T'> \delta(T)^{E_2}T.
$$ The quantities $T'$, $F^*$ and $c_2$ are chosen in such a way that $$ \Big| \frac{1}{T'} \sum_{1 \leq n \leq T'} F^*(g(n)\Gamma) \Big| = 1 > \delta(T)^{E_2} \|F^*\|_{\mathrm{Lip}}. $$ This contradicts the fact that $(g(n)\Gamma)_{n\leq T}$ is totally $\delta(T)^{E_2}$-equidistributed and completes the proof of the proposition. \end{proof} \subsection{Restriction of Proposition \ref{p:equid} to pairs of primes} We end this section by making the contents of Remark \ref{r:primes} more precise. The variables $(m,m')$ in Proposition \ref{p:equid} can without much additional effort be restricted to range over pairs of primes. It is clear that in the above proof all applications of the pigeonhole principle that involve the parameters $m$ and $m'$ have to be restricted to the set of primes. The only true difference lies in the application of Waring's result: Lemma \ref{l:GT-waring} needs to be replaced by the following one. \begin{lemma} Let $K \geq 1$ be an integer and let $S \subset \{1, \dots, K\} \cap \mathcal{P}$ be a subset of the primes less than $K$. Suppose $\#S= \alpha \frac{K}{\log K}$. Let $s \geq 2^k + 1$. Let $X \subset \{1, \dots, sK^k\}$ denote the set of integers that are representable as $p_1^k + \dots + p_s^k$ with $p_1, \dots, p_s \in S$. Then $$|X|\gg_{k,s} \alpha^{2s} K^k$$ for all sufficiently large $K$. \end{lemma} \begin{proof} Let $I_s(N)$ denote the number of solutions to the equation $$p_1^k + \dots + p_s^k = N$$ in primes $p_1, \dots, p_s$. Hua's asymptotic formula \cite[Theorem 11]{hua} for the Waring--Goldbach problem implies that $$I_s(N) \ll_{k,s} \frac{N^{s/k-1}}{(\log N)^{s}}.$$ Thus, for $1 \leq n \leq sK^k$, we have $$ I_s(n) \ll_{k,s} \frac{K^{s-k}}{(\log K)^{s}}. $$ Hence, $$ \alpha^{2s} \frac{K^{2s}}{(\log K)^{2s}} =\bigg( \sum_{n=1}^{sK^k} I_s(n)\bigg)^2 \leq |X| \sum_{n} I_s^2(n) \ll_{k,s} |X| K^k\frac{K^{2s-2k}}{(\log K)^{2s}} \ll_{k,s} |X| \frac{K^{2s-k}}{(\log K)^{2s}}. $$ Rearranging completes the proof of the lemma. \end{proof} \section{Proof of Proposition \ref{p:equid-non-corr}} \label{s:proof-of-prop} In this section we prove Proposition \ref{p:equid-non-corr} by invoking the possibly trivial Dirichlet decomposition from Definition \ref{def:M}. Let $f \in \mathcal{M}_H^*(E)$ and let $f=f_1 * \dots * f_t$ be a Dirichlet decomposition with the properties of Definitions \ref{def:M} and \ref{def:M*} for some $t\leq H$. We are given integers $q$ and $r$ such that $0 \leq r < q \leq (\log N)^E$ and such that $\widetilde{W}r + A \in (\mathbb{Z}/\widetilde{W}q\mathbb{Z})^*$. Recall that $g \in \mathrm{poly}(\mathbb{Z},G_{\bullet})$ is a polynomial sequence with the property that $(g(n)\Gamma)_{n \leq T/(\widetilde{W} q)}$ is totally $\delta(N)^{E_0}$-equidistributed in $G/\Gamma$. Let $I \subset \{1,\dots,T/\widetilde{W}q\}$ be a discrete interval of length at least $T/(\widetilde{W}q (\log N)^E)$. Our aim is to bound above the expression $$ \frac{\widetilde{W}q}{T}\sum_{n \in I} f(\widetilde{W}(qn+r)+A)F(g(n)\Gamma).
$$ \subsection{Reduction by hyperbola method} \label{ss:hyperbola} Taking into account the identity $f=f_1 * \dots * f_t$, the correlation from Proposition \ref{p:equid-non-corr} may be written as \begin{align}\label{eq:full-sum} & \frac{\widetilde{W}q}{T} \sum_{n \leq T/\widetilde{W}} \mathbf{1}_I(n)f(\widetilde{W}(qn+r)+A)F(g(n)\Gamma) \\ \nonumber &= \frac{\widetilde{W}q}{T} \sum_{\substack{d_1 \dots d_t \leq T \\ d_1 \dots d_t \equiv \widetilde{W}r + A \\ \Mod{\widetilde{W}q} }} f_1(d_1) f_2(d_2) \dots f_t(d_t) F\left(g\left(\frac{d_1 \dots d_t -A-r\widetilde{W}}{\widetilde{W}q} \right)\Gamma \right) \mathbf{1}_{P}\left(d_1 \dots d_t\right), \end{align} where $P$ is the finite progression defined via $P=\widetilde{W}(qI+r)+A$. Our first step is to split this summation via inclusion-exclusion into a finite sum of weighted correlations of individual factors $f_i$ with a nilsequence. To describe these weighted correlations, let $i \in \{1, \dots, t\}$. For every $j \not=i$, let $d_j$ be a fixed positive integer and write $D_i:=\prod_{j \not= i} d_j$. Let $a_i \in [0, T/D_i)$ be an integer. Weighted correlations involving $f_i$ will then take the form: \begin{align}\label{eq:fi-corr} & \frac{\widetilde{W}}{T} \bigg(\prod_{j \not= i} f_j(d_j)\bigg) \sum_{\substack{a_i < d_i \leq T/D_i \\ d_iD_i \equiv A + r\widetilde{W} \Mod{\widetilde{W}q}}} \mathbf{1}_{P}(d_iD_i) f_i(d_i) F\left(g\left( \frac{d_i D_i -A-r\widetilde{W}}{\widetilde{W}q} \right)\Gamma \right) \\ \nonumber &= \frac{\widetilde{W}}{T} \bigg(\prod_{j \not= i} f_j(d_j)\bigg) \sum_{\frac{a_i - D'_i}{\widetilde{W}q} < n \leq \frac{T-D'_i}{D_i\widetilde{W}q}} f_i(\widetilde{W} q n + D'_i) F\Big(g\left(D_i n + D''_i \right)\Gamma \Big) \mathbf{1}_I(D_i n + D''_i ), \end{align} for suitable integers $D'_i, D''_i$, determined by the values of $D_i \Mod{\widetilde{W}q}$ and $A+r\widetilde{W}$. In order to bound correlations of the form \eqref{eq:fi-corr}, we need to ensure that $d_i$ runs over a sufficiently long range, which will be achieved by arranging for $D_i \leq T^{1-1/t}$ to hold. Let $\tau=T^{1-1/t}$ and note that $D_i = D_j \frac{d_j}{d_i}$. Hence, $$ D_i > \tau \iff d_j > \frac{\tau d_i}{D_j}. $$ With the help of this equivalence, the constant function $\mathbf{1}: \mathbb{Z}^t \to \{1\}$ can be decomposed as follows. Suppose $d_1 \dots d_t \leq T$. Then \begin{align*} \mathbf{1}(d_1,\dots, d_t) &= \mathbf{1}_{D_1 \leq \tau} + \mathbf{1}_{D_1 > \tau} \Big(\mathbf{1}_{D_2 \leq \tau} +\mathbf{1}_{D_2 > \tau} \Big(\mathbf{1}_{D_3 \leq \tau} + \dots \Big(\mathbf{1}_{D_t \leq \tau} + \mathbf{1}_{D_t > \tau}\Big) \dots \Big) \Big) \\ &=\mathbf{1}_{D_1 \leq \tau} + \mathbf{1}_{D_1 > \tau} \Big(\mathbf{1}_{D_2 \leq \tau} \mathbf{1}_{d_2 > \frac{\tau d_1}{D_2}} +\mathbf{1}_{D_2 > \tau} \Big(\mathbf{1}_{D_3 \leq \tau} \mathbf{1}_{d_3 > \frac{\tau \max(d_1,d_2)}{D_3}} + \dots\\ &\qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \dots + \mathbf{1}_{D_{t-1} > \tau} \Big(\mathbf{1}_{D_t \leq \tau} + \mathbf{1}_{D_t > \tau}\Big) \dots \Big) \Big)\\ &= \mathbf{1}_{D_1 \leq \tau} +\mathbf{1}_{D_2 \leq \tau} \mathbf{1}_{d_2 > \frac{\tau d_1}{D_2}} +\mathbf{1}_{D_3 \leq \tau} \mathbf{1}_{d_3 > \frac{\tau \max(d_1,d_2)}{D_3}} + \dots +\mathbf{1}_{D_t \leq \tau} \mathbf{1}_{d_t > \frac{\tau \max(d_1, \dots, d_{t-1})}{D_t}}. \end{align*} Thus, $$ \sum_{d_1 \dots d_t < T} =\sum_{i=1}^t \sum_{D \leq T^{1-1/t}} \sum_{\substack{d_1, \dots \widehat{d_{i}} \dots, d_t \\ D_i=D}} \sum_{\substack{d_i \leq T/D_i \\ d_i > \tau \max(d_1, \dots, d_{i-1})/D_i}}.
$$ This shows that the original summation \eqref{eq:full-sum} may be decomposed as a sum of summations of the shape \eqref{eq:fi-corr} while only increasing the total number of terms by a factor of order $O(t)$. Expressing, if necessary, the summation range $$(\tau \max(d_1, \dots, d_{i-1})/D_i, T/D_i)$$ of $d_i$ as the difference of two intervals starting from $1$, we can ensure that $d_i$ runs over an interval of length $\gg T/D_i \gg T^{1/t}$. The correlation now decomposes as: \begin{align}\label{eq:pre-decomposition} \nonumber & \sum_{\substack{d_1 \dots d_t \leq T \\ d_1 \dots d_t \equiv \widetilde{W}r + A \\ \Mod{\widetilde{W}q} }} f_1(d_1) f_2(d_2) \dots f_t(d_t) F\left(g\left(\frac{d_1 \dots d_t -A-r\widetilde{W}}{\widetilde{W}q} \right)\Gamma \right) \mathbf{1}_{P}\left(d_1 \dots d_t\right) \\ &\quad \leq~ \sum_{i=1}^t \sum_{k=1}^{\frac{1-1/t}{\log2}\log T} \sum_{D \sim 2^k} \sum_{\substack{d_1, \dots \widehat{d_{i}} \dots, d_t \\ D_i=D}} \bigg( \prod_{j \not= i} |f_j(d_j)| \bigg) \\ \nonumber &\qquad \qquad \qquad \qquad \Bigg| \sum_{\substack{n \leq T/D \\ n > \frac{\tau}{D} \max(d_1, \dots, d_{i-1}) \\ Dn + D'' \in I}} \hspace*{-.5cm} f_i(\widetilde{W}qn+D')F(g(Dn+D'')\Gamma) \Bigg|. \end{align} Our next aim is to analyse the innermost sum of \eqref{eq:pre-decomposition} as $D\sim 2^k$ varies. Setting $E_1 = E_0$, we deduce from Proposition \ref{p:linear-subsecs} that whenever $2^k \in [\exp ((\log \log T)^2),T^{1-1/t}]$, there is a set $\mathcal{B}_{2^k}$ of cardinality at most $O(\delta(N)^{c_1 E_0}2^k)$ such that for each $D \sim 2^k$ with $D \not\in \mathcal{B}_{2^k}$ the sequence $$(g_D(n)\Gamma)_{n\leq T/(D\widetilde{W} q)}, \qquad g_D(n):=g(Dn+D''),$$ is $\delta(N)^{c_1 E_0}$-equidistributed. Before turning to the case of $D \not\in \mathcal{B}_{2^k}$, we bound the total contribution from exceptional $D$, that is, from $D \in \mathcal{B}_{2^k}$ and from $D \leq \exp((\log \log T)^2)$. \subsection{Contribution from exceptional $D$} \label{ss:D-contribution} Let $\mathcal{B}_{2^k}$ denote the exceptional set from the previous section. \begin{lemma}\label{l:exceptional-D} Whenever $E_0$ is sufficiently large in terms of $d$, $m_G$ and $H$, the following two estimates hold: \begin{align*} &\sum_{\frac{(\log \log T)^2}{\log 2} < k \leq \frac{(1-1/t)\log T}{\log 2} } \sum_{D \in \mathcal{B}_{2^k}} \sum_{\substack{d_1 \dots d_t \leq T \\ d_1 \dots d_t \equiv \widetilde{W}r + A \Mod{\widetilde{W}q} \\ D_i=D }} |f_1(d_1) f_2(d_2) \dots f_t(d_t)| \mathbf{1}_{P}\left(d_1 \dots d_t\right) \\ &\ll_t \frac{T}{\widetilde{W}q} \frac{1}{(\log T)^2}, \end{align*} and \begin{align*} & \sum_{\substack{D \leq \exp((\log \log T)^2) \\ \gcd(D,W)=1}} \sum_{\substack{d_1 \dots d_t \leq T \\ d_1 \dots d_t \equiv \widetilde{W}r + A \Mod{\widetilde{W}q} \\ D_i=D }} |f_1(d_1) f_2(d_2) \dots f_t(d_t)| \mathbf{1}_{P}\left(d_1 \dots d_t\right) \\ &\ll (\log \log T)^{2H} \frac{T}{\widetilde{W}q} \max_{A' \in (\mathbb{Z}/\widetilde{W}q\mathbb{Z})^*} S_{|f_i|}(T; \widetilde{W}q, A'). \end{align*} \end{lemma} \begin{rem*} Note that the above bounds only contribute to the bound in Proposition \ref{p:equid-non-corr} in the case where the Dirichlet convolution is non-trivial, that is, when $t>1$. In view of \eqref{eq:mean-fi-lowerbound}, the contribution from the first part is dominated by the contribution from the second part and may therefore be ignored. Note that the second part is accounted for in the bound from Proposition \ref{p:equid-non-corr}.
\end{rem*} \begin{proof} Set $$\bar f_i (n) := |f_1 * \dots \widehat f_i \dots * f_t(n)|$$ and note that $\bar f_i (n) \leq (CH)^{\Omega(n)}$ for some constant $C$. This implies a second moment bound of the form $$ \sum_{n \leq x} \bar f_i (n)^2 \ll x (\log x)^{O(H)}. $$ Similarly, we have $$ \sum_{n \leq x} |f_i(n)| \ll x (\log x)^{O(H)}. $$ Since Proposition \ref{p:linear-subsecs} provides the bound $\# \mathcal{B}_{2^k} \ll \delta(N)^{c_1E_0} 2^k,$ an application of the Cauchy--Schwarz inequality, combined with the second moment bound above, yields \begin{align*} & \sum_{D \in \mathcal{B}_{2^k}} |f_1 * \dots \widehat f_i \dots * f_t (D)| \sum_{\substack{n \leq T/D \\ nD \equiv A + \widetilde{W}r \Mod{\widetilde{W}q} \\ nD \in P}} |f_i(n)| \\ &\leq \sum_{\substack{n \leq T/2^k\\ \gcd(n,W)=1}} |f_i(n)| \sum_{\substack{D \in \mathcal{B}_{2^k} \\ nD \equiv A + \widetilde{W}r \Mod{\widetilde{W}q} \\ nD \in P}} \bar f_i (D) \\ &\leq \sum_{\substack{n \leq T/2^k \\ \gcd(n,W)=1}} |f_i(n)| 2^k \delta(N)^{c_1E_0/2} k^{O(H)} \\ &\leq T (\log T)^{O(H)} \delta(N)^{c_1E_0/2} k^{O(H)}. \end{align*} Recall that $c_1$ only depends on $d$ and $m_G$, and that by assumption of Proposition \ref{p:equid-non-corr} we have $\delta(N) \ll (\log T)^{-1}$. Thus, the first part of the lemma follows by taking into account that the sum in $k$ has length at most $\log T$ and $k^{O(H)} < (\log T)^{O(H)}$ and by choosing $E_0$ sufficiently large in terms of $d$, $m_G$ and $H$. Concerning the small values of $D$, we have \begin{align*} & \sum_{\substack{D \leq \exp((\log \log T)^2) \\ \gcd(D,W)=1}} \bar f_i(D) \sum_{\substack{n \leq T/D \\ nD \equiv A +\widetilde{W}r\Mod{\widetilde{W}q} \\ nD \in P}} |f_i(n)| \\ &\ll \sum_{\substack{D \leq \exp((\log \log T)^2) \\ \gcd(D,W)=1}} \bar f_i(D) \frac{T}{D\widetilde{W}q} S_{|f_i|}(T/D; \widetilde{W}q, \bar D (A +\widetilde{W}r)), \end{align*} where $\bar D D \equiv 1 \Mod{\widetilde{W}q}$. By property (3) of Definition \ref{def:M}, this in turn is bounded by \begin{equation} \label{eq:small-D-bound} \ll \frac{T}{\widetilde{W}q} \sum_{\substack{D \leq \exp((\log \log T)^2) \\ \gcd(D,W)=1}} \frac{\bar f_i(D)}{D} \max_{A' \in (\mathbb{Z}/\widetilde{W}q\mathbb{Z})^*} S_{|f_i|}(T; \widetilde{W}q, A'). \end{equation} Since \begin{align*} \sum_{\substack{D \leq \exp((\log \log T)^2) \\ \gcd(D,W)=1}} \frac{\bar f_i(D)}{D} &\leq \prod_{j\not=i} \prod_{w(N)< p \leq \exp((\log \log T)^2)} \bigg( 1 + \frac{|f_j(p)|}{p} + \frac{|f_j(p^2)|}{p^2} + \dots \bigg) \\ &\ll \exp\left( \sum_{w(N) < p \leq \exp((\log \log T)^2)} \frac{|f_1(p)| + \dots + |f_t(p)|}{p}\right) \ll (\log \log T)^{2H}, \end{align*} this completes the proof of the second part of the lemma. \end{proof} \subsection{Montgomery--Vaughan approach}\label{ss:MV} From now on we assume that $D$ is unexceptional, that is, $D\sim 2^k$ for $k \geq (\log \log T)^2 / \log 2$ and $D \not\in \mathcal{B}_{2^k}$, where $\mathcal{B}_{2^k}$ is the exceptional set from Section \ref{ss:hyperbola}. To bound the inner sum of \eqref{eq:pre-decomposition} for unexceptional $D$, we shall employ the strategy Montgomery and Vaughan followed in their improvement \cite{mv77} of a result of Daboussi, and begin by introducing a factor $\log n$ into the average. This will later allow us to reduce matters to understanding equidistribution along sequences defined in terms of primes. We set $h=f_i$.
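Before carrying this out, we record for the reader's convenience the elementary comparison that produces the first factor in the Cauchy--Schwarz step below. Abbreviating $M = T/(D\widetilde{W}q)$ and using that $t \mapsto (\log (M/t))^2$ is decreasing on $(0,M]$, we have $$ \sum_{n \leq M} \big(\log M - \log n\big)^2 \leq \int_0^{M} \big(\log (M/t)\big)^2 \,\d t = M \int_0^1 \big(\log (1/u)\big)^2 \,\d u = 2M, $$ so that after an application of Cauchy--Schwarz the whole expression is $\ll M \big( M^{-1} \sum_{n \leq M} h^2(\widetilde{W}qn+D') \big)^{1/2}$, exactly as in the display below.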
Cauchy--Schwarz and several integral comparisons show that \begin{align*} &\sum_{n \leq T/(D\widetilde{W}q)} \mathbf{1}_I(Dn + D'') h(\widetilde{W}qn+D') F(g(Dn+D'')\Gamma) \log\bigg(\frac{(T/D)}{\widetilde{W}qn+D'} \bigg) \\ &\leq \bigg(\sum_{n \leq T/(D\widetilde{W}q)} \Big(\log (T/(D\widetilde{W}q)) - \log n \Big)^2 \bigg)^{1/2} \bigg( \sum_{n \leq T/(D\widetilde{W}q)} h^2(\widetilde{W}qn+D') \bigg)^{1/2} \\ &\ll \frac{T}{D\widetilde{W}q} \sqrt{ \frac{D\widetilde{W}q}{T} \sum_{n\leq T/(D\widetilde{W}q)} h^2(\widetilde{W}qn+D')}, \end{align*} and hence, invoking $D \leq T^{1-1/t}$, \begin{align} \label{eq:initial-CS} &\sum_{\substack{n \leq T/(D\widetilde{W}q) \\ Dn+D'' \in I}} h(\widetilde{W}qn+D') F(g(Dn+D'')\Gamma) \\ \nonumber &\ll_t~ \frac{1}{\log T} \sqrt{\frac{D\widetilde{W}q}{T} \sum_{n\leq \frac{T}{D\widetilde{W}q}} h^2(\widetilde{W}qn+D')} \\ \nonumber &\quad + \frac{1}{\log T} \Bigg| \frac{D\widetilde{W}q}{T} \sum_{\substack{n\leq T/(D\widetilde{W}q) \\Dn+D'' \in I}} h(\widetilde{W}qn+D') F(g(Dn+D'') \Gamma) \log (\widetilde{W}qn+D') \Bigg|. \end{align} By Definition \ref{def:M}, condition (2), the contribution of the first term in the bound above to \eqref{eq:pre-decomposition} is at most $$ O_t\Big( (\log T)^{-\theta_f}S_{|f|}(T;\widetilde{W}q,A+\widetilde{W}r)\Big). $$ Note that this error term is negligible in view of the bound stated in Proposition \ref{p:equid-non-corr}. It remains to estimate the second term in the bound from \eqref{eq:initial-CS}. It will be convenient to abbreviate $$g_D(n):=g(Dn+D''),$$ and to introduce the discrete interval $I_D$, defined via $n \in I_D$ if and only if $Dn+D'' \in I$. Furthermore, let $P_D$ denote the progression for which $n \in P_D$ if and only if $\frac{n - D'}{\widetilde{W}q}\in I_D$. Since $\log n = \sum_{m|n} \Lambda(m)$, our task is to bound \begin{align}\label{eq:task} \frac{D\widetilde{W}q}{T\log T} \Bigg| \sum_{\substack{mn\leq T/D \\ mn \equiv D' \Mod{\widetilde{W}q}}} \mathbf{1}_{P_D}(nm) h(nm) \Lambda(m) F\Big(g_D\Big(\frac{nm-D'}{\widetilde{W}q}\Big) \Gamma\Big) \Bigg|. \end{align} We may, in fact, restrict the summation in \eqref{eq:task} to pairs $(m,n)$ of co-prime integers for which $m=p$ is prime. To see this, recall that $F$ is $1$-bounded and observe that \begin{align*} \sum_{\substack{ nm \leq T/D \\ \Omega(m)\geq 2 \text{ or } \gcd(n,m)>1 \\ mn \equiv D' \Mod{\widetilde{W}q} }} |h(nm)| \Lambda (m) & \leq 2 \sum_{p} \sum_{k \geq 2} k \log p \sum_{\substack{n\leq T/D, p^k\|n \\ n \equiv D' \Mod{\widetilde{W}q} }} |h(n)| \\ & \leq 2 \sum_{p > w(N)} \sum_{k \geq 2} H^k k \log p \sum_{\substack{n\leq T/(Dp^k) \\ p^kn \equiv D' \Mod{\widetilde{W}q} }} |h(n)| . \end{align*} By Definition \ref{def:M} part (3), the order of the mean value of $h$ is stable under small perturbations, i.e.\ if $p^k \leq (T/D)^{1/2}$, then $$ \sum_{\substack{n\leq T/(Dp^k) \\ p^kn \equiv D' \Mod{\widetilde{W}q} }} |h(n)| \ll \frac{1}{p^k} \max_{A' \in (\mathbb{Z}/\widetilde{W}q\mathbb{Z})^*} \sum_{\substack{n\leq T/D \\ n \equiv A' \Mod{\widetilde{W}q}}} |h(n)|. $$ The contribution of such $p^k$ is bounded as follows.
Provided $\varepsilon>0$ is sufficiently small (e.g.\ $\varepsilon= \frac{1}{4}$), and $N$, and hence $w(N)$ is sufficiently large, we have \begin{align*} & \ll \sum_{p > w(N)} \sum_{\substack{k \geq 2 \\ p^k \leq (T/D)^{1/2}}} \frac{H^k \log p^k}{p^k} \max_{A' \in (\mathbb{Z}/\widetilde{W}q\mathbb{Z})^*} \sum_{\substack{n\leq T/D \\ n \equiv A' \Mod{\widetilde{W}q}}} |h(n)| \\ & \ll \sum_{p > w(N)} \frac{1}{p^{2-\varepsilon}} \max_{A' \in (\mathbb{Z}/\widetilde{W}q\mathbb{Z})^*} \sum_{\substack{n\leq T/D \\ n \equiv A' \Mod{\widetilde{W}q}}} |h(n)| \\ & \ll \frac{1}{w(N)^{1-\varepsilon}} \max_{A' \in (\mathbb{Z}/\widetilde{W}q\mathbb{Z})^*} \sum_{\substack{n\leq T/D \\ n \equiv A' \Mod{\widetilde{W}q}}} |h(n)|\\ & \ll \frac{T}{D\widetilde{W}q} \frac{1}{w(N)^{1-\varepsilon}} \max_{A' \in (\mathbb{Z}/\widetilde{W}q\mathbb{Z})^*} S_{|h|}(T/D;\widetilde{W}q,A'). \end{align*} The remaining sum over $p^k > (T/D)^{1/2}$ satisfies: \begin{align*} & \sum_{p > w(N)} \sum_{\substack{k \geq 2 \\ p^k > (T/D)^{1/2}}} H^k \log p^k \sum_{\substack{n\leq T/(Dp^k) \\ p^kn \equiv D' \Mod{\widetilde{W}q} }} |h(n)| \\ & \ll \sum_{p > w(N)} \sum_{\substack{k \geq 2 \\ p^k > (T/D)^{1/2}}} H^k \log p^k \frac{T}{D p^k} \left(\log \frac{T}{D p^k}\right)^{O(H)} \\ &\ll \frac{T}{D} \left(\log \frac{T}{D}\right)^{O(H)} \sum_{p > w(N)} \sum_{\substack{k \geq 2 \\ p^k > (T/D)^{1/2}}} \frac{H^k \log p^k}{p^k} \\ &\ll \frac{T}{D} \left(\log \frac{T}{D}\right)^{O(H)} \sum_{p > w(N)} \sum_{\substack{k \geq 2 \\ p^k > (T/D)^{1/2}}} p^{-k+\varepsilon} \\ &\ll \frac{T}{D} \left(\log \frac{T}{D}\right)^{O(H)} \sum_{p > w(N)} (T/D)^{-\varepsilon} p^{-2 + 2\varepsilon} \\ &\ll \left(\frac{T}{D}\right)^{1-\varepsilon} \left(\log \frac{T}{D}\right)^{O(H)}, \end{align*} again, provided $\varepsilon>0$ is sufficiently small, e.g.\ $\varepsilon= \frac{1}{4}$, and $N$, and hence $w(N)$, sufficiently large. Thus, the total contribution to \eqref{eq:pre-decomposition} of pairs $(m,n)$ that are not of the form $(p,n)$ with $p$ prime and $p \nmid n$ is bounded by \begin{align*} & \frac{T}{\widetilde{W}q} \sum_{i=1}^t \sum_k \sum_{D \sim 2^k} \sum_{d_1 \dots \hat{d_i} \dots d_t = D} \bigg(\prod_{j \not= i} |f_j(d_j)|\bigg) \frac{1}{\log T} \frac{1}{w(N)^{1-\varepsilon}} \frac{1}{D} \max_{A' \in (\mathbb{Z}/\widetilde{W}q\mathbb{Z})^*} S_{|h|}(T/D;\widetilde{W}q,A')\\ &\leq \frac{T}{\widetilde{W}q} \frac{1}{\log T} \frac{1}{w(N)^{1-\varepsilon}} \max_{A' \in (\mathbb{Z}/\widetilde{W}q\mathbb{Z})^*} S_{|f_1|* \dots *|f_t|}(T;\widetilde{W}q,A'), \end{align*} which is negligible in view of the bound claimed in Proposition \ref{p:equid-non-corr}. It therefore suffices to bound the expression $$ \frac{D\widetilde{W}q}{T} \sum_{\substack{mp\leq T/D \\ mp \equiv D' \Mod{\widetilde{W}q}}} \mathbf{1}_{P_D}(mp) h(m) h(p) \Lambda(p) F\Big(g_D\Big(\frac{pm-D'}{\widetilde{W}q}\Big) \Gamma\Big).
$$ We shall split this summation into large and small divisors with respect to the parameter $$X=X(D) =\left(\frac{T}{D}\right)^{1-1/\left(\log\frac{T}{D}\right)^{\frac{U-1}{U}}}, $$ where $1 \leq U \ll 1$ is any integer satisfying $U^{-1}<\theta_f /2$, e.g.\ $$U = \max\{1, \lceil 2/\theta_f \rceil\}.$$ With this choice of $X$ we obtain \begin{align}\label{eq:two-sums} & \frac{\widetilde{W}q D}{T}\sum_{\substack{m<X \\ \gcd(m,W)=1}} \nonumber \sum_{\substack{p \leq T/(m D)\\ p \equiv D' \bar{m} \Mod{\widetilde{W}q}}} \mathbf{1}_{P_D}(mp) h(m) h(p) \Lambda(p) F\Big(g_D\Big(\frac{pm-D'}{\widetilde{W}q}\Big) \Gamma\Big) \\ &+ \frac{\widetilde{W}qD}{T} \sum_{\substack{m>X \\ \gcd(m,W)=1}} \sum_{\substack{p \leq T/(mD) \\ p \equiv D' \bar{m} \Mod{\widetilde{W}q}}} \mathbf{1}_{P_D}(mp) h(m) h(p) \Lambda(p) F\Big(g_D\Big(\frac{pm-D'}{\widetilde{W}q}\Big) \Gamma\Big). \end{align} In order to analyse these expressions, we dyadically decompose, in each of the two terms, the sum with the shorter summation range. The cut-off parameter $X$ is chosen in such a way that one of the dyadic decompositions is of short length, depending on $U$. Indeed, we have $$\log_2 X \sim \log_2 (T/D),$$ while $$ \log_2 \frac{T}{DX} = \frac{(\log \frac{T}{D})^{1/U}}{\log 2}. $$ Let $$T_0 = \exp((\log\log T)^2).$$ Then the two sums from \eqref{eq:two-sums} decompose as \begin{align}\label{eq:two-sums-large-p} &\frac{\widetilde{W}qD}{T} \sum_{\substack{m<T_0 \\ \gcd(m,W)=1}} \sum_{\substack{p \leq T/(mD) \\ p \equiv D' \bar{m} \Mod{\widetilde{W}q}}} \mathbf{1}_{P_D}(mp) h(m) h(p) \Lambda(p) F\Big(g_D\Big(\frac{pm-D'}{\widetilde{W}q}\Big) \Gamma\Big) \\ \nonumber &+ \frac{\widetilde{W}qD}{T}\sum_{j=1}^{\log_2 (X/T_0)} \sum_{\substack{m \sim 2^{-j} X \\ \gcd(m,W)=1}} \sum_{\substack{p \leq T/(mD) \\ p \equiv D' \bar{m} \Mod{\widetilde{W}q}}} \mathbf{1}_{P_D}(mp) h(m) h(p) \Lambda(p) F\Big(g_D\Big(\frac{pm-D'}{\widetilde{W}q}\Big) \Gamma\Big) \end{align} and \begin{align}\label{eq:two-sums-small-p} & \frac{\widetilde{W}qD}{T} \Bigg\{ \sum_{\substack{m>X \\ \gcd(m,W)=1}} \sum_{\substack{p \leq \min(T/(mD),T_0) \\ p \equiv D' \bar{m} \Mod{\widetilde{W}q}}} \mathbf{1}_{P_D}(mp) h(m) h(p) \Lambda(p) F\Big(g_D\Big(\frac{pm-D'}{\widetilde{W}q}\Big) \Gamma\Big) \\ \nonumber &+ \sum_{j=1}^{\log_2 (T/(XDT_0))} \hspace*{-.5cm} \sum_{\substack{m>X \\ \gcd(m,W)=1}} \sum_{\substack{p \sim 2^{-j}T/(XD) \\ p \equiv D' \bar{m} \Mod{\widetilde{W}q}}} \mathbf{1}_{pm<T/D} \mathbf{1}_{P_D}(mp) h(m) h(p) \Lambda(p) F\Big(g_D\Big(\frac{pm-D'}{\widetilde{W}q}\Big) \Gamma\Big) \Bigg\}. \end{align} We now analyse the contribution from these four sums to \eqref{eq:pre-decomposition} in turn, beginning with the two short sums up to $T_0$. \subsection{Short sums} The following lemma gives bounds on the contribution of the short sums in \eqref{eq:two-sums-large-p} and \eqref{eq:two-sums-small-p} to \eqref{eq:pre-decomposition}.
\begin{lemma} \label{l:short-sums} Writing $\bar{f}_i(n) = |f_1 * \dots * \widehat{f_i} * \dots * f_t(n)|$, we have \begin{align*} &\sum_{D \leq T^{1-1/t}} \frac{\bar{f}_i(D)}{\log T}\bigg| \sum_{\substack{m<T_0 \\ \gcd(m,W)=1}} \sum_{\substack{p \leq T/(mD) \\ p \equiv D' \bar{m} \\ \Mod{\widetilde{W}q}}} \mathbf{1}_{P_D}(mp) h(m) h(p) \Lambda(p) F\Big(g_D\Big(\frac{pm-D'}{\widetilde{W}q}\Big) \Gamma\Big)\bigg| \\ &\ll \frac{T}{\widetilde{W}q} (\log \log T)^{2} \frac{Wq}{\phi(Wq)} \frac{1}{\log T} \exp \bigg( \sum_{w(N)< p < T } \frac{|f_1(p) + \dots + f_t(p) - f_i(p)|}{p} \bigg) \end{align*} and \begin{align*} &\sum_{D \leq T^{1-1/t}} \frac{\bar{f}_i(D)}{\log T} \bigg| \sum_{\substack{m>X \\ \gcd(m,W)=1}} \sum_{\substack{p \leq \min(\frac{T}{Dm},T_0) \\ p \equiv D' \bar{m} \Mod{\widetilde{W}q}}} \mathbf{1}_{P_D}(mp) h(m) h(p) \Lambda(p) F\Big(g_D\Big(\frac{pm-D'}{\widetilde{W}q}\Big) \Gamma\Big)\bigg| \\ &\ll \frac{T}{\widetilde{W}q} \frac{(\log \log T)^2}{\log T} \max_{A' \in (\mathbb{Z}/\widetilde{W}q\mathbb{Z})^*} S_{|f_1| * \dots * |f_t|}(T;\widetilde{W}q,A'). \end{align*} \end{lemma} \begin{rem*} Note that both the above bounds make sense in the case where $t=1$. \end{rem*} \begin{proof} The short sum in \eqref{eq:two-sums-large-p} satisfies \begin{align*} &\frac{1}{\log T} \bigg| \frac{\widetilde{W}qD}{T} \sum_{\substack{m<T_0 \\ \gcd(m,W)=1}} \sum_{\substack{p \leq T/(mD) \\ p \equiv D' \bar{m} \Mod{\widetilde{W}q}}} \mathbf{1}_{P_D}(mp) h(m) h(p) \Lambda(p) F\Big(g_D\Big(\frac{pm-D'}{\widetilde{W}q}\Big) \Gamma\Big)\bigg| \\ &\ll \frac{1}{\log T} \frac{\widetilde{W}qD}{T} \sum_{\substack{m<T_0 \\ \gcd(m,W)=1}} \sum_{\substack{p \leq T/(mD) \\ p \equiv D' \bar{m} \Mod{\widetilde{W}q}}} |\mathbf{1}_{P_D}(mp) h(m) h(p) \Lambda(p)| \\ &\ll \frac{\widetilde{W}q}{\phi(\widetilde{W}q)} \frac{1}{\log T} \sum_{\substack{m < T_0 \\ \gcd(m,W)=1}} \frac{|h(m)|}{m} \ll \frac{Wq}{\phi(Wq)} \frac{1}{\log T} \exp \bigg( \sum_{w(N)< p < T_0 } \frac{|h(p)|}{p} \bigg)\\ &\ll (\log \log T)^{2} \frac{Wq}{\phi(Wq)} \frac{1}{\log T}, \end{align*} where we made use of part (3) of Definition \ref{def:M}. This yields the first bound, since \begin{align*} \sum_{D \leq T^{1-1/t}} \frac{\bar{f}_i(D)}{D} \ll \exp \bigg( \sum_{w(N)< p < T } \frac{|f_1(p) + \dots + f_t(p) - f_i(p)|}{p} \bigg), \end{align*} which continues to hold when $t=1$. The short sum in \eqref{eq:two-sums-small-p} is bounded by \begin{align*} &\frac{1}{\log T} \bigg| \frac{\widetilde{W}qD}{T}\sum_{\substack{m>X \\ \gcd(m,W)=1}} \sum_{\substack{p \leq \min(T/(mD),T_0) \\ p \equiv D' \bar{m} \Mod{\widetilde{W}q}}} \mathbf{1}_{P_D}(mp) h(m) h(p) \Lambda(p) F\Big(g_D\Big(\frac{pm-D'}{\widetilde{W}q}\Big) \Gamma\Big)\bigg| \\ &\ll \frac{1}{\log T} \sum_{\substack{w(N) < p < T_0 }} \frac{\Lambda(p)}{p} \max_{A' \in (\mathbb{Z}/\widetilde{W}q\mathbb{Z})^*} S_{|h|}(T/(pD);\widetilde{W}q,A') \\ &\ll \max_{A' \in (\mathbb{Z}/\widetilde{W}q\mathbb{Z})^*}S_{|h|}(T/D;\widetilde{W}q,A') \frac{1}{\log T} \sum_{\substack{w(N) < p < T_0 }} \frac{\Lambda(p)}{p} \\ &\ll \frac{\log T_0}{\log T} \max_{A' \in (\mathbb{Z}/\widetilde{W}q\mathbb{Z})^*}S_{|h|}(T/D;\widetilde{W}q,A') = \frac{(\log \log T)^2}{\log T} \max_{A' \in (\mathbb{Z}/\widetilde{W}q\mathbb{Z})^*}S_{|h|}(T/D;\widetilde{W}q,A').
\end{align*} This proves the second part of the lemma, since \begin{align*} \sum_{D \leq T^{1-1/t}} \frac{\bar{f}_i(D)}{D} \max_{A' \in (\mathbb{Z}/\widetilde{W}q\mathbb{Z})^*} S_{|h|}(T/D;\widetilde{W}q,A') \ll \max_{A' \in (\mathbb{Z}/\widetilde{W}q\mathbb{Z})^*} S_{|f_1| * \dots * |f_t|}(T;\widetilde{W}q,A'). \end{align*} \end{proof} Since the statement of Proposition \ref{p:equid-non-corr} takes care of both bounds from the above lemma, it remains to bound the contribution from the dyadic parts of \eqref{eq:two-sums-large-p} and \eqref{eq:two-sums-small-p}. \subsection{Large primes} \label{ss:large-primes} In this subsection we prove the following lemma which handles the contribution of the dyadic parts of \eqref{eq:two-sums-large-p} to \eqref{eq:pre-decomposition}. \begin{lemma}[Contribution from large primes] \label{l:large-primes} Under the assumptions of Proposition \ref{p:equid-non-corr}, the following holds. Let $E^{\sharp}_h(T;D,q,j,I)$ denote the expression \begin{align*} \bigg| \frac{D\widetilde{W}q}{T} \sum_{\substack{m \sim 2^{-j} X \\ \gcd(m,W)=1}} \sum_{\substack{p \leq T/(mD) \\ p \equiv D' \bar{m} \Mod{\widetilde{W}q}}} \mathbf{1}_{P_D}(mp) h(m) h(p) \Lambda(p) F\Big(g_D\Big(\frac{pm-D'}{\widetilde{W}q}\Big) \Gamma\Big) \bigg|. \end{align*} Then, provided the parameter $E_0$ from Proposition \ref{p:equid-non-corr} is sufficiently large depending on $d$, $m_G$ and $H$, we have \begin{align} \label{eq:large-primes} &\sum_{i=1}^t \sum_{k=\frac{(\log \log T)^2}{\log 2}}^{(1-1/t)\frac{\log T}{\log2}} \sum_{D \sim 2^k} \mathbf{1}_{D \not\in \mathcal{B}_{2^k}} \sum_{\substack{d_1, \dots \widehat{d_{i}} \dots, d_t \\ D_i=D}} \bigg( \prod_{i' \not= i} \frac{|f_{i'}(d_{i'})|}{d_{i'}} \bigg) \sum_{j=0}^{\log_2(X/T_0)} \frac{E^{\sharp}_{f_i}(T;D,q,j,I)}{\log T}\\ \nonumber &\ll_{d,s,m_G, E, H, U} \left( (\log \log T)^{-1/(2^{s+2} \dim G)} + \frac{Q^{10^s \dim G}}{(\log \log T)^{1/2}} \right) \max_{A' \in (\mathbb{Z}/\widetilde{W}q\mathbb{Z})^*} S_{|f_1|* \dots *|f_t|}(T;\widetilde{W}q,A'). \end{align} \end{lemma} \begin{rem*} Note that this contribution does not exceed the bound \eqref{eq:prop-bound}. \end{rem*} The remainder of this subsection is concerned with the proof of \eqref{eq:large-primes}. Considering $E^{\sharp}_h(T;D,q,j,I)$ for a fixed value of $j$, $1 \leq j \leq \log_2 \frac{X}{T_0}$, the Cauchy--Schwarz inequality yields \begin{align} \label{eq:two-sums-1} & \sum_{\substack{m \sim 2^{-j}X \\ \gcd(m,W)=1 }} \sum_{\substack{p \leq 2^{j} T/(XD) \\ p \equiv D' \bar{m} \Mod{\widetilde{W}q} \\ mp \in P_D}} \mathbf{1}_{mp\leq T/D} h(m) h(p) \Lambda(p) F\Big(g_D\Big(\frac{pm-D'}{\widetilde{W}q}\Big) \Gamma\Big)\\ \nonumber &\leq \bigg( \sum_{p \leq 2^j T/(XD)} |h(p)|^2 \Lambda(p) \bigg)^{1/2} \Bigg( \frac{\widetilde{W}q}{\phi(\widetilde{W}q)} \sum_{A' \in (\mathbb{Z}/\widetilde{W}q\mathbb{Z})^*} \sum_{\substack{m,m' \sim 2^{-j}X \\ m \equiv m' \equiv A' \Mod{\widetilde{W}q} \\ \gcd(W,m)=1}} h(m)h(m') \\ \nonumber & \qquad \qquad \qquad \frac{\phi(\widetilde{W}q)}{\widetilde{W}q} \sum_{\substack{p \leq T/(D\max(m,m')) \\ pA' \equiv D' \Mod{\widetilde{W}q} \\ pm, pm' \in P_D}} \Lambda(p) F\Big(g_D\Big(\frac{pm-D'}{\widetilde{W}q}\Big) \Gamma\Big) \overline{ F\Big(g_D\Big(\frac{pm'-D'}{\widetilde{W}q}\Big) \Gamma\Big)} \Bigg)^{1/2}. \end{align} The first factor is easily seen to be $O\big((2^{j}T/(XD))^{1/2}\big)$, since $h(p) \ll_H 1$ at primes.
To estimate the second factor, we seek to employ the orthogonality of the `$W$-tricked von Mangoldt function' with nilsequences in combination with the fact that Proposition \ref{p:equid} implies that for most pairs $(m,m')$ the product nilsequence that appears in the above expression is equidistributed. For this purpose, we effect the change of variables $p=\widetilde{W}qn+D'_m$ in the inner sum of the second factor, where $D'_m$ is such that $D'_m \equiv D' \bar{m} \Mod{\widetilde{W}q}$. This yields \begin{align} \label{eq:cov} & \frac{\phi(\widetilde{W}q)}{\widetilde{W}q} \sum_{\substack{p \leq T/(D\max(m,m')) \\ pA' \equiv D' \Mod{\widetilde{W}q} \\ pm,pm' \in P_D}} \Lambda(p) F\Big(g_D\Big(\frac{pm-D'}{\widetilde{W}q}\Big) \Gamma\Big) \overline{ F\Big(g_D\Big(\frac{pm'-D'}{\widetilde{W}q}\Big) \Gamma\Big)} \\ \nonumber &= \sum_{\substack{n \leq T/(\widetilde{W}qD\max(m,m')) \\ nm+\widetilde D_{m}, nm'+\widetilde D_{m'} \in I_D}} \frac{\phi(\widetilde{W}q)}{\widetilde{W}q} \Lambda(\widetilde{W}qn+D'_m) F(g_D(nm+\widetilde D_m)\Gamma) \overline{F(g_D(nm'+\widetilde D_{m'})\Gamma)}, \end{align} for suitable values of $0\leq \widetilde D_{m}<m$, $0\leq \widetilde D_{m'}<m'$. Recall that for any fixed unexceptional value of $D$, the finite sequence $$ (g_D(n)\Gamma)_{n\leq T/(Dq)} $$ is totally $\delta(N)^{c_1E_0}$-equidistributed. Thus, applying Proposition \ref{p:equid} with $g = g_D$ and with $E_2=c_1E_0$, we obtain for every integer $$K \in \left[T_0, X \right]$$ an exceptional set $\mathcal{E}_{K}$ of size $\# \mathcal{E}_{K} \ll \delta(N)^{O(c_1c_2E_0)}K^2$ such that for all pairs $m,m' \sim K$ with $m \equiv m' \Mod{W}$ and $(m,m') \not\in \mathcal{E}_{K}$ the following estimate holds: \begin{align*} \bigg| \sum_{\substack{n \leq T/(\widetilde{W}qD\max(m,m')) \\ nm+\widetilde D_{m}, nm'+\widetilde D_{m'} \in I_D}} F(g_D(nm+\widetilde D_m)\Gamma) \overline{F(g_D(nm'+\widetilde D_{m'})\Gamma)} \bigg| < \frac{(1 + \|F\|_{\mathrm{Lip}}) \delta(N)^{c_1c_2E_0} T}{KqD}. \end{align*} Before we continue with the analysis of \eqref{eq:cov}, we prove a quick lemma that will allow us to handle the contribution of exceptional sets $\mathcal{E}_{K}$ in the proof of Lemma \ref{l:large-primes}. \begin{lemma}\label{l:E_K'-contribution} Suppose $j \leq \log_2(X/T_0)$ and let $\mathcal{E}_{D,K}$ denote the exceptional set from Proposition \ref{p:equid} with $g=g_D$. Then, provided $E_0$ is sufficiently large, we have \begin{align*} &\frac{1}{\phi(\widetilde{W}q)} \sum_{A' \in (\mathbb{Z}/\widetilde{W}q\mathbb{Z})^*} \sum_{\substack{m,m' \sim 2^{-j}X \\ m \equiv m' \equiv A' \Mod{\widetilde{W}q}}} |h(m)h(m')| \mathbf{1}_{(m,m') \in \mathcal{E}_{D,2^{-j}X}} \\ &\ll \delta(N)^{O(c_1c_2 E_0)} \bigg( \max_{A' \in (\mathbb{Z}/\widetilde{W}q\mathbb{Z})^*} \sum_{\substack{m \sim 2^{-j}X \\ m \equiv A' \Mod{\widetilde{W}q} }} |h(m)| \bigg)^2.
\end{align*} \end{lemma} \begin{proof} Cauchy--Schwarz and an application of part (1) of Definition \ref{def:M} yield for each residue $A' \in (\mathbb{Z}/\widetilde{W}q\mathbb{Z})^*$ that \begin{align*} & \sum_{\substack{m,m' \sim 2^{-j}X \\ m \equiv m' \equiv A' \Mod{\widetilde{W}q}}} |h(m)h(m')| \mathbf{1}_{(m,m') \in \mathcal{E}_{D,2^{-j}X}} \\ &\ll 2^{-j}X \delta(N)^{O(c_1c_2E_0)} \bigg( \sum_{\substack{m, m' \sim 2^{-j}X \\ m \equiv m' \equiv A' \Mod{\widetilde{W}q} }} |h(m)|^2|h(m')|^2 \bigg)^{1/2} \\ &\ll 2^{-j}X \delta(N)^{O(c_1c_2E_0)} \sum_{\substack{m \sim 2^{-j}X \\ m \equiv A' \Mod{\widetilde{W}q} }} |h(m)|^2 \\ &\ll \delta(N)^{O(c_1c_2E_0)} (\log 2^{-j}X)^{2} \bigg( \sum_{\substack{m \sim 2^{-j}X \\ m \equiv A' \Mod{\widetilde{W}q} }} |h(m)| \bigg)^2. \end{align*} Since $\delta(N) \leq (\log N)^{-1}$, any sufficiently large choice of $E_0$ guarantees that $$\delta(N)^{O(c_1c_2E_0)} (\log 2^{-j}X)^{2} \leq \delta(N)^{O(c_1c_2E_0)}$$ holds, after slightly decreasing the implied constant in the exponent on the right-hand side. This completes the proof. \end{proof} As a final tool for the proof of Lemma \ref{l:large-primes}, we require an explicit bound on the correlation of the `$W$-tricked von Mangoldt function' with nilsequences. The following lemma, which will be proved in Appendix \ref{app} building on the proof of \cite[Proposition 10.2]{GT-linearprimes}, provides such bounds in our specific setting. \begin{lemma} \label{l:Lambda} Let $G/\Gamma$ be an $s$-step nilmanifold, let $G_{\bullet}$ be a filtration of $G$ and let $\mathcal{X}$ be a $Q$-rational Mal'cev basis adapted to it. Let $\Lambda': \mathbb{N} \to \mathbb{R}$ be the restriction of the ordinary von Mangoldt function to primes, that is, $\Lambda'(p^k)=0$ whenever $k>1$. Let $W=W(x)$, let $q'$ and $b'$ be integers such that $0<b'<Wq' \leq (\log x)^{E}$ and $\gcd(W,b')=1$ hold. Let $\alpha \in (0,1)$. Then, for every $y \in [\exp((\log x)^{\alpha}), x]$ and for every polynomial sequence $g \in \mathrm{poly}(\mathbb{Z},G_{\bullet})$, the following estimate holds: \begin{align*} \left|\sum_{n\leq y} \frac{\phi(Wq')}{Wq'}\Lambda'(Wq'n+b') F(g(n)\Gamma) \right| \ll_{\alpha, s, E, \|F\|_{\mathrm{Lip}}} \left| \sum_{n\leq y} F(g(n)\Gamma)\right| + y\mathcal{E}(x), \end{align*} where $$ \mathcal{E}(x) := (\log \log x)^{-1/(2^{s+2} \dim G)} + \frac{Q^{10^s \dim G}}{(\log \log x)^{1/2}}. $$ \end{lemma} Employing Lemma \ref{l:Lambda} for the upper endpoint of an interval $[y_0,y_1]$, and either a trivial estimate or the lemma for the lower endpoint, say, depending on whether or not $y_0 \leq y_1^{1/2}$, we obtain as an immediate consequence that \begin{align}\label{eq:GT-Lambda-and-N-q-P} &\left|\sum_{y_0 \leq n\leq y_1} \frac{\phi(Wq')}{Wq'}\Lambda'(Wq'n+b') F(g(n)\Gamma) \right| \\ \nonumber &\quad \ll_{\alpha, s, E, \|F\|_{\mathrm{Lip}}} \left|\sum_{n\leq y_0} F(g(n)\Gamma)\right| + \left|\sum_{n\leq y_1} F(g(n)\Gamma)\right| + y_1^{1/2} + y_1 \mathcal{E}(x) \end{align} for any $0 < y_0 < y_1 \leq x$ such that $y_1 \geq \exp((\log x)^{\alpha})$. To continue with the proof of Lemma \ref{l:large-primes}, we now return to the right hand side of \eqref{eq:cov}. Recall that $n \in I_D$ if and only if $Dn + D'' \in I$. Since $I$ is a discrete interval, $I_D$ is a discrete interval too and, for $m,m' \sim 2^{-j}X$, we have $$ \#\Big\{ n \in \mathbb{N}: nm+\widetilde D_{m} \in I_D \Big\} \ll |I_D|2^{j}/X \ll |I|2^{j}/(DX) \leq T 2^j/(DX \widetilde{W}q) $$ and, similarly, $ \#\{ n \in \mathbb{N}: nm'+\widetilde D_{m'} \in I_D \} \ll |I_D|2^{j}/X \ll T 2^j/(DX \widetilde{W}q).
$ We now consider separately the cases of $(m,m') \not\in \mathcal{E}_{D,2^{-j}X}$ for which \begin{align} \label{eq:mm'-cases} \# I_{m,m'} \leq \delta(N) 2^jT/(DX\widetilde{W}q), \qquad \text{where}\quad I_{m,m'} =\left\{n \in \mathbb{N} : \begin{array}{c} nm + \widetilde D_m \in I_D, \\ nm' + \widetilde D_{m'} \in I_D \end{array} \right\}, \end{align} holds and those of $(m,m') \not\in \mathcal{E}_{D,2^{-j}X}$ for which $\# I_{m,m'}>\delta(N) 2^jT/(DX\widetilde{W}q)$. In the former case, the trivial bound yields \begin{align*} \bigg|\sum_{\substack{n \leq \frac{T}{\widetilde{W}qD\max(m,m')} \\ nm+\widetilde D_{m} \in I_D \\ nm'+\widetilde D_{m'} \in I_D}} \frac{\phi(\widetilde{W}q)}{\widetilde{W}q} \Lambda(\widetilde{W}qn+D'_m) F(g_D(nm+\widetilde D_m)\Gamma) \overline{F(g_D(nm'+\widetilde D_{m'})\Gamma)}\bigg| \ll \frac{\delta(N)T2^j}{DX\widetilde{W}q}. \end{align*} In the latter case, we seek to apply \eqref{eq:GT-Lambda-and-N-q-P} with $[y_0,y_1]=I_{m,m'}$ and $x=N=T^{1+o(1)}$. Note that \begin{align*} \log y_1 &\geq \log \Big( \delta(N) T2^j/(DX \widetilde{W}q) \Big) \\ &\geq \log \frac{T}{DX} + j\log 2 + E_0 \log \delta(N) - \log (\widetilde{W}q) \\ &\geq (\log (T/D))^{1/U} + j \log 2 - O_{d, \dim G, E_0, U}(1) \log \log N - 2E \log \log T \\ &\geq \Big( \frac{1}{t}\log T \Big)^{1/U} - O_{d,\dim G, E_0, U}(1) \log \log N - 2E \log \log T \\ &\gg_{d, \dim G, E, E_0, H, U} (\log T )^{1/U}, \end{align*} where we used the definition of $X$, the assumptions that Proposition \ref{p:equid-non-corr} makes on $\delta$, and the fact that $t \leq H$. Thus, the conditions of Lemma \ref{l:Lambda} are satisfied for every sufficiently large $T$ (with respect to $d, \dim G, E, E_0, H$ and $U$) when choosing $\alpha = 1/(2U)$. Hence, \eqref{eq:GT-Lambda-and-N-q-P} yields the following estimate for the interval $[y_0,y_1]= I_{m,m'}$~: \begin{align*} \bigg|\sum_{\substack{n \leq T/(\widetilde{W}qD\max(m,m')) \\ nm+\widetilde D_{m}, nm'+\widetilde D_{m'} \in I_D}} &\frac{\phi(\widetilde{W}q)}{\widetilde{W}q} \Lambda(\widetilde{W}qn+D'_m) F(g_D(nm+\widetilde D_m)\Gamma) \overline{F(g_D(nm'+\widetilde D_{m'})\Gamma)}\bigg| \\ \ll_{d, \dim G, s, E, E_0, H, U}\quad &\bigg|\sum_{n \leq y_0} F(g_D(nm+\widetilde D_m)\Gamma) \overline{F(g_D(nm'+\widetilde D_{m'})\Gamma)}\bigg| \\ +&\bigg|\sum_{n \leq y_1} F(g_D(nm+\widetilde D_m)\Gamma) \overline{F(g_D(nm'+\widetilde D_{m'})\Gamma)}\bigg| +\frac{T2^j}{DX\widetilde{W}q} \mathcal{E}(T). \end{align*} Proposition \ref{p:equid} shows that the right hand side is small for most pairs $(m,m')$.
Indeed, together with Proposition \ref{p:equid}, the above implies that \eqref{eq:two-sums-1} is bounded above by \begin{align*} \ll_{d, \dim G, s, E, E_0, H, U} \Big(\frac{T 2^j}{DX}\Big)^{1/2} &\Bigg( \frac{\widetilde{W}q}{\phi(\widetilde{W}q)} \sum_{A' \in (\mathbb{Z}/\widetilde{W}q\mathbb{Z})^*} \sum_{\substack{m,m' \sim 2^{-j}X \\ m \equiv m' \equiv A' \Mod{\widetilde{W}q}}} |h(m)h(m')| \times \\ &\times \frac{T 2^j}{\widetilde{W}qDX} \bigg(\delta(N)^{O(c_1c_2E_0)} + \mathbf{1}_{(m,m') \in \mathcal{E}_{D,2^{-j}X}} + \mathcal{E}(T) \bigg) \Bigg)^{1/2}. \end{align*} An application of Lemma \ref{l:E_K'-contribution} shows that this in turn is bounded by \begin{align*} \ll_{d, \dim G, s, E, E_0, H, U}~ &\frac{T}{\widetilde{W}qD} \Bigg( \max_{A' \in (\mathbb{Z}/\widetilde{W}q\mathbb{Z})^*} \Bigg( \frac{\widetilde{W}q2^j}{X} \sum_{\substack{m\sim 2^{-j}X \\ m\equiv A' \Mod{\widetilde{W}q}}} |h(m)|\Bigg)^2 \\ & \qquad \qquad \times \Big(\delta(N)^{O(c_1c_2E_0)} + \mathcal{E}(T) \Big) \Bigg)^{1/2}\\ \ll_{d, \dim G, s, E, E_0, H, U}~ &\frac{T}{\widetilde{W}qD} \Big(\delta(N)^{O(c_1c_2E_0)} + \mathcal{E}(T) \Big) \max_{A' \in (\mathbb{Z}/\widetilde{W}q\mathbb{Z})^*} S_{|h|}(2^{-j}X;\widetilde{W}q,A'). \end{align*} Summing the above expression over $j \leq \log_2(\frac{X}{T_0})$ and taking into account the factor $(\log T)^{-1}$, we deduce that the inner sum in \eqref{eq:large-primes} is bounded by \begin{align*} \ll_{d,\dim G, s, E, E_0, H, U} \Big(\delta(N)^{O(c_1c_2E_0)} + \mathcal{E}(T) \Big) \frac{1}{\log T} \sum_{j=0}^{\log_2(X/T_0)} \max_{A' \in (\mathbb{Z}/\widetilde{W}q\mathbb{Z})^*} S_{|h|}(2^{-j}X;\widetilde{W}q,A'). \end{align*} Properties (3) and (4) of Definition \ref{def:M} show that this in turn is at most \begin{align*} &\ll_{d, \dim G, s, E, E_0, H, U} \Big(\delta(N)^{O(c_1c_2E_0)} + \mathcal{E}(T) \Big) \max_{A' \in (\mathbb{Z}/\widetilde{W}q\mathbb{Z})^*} S_{|h|}(X;\widetilde{W}q,A') \\ &\ll_{d, \dim G, s, E, E_0, H, U} \Big(\delta(N)^{O(c_1c_2E_0)} + \mathcal{E}(T) \Big) \max_{A' \in (\mathbb{Z}/\widetilde{W}q\mathbb{Z})^*} S_{|h|}(T/D;\widetilde{W}q,A'). \end{align*} Since $\delta(N) \leq (\log N)^{-1}$, it suffices to choose $E_0$ sufficiently large in terms of $d$ and $m_G$ in order to obtain $$ \delta(N)^{O(c_1c_2E_0)} + \mathcal{E}(T) \ll (\log N)^{-1} + \mathcal{E}(T) \ll \mathcal{E}(T). $$ This completes the proof of Lemma \ref{l:large-primes}. \subsection{Small primes} \label{ss:small-primes} To complete the proof of Proposition \ref{p:equid-non-corr}, it remains to bound the contribution of the dyadic parts of \eqref{eq:two-sums-small-p} to \eqref{eq:pre-decomposition}. This is achieved by the following lemma. \begin{lemma}[Contribution from small primes] \label{l:small-primes} Let $E^{\flat}_h(T;D,q,j,I)$ denote the expression $$ \bigg| \frac{D\widetilde{W}q}{T} \sum_{\substack{m>X \\ \gcd(m,W)=1}} \sum_{\substack{p \sim 2^{-j}T/(XD) \\ p \equiv D' \bar{m} \Mod{\widetilde{W}q}}} \mathbf{1}_{pm<T/D} \mathbf{1}_{P_D}(mp) h(m) h(p) \Lambda(p) F\Big(g_D\Big(\frac{pm-D'}{\widetilde{W}q}\Big) \Gamma\Big) \bigg|.
$$ Then \begin{align*} &\sum_{i=1}^t \sum_{k=\frac{(\log \log T)^2}{\log 2}}^{(1-1/t)\frac{\log T}{\log2}} \sum_{D \sim 2^k} \mathbf{1}_{D \not\in \mathcal{B}_{2^k}} \sum_{\substack{d_1, \dots \widehat{d_{i}} \dots, d_t \\ D_i=D}} \bigg( \prod_{j \not= i} \frac{|f_j(d_j)|}{d_j} \bigg) \sum_{j=0}^{\log_2(T/(XDT_0))} \frac{E^{\flat}_{f_i}(T;D,q,j,I)}{\log T}\\ &\ll (\log T)^{-\theta_f/2} \max_{r\in (\mathbb{Z}/\widetilde{W}\mathbb{Z})^*} S_{|f_1| * \dots * |f_t|}(T;\widetilde{W},r). \end{align*} \end{lemma} \begin{proof} Applying Cauchy--Schwarz to the expression $E^{\flat}_h(T;D,q,j,I)$ for a fixed value of $j$, $0 \leq j \leq \log_2(T/(XDT_0))$, we obtain \begin{align} \label{eq:another-CS} &\frac{\widetilde{W}qD}{T} \sum_{\substack{m>X \\ \gcd(m,W)=1 }} \sum_{\substack{p \sim 2^{-j} T/(XD) \\ p \equiv D' \bar m \Mod{\widetilde{W}q} }} \mathbf{1}_{pm<T/D} h(m) h(p) \Lambda(p) F\Big(g_D\Big(\frac{pm-D'}{\widetilde{W}q}\Big) \Gamma\Big) \mathbf{1}_{P_D}(mp) \\ \nonumber &\leq \bigg(\frac{W}{\phi(W)} \frac{1}{2^j X} \sum_{\substack{X < m < 2^j X \\ \gcd(m,W)=1 }} |h(m)|^2 \bigg)^{1/2}\\ \nonumber &\quad \times \Bigg( \phi(\widetilde{W}q) \Big(\frac{2^{j}XD}{T}\Big)^2 \sum_{\substack{p,p' \sim 2^{-j} T/(XD) \\ p \equiv p' \Mod{\widetilde{W}q} }} h(p)h(p') \Lambda(p) \Lambda(p') \\ \nonumber & \qquad \qquad \qquad \qquad \frac{\widetilde{W}q}{X 2^j} \sum_{\substack{X<m<T/(D\max(p,p')) \\ mp \equiv D' \Mod{\widetilde{W}q} \\ pm\in P_D}} F\Big(g_D\Big(\frac{pm-D'}{\widetilde{W}q}\Big) \Gamma\Big) \overline{ F\Big(g_D\Big(\frac{p'm-D'}{\widetilde{W}q}\Big) \Gamma\Big)} \Bigg)^{1/2}. \end{align} We estimate the second factor trivially as $O(1)$ by using the bounds $|h(p)h(p')| \ll 1$ and $\|F\|_{\infty}=\|\bar F\|_{\infty} \leq 1$. Thus, \eqref{eq:another-CS} is bounded by \begin{align*} \bigg(\frac{W}{\phi(W)} \frac{1}{2^j X} \sum_{\substack{X < m < 2^j X \\ \gcd(m,W)=1 }} |h(m)|^2 \bigg)^{1/2} &= \bigg(\frac{1}{\phi(\widetilde{W})} \sum_{A' \in (\mathbb{Z}/\widetilde{W}\mathbb{Z})^*} \frac{\widetilde{W}}{2^j X} \sum_{\substack{X < m < 2^j X \\ m \equiv A' \Mod{\widetilde{W}}}} |h(m)|^2 \bigg)^{1/2} \\ & \leq \max_{A' \in (\mathbb{Z}/\widetilde{W}\mathbb{Z})^*} \bigg(\frac{\widetilde{W}}{2^j X} \sum_{\substack{m < 2^j X \\ m \equiv A' \Mod{\widetilde{W}}}} |h(m)|^2 \bigg)^{1/2}. \end{align*} Note that $X \leq 2^j X \leq T/(DT_0)$, where $$X= \left(\frac{T}{D}\right)^{1-1/\left(\log \frac{T}{D}\right)^{(U-1)/U}} \gg (T/D)^{1/2}$$ and $$ \frac{T}{DT_0} = \left(\frac{T}{D}\right)^{ 1 - \left(\log \log \frac{T}{D}\right)^2/\left( \log \frac{T}{D} \right) }. $$ Thus, the local stability of average values we assumed in property (3) of Definition \ref{def:M} implies that the above is bounded by \begin{align*} & \ll \max_{A' \in (\mathbb{Z}/\widetilde{W}\mathbb{Z})^*} \bigg(\frac{\widetilde{W}D}{T} \sum_{\substack{m < T/D \\ m \equiv A' \Mod{\widetilde{W}}}} |h(m)|^2 \bigg)^{1/2}.
\end{align*} The summation range in $j$ is short: it is bounded by $$\log_2(T/(XDT_0)) \ll (\log T)^{1/U} \ll (\log T)^{\theta_f/2}.$$ Thus, invoking property (2) of Definition \ref{def:M}, we deduce that \begin{align*} &\sum_{i=1}^t \sum_{k=1}^{(1-1/t)\frac{\log T}{\log2}} \sum_{D \sim 2^k} \mathbf{1}_{D \not\in \mathcal{B}_{2^k}} \sum_{\substack{d_1, \dots \widehat{d_{i}} \dots, d_t \\ D_i=D}} \bigg( \prod_{j \not= i} \frac{|f_j(d_j)|}{d_j} \bigg) \sum_{j=0}^{\log_2(T/(XDT_0))} \frac{E^{\flat}_{f_i}(T;D,q,j,I)}{\log T}\\ &\ll (\log T)^{-1+\frac{\theta_f}{2}} \\ &\qquad \sum_{i=1}^t \sum_{D \leq T^{1-1/t}} \sum_{\substack{d_1, \dots \widehat{d_{i}} \dots, d_t \\ D_i=D}} \bigg( \prod_{j \not= i} \frac{|f_j(d_j)|}{d_j} \bigg) \max_{A' \in (\mathbb{Z}/\widetilde{W}\mathbb{Z})^*} \bigg(\frac{D}{T} \sum_{\substack{m < T/D \\ m \equiv A' \Mod{\widetilde{W}}}} |f_i(m)|^2 \bigg)^{1/2}\\ &\ll (\log T)^{-\theta_f/2} \max_{A' \in (\mathbb{Z}/\widetilde{W}\mathbb{Z})^*} S_{|f_1|* \dots *|f_t|}(T;\widetilde{W};A'). \end{align*} This completes the proof of Lemma \ref{l:small-primes} as well as the proof of Proposition \ref{p:equid-non-corr}. \end{proof}
\section{Introduction} \par Distributed excitable media are found in a wide range of natural systems including neural cells \cite{izhikevich-book}, heart tissue \cite{keener-book-II} or chemical systems \cite{mik-book-I}. They consist of coupled elements obeying an activator-inhibitor dynamics with a single stable fixed point of rest where small perturbations are damped out. However, a large enough perturbation causes a burst of activity after which the elements return to their resting state. This results in propagation of an excitation wave. Such media have been broadly studied with continuous reaction-diffusion equations and support a variety of self-organised spatiotemporal patterns like pulses, expanding target waves, or rotating spirals \cite{mik-book-I}. \par Within the last decade, self-organisation of patterns has been considered in networks, where reactions occur on the network's nodes and diffusion is carried out through the links connecting them. Such systems can be formed by diffusively coupled chemical reactors \cite{kar02}, biological cells \cite{big01} or dispersal habitats. The rapid development in network science provides increasing insight into the impact of network architecture on the emerging collective dynamics \cite{barrat-book-08}. A variety of self-organisation phenomena has been studied in such complex systems including epidemic spreading \cite{COL06,COL08}, synchronisation \cite{BOC06,ARE08} and chimera states \cite{Omelchenko2013}, stationary Turing \cite{nak10} and self-organised oscillatory \cite{HAT13} patterns, as well as pinned fronts \cite{KOU12}. Collective phenomena induced by feedback control \cite{LEH11,Hata2012,KOU13a} or by noise \cite{Atsumi2013,son13} have also been analysed in networks. \par Recent theoretical and experimental studies in networks of coupled excitable nodes have shown that self-sustained activity \cite{ROX04,tattini12} and spreading of excitation waves \cite{Steele2006} are possible and depend strongly on the network architecture. However, propagation failure of excitation waves, which is a very important aspect in neural and cardiac physiology \cite{keener-book-I,keener-book-II,dockery89,DAH08}, as well as in chemical systems, has not yet been systematically analysed in networks. \par In this letter we show that propagation failure of excitation waves is significantly influenced by the degree of the nodes. Waves propagate more slowly through nodes with larger degrees and beyond some critical degree they disappear. For regular trees with fixed branching ratio a numerical continuation method can be employed. It reveals that a wave loses its stability and dies out through a saddle-node bifurcation that occurs at the critical degree. For trees with strong diffusive coupling a kinematical theory \cite{MIK91}, which allows for the analytical determination of the critical degree, can be developed. These approximations hold locally for the early stage of excitation spreading in random networks, where numerical simulations have been performed. \section{Excitable systems on regular trees} Let us consider a two-component excitable system, where only the activator can diffuse and the inhibitor varies slowly.
Such a classical continuous medium is described by, \begin{eqnarray} \label{eq:rdcon} \dot{u}(\mathbf{x},t) &=& f(u,v) +D\nabla^{2} u(\mathbf{x},t)\,,\nonumber\\ \dot{v}(\mathbf{x},t) &=& \varepsilon g(u,v)\,, \end{eqnarray} \noindent where $u(\mathbf{x},t)$ and $v(\mathbf{x},t)$ denote local densities of the activator and inhibitor species, respectively. Functions $f(u,v)$ and $g(u,v)$ specify local dynamics of activator and inhibitor, respectively, parameter $\varepsilon$ represents the ratio of their characteristic time scales and $D$ is the diffusion constant. The terms ``activator'' and ``inhibitor'' refer here to the dynamical roles of variables which may have different origins. As an example we choose the FitzHugh-Nagumo (FHN) \cite{izhikevich-book} system, \begin{equation} \label{eq:fhn} f(u,v)=u-\frac{u^{3}}{3}-v\, \ \text{ and } \ g(u,v)=u-\beta\,, \end{equation} \noindent where the activator variable $u$ represents fast changes of the electrical potential across the membrane of a neural cell, while the inhibitor variable $v$ has no direct physiological significance; however, it is related to the gating mechanism of the membrane channels. \par If activator and inhibitor species occupy the nodes of a network and the activator can diffusively be transported over network links to other nodes, then the analog of system (\ref{eq:rdcon}) reads, \begin{eqnarray} \label{eq:rdnet} \dot{u}_i &=& f(u_i,v_i) + D\sum_{j=1}^{N}\! T_{ij}(u_j-u_{i})\,,\nonumber\\ \dot{v}_i &=& \varepsilon g(u_i,v_i) \,, \end{eqnarray} \noindent where $u_i$ and $v_i$ are the densities of the activator and inhibitor in a network node $i$. The local dynamics on the nodes is described by the functions $f(u_i,v_i)$ and $g(u_i,v_i)$. Diffusional mobility of the activator is taken into account in the summation term, where $T_{ij}$ is the adjacency matrix determining the architecture of the network, whose elements are $1$, if there is a link connecting nodes $i$ and $j$ and $0$ otherwise. Only undirected networks are considered here, {\it i.e.} $T_{ij}=T_{ji}$. The degree $k_i=\sum_jT_{ji}$ of node $i$ is the number of its connections and plays an essential role in the dynamics of excitation waves. \begin{figure}[t!] \includegraphics[width=0.5\textwidth]{figure1.pdf} \caption{(Colour online) Snapshot of an excitation wave which propagates from the root towards the periphery in a regular tree of $10$ shells with branching ratio $2$ and $N=2047$ nodes (a). The wave can also be represented as a pulse by grouping the nodes with the same distance from the root into a single shell (b). The time evolution of a single node (or shell) is shown in (c). Other parameters: $D=0.04,\ \beta=-1.1$, $\varepsilon=0.02$.} \label{fig:tree} \end{figure} \par Here we consider a hierarchical organisation of the system (\ref{eq:rdnet}) on a regular tree with branching ratio $k-1$. In trees, all nodes with the same distance $r$ from the root can be grouped into a single shell \cite{KOU12,KOU13a}. The activator of a node which belongs to the shell $r$ can diffusively be transported to $k-1$ nodes in the next shell $r+1$ and to just one node in the previous shell $r-1$. Introducing the densities $u_r$ and $v_r$ for the activator and the inhibitor in the shell $r$, the evolution of their distribution on the tree can be described by the equations, \begin{eqnarray} \label{eq:kchain} \dot{u}_r &=& f(u_r,v_r) + D [u_{r-1}-ku_{r} +(k-1)u_{r+1}]\,,\nonumber\\ \dot{v}_r&=& \varepsilon g(u_r,v_r) \,. 
\end{eqnarray} \noindent Note that although $k$ is the degree of the nodes and thus can take only integer values, it can be treated as a continuous parameter. In our approximation, instead of investigating propagation of excitation waves directly on a tree network (fig.~\ref{fig:tree}~(a)), we study it using the sequence of coupled shells (fig.~\ref{fig:tree}~(b)) described by eqs.~(\ref{eq:kchain}). Contrary to chains ($k=2$), where both propagation directions (left or right) of excitation waves are equivalent, propagation from the root to the periphery of a tree ($k>2$) is physically different from the propagation in the opposite direction, i.e. towards the tree root. Here, we consider only excitation waves that start from the root and can, under appropriate conditions, propagate towards the periphery. \par Excitation waves can be generated by applying a large enough external perturbation to the root of the tree while all other nodes are in the resting state. This perturbation can excite the root. Subsequently, excitation can be passed from one node to another due to the diffusional transport of the activator, and it reaches all nodes within the same shell at the same time. Thus, propagation of undamped excitation waves, which represent an excursion from the resting state and a relaxation back to it (fig.~\ref{fig:tree}~(c)) for each node of the tree, can be supported. A snapshot of such a wave is shown for a particular tree in fig.~\ref{fig:tree}. \begin{figure}[t!] \includegraphics[width=0.491\textwidth]{figure2.pdf} \caption{(Colour online) Space-time plots of the activator density $u_r$ show the evolution of an excitation wave in trees with different node degree $k$. Other parameters as in fig. \ref{fig:tree}.} \label{fig:spacetime} \end{figure} \par Not all excitable trees can, however, support such propagating waves. In fig.~\ref{fig:spacetime} we see for a given set of parameters that excitation waves can propagate from the root towards the periphery in trees with $k=4$ and $k=5$. However, trees with larger branching ratio, {\it e.g.}{} $k=6$, fail to support the undamped propagation of waves; starting from the root, excitation may be passed to all nodes of the shells at short distance (shortest path length) from the root, but then fails to propagate further and vanishes (see fig.~\ref{fig:spacetime} for $k=6$). Numerical simulations have revealed that excitation waves propagate more slowly in trees with larger $k$ (see fig. \ref{fig:ck}). As $k$ increases, a critical value is reached at which waves propagate with the minimum (positive) velocity. In contrast to bistable trees, where fronts can be pinned or retreat as $k$ becomes larger \cite{KOU12,KOU13a}, excitation waves cannot stop or reverse the direction of their propagation. They become unstable and disappear beyond this critical degree (see fig. \ref{fig:ck}). When $k$ is fixed, the same transition to unstable waves occurs when some critical value of the time scale separation constant $\varepsilon$ is exceeded. The stability analysis of these waves is performed with a numerical continuation method. \par For this purpose we assume a ring of shells, as described by system (\ref{eq:kchain}). On this ring, each shell is coupled with weight $k-1$ to its neighbour in clockwise direction and with weight $1$ to its neighbour in counterclockwise direction. Locally, such a ring of shells resembles the shells of a tree; globally, however, it cannot be mapped to a tree.
By locally we mean in this context that the wave-like solutions we are examining are constrained to such a small part of the ring that they do not interact with themselves; or, formulated differently, that the fraction of nodes lying in the resting state is always large enough to clearly separate the leading edge of one excitation wave from the refractory phase of the preceding wave. Therefore, in the regimes of parameters where the wave-like solutions are localized within only a small fraction of this ring, we expect to observe the same dynamics as in the shells of an actual tree network. \begin{figure}[t!] \includegraphics[width=0.5\textwidth]{figure3.pdf} \caption{(Colour online) (Left) Dependence of the propagation velocity $c$ on the degree $k$ as calculated from numerical simulations (dots) and from numerical continuation (curves) on the ring of 50 shells for different $\varepsilon$. Solid curves show stable solutions while dashed curves show the unstable ones. The letters (a)-(e) denote selected solutions that are shown in fig.~\ref{fig:timeseries}. (Right) The location of the saddle-node bifurcation points (LP) is shown in the ($\varepsilon$,$k$) plane. Other parameters as in fig.~\ref{fig:tree}.} \label{fig:ck} \end{figure} \par For continuation purposes this construction has the advantage that a traveling wave on this ring of shells is a periodic orbit of a system of $d\times S$ ordinary differential equations (ODEs), where $S$ is the number of shells of the tree and $d$ is the dimension of the local dynamics on each shell ($2$ in the case of the FHN system). Such a system can easily, but possibly at large numerical expense, be continued using numerical continuation software, {\it e.g.}{} AUTO-07p \cite{auto07}. \par We consider such a ring consisting of $S=50$ shells, obeying the FHN dynamics and diffusively coupled with strength $D=0.04$. Thus, we proceed to the continuation of the periodic solutions of a system of $100$ coupled ODEs. From the continuation, we directly obtain the stability properties of the solution, as well as the location of the limit points (LP) of the saddle-node bifurcations, which are marked in fig.~\ref{fig:ck}. We see that the continuation of periodic orbits in the ring of shells gives exactly the same results for the propagation velocity $c$ and for the stability as those obtained from the direct simulation on trees. Clearly, the transition from undamped propagation to unstable excitation waves in trees takes place through a saddle-node bifurcation. Similar dynamical behaviour is observed for a given value of the degree $k$ as $\varepsilon$ increases (see fig.~\ref{fig:ce}). \begin{figure}[t!] \includegraphics[width=0.491\textwidth]{figure3_1.pdf} \caption{(Colour online) Dependence of the propagation velocity $c$ on the parameter $\varepsilon$ as calculated from numerical simulations (dots) and from numerical continuation (curves) on the ring of 50 shells for different $k$. Solid curves show stable solutions while dashed curves show the unstable ones. The letters (a),(b),(d) denote selected solutions that are shown in fig.~\ref{fig:timeseries}. Other parameters as in fig.~\ref{fig:tree}.} \label{fig:ce} \end{figure} \par The asymmetric coupling (weight $1$ to the previous and $k-1$ to the next shell), in combination with the discreteness of the system, also affects the shape of the excitation waves.
As can clearly be seen in fig.~\ref{fig:timeseries}, the trajectory in the ($u$,$v$) plane followed by each shell, and thus the corresponding timeseries of one shell $r$, appears with two ``dips'' (see also supplementary movie1.avi). The reason is that in eqs.~(\ref{eq:kchain}) the diffusive coupling term of a shell $r$, whose current state is already moving on the slow manifold and thus changing on the slow timescale, can vary on the fast timescale when shell $r+1$ moves on the fast manifold from the rest state to the excited state (see upper right panel in fig.~\ref{fig:timeseries}). Because shell $r+1$ is coupled to $r$ with weight $k-1$, whereas shell $r-1$ is coupled to $r$ with weight 1, the ``force'' exerted by the coupling term ``pulls'' the shell $r$ in the direction of the resting state, when shell $r+1$ is still close to the resting state. It does so more strongly for larger $k$. \begin{figure}[t!] \includegraphics[width=0.5\textwidth]{figure4.pdf} \caption{(Colour online) Different solutions of eqs.~(\ref{eq:kchain}) obtained by continuation on the ring of $50$ shells are shown: (a) stable solution for $k=2$ with $c=0.1561$ ($T=320.349$), (b) stable solution for $k=5$ with $c=0.1006$ ($T=497.047$), (c) Limit Point solution for $k\approx5.9766$ with $c=0.0631$ ($T=791.849$), (d) unstable solution for $k=5$ with $c=0.0392$ ($T=1274.64$) and (e) unstable solution for $k=2$ with $c=0.0222$ ($T=2256.39$). In the upper right panel, solution (b) is shown as the path of densities in phase space ($u$,$v$) together with a snapshot of the state of all shells at one instant of time (magenta crosses) and the nullclines (grey) of system (\ref{eq:fhn}), while in the lower panel the evolution of activator (red curve) and inhibitor (blue curve) densities on one shell are shown. The upper left panel shows a comparison of the solutions (a)-(e) as the paths of densities in phase space ($u$,$v$). The location of the solutions is also marked in fig.~\ref{fig:ck}. Other parameters as in fig.~\ref{fig:tree}. } \label{fig:timeseries} \end{figure} \section{Kinematical theory for excitable trees} \par The propagation velocity $c$ is unique in regular trees with fixed branching ratio and depends on the parameters $k,\ \varepsilon,\ D$ and $c_0$; $c_0$ is the velocity of a bistable front in the absence of inhibitor ({\it cf.}{} \cite{KOU12}). Here we calculate an analytical expression for the dependence $c=c(k)$ by extending for the trees the kinematical theory proposed by {\it Mikhailov} and {\it Zykov} in \cite{MIK91}. \par Let us assume a regular tree with infinitely many hierarchical levels and strong diffusive coupling ({\it i.e.}{}, large $D$). In such a tree, we can obtain an approximation for the continuous limit by substituting $u_{r-1}$ and $u_{r+1}$ in eqs.~(\ref{eq:kchain}) with their Taylor expansions $u_{r-1} \approx u_{r} -\nabla u + \Delta u/2\,$ and $u_{r+1} \approx u_{r} + \nabla u + \Delta u/2\,$. Then, in the continuous limit, the system (\ref{eq:kchain}) reads, \begin{eqnarray} \label{eq:kchain2con} \dot{u} &=& f(u,v) + \frac{Dk}{2} \Delta u + D(k-2) \nabla u\,,\nonumber\\ \dot{v} &=& \varepsilon g(u,v) \,.
\end{eqnarray} \noindent By introducing the moving reference frame $\xi = r-ct\,$ and by assuming that the profile of the wave is stationary in this frame, we can reduce the system (\ref{eq:kchain2con}) to a system of two ODEs, \begin{eqnarray} \label{eq:CRDA} -[c+D(k-2)] u^{\prime} &=& f(u,v) + \frac{kD}{2} u^{\prime\prime}\,,\nonumber\\ -cv^{\prime} &=& \varepsilon g(u,v) \,, \end{eqnarray} \noindent where $u=u(\xi)$ and $v=v(\xi)$; prime denotes a derivative with respect to the moving coordinate $\xi$. Subsequently, if we replace $\varepsilon$ by the modified parameter $\varepsilon^{*}$ in the latter equation, where, \begin{equation} \label{eq:epsilonstar} \varepsilon^{*}=\varepsilon\left[1+\frac{D(k-2)}{c}\right]\,, \end{equation} \noindent we obtain the system of equations, \begin{eqnarray} \label{eq:rdestar} -[c+D(k-2)] u^{\prime} &=& f(u,v) + \frac{kD}{2} u^{\prime\prime}\,,\nonumber\\ -[c+D(k-2)] v^{\prime} &=& \varepsilon^{*} g(u,v) \,, \end{eqnarray} \noindent which describes the propagation of an excitation wave from the root towards the periphery in the same tree as system (\ref{eq:CRDA}), but with time scale separation parameter $\varepsilon^{*}$ instead of $\varepsilon$. Therefore, the propagation velocity is \begin{equation} \label{eq:vel1} c^*= c + D(k-2)\,. \end{equation} Substitution of $c$ from eq. (\ref{eq:vel1}) into (\ref{eq:epsilonstar}) yields, \begin{equation} \label{eq:rewrite} c \equiv c(\varepsilon)= \frac{D (k-2)\varepsilon}{\varepsilon^{*}-\varepsilon}\,. \end{equation} If we know the function $c(\varepsilon)$, we can find the solution of eq.~(\ref{eq:rewrite}) \cite{MIK91}. Here we do not have an analytical expression for this function. However, we have found in the numerical simulations that for very small $\varepsilon$ the velocity $c$ depends linearly on this parameter, {\it i.e.}{}, \begin{equation} \label{eq:vel0} c(\varepsilon) = c_0 (1-\chi\varepsilon) \,, \end{equation} \noindent where $\chi$ is a numerical factor independent of $\varepsilon$. Substituting expressions for $\varepsilon^{*}$ and $c^*$ into eq.~(\ref{eq:vel0}) and solving the resulting equation we find an analytical expression for the velocity, \begin{eqnarray} \label{eq:velsols} c &=& \frac{1}{2} \left[c_0 (1- \varepsilon\chi) - D\left(k-2\right)\right] \\ &&\:\pm \frac{1}{2}\left\{\left[D(k-2)-c_{0}\left(1-\varepsilon\chi\right)\right]^{2} -4c_{0}D\varepsilon\chi\left(k-2\right)\right\}^{1/2}\, \nonumber \,. \end{eqnarray} \par The two branches of eq.~(\ref{eq:velsols}) are shown in fig.~\ref{fig:vel} (red curves). The lower (dashed) and the upper (solid) branch correspond to the unstable and the stable solution of $c=c(k)$, respectively. As $k$ increases both solutions move toward each other and they merge at the critical value \begin{equation} \label{eq:kcr} k_\text{cr}=\frac{2D + c_{0}\left(1-\sqrt{\varepsilon\chi}\right)^2}{D}\,, \end{equation} \noindent where the first derivative of $c(k)$ tends to infinity. No excitation waves can propagate in trees with $k>k_\text{cr}$. Note that at $k_\text{cr}$, waves can still propagate with their minimum critical velocity, \begin{equation} \label{eq:vcr} c_\text{cr}=c_{0}(\sqrt{\varepsilon\chi}-\varepsilon\chi)\,. \end{equation} \begin{figure}[t!] \includegraphics[width=0.5\textwidth]{figure5.pdf} \caption{(Colour online) Propagation velocity $c$ vs. node degrees $k$.
Red curves correspond to the two branches of eq.~(\ref{eq:velsols}), blue curves are obtained by continuation of the eqs.~(\ref{eq:profile_eqs}), green dots have been calculated from the numerical simulations. At the right edge of the figure, the leading part of the spectrum $\lambda \in \mathbbm{C}$ around the wave is shown for stable and unstable solutions at $k=2.0$ ($\blacktriangledown,\triangledown$) and $2.2$ ($\blacklozenge,\lozenge$) as well as for the limit point solution at $k\approx2.406$ ($\bullet$), including the unstable eigenvalue and the zero eigenvalue corresponding to the Goldstone mode of translation invariance. Other parameters: $\varepsilon=0.02,\ D=4,\ c_0=2.178,\ \chi=1.074$.} \label{fig:vel} \end{figure} \noindent As we see in fig.~\ref{fig:vel}, our theory allows for a very good estimate of the critical degree $k_\text{cr}$; however, it fails to predict the exact critical velocity $c_\text{cr}$. This disagreement is a consequence of the additional assumption of the linear dependence (\ref{eq:vel0}) of the velocity $c$ of a wave on $\varepsilon$. Such dependence is approximately valid only for small values of this parameter, {\it i.e.}{} if $\varepsilon\ll 1$. It should hold not only for $\varepsilon$, but also for $\varepsilon^{*}$. While $\varepsilon = 0.02$ is small, the renormalised parameter $\varepsilon^{*}$ increases with $k$ and, at the critical point in fig.~\ref{fig:vel}, it reaches the value $\varepsilon^{*} = 0.14$ which is not small enough. The accuracy of the kinematical theory can be further improved (see \cite{MIK91}) by using the actual, numerically calculated, nonlinear dependence of the velocity on $\varepsilon$, instead of equation (\ref{eq:vel0}). \par In the continuum limit, we can calculate the velocity and the stability of the excitation waves also by using the profile equations, which are obtained by writing the system (\ref{eq:CRDA}) as a system of first order ODEs, \begin{eqnarray} \label{eq:profile_eqs} u' &=& w \,, \nonumber \\ v' &=& - c^{-1} \varepsilon g(u,v) \,, \nonumber \\ w' &=& -2(Dk)^{-1}\left\{f(u,v) + \left[c+ D (k-2)\right]w \right\}\,. \end{eqnarray} \noindent The root of eqs.~(\ref{eq:fhn}) is a fixed point of eqs. (\ref{eq:profile_eqs}) (with $w=0$). An excitation wave on an infinitely extended chain of shells as described by eq.~(\ref{eq:kchain}) appears as a homoclinic trajectory of eqs.~(\ref{eq:profile_eqs}). But such a special trajectory exists only at a definite value of the parameter $c$, which is the propagation velocity of the excitation wave \cite{mik-book-I}. \par Here, instead of numerically continuing these homoclinic trajectories, we continue periodic trajectories, which are very close to the homoclinic ones for large periods. The stability analysis of these periodic trajectories has been performed using Bloch expansion and continuation to calculate the (essential) spectrum which determines the stability of these traveling waves. The method is explained in detail in \cite{RAD07b,SAN02b}. We find that the destabilisation of the excitation waves occurs through a saddle-node bifurcation, where an isolated eigenvalue crosses the imaginary axis exactly at the limit point. The rest of the spectrum is always in the left half-plane, except for one eigenvalue exactly at zero which corresponds to the Goldstone mode of translation invariance. The spectra for selected values of $k$, including $k\approx2.406$ which corresponds to the limit point solution, are plotted at the right edge of fig.~\ref{fig:vel}.
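Since the kinematical predictions are given in closed form, they are straightforward to evaluate numerically. The following Python sketch (an illustration added here, not part of the derivation) computes the two branches of eq.~(\ref{eq:velsols}) as well as $k_\text{cr}$ and $c_\text{cr}$ from eqs.~(\ref{eq:kcr}) and (\ref{eq:vcr}), using the parameter values quoted in the caption of fig.~\ref{fig:vel}:
\begin{verbatim}
import numpy as np

# Parameters taken from the caption of fig. 5
D, eps, c0, chi = 4.0, 0.02, 2.178, 1.074

def c_branches(k):
    """Stable (+) and unstable (-) branches of eq. (velsols)."""
    a = c0 * (1.0 - eps * chi) - D * (k - 2.0)
    disc = (D * (k - 2.0) - c0 * (1.0 - eps * chi)) ** 2 \
           - 4.0 * c0 * D * eps * chi * (k - 2.0)
    s = np.sqrt(disc)
    return 0.5 * (a + s), 0.5 * (a - s)

# Critical degree and critical velocity, eqs. (kcr) and (vcr)
k_cr = (2.0 * D + c0 * (1.0 - np.sqrt(eps * chi)) ** 2) / D
c_cr = c0 * (np.sqrt(eps * chi) - eps * chi)
print(k_cr, c_cr)            # ~ 2.397 and ~ 0.272; continuation of the
                             # profile equations locates the limit point
                             # at k ~ 2.406, so the kinematical estimate
                             # of k_cr is indeed close, as stated above

for k in (2.0, 2.2):
    print(k, c_branches(k))  # the two branches merge as k approaches k_cr
\end{verbatim}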
\section{Application to random networks} \begin{figure}[b!] \includegraphics[width=0.5\textwidth]{figure6.pdf} \caption{(Colour online) Evolution of the activator density for an Erd\H{o}s--R\'enyi random network with $N=50$ nodes and mean degree $\langle k\rangle=10$. Excitation is applied to the hub (a), it consequently propagates to the neighbouring nodes with degrees $k<k_\text{cr}$ (b),(c), and finally dies out before the nodes with degrees $k>k_\text{cr}$ (d). The propagation path is shown with thick green colour. Node labels denote their degrees. Other parameters as in fig.~\ref{fig:tree}.} \label{fig:er} \end{figure} Propagation failure of excitation waves has also been observed in random networks. Here we provide an example of system (\ref{eq:rdnet}) on an Erd\H{o}s--R\'enyi network, where the hub node is initially set in the excited state as shown in fig.~\ref{fig:er}(a). Consequently, excitation propagates to certain neighbouring nodes (see fig.~\ref{fig:er}(b),(c)) and finally disappears (fig.~\ref{fig:er}(d)). This behaviour can be understood by our approximate theory for the trees, which holds also locally, for the early stage of propagation, in random networks. For the parameters $D=0.04$ and $\varepsilon=0.02$ our theory predicts that excitation waves become unstable at $k_\text{cr}\approx 5.9766$ (see fig.~\ref{fig:ck}). Indeed, we see in fig.~\ref{fig:er} that excitation can propagate only to the neighbouring nodes with degrees $k<6$. Once it reaches a node whose neighbours have degrees $k>6$, it cannot propagate further and disappears. An interesting behaviour that appears in random networks is that excitation may follow certain paths and not others, depending on the degrees of the corresponding nodes. However, if excitation has already spread far from the origin and a large fraction of network nodes have thus become excited, our proposed theory does not hold for random networks. \section{Discussion} Excitation waves have been analysed in trees and random networks. They can be initiated at the root of excitable tree networks and propagate towards their periphery, representing an excursion from the resting state and relaxation back to it for each node. The propagation velocity decreases in trees with larger degrees until a critical value $k_\text{cr}$ is reached. At this critical degree waves are still stable and propagate, however, with their minimum velocity. Once this threshold is exceeded, undamped propagation is not possible. In contrast to the bistable fronts, excitation waves cannot be pinned or reverse their propagation direction. They become unstable and disappear. The approximate theory we have developed for the trees reveals that the destabilisation of the waves takes place through a saddle-node bifurcation which occurs at the critical degree. The same behaviour has been found in trees of given degree when the parameter $\varepsilon$ is increased. \par The results of such analysis are also relevant for understanding the early stage of excitation spreading in random networks. When activation is applied to a node and we look locally at its vicinity with its first neighbours, activation propagates only to the nodes with degrees smaller than the critical degree. Depending on the system parameters, excitation may propagate through some nodes or disappear before nodes with larger degrees. This degree heterogeneity in random networks gives rise to the appearance of some preferred paths where excitation can propagate.
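The critical degree quoted above can also be checked by direct integration of the shell equations (\ref{eq:kchain}). The following Python sketch is a minimal forward-Euler integrator written for this purpose; the shell count, time step and detection threshold are illustrative choices, and the boundary shells are assumed to keep only the links that exist in the tree (the root has $k-1$ children and no parent, while the last shell has only its parent):
\begin{verbatim}
import numpy as np

# FHN parameters of fig. 1; the theory predicts failure beyond k_cr ~ 5.98
D, beta, eps = 0.04, -1.1, 0.02
S, dt, n_steps = 60, 0.01, 80_000          # 60 shells, total time 800

u0 = beta                                  # resting state: g = 0 gives u = beta,
v0 = beta - beta ** 3 / 3.0                # then f = 0 gives v = u - u^3/3

def wave_reaches_periphery(k):
    u = np.full(S, u0)
    v = np.full(S, v0)
    u[0] = 1.0                             # super-threshold kick at the root
    hit = False
    for _ in range(n_steps):
        lap = np.empty(S)
        lap[1:-1] = u[:-2] - k * u[1:-1] + (k - 1) * u[2:]
        lap[0] = (k - 1) * (u[1] - u[0])   # root: k-1 children, no parent
        lap[-1] = u[-2] - u[-1]            # last shell: parent link only
        u_new = u + dt * (u - u ** 3 / 3.0 - v + D * lap)
        v += dt * eps * (u - beta)
        u = u_new
        hit = hit or u[-1] > 0.0           # did the wave excite the last shell?
    return hit

for k in (4, 5, 6):
    print(k, wave_reaches_periphery(k))    # expected: True, True, False
\end{verbatim}
Such a direct check complements the observation above that degree heterogeneity selects preferred propagation paths.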
In future studies this property should be considered in the design of networks which might adapt their links according to the emerging dynamics in order to drive the excitation through desired paths and nodes. \acknowledgments Support from DFG in the framework of SFB 910 ``Control of Self-Organizing Nonlinear Systems'' is gratefully acknowledged. NK acknowledges financial support from the LASAGNE project EU/FP7-2012-STREP-318132, Spanish DGICYT Grant No. FIS2012-38266-C02-02 and J. S. Latsis Public Benefit Foundation in Greece.
\section{\label{Intro}Introduction} A Landau-Ginzburg model is a nonlinear sigma model with a superpotential. For a \emph{heterotic} Landau-Ginzburg model \cite{Witten:Phases, DistlerKachru:0-2-Landau-Ginzburg, AdamsBasuSethi:0-2-Duality, MelnikovSethi:Half-twisted, GuffinSharpe:A-twistedheterotic, MelnikovSethiSharpe:Recent-Developments, GaravusoSharpe:Analogues, Garavuso:Curvaure-constraints}, the nonlinear sigma model possesses only $ (0, 2) $ supersymmetry and the superpotential is a Grassmann-odd function of the superfields which may or may not be holomorphic. It was claimed in \cite{GaravusoSharpe:Analogues} that, for various heterotic Landau-Ginzburg models with nonholomorphic superpotentials, supersymmetry imposes a constraint which relates the nonholomorphic parameters of the superpotential to the Hermitian curvature. Details supporting that claim were worked out in \cite{Garavuso:Curvaure-constraints}. The analysis revealed an additional constraint imposed by supersymmetry which was not anticipated in \cite{GaravusoSharpe:Analogues}. This talk will summarize the analysis found in \cite{Garavuso:Curvaure-constraints}. This talk is organized as follows: In section \ref{section:Action}, we will write down the action for the class of heterotic Landau-Ginzburg models that we are considering. In section \ref{section:Supersymmetry-holomorphic}, for the case of a holomorphic superpotential, we demonstrate that the action is invariant on-shell under supersymmetry transformations up to a total derivative. Finally, in section \ref{section:Supersymmetry-nonholomorphic}, we will extend the analysis to the case in which the superpotential is not holomorphic. In this case, we obtain two constraints imposed by supersymmetry. \section{\label{section:Action}Action} Heterotic Landau-Ginzburg models have field content consisting of $ (0, 2) $ bosonic chiral superfields $ \varPhi^i = (\phi^i, \psi^i_+ )$ and $ (0, 2) $ fermionic chiral superfields $ \varLambda^a = \left( \lambda^a_-, H^a, E^a \right) $, along with their conjugate antichiral superfields $ \varPhi^{\overline{\imath}} = \left( \phi^{\overline{\imath}}, \psi^{\overline{\imath}}_+ \right) $ and $ \varLambda^{\overline{a}} = \left( \lambda^{\overline{a}}_-, \overline{H}^{\overline{a}}, \overline{E}^{\overline{a}} \right) $. The $ \phi^i $ are local complex coordinates on a K{\"a}hler manifold $ X $. The $ E^a $ are local smooth sections of a Hermitian vector bundle $ \mathcal{E} $ over $ X $, i.e. $ E^a \in \varGamma(X, \mathcal{E}) $. The $ H^a $ are nonpropagating auxiliary fields. The fermions couple to bundles as follows: \begin{align*} \psi^i_+ \in \varGamma \left( K^{1/2}_{\varSigma} \otimes \varPhi^* \! \left( T^{1,0} X \right) \right), \qquad & \lambda^a_- \in \varGamma \left( \overline{K}^{1/2}_{\varSigma} \otimes \left( \varPhi^* \overline{\mathcal{E}} \right)^{\vee} \right), \\ \psi^{\overline{\imath}}_+ \in \varGamma \left( K^{1/2}_{\varSigma} \otimes \left( \varPhi^* \! \left( T^{1,0} X \right) \right)^{\vee} \right), \qquad & \lambda^{\overline{a}}_- \in \varGamma \left( \overline{K}^{1/2}_{\varSigma} \otimes \varPhi^* \overline{\mathcal{E}} \right), \end{align*} where $ \varPhi: \varSigma \rightarrow X $ and $ K_{\varSigma} $ is the canonical bundle on the worldsheet $ \varSigma $. 
In \cite{GuffinSharpe:A-twistedheterotic}, heterotic Landau-Ginzburg models with superpotential of the form \begin{equation} W = \varLambda^a \, F_a \, , \end{equation} where $ F_a \in \varGamma \left( X, \mathcal{E}^{\vee} \right) $, were considered. In this talk, we will study supersymmetry in these heterotic Landau-Ginzburg models with $ E^a = 0 $. Assume that $ X $ has metric $ g $, antisymmetric tensor $ B $, and local real coordinates $ \phi^{\mu} $. Furthermore, assume that $ \mathcal{E} $ has Hermitian fiber metric $ h $. Then the action of a Landau-Ginzburg model on $ X $ with gauge bundle $ \mathcal{E} $ and $ E^a = 0 $ can be written \cite{Garavuso:Curvaure-constraints} \begin{align} \label{action} S &= 2t \int_{\varSigma} d^2 z \left[ \frac{1}{2} \left( g_{\mu \nu} + i B_{\mu \nu} \right) \partial_z \phi^{\mu} \partial_{\overline{z}} \phi^{\nu} \right. \nonumber \\ &\phantom{= 2t \int_{\varSigma} d^2 z \left[ \right.} + i g_{\overline{\imath} i} \psi_+^{\overline{\imath}} \overline{D}_{\overline{z}} \psi_+^i + i h_{a \overline{a}} \lambda_-^a D_z \lambda_-^{\overline{a}} + F_{i \overline{\imath} a \overline{a}} \, \psi_+^i \psi_+^{\overline{\imath}} \lambda_-^a \lambda_-^{\overline{a}} \nonumber \\ &\phantom{= 2t \int_{\varSigma} d^2 z \left[ \right.} \biggl. \, + \, h^{a \overline{a}} F_{a} \overline{F}_{\overline{a}} + \psi_+^i \lambda_-^{a} D_i F_a + \psi_+^{\overline{\imath}} \lambda_-^{\overline{a}} \overline{D}_{\overline{\imath}} \overline{F}_{\overline{a}} \biggr]. \end{align} Here, $ t $ is a coupling constant, $ d^2 z = -i \, dz \wedge d{\overline{z}} $, and \begin{align*} \overline{D}_{\overline{z}} \, \psi^i_+ &= \overline{\partial}_{\overline{z}} \, \psi_+^i + \overline{\partial}_{\overline{z}} \, \phi^j \, \varGamma^i_{jk} \psi^k_+ \, , &D_z \lambda^{\overline{a}}_- &= \partial_z \lambda_-^{\overline{a}} + \partial_z \phi^{\overline{\imath}} A^{\overline{a}}_{\overline{\imath} \overline{b}} \, \lambda^{\overline{b}}_- \, , \\ D_i F_a &= \partial_i F_a - A^{b}_{i a} F_b \, , & \overline{D}_{\overline{\imath}} \overline{F}_{\overline{a}} &= \overline{\partial}_{\overline{\imath}} \, \overline{F}_{\overline{a}} - A^{\overline{b}}_{\overline{\imath} \, \overline{a}} \, \overline{F}_{\overline{b}} \, , \\ A^b_{i a} &= h^{b \overline{b}} \, h_{\overline{b} a, i} \, , & A^{\overline{b}}_{\overline{\imath} \, \overline{a}} &= h^{\overline{b} b} \, h_{b \overline{a}, \overline{\imath}} \, , \\ \varGamma^i_{jk} &= g^{i \overline{\imath}} \, g_{\overline{\imath} k, j} \, , & F_{i \overline{\imath} a \overline{a}} &= h_{a \overline{b}} \, A^{\overline{b}}_{\overline{\imath} \, \overline{a}, i} \, . \end{align*} The action \eqref{action} is invariant on-shell under the supersymmetry transformations \begin{equation} \begin{aligned} \delta \phi^i &= i \alpha_- \psi^i_+ \, , \\ \delta \phi^{\overline{\imath}} &= i \tilde{\alpha}_- \psi^{\overline{\imath}}_+ \, , \\ \delta \psi^i_+ &= - \tilde{\alpha}_- \overline{\partial}_{\overline{z}} \phi^i \, , \\ \delta \psi^{\overline{\imath}}_+ &= - \alpha_- \partial_z \phi^{\overline{\imath}} \, , \\ \delta \lambda^{a}_- &= - i \alpha_- \psi^j_+ \, A^a_{j b} \, \lambda^{b}_- + i \alpha_- h^{a \overline{a}} \, \overline{F}_{\overline{a}} \, , \\ \delta \lambda^{\overline{a}}_- &= - i \tilde{\alpha}_- \psi^{\overline{\jmath}}_+ \, A^{\overline{a}}_{\overline{\jmath} \, \overline{b}} \, \lambda^{\overline{b}}_- + i \tilde{\alpha}_- h^{\overline{a} a} \, F_a \end{aligned} \label{SUSY} \end{equation} up to a total derivative. 
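Before turning to the invariance of the action, it is worth recording a quick consistency check of the transformations \eqref{SUSY}; this short computation is added here for convenience and is not needed in what follows. Acting on $ \phi^i $ with two successive variations, with parameters $ (\alpha_{1-}, \tilde{\alpha}_{1-}) $ and $ (\alpha_{2-}, \tilde{\alpha}_{2-}) $, one finds \begin{align*} [\delta_1, \delta_2] \, \phi^i &= \delta_1 \! \left( i \alpha_{2-} \psi^i_+ \right) - \delta_2 \! \left( i \alpha_{1-} \psi^i_+ \right) \\ &= i \alpha_{2-} \left( - \tilde{\alpha}_{1-} \overline{\partial}_{\overline{z}} \phi^i \right) - i \alpha_{1-} \left( - \tilde{\alpha}_{2-} \overline{\partial}_{\overline{z}} \phi^i \right) \\ &= i \left( \alpha_{1-} \tilde{\alpha}_{2-} - \alpha_{2-} \tilde{\alpha}_{1-} \right) \overline{\partial}_{\overline{z}} \phi^i \, , \end{align*} i.e.\ a translation along $ \overline{z} $, as expected for $ (0,2) $ supersymmetry, whose supercharges anticommute to an antiholomorphic worldsheet translation.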
\section{\label{section:Supersymmetry-holomorphic}Supersymmetry invariance for holomorphic superpotential} In this section, we will show that, when the superpotential is holomorphic, the action \eqref{action} is invariant on-shell under the supersymmetry transformations \eqref{SUSY} up to a total derivative. For this purpose, it is sufficient to set $ \tilde{\alpha}_- = 0 $,\footnote{ The calculations for the case in which $ \alpha_- = 0 $ and $ \tilde{\alpha}_- \neq 0 $ are analogous to those we will perform explicitly for the case in which $ \alpha_- \neq 0 $ and $ \tilde{\alpha}_- = 0 $. The general case, i.e. $ \alpha_- $ and $ \tilde{\alpha}_- $ both nonzero, is obtained by combining the above two cases. } yielding \begin{equation} \begin{aligned} \delta \phi^i &= i \alpha_- \psi^i_+ \, , \qquad &\delta \phi^{\overline{\imath}} &= 0 \, , \\ \delta \psi^i_+ &= 0 \, , \qquad &\delta \psi^{\overline{\imath}}_+ &= - \alpha_- \partial_z \phi^{\overline{\imath}} \, , \\ \delta \lambda^{a}_- &= - i \alpha_- \psi^j_+ \, A^a_{j b} \, \lambda^{b}_- + i \alpha_- h^{a \overline{a}} \, \overline{F}_{\overline{a}} \, , \qquad &\delta \lambda^{\overline{a}}_- &= 0 \, . \end{aligned} \end{equation} With this simplification, using the $ \lambda^a_- $ equation of motion,\footnote{This is valid because we have integrated out the auxiliary fields $ H^a $.} the action \eqref{action} can be written \cite{Garavuso:Curvaure-constraints} \begin{align} \label{action:Q-exact+non-exact} S &= it \int_{\varSigma} d^2 z \, \left\{ Q, V \right\} + t \int_{\varSigma} \varPhi^*(K) \nonumber \\ &\phantom{=} + 2 t \int_{\varSigma} d^2 z \left( \psi_+^{\overline{\imath}} \lambda_-^{\overline{a}} \overline{D}_{\overline{\imath}} \overline{F}_{\overline{a}} - \psi_+^i \lambda_-^{a} D_i F_a \right), \end{align} where $ Q $ is the BRST operator, \begin{equation} V = 2 \left( g_{\overline{\imath} i} \psi^{\overline{\imath}}_+ \overline{\partial}_{\overline{z}} \phi^i + i \lambda^a_- F_a \right), \end{equation} and \begin{equation} \int_{\varSigma} \varPhi^*(K) = \int_{\varSigma} d^2 z \left( g_{i \overline{\imath}} + i B_{i \overline{\imath}} \right) \left( \partial_z \phi^i \, \overline{\partial}_{\overline{z}} \phi^{\overline{\imath}} - \overline{\partial}_{\overline{z}} \phi^i \partial_z \phi^{\overline{\imath}} \right) \end{equation} is the integral over the worldsheet $ \varSigma $ of the pullback to $ \varSigma $ of the complexified K{\"a}hler form $ K = - i \left( g_{i \overline{\imath}} + i B_{i \overline{\imath}} \right) d \phi^i \wedge d \phi^{\overline{\imath}} $. Since $ \delta f = -i \alpha_- \{ Q, f \} $, where $ f $ is any field, the $ Q $-exact part of \eqref{action:Q-exact+non-exact} is $ \delta $-exact and hence $ \delta $-closed. We will now establish that the remaining terms of \eqref{action:Q-exact+non-exact} are $ \delta $-closed on-shell up to a total derivative. For the non-exact term of \eqref{action:Q-exact+non-exact} involving $ \varPhi^*(K) $, note that \begin{equation*} \int_{\varSigma} \varPhi^*(K) = \int_{\varPhi(\varSigma)} K = \int_{\varPhi(\varSigma)} \left[ - i \left( g_{i \overline{\imath}} + i B_{i \overline{\imath}} \right) \right] d \phi^i \wedge d \phi^{\overline{\imath}} \end{equation*} and $ K $ satisfies \begin{equation*} \partial K = - i \, \partial_k \left( g_{i \overline{\imath}} + i B_{i \overline{\imath}} \right) d \phi^k \wedge d \phi^i \wedge d \phi^{\overline{\imath}} = 0 \, .
\end{equation*} Thus, \begin{equation} \delta \left[ \varPhi^*(K) \right] = \left[ \varPhi^*(K) \right]_k \delta \phi^k = 0 \,. \end{equation} It remains to consider the non-exact expression of \eqref{action:Q-exact+non-exact} involving $$ \psi_+^{\overline{\imath}} \lambda_-^{\overline{a}} \overline{D}_{\overline{\imath}} \overline{F}_{\overline{a}} - \psi_+^i \lambda_-^{a} D_i F_a \, . $$ First, we compute \begin{align} \label{delta-non-exact-2a} \delta \left( \psi^{\overline{\imath}}_+ \lambda^{\overline{a}}_- \overline{D}_{\overline{\imath}} \overline{F}_{\overline{a}} \right) &= \left( \delta \psi_+^{\overline{\imath}} \right) \lambda^{\overline{a}}_- \, \overline{D}_{\overline{\imath}} \, \overline{F}_{\overline{a}} + \psi^{\overline{\imath}}_+ \left( \delta \lambda^{\overline{a}}_- \right) \overline{D}_{\overline{\imath}} \, \overline{F}_{\overline{a}} + \psi^{\overline{\imath}}_+ \lambda^{\overline{a}}_- \left[ \delta \! \left( \, \overline{D}_{\overline{\imath}} \, \overline{F}_{\overline{a}} \right) \right] \allowdisplaybreaks \nonumber \\[1ex] &= \left( - \alpha_- \partial_z \phi^{\overline{\imath}} \right) \lambda_-^{\overline{a}} \, \overline{D}_{\overline{\imath}} \, \overline{F}_{\overline{a}} + \psi^{\overline{\imath}}_+ \lambda^{\overline{a}}_- \left[ \delta \left( \overline{\partial}_{\overline{\imath}} \overline{F}_{\overline{a}} - A^{\overline{b}}_{\overline{\imath} \, \overline{a}} \, \overline{F}_{\overline{b}} \right) \right] \nonumber \\[1ex] &= \left( - \alpha_- \partial_z \phi^{\overline{\imath}} \right) \lambda_-^{\overline{a}} \, \overline{D}_{\overline{\imath}} \, \overline{F}_{\overline{a}} + \psi^{\overline{\imath}}_+ \lambda_-^{\overline{a}} \left\{ \overline{\partial}_{\overline{\imath}} \left[ \, \overline{F}_{\overline{a},k} \left( \delta \phi^k \right) \right] \right. \nonumber \\ &\phantom{=} \left. - \left[ A^{\overline{b}}_{\overline{\imath} \, \overline{a}, k} \left( \delta \phi^k \right) \right] \overline{F}_{\overline{b}} - A^{\overline{b}}_{\overline{\imath} \, \overline{a}} \left[ \overline{F}_{\overline{b}, k} \left( \delta \phi^k \right) \right] \right\} \allowdisplaybreaks \nonumber \\[1ex] &= \left( - \alpha_- \partial_z \phi^{\overline{\imath}} \right) \lambda^{\overline{a}}_- \, \overline{D}_{\overline{\imath}} \, \overline{F}_{\overline{a}} - \psi_+^{\overline{\imath}} \lambda_-^{\overline{a}} A^{\overline{b}}_{\overline{\imath} \, \overline{a}, k} \left( i \alpha_- \psi^k_+ \right) \overline{F}_{\overline{b}} \, , \end{align} where we have used $ \overline{F}_{\overline{a},k} = 0 $ in the last step. Now, we compute \begin{align} \label{delta-non-exact-2b} \delta \left( - \psi^i_+ \lambda^a_- D_i F_a \right) &= - \, \left( \delta \psi^i_+ \right) \lambda^a_- D_i F_a - \psi^i_+ \left( \delta \lambda^a_- \right) D_i F_a \nonumber \\ &\phantom{\, = \,} - \psi^i_+ \lambda^a_- \left[ \delta \! \left( D_i F_a \right) \right] \nonumber \\ &= - \, \alpha_- \overline{F}_{\overline{a}} D_z \lambda^{\overline{a}}_- + \left( i \alpha_- h^{a \overline{b}} \, \overline{F}_{\overline{b}} \right) F_{i \overline{\imath} a \overline{a}} \, \psi^i_+ \psi^{\overline{\imath}}_+ \lambda^{\overline{a}}_- \, , \end{align} where we have used the $ \lambda^a_- $ equation of motion.
Note that the first term on the right-hand side of \eqref{delta-non-exact-2b} cancels the first term on the right-hand side of \eqref{delta-non-exact-2a} up to a total derivative: \begin{align} \label{cancel-first-terms-rhs-nonexact-2a-2b} - \, \alpha_- \overline{F}_{\overline{a}} \, D_z \lambda^{\overline{a}}_- &= - \, \alpha_- \overline{F}_{\overline{a}} \left( \partial_z \lambda^{\overline{a}}_- + \partial_z \phi^{\overline{\imath}} \, A^{\overline{a}}_{\overline{\imath} \, \overline{b}} \, \lambda^{\overline{b}}_- \right) \nonumber \\[1ex] &= \alpha_- \left( \overline{F}_{\overline{a}, k} \, \partial_z \phi^k + \overline{F}_{\overline{a}, \overline{k}} \, \partial_z \phi^{\overline{k}} \right) \lambda^{\overline{a}}_- - \alpha_- \partial_z \! \left( \overline{F}_{\overline{a}} \, \lambda^{\overline{a}}_- \right) \nonumber \\ &\phantom{=} - \alpha_- \overline{F}_{\overline{a}} \, \partial_z \phi^{\overline{\imath}} \, A^{\overline{a}}_{\overline{\imath} \, \overline{b}} \, \lambda^{\overline{b}}_- \nonumber \\[1ex] &= \left( \alpha_- \partial_z \phi^{\overline{\imath}} \right) \lambda^{\overline{a}}_- \left( \overline{\partial}_{\overline{\imath}} \overline{F}_{\overline{a}} - A^{\overline{b}}_{\overline{\imath} \, \overline{a}} \, \overline{F}_{\overline{b}} \right) - \alpha_- \partial_z \! \left( \overline{F}_{\overline{a}} \, \lambda^{\overline{a}}_- \right) \nonumber \\[1ex] &= \left( \alpha_- \partial_z \phi^{\overline{\imath}} \right) \lambda^{\overline{a}}_- \, \overline{D}_{\overline{\imath}} \, \overline{F}_{\overline{a}} - \alpha_- \partial_z \! \left( \overline{F}_{\overline{a}} \, \lambda^{\overline{a}}_- \right), \end{align} where we used $ \overline{F}_{\overline{a}, k} = 0 $ in the third step. Furthermore, the second term on the right-hand side of \eqref{delta-non-exact-2b} cancels the second term on the right-hand side of \eqref{delta-non-exact-2a}: \begin{align} \left( i \alpha_- h^{a \overline{b}} \, \overline{F}_{\overline{b}} \right) F_{i \overline{\imath} a \overline{a}} \, \psi^i_+ \psi^{\overline{\imath}}_+ \lambda^{\overline{a}}_- &= \left( i \alpha_- h^{a \overline{c}} \, \overline{F}_{\overline{c}} \right) \left( h_{a \overline{b}} \, A^{\overline{b}}_{\overline{\imath} \, \overline{a}, i} \right) \psi^{\overline{\imath}}_+ \lambda^{\overline{a}}_- \psi^i_+ \nonumber \\[1ex] &= \psi^{\overline{\imath}}_+ \lambda^{\overline{a}}_- \, A^{\overline{b}}_{\overline{\imath} \, \overline{a}, k} \, \left( i \alpha_- \psi^k_+ \right) \overline{F}_{\overline{b}} \, , \end{align} where we have used $ F_{i \overline{\imath} a \overline{a}} = h_{a \overline{b}} \, A^{\overline{b}}_{\overline{\imath} \, \overline{a}, i} $ in the first step. It follows that \eqref{delta-non-exact-2b} cancels \eqref{delta-non-exact-2a} up to a total derivative, i.e. \begin{equation} \delta \left( - \psi^i_+ \lambda^a_- D_i F_a \right) = - \, \delta \left( \psi^{\overline{\imath}}_+ \lambda^{\overline{a}}_- \, \overline{D}_{\overline{\imath}} \overline{F}_{\overline{a}} \right) - \alpha_- \partial_z \! \left( \overline{F}_{\overline{a}} \, \lambda^{\overline{a}}_- \right). \end{equation} This completes our argument for the case of a holomorphic superpotential. \section{\label{section:Supersymmetry-nonholomorphic}Supersymmetry invariance for nonholomorphic superpotential} In this section, we will extend the analysis of section \ref{section:Supersymmetry-holomorphic} to the case in which the superpotential is not holomorphic. 
This requires revisiting the steps in \eqref{delta-non-exact-2a} and \eqref{cancel-first-terms-rhs-nonexact-2a-2b} where we used $ \overline{F}_{\overline{a}, k} = 0 $. Allowing for $ \overline{F}_{\overline{a}, k} \ne 0 $, \eqref{cancel-first-terms-rhs-nonexact-2a-2b} becomes \begin{align*} - \, \alpha_- \overline{F}_{\overline{a}} \, D_z \lambda^{\overline{a}}_- &= \left( \alpha_- \partial_z \phi^{\overline{\imath}} \right) \lambda^{\overline{a}}_- \, \overline{D}_{\overline{\imath}} \, \overline{F}_{\overline{a}} - \alpha_- \partial_z \! \left( \overline{F}_{\overline{a}} \, \lambda^{\overline{a}}_- \right) + \alpha_- \overline{F}_{\overline{a}, k} \, \partial_z \phi^k \lambda^{\overline{a}}_- \, . \end{align*} It follows that the cancellation described by \eqref{cancel-first-terms-rhs-nonexact-2a-2b} will still apply provided that the constraint \begin{equation} \label{constraint-1} \overline{F}_{\overline{a}, k} \, \partial_z \phi^k \lambda^{\overline{a}}_- = 0 \end{equation} is satisfied. Furthermore, in the next to last step of \eqref{delta-non-exact-2a}, we now have \begin{align} \psi^{\overline{\imath}}_+ \lambda_-^{\overline{a}} & \left\{ \overline{\partial}_{\overline{\imath}} \left[ \, \overline{F}_{\overline{a},k} \left( \delta \phi^k \right) \right] - A^{\overline{b}}_{\overline{\imath} \, \overline{a}} \left[ \overline{F}_{\overline{b}, k} \left( \delta \phi^k \right) \right] \right\} \nonumber \\[1ex] &= \psi^{\overline{\imath}}_+ \lambda_-^{\overline{a}} \left\{ \overline{\partial}_{\overline{\imath}} \left[ \overline{F}_{\overline{a},k} \left( i \alpha_- \psi^k_+ \right) \right] - A^{\overline{b}}_{\overline{\imath} \, \overline{a}} \, \overline{F}_{\overline{b}, k} \left( i \alpha_- \psi^k_+ \right) \right\} \nonumber \\[1ex] &= \psi^{\overline{\imath}}_+ \lambda_-^{\overline{a}} \left( \overline{\partial}_{\overline{\imath}} \overline{F}_{\overline{a},i} - A^{\overline{b}}_{\overline{\imath} \, \overline{a}} \, \overline{F}_{\overline{b}, i} \right) \left( i \alpha_- \psi^i_+ \right) \nonumber \\[1ex] &= \psi^{\overline{\imath}}_+ \lambda_-^{\overline{a}} \left[ \overline{\partial}_{\overline{\imath}} \overline{F}_{\overline{a}, i} + A^{\overline{b}}_{\overline{\imath} \, \overline{a}, i} \, \overline{F}_{\overline{b}} - \partial_i \! \left( A^{\overline{b}}_{\overline{\imath} \, \overline{a}} \, \overline{F}_{\overline{b}} \right) \right] \left( i \alpha_- \psi^i_+ \right) \nonumber \\[1ex] &= \psi^{\overline{\imath}}_+ \lambda_-^{\overline{a}} \left[ \partial_i \! \left( \overline{\partial}_{\overline{\imath}} \overline{F}_{\overline{a}} - A^{\overline{b}}_{\overline{\imath} \, \overline{a}} \, \overline{F}_{\overline{b}} \right) + A^{\overline{b}}_{\overline{\imath} \, \overline{a}, i} \, \overline{F}_{\overline{b}} \right] \left( i \alpha_- \psi^i_+ \right) \nonumber \\[1ex] &= \psi^{\overline{\imath}}_+ \lambda_-^{\overline{a}} \left( \partial_i \overline{D}_{\overline{\imath}} \overline{F}_{\overline{a}} + F_{i \overline{\imath} \, a \overline{a}} \, h^{a \overline{b}} \, \overline{F}_{\overline{b}} \right) \left( i \alpha_- \psi^i_+ \right), \end{align} where we have used $ A^{\overline{b}}_{\overline{\imath} \, \overline{a}, i} = h^{a \overline{b}} \, F_{i \overline{\imath} \, a \overline{a}} $ in the last step.
It follows that, in addition to requiring \eqref{constraint-1}, supersymmetry imposes the constraint \begin{equation} \label{constraint-2} \partial_i \overline{D}_{\overline{\imath}} \overline{F}_{\overline{a}} + F_{i \overline{\imath} \, a \overline{a}} \, h^{a \overline{b}} \, \overline{F}_{\overline{b}} = 0 \, . \end{equation} Various special cases of \eqref{constraint-2} were used in \cite{GaravusoSharpe:Analogues} to establish properties of Mathai-Quillen form analogues which arise in the corresponding heterotic Landau-Ginzburg models. In that paper, it was claimed that supersymmetry imposes those constraints. In this talk, we have presented details worked out in \cite{Garavuso:Curvaure-constraints} supporting that claim. It would be interesting to see what constraints are imposed by supersymmetry in other models when the superpotential is nonholomorphic.
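As a quick consistency check of the manipulations above, the cancellation \eqref{cancel-first-terms-rhs-nonexact-2a-2b} and the origin of the constraint \eqref{constraint-1} can be verified symbolically in the simplest setting of a rank-one bundle over a one-dimensional target. The following sympy sketch is purely illustrative: the sample metric $ h = e^{\phi \overline{\phi}} $ and the sample sections are arbitrary choices, and $ \lambda^{\overline{a}}_- $ is treated as a commuting function since it enters each expression linearly.

\begin{verbatim}
import sympy as sp

z = sp.symbols('z')
phi  = sp.Function('phi')(z)    # phi^1 pulled back to the worldsheet
phib = sp.Function('phib')(z)   # phi^{bar 1}
lam  = sp.Function('lam')(z)    # lambda^{bar 1}_-, treated as commuting

h = sp.exp(phi*phib)            # sample Hermitian fiber metric h_{1 bar 1}
A = sp.diff(h, phib)/h          # A^{bar 1}_{bar 1 bar 1} = h^{-1} h_{,bar 1}

# holomorphic case: Fbar depends only on phib, so Fbar_{,k} = 0
Fb  = phib**2
DFb = sp.diff(Fb, phib) - A*Fb  # Dbar_{bar 1} Fbar_{bar 1}
lhs = -Fb*(sp.diff(lam, z) + sp.diff(phib, z)*A*lam)  # -Fbar D_z lambda
rhs = sp.diff(phib, z)*lam*DFb - sp.diff(Fb*lam, z)   # alpha_- dropped
print(sp.simplify(lhs - rhs))   # 0: cancellation up to a total derivative

# nonholomorphic deformation: a leftover Fbar_{,k} dphi^k/dz lambda survives
Fb2  = phi*phib**2
DFb2 = sp.diff(Fb2, phib) - A*Fb2
lhs2 = -Fb2*(sp.diff(lam, z) + sp.diff(phib, z)*A*lam)
rhs2 = sp.diff(phib, z)*lam*DFb2 - sp.diff(Fb2*lam, z)
print(sp.simplify(lhs2 - rhs2 - sp.diff(Fb2, phi)*sp.diff(phi, z)*lam))  # 0
\end{verbatim}

The last printed expression vanishes only after subtracting the leftover $ \overline{F}_{\overline{a},k} \, \partial_z \phi^k \, \lambda^{\overline{a}}_- $, which is precisely the combination that supersymmetry forces to vanish in \eqref{constraint-1}.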
\section{Introduction} \label{sec:1} Heavy fermion pair production in $e^+e^-$ annihilation is a fundamental process in the Standard Model (SM). The threshold region is of particular interest. For example, the precise prediction of the production cross section for $e^+e^-\rightarrow\tau^+\tau^-$ in the threshold region is important in order to improve the measurement of the $\tau$-lepton mass~\cite{Asner:2008nq}. Precise theoretical predictions for the production cross section of $e^+e^-\rightarrow c\bar{c}/b\bar{b}$ at the thresholds are crucial for determining accurate values for the charm and bottom quark masses, as well as the QCD coupling constant $\alpha_s$; e.g., as determined from the sum rule method~\cite{Novikov:1976tn, Novikov:1977dq, Voloshin:1995sf}. One of the most important physics goals of future high energy electron-positron colliders is the precise measurement of properties of the top quark, especially the top quark mass and its width near the threshold region~\cite{Seidel:2013sqa}. A crucial input is the precise prediction of the top quark pair production cross section. An essential feature of heavy quark pair production in the threshold region of $e^+e^-$ annihilation is the presence of singular terms from the QCD Coulomb corrections. Physically, the renormalization scale which reflects the subprocess virtuality should become very soft in this region. It is conventional to set the renormalization scale to the mass of the heavy fermion $\mu_r=m_f$. This conventional procedure obviously violates the physical behavior of the QCD corrections and will inevitably lead to unreliable predictions for the production cross sections in the threshold region. The resummation of logarithmically enhanced terms is thus required. It is often argued that one should set the renormalization scale as the typical momentum scale of the process with the purpose of eliminating the large logarithms; this guessed scale is then varied over an arbitrary range to ascertain its uncertainty. However, this conventional procedure gives scheme-dependent predictions, and it thus violates the fundamental principle of renormalization group invariance. The resulting nonconformal perturbative QCD series also has renormalon $n!$ divergences; one thus introduces inherent renormalization scheme-and-scale uncertainties. One often argues that the renormalization scale uncertainty introduced by guessing the initial scale will be suppressed by including enough higher-order terms; however, the scale uncertainties become increasingly large at each order, and the renormalon contributions such as $n!\beta^n_0\alpha^n_s$ prevent convergence. One also cannot decide whether poor pQCD convergence is an intrinsic property of the pQCD series, or is simply due to the improper choice of the scale. In contrast to pQCD, the renormalization scale in Quantum Electrodynamics (QED) is set unambiguously by using the Gell-Mann-Low method~\cite{GellMann:1954fq} where the renormalization scales are set by the virtuality of each photon propagator; this automatically sums all the proper and improper vacuum polarization contributions to each photon propagator to all orders. Note that the conventional scale setting method used for pQCD is incorrect when applied to the Abelian QED theory. In fact, a correct scale-setting method in pQCD must reduce in the Abelian limit $N_C\rightarrow0$ to the Gell-Mann-Low method~\cite{Brodsky:1997jk}.
The Principle of Maximum Conformality (PMC)~\cite{Brodsky:2011ta, Brodsky:2012rj, Brodsky:2011ig, Mojaza:2012mf, Brodsky:2013vpa} provides a systematic way to eliminate renormalization scheme-and-scale ambiguities. The PMC determines the renormalization scales by absorbing all the $\{\beta_i\}$-terms that govern the behavior of the running coupling via the renormalization group equation. The resulting pQCD series matches the conformal series with $\beta=0$; i.e., it is maximally conformal. Since the PMC predictions do not depend on the choice of the renormalization scheme, PMC scale setting satisfies the principles of renormalization group invariance~\cite{Brodsky:2012ms, Wu:2014iba, Wu:2019mky}. The PMC provides the underlying principle for the well-known Brodsky-Lepage-Mackenzie (BLM) method~\cite{Brodsky:1982gc}, and generalizes the BLM procedure to all orders. By applying PMC scale-setting, the divergent renormalon series disappears, and the convergence of the pQCD series is greatly improved. The PMC approach has been successfully applied to various high energy processes. Recently, we have shown that the correct physical behavior can be obtained using PMC scale setting for event-shape observables such as the thrust $T$ in electron-positron annihilation~\cite{Wang:2019ljl, Wang:2019isi, DiGiustino:2020fbk}. The PMC scale is not a single fixed value, but it depends continuously on the value of the event-shape observable, reflecting the virtuality of the QCD dynamics. Thus one can determine the QCD running coupling $\alpha_s(Q^2) $ over a large range of $Q^2$ from a single measurement of $e^+ e^- \to Z^0 \to X$ at $\sqrt{s}=M_Z$. In this paper, we shall apply the PMC to perform a comprehensive analysis of heavy fermion pair production in $e^+e^-$ annihilation near the threshold region. We will show that two distinctly different scales are determined for the heavy fermion pair production near the threshold region. We will also demonstrate the consistency of PMC scale setting in the QED limit. The remaining sections of this paper are organized as follows. In Sec.\ref{sec:2}, we calculate the QCD process of the quark pair production in $e^+e^-$ annihilation near the threshold region in both the modified minimal subtraction scheme ($\overline{\rm MS}$ scheme) and the V-scheme. In Sec.\ref{sec:3}, we calculate the QED process of the lepton pair production in $e^+e^-$ annihilation near the threshold region. Section \ref{sec:4} is reserved for a summary. \section{The heavy quark pair production near the threshold region} \label{sec:2} \subsection{The QCD process of the quark pair production in the $\overline{\rm MS}$ scheme } \label{sec:21} The quark pair production cross section for $e^{+}e^{-}\rightarrow\gamma^{*}\rightarrow Q\bar{Q}$ at the two-loop level can be written as \begin{eqnarray} \sigma=\sigma^{(0)}\left[1 + \delta^{(1)}\,a_s(\mu_r) + \delta^{(2)}(\mu_r)\,a^2_s(\mu_r) + {\cal O}(a^3_s)\right], \label{sigma:1} \end{eqnarray} where $a_s(\mu_r)={\alpha_s(\mu_r)}/{\pi}$ and $\mu_r$ is the renormalization scale. The LO cross section is \begin{eqnarray} \sigma^{(0)}=\frac{4}{3}\frac{\pi\,\alpha^2}{s}N_c\,e^2_Q\frac{v\,(3-v^2)}{2}, \label{qqpairLOpmc} \end{eqnarray} and the quark velocity $v$ is \begin{eqnarray} v=\sqrt{1-\frac{4\,m_Q^2}{s}}. \end{eqnarray} Here, $N_c$ is the number of colors, $e_Q$ is the $Q$ quark electric charge, $s$ is the center-of-mass energy squared and $m_Q$ is the mass of the quark $Q$.
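For orientation, the LO formula is straightforward to evaluate numerically. A minimal sketch for the $b$ quark case follows; the values of $\alpha$ and $m_Q$ are illustrative inputs, and the unit conversion factor is the standard $\hbar^2 c^2$ value:

\begin{verbatim}
import numpy as np

alpha, N_c, e_Q, m_Q = 1/137.036, 3, -1/3, 4.89  # illustrative b-quark inputs

def v(s):                        # quark velocity
    return np.sqrt(1 - 4*m_Q**2/s)

def sigma0(s):                   # LO cross section, in GeV^-2
    return (4/3)*(np.pi*alpha**2/s)*N_c*e_Q**2*v(s)*(3 - v(s)**2)/2

GEV2_TO_NB = 3.894e5             # hbar^2 c^2 in GeV^2 nb
for sqrt_s in (9.9, 10.5, 12.0):
    s = sqrt_s**2
    print(sqrt_s, v(s), sigma0(s)*GEV2_TO_NB)  # sigma0 ~ v near threshold
\end{verbatim}

The printout makes the threshold suppression explicit: $\sigma^{(0)}$ vanishes linearly with $v$ as $\sqrt{s}\rightarrow 2m_Q$.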
The one-loop correction $\delta^{(1)}$ near the threshold region can be written as \begin{eqnarray} \delta^{(1)}=C_F\left(\frac{\pi^2}{2\,v}-4\right). \end{eqnarray} The two-loop correction $\delta^{(2)}$ can be conveniently split into terms proportional to various $SU(3)$ color factors, \begin{eqnarray} \delta^{(2)} &=& C_F^2\,\delta^{(2)}_A + C_F\,C_A\,\delta^{(2)}_{NA} \nonumber\\ && + C_F\,T_R\,n_f\,\delta^{(2)}_L + C_F\,T_R\,\delta^{(2)}_H. \end{eqnarray} The terms $\delta^{(2)}_{A}$, $\delta^{(2)}_{L}$ and $\delta^{(2)}_{H}$ are the same in either Abelian or non-Abelian theories; the term $\delta^{(2)}_{NA}$ only arises in the non-Abelian theory. This process provides the opportunity to explore rigorously the scale-setting method in the non-Abelian and Abelian theories. The Coulomb correction plays an important role in the threshold region; it is proportional to powers of $(\pi/v)$. The renormalization scale is thus relatively soft in this region. In fact, the PMC scales must be determined separately for the non-Coulomb and Coulomb corrections~\cite{Brodsky:1995ds, Brodsky:2012rj}. When the quark velocity $v\rightarrow0$, the Coulomb correction dominates the contribution of the production cross section, and the contribution of the non-Coulomb correction will be suppressed. On general grounds one expects that threshold physics is governed by the nonrelativistic Coulomb instantaneous potential. The potential affects the cross section through final state interactions when the scale is above threshold; it leads to bound states when the scale is below threshold. The cross section given in Eq.(\ref{sigma:1}) is further divided into the $n_f$-dependent and $n_f$-independent parts, i.e., \begin{widetext} \begin{eqnarray} \sigma &=&\sigma^{(0)}\left[1+\delta^{(1)}_h\,a_s(\mu_r) + \left(\delta^{(2)}_{h,in}(\mu_r)+\delta^{(2)}_{h,n_f}(\mu_r)\,n_f\right)\,a^2_s(\mu_r) \right. \nonumber\\ &&\left. + \left(\frac{\pi}{v}\right)\,\delta^{(1)}_{v}\,a_s(\mu_r) + \left(\frac{\pi}{v}\right)\,\left(\delta^{(2)}_{v,in}(\mu_r)+\delta^{(2)}_{v,n_f}(\mu_r)\,n_f\right)\,a^2_s(\mu_r)+ \left(\frac{\pi}{v}\right)^2\,\delta^{(2)}_{v^2}\,a^2_s(\mu_r) + {\cal O}(a^3_s)\right]. \label{eqpairNNLO} \end{eqnarray} \end{widetext} The coefficients $\delta^{(1)}_h$ and $\delta^{(2)}_h$ are for the non-Coulomb corrections, and the coefficients $\delta^{(1)}_v$, $\delta^{(2)}_v$ and $\delta^{(2)}_{v^2}$ are for the Coulomb corrections. These coefficients in the $\overline{\rm MS}$ scheme are calculated in Refs.~\cite{Czarnecki:1997vz, Beneke:1997jm, Bernreuther:2006vp} and at the scale $\mu_r=m_Q$ they can be written as \begin{eqnarray} \delta^{(1)}_h=-4\,C_F,~\delta^{(1)}_{v}=\frac{C_F\,\pi}{2}, \end{eqnarray} \begin{eqnarray} \delta^{(2)}_{h,in}&=&-{1\over72}\,C_F\,(C_A\,(302+468\zeta_3 + \pi^2(-179+192\ln2)) \nonumber\\ && - 2\,(-16(-11+\pi^2)\,T_R + C_F\,(351+6\pi^4-36\zeta_3 \nonumber\\ && + \pi^2\,(-70+48\ln2))) + 24(3\,C_A+2\,C_F)\,\pi^2\ln v), \nonumber\\ \delta^{(2)}_{h,n_f}&=&{11\,C_F\,T_R\over9}, \nonumber\\ \delta^{(2)}_{v,in}&=&-{1\over72}\,C_F\,\pi\,(-31\,C_A+144\,C_F+66\,C_A\ln(2v)), \nonumber\\ \delta^{(2)}_{v,n_f}&=&{1\over18}\,C_F\,\pi\,T_R\,(-5+6\ln(2v)), \nonumber\\ \delta^{(2)}_{v^2}&=&{C_F^2\,\pi^2\over12}. \end{eqnarray} After absorbing the nonconformal term $\beta_0=11/3\,C_A-4/3\,T_R\,n_f$ into the coupling constant using the PMC, we obtain \begin{eqnarray} \sigma &=& \sigma^{(0)}\left[1+\delta^{(1)}_h\,a_s(Q_h) + \delta^{(2)}_{h,\rm sc}(\mu_r)\,a^2_s(Q_h) \right. \nonumber\\ &&\left. 
+ \left(\frac{\pi}{v}\right)\,\delta^{(1)}_v\,a_s(Q_v) + \left(\frac{\pi}{v}\right)\,\delta^{(2)}_{v,\rm sc}(\mu_r)\,a^2_s(Q_v)\right. \nonumber \\ &&\left. + \left(\frac{\pi}{v}\right)^2\,\delta^{(2)}_{v^2}\,a^2_s(Q_v) + {\cal O}(a^3_s)\right]. \label{eqpairNNLOpmc} \end{eqnarray} The PMC scales $Q_i$ can be written as \begin{eqnarray} Q_i = \mu_r\exp\left[\frac{3\,\delta^{(2)}_{i,n_f}(\mu_r)}{2\,T_R\,\delta^{(1)}_i}\right], \end{eqnarray} and the coefficients $\delta^{(2)}_{i,\rm sc}(\mu_r)$ are \begin{eqnarray} \delta^{(2)}_{i,\rm sc}(\mu_r) = \frac{11\,C_A\,\delta^{(2)}_{i,n_f}(\mu_r)}{4\,T_R}+\delta^{(2)}_{i,in}(\mu_r), \end{eqnarray} where $i=h$ and $v$ stand for the non-Coulomb and Coulomb corrections, respectively. The nonconformal $\beta_0$ term is eliminated, and the resulting pQCD series matches the conformal series and thus only the conformal coefficients remain in the cross section. The conformal coefficients are independent of the renormalization scale $\mu_r$. At the present two-loop level, the PMC scales are also independent of the renormalization scale $\mu_r$. Thus, the resulting cross section in Eq.(\ref{eqpairNNLOpmc}) eliminates the renormalization scale uncertainty. Taking $C_A=3$, $C_F=4/3$ and $T_R=1/2$ for QCD, the PMC scales in the $\overline{\rm MS}$ scheme are \begin{eqnarray} Q_h =e^{(-11/24)}\,m_Q \label{scaleQCDhms} \end{eqnarray} for the non-Coulomb correction, and \begin{eqnarray} Q_v =2\,e^{(-5/6)}\,v\,m_Q \label{scaleQCDbms} \end{eqnarray} for the Coulomb correction. The scale $Q_h$ originates from the hard gluon virtual corrections, and thus it is determined for the short-distance process. The scale $Q_v$ originates from Coulomb rescattering. Since the PMC scales are determined by absorbing the nonconformal $\{\beta_i\}$-terms, the behavior of the scale is controlled by the coefficient of the QCD $\beta$ function. It is noted that the coefficient of the $\beta_0$-term for the non-Coulomb correction is independent of the quark velocity $v$, whereas the logarithmic term $\ln(2v)$ appears in the coefficient of the $\beta_0$-term for the Coulomb correction. As expected, the resulting scale $Q_h$ is of the order $m_Q$, whereas the scale $Q_v$ is of the order $v\,m_Q$. \begin{figure}[htb] \centering \includegraphics[width=0.40\textwidth]{PMCscalebeta0.eps} \caption{The PMC scale $Q_v$ versus the center-of-mass energy $\sqrt{s}$ for the $b$ quark pair production in the $\overline{\rm MS}$ scheme. $m_Q=4.89$ GeV. } \label{figPMCscalebeta0} \end{figure} In the following, we will take the bottom quark pair production as an example to make a detailed analysis near the threshold region. Taking $m_Q=4.89$ GeV~\cite{Bernreuther:2016ccf}, we obtain \begin{eqnarray} Q_h = 3.09~\rm GeV, \end{eqnarray} which is smaller than $m_Q$ in the $\overline{\rm MS}$ scheme. For the Coulomb correction part, the scale $Q_v$ is shown in Fig.(\ref{figPMCscalebeta0}). It shows that the scale $Q_v$ depends continuously on the quark velocity $v$, and it becomes soft for $v\rightarrow0$, yielding the correct physical behavior of the scale and reflecting the virtuality of the QCD dynamics. Also, the number of active flavors $n_f$ changes with the quark velocity $v$ according to the PMC scale. When the quark velocity $v\rightarrow0$, the small scale in the coupling constant demonstrates that perturbative QCD becomes unreliable and non-perturbative effects must be taken into account.
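The scale formulas above are easy to evaluate; a short numerical sketch (with $m_Q=4.89$ GeV as used here, and all other numbers following directly from the quoted coefficients) reads:

\begin{verbatim}
import numpy as np

C_F, T_R, m_Q = 4/3, 1/2, 4.89

# non-Coulomb: delta(1)_h = -4 C_F, delta(2)_{h,nf} = 11 C_F T_R / 9
Q_h = m_Q*np.exp(3*(11*C_F*T_R/9)/(2*T_R*(-4*C_F)))
print(Q_h, m_Q*np.exp(-11/24))            # both ~ 3.09 GeV

# Coulomb: delta(1)_v = C_F pi/2, delta(2)_{v,nf} as quoted above
for vq in (0.05, 0.2, 0.5):
    num = C_F*np.pi*T_R*(-5 + 6*np.log(2*vq))/18
    Q_v = m_Q*np.exp(3*num/(2*T_R*C_F*np.pi/2))
    print(vq, Q_v, 2*np.exp(-5/6)*vq*m_Q)  # matches 2 e^{-5/6} v m_Q
\end{verbatim}

The loop confirms that $Q_v$ tracks $2\,e^{(-5/6)}\,v\,m_Q$ exactly and becomes soft as $v\rightarrow0$.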
One can adopt the Light Front Holographic QCD (LFHQCD)~\cite{Brodsky:2014yha} to evaluate the coupling constant $\alpha_s(Q)$ in the low scale region. According to the LFHQCD, the coupling constant $\alpha_s(Q)$ is finite for $Q\rightarrow0$. In contrast, the renormalization scale is simply fixed at $\mu_r=m_Q$ using conventional scale setting. Our calculations show that in the $\overline{\rm MS}$ scheme, the scale should be $e^{(-11/24)}\,m_Q$, which is smaller than $m_Q$ for the non-Coulomb correction. For the Coulomb correction, since the scale becomes soft for $v\rightarrow0$, simply fixing the renormalization scale $\mu_r=m_Q$ obviously violates the physical behavior and leads to unreliable predictions in the threshold region. The resummation of logarithmically enhanced terms is thus required. \begin{figure}[htb] \centering \includegraphics[width=0.40\textwidth]{Coedetah2.eps} \caption{The two-loop coefficients $\delta^{(2)}_h$ of the non-Coulomb correction in the $\overline{\rm MS}$ scheme for the $b$ quark pair production, where $\delta^{(2)}_h=(\delta^{(2)}_{h,in}+\delta^{(2)}_{h,n_f}\,n_f)$ is for conventional scale setting while $\delta^{(2)}_h=\delta^{(2)}_{h,\rm sc}$ is for PMC scale setting.} \label{figCoedetah2} \end{figure} We present the two-loop coefficients $\delta^{(2)}_h$ of the non-Coulomb correction in the $\overline{\rm MS}$ scheme using conventional and PMC scale settings in Fig.(\ref{figCoedetah2}). Figure (\ref{figCoedetah2}) shows that the $v$-dependent behavior of the coefficients $\delta^{(2)}_h$ is the same but their magnitudes are different using conventional and PMC scale settings. When the quark velocity $v\rightarrow0$, the behavior of the non-Coulomb correction coefficients using conventional and PMC scale settings is divergent, $\delta^{(2)}_h\rightarrow+\infty$, due to the presence of the term $-\ln v$. As expected, after multiplying this term by the $v$-factor in the LO cross section $\sigma^{(0)}$ given in Eq.(\ref{qqpairLOpmc}), the contribution of the non-Coulomb corrections is finite and is suppressed near the threshold region, i.e., $(\sigma^{(0)}\,\delta^{(2)}_h\,a^2_s)\rightarrow0$ for the quark velocity $v\rightarrow0$. \begin{figure}[htb] \centering \includegraphics[width=0.40\textwidth]{Coedetab2.eps} \caption{The Coulomb terms of the form $(\pi/v)\,\delta^{(2)}_{v}$ in the $\overline{\rm MS}$ scheme for the $b$ quark pair production, where $\delta^{(2)}_v=(\delta^{(2)}_{v,in}+\delta^{(2)}_{v,n_f}\,n_f)$ is for conventional scale setting and $\delta^{(2)}_v=\delta^{(2)}_{v,\rm sc}$ is for PMC scale setting. } \label{figCoedetab2} \end{figure} For the Coulomb correction, the resummation of the Coulomb term of the form $(\pi/v)^2\,\delta^{(2)}_{v^2}$ results in the well-known Sommerfeld rescattering formula~\cite{Czarnecki:1997vz}. For the Coulomb term of the form $(\pi/v)\,\delta^{(2)}_{v}$, we present its $v$-dependent behavior in the $\overline{\rm MS}$ scheme using conventional and PMC scale settings in Fig.(\ref{figCoedetab2}). It shows that when the quark velocity $v\rightarrow0$, the $v$-dependent behavior of the Coulomb term $(\pi/v)\,\delta^{(2)}_{v}$ is dramatically different using conventional and PMC scale settings. In the case of conventional scale setting, its behavior is $(\pi/v)\delta^{(2)}_v\rightarrow+\infty$ for $v\rightarrow0$ due to the presence of the term $-\ln v/v$.
After multiplying this term by the $v$-factor in the LO cross section $\sigma^{(0)}$, the contribution from the Coulomb term $(\pi/v)\,\delta^{(2)}_v$ using conventional scale setting is not finite, i.e., $(\sigma^{(0)}\,(\pi/v)\,\delta^{(2)}_v)\rightarrow+\infty$ for $v\rightarrow0$. It should be stressed that the term $\ln v$ vanishes, and the term $-1/v$ remains in the conformal coefficient after applying PMC scale setting. Thus the $v$-dependent behavior is $(\pi/v)\delta^{(2)}_v\rightarrow-\infty$ for $v\rightarrow0$. This term $-1/v$ is canceled by multiplying it by the $v$-factor in the LO cross section $\sigma^{(0)}$, and thus the contribution from the Coulomb term $(\pi/v)\,\delta^{(2)}_{v}$ using PMC scale setting is finite for $v\rightarrow0$. It is noted that the contributions of the Coulomb correction using conventional and PMC scale settings are suppressed for $v\rightarrow1$. \subsection{The QCD process of the quark pair production in the V-scheme } \label{sec:22} The quark pair production cross section in the above analysis is calculated in the $\overline{\rm MS}$ scheme. The effective charge $a^V_s=\alpha_V/\pi$ (V-scheme), defined by the interaction potential between two heavy quarks~\cite{Appelquist:1977tw, Fischler:1977yf, Peter:1996ig, Schroder:1998vy, Smirnov:2008pn, Smirnov:2009fh, Anzai:2009tm}, \begin{eqnarray} V(Q^2) = -{4\,\pi^2\,C_F\,a^V_s(Q)\over Q^2}, \end{eqnarray} provides a physically-based alternative to the usual $\overline{\rm MS}$ scheme. As in the case of QED, when the scale of the coupling $a^V_s$ is identified with the exchanged momentum, all vacuum polarization corrections are resummed into $a^V_s$. By using the relation between $a_s$ and $a^V_s$ at the one-loop level, i.e., \begin{eqnarray} a^V_s(Q) = a_s(Q) + \left({31\over36}C_A-{5\over9}T_R\,n_f\right)a^2_s(Q) + {\cal O}(a^3_s), \label{Vscheme:as} \end{eqnarray} we convert the quark pair production cross section from the $\overline{\rm MS}$ scheme to the V-scheme. The corresponding perturbative coefficients in Eq.(\ref{eqpairNNLO}) in the V-scheme are \begin{eqnarray} \sigma^{(0)}|_V=\sigma^{(0)}, \end{eqnarray} \begin{eqnarray} \delta^{(1)}_h|_V=\delta^{(1)}_h,~\delta^{(1)}_{v}|_V=\delta^{(1)}_{v}, \end{eqnarray} \begin{eqnarray} \delta^{(2)}_{h,in}|_V&=&\delta^{(2)}_{h,in} - {31\over36}\,C_A\,\delta^{(1)}_h, \nonumber\\ \delta^{(2)}_{h,n_f}|_V&=&\delta^{(2)}_{h,n_f} + {5\over9}\,T_R\,\delta^{(1)}_h, \nonumber\\ \delta^{(2)}_{v,in}|_V&=&\delta^{(2)}_{v,in} - {31\over36}\,C_A\,\delta^{(1)}_{v}, \nonumber\\ \delta^{(2)}_{v,n_f}|_V&=&\delta^{(2)}_{v,n_f} + {5\over9}\,T_R\,\delta^{(1)}_{v}, \nonumber\\ \delta^{(2)}_{v^2}|_V&=&\delta^{(2)}_{v^2}. \end{eqnarray} After applying PMC scale setting in the V-scheme, we obtain the PMC scales \begin{eqnarray} Q_h =e^{(3/8)}\,m_Q \label{scaleQCDh} \end{eqnarray} for the non-Coulomb correction, and \begin{eqnarray} Q_v =2\,v\,m_Q \label{scaleQCDb} \end{eqnarray} for the Coulomb correction. Again, in the V-scheme, $Q_h$ is of order $m_Q$, while $Q_v$ is of order $v\,m_Q$, since the scale $Q_h$ originates from the hard gluon virtual corrections, and $Q_v$ originates from Coulomb rescattering. The physical behavior of the scales does not change using different renormalization schemes. We note that the PMC scales in the usual $\overline{\rm MS}$ scheme are different from the scales in the physically-based V-scheme. This difference is due to the convention used in defining the $\overline{\rm MS}$ scheme.
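The conversion can also be checked symbolically. The following sympy sketch reproduces Eqs.(\ref{scaleQCDh}) and (\ref{scaleQCDb}) directly from the shifted $n_f$ coefficients:

\begin{verbatim}
import sympy as sp

C_F, T_R, v, m_Q = sp.symbols('C_F T_R v m_Q', positive=True)

d1_h, d1_v = -4*C_F, C_F*sp.pi/2
d2_h_nf = sp.Rational(11, 9)*C_F*T_R
d2_v_nf = C_F*sp.pi*T_R*(-5 + 6*sp.log(2*v))/18

# shift of the nf coefficients induced by Eq. (Vscheme:as)
d2_h_nf_V = d2_h_nf + sp.Rational(5, 9)*T_R*d1_h
d2_v_nf_V = d2_v_nf + sp.Rational(5, 9)*T_R*d1_v

Q_h = m_Q*sp.exp(3*d2_h_nf_V/(2*T_R*d1_h))
Q_v = m_Q*sp.exp(3*d2_v_nf_V/(2*T_R*d1_v))
print(sp.simplify(Q_h))   # m_Q*exp(3/8)
print(sp.simplify(Q_v))   # 2*m_Q*v
\end{verbatim}

All color factors drop out of the exponents, which makes the scheme conversion transparent: the $e^{(-5/6)}$ factor of the $\overline{\rm MS}$ scheme is absorbed entirely by the $(5/9)\,T_R\,\delta^{(1)}_i$ shift.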
The PMC predictions eliminate the dependence on the renormalization scheme; this is explicitly displayed in the form of ``commensurate scale relations'' (CSR)~\cite{Brodsky:1994eh, Lu:1992nt}. \begin{figure}[htb] \centering \includegraphics[width=0.40\textwidth]{PMCscalebeta.eps} \caption{The PMC scale $Q_v$ versus the center-of-mass energy $\sqrt{s}$ for the $b$ quark pair production in the V-scheme. $m_Q=4.89$ GeV. } \label{figPMCscalebeta} \end{figure} Taking $m_Q=4.89$ GeV for the $b$ quark pair production, we obtain $Q_h=7.11$ GeV for the non-Coulomb correction, and its value is larger than the conventional choice $\mu_r=m_Q$. For the Coulomb correction, we present its PMC scale $Q_v$ versus the center-of-mass energy $\sqrt{s}$ for the $b$ quark pair production in the V-scheme in Fig.(\ref{figPMCscalebeta}). Compared to the scale in Eq.(\ref{scaleQCDbms}) in the $\overline{\rm MS}$ scheme, the exponential factor disappears in Eq.(\ref{scaleQCDb}). The scale $Q_v$ becomes soft for $v\rightarrow0$, and $Q_v\rightarrow2m_Q$ for $v\rightarrow1$, yielding the correct physical behavior. \begin{figure}[htb] \centering \includegraphics[width=0.40\textwidth]{CoedetabV2.eps} \caption{The Coulomb terms of the form $(\pi/v)\,\delta^{(2)}_{v}$ in the V-scheme for the $b$ quark pair production, where $\delta^{(2)}_v=(\delta^{(2)}_{v,in}|_V+\delta^{(2)}_{v,n_f}|_V\,n_f)$ is for conventional scale setting and $\delta^{(2)}_v=\delta^{(2)}_{v,\rm sc}|_V$ is for PMC scale setting. } \label{figCoedetabV2} \end{figure} As in the case of the $\overline{\rm MS}$ scheme, the $v$-dependent behavior of the coefficients $\delta^{(2)}_h$ of the non-Coulomb correction in the V-scheme using conventional and PMC scale settings is the same. For the Coulomb correction, the high-order Coulomb term of the form $(\pi/v)^2\,\delta^{(2)}_{v^2}$ is not finite for $v\rightarrow0$, either before or after using the PMC. This is because the coefficient $\delta^{(2)}_{v^2}$ of the high-order Coulomb term is independent of the nonconformal $\{\beta_i\}$-terms and the logarithmic term $\ln(v)$. After absorbing the nonconformal $\beta_0$-term using the PMC, the behavior of the Coulomb term of the form $(\pi/v)\,\delta^{(2)}_{v}$ is dramatically changed. More explicitly, the Coulomb terms of the form $(\pi/v)\,\delta^{(2)}_{v}$ in the V-scheme using conventional and PMC scale settings are presented in Fig.(\ref{figCoedetabV2}). When the quark velocity $v\rightarrow0$, the Coulomb term is $(\pi/v)\delta^{(2)}_v\rightarrow+\infty$ due to the presence of the term $-\ln v/v$ using conventional scale setting. After applying PMC scale setting, the logarithmic term $\ln(v)$ vanishes in the coefficient $\delta^{(2)}_v$; the Coulomb term is $(\pi/v)\delta^{(2)}_v\rightarrow-\infty$ due to the term $-(\pi/v)$. Thus, multiplying by the $v$-factor in the LO cross section $\sigma^{(0)}$, the Coulomb term $\sigma^{(0)}(\pi/v)\,\delta^{(2)}_v$ for $v\rightarrow0$ is not finite using conventional scale setting, but it is finite after using PMC scale setting. It is noted that the quark pair and lepton pair productions in $e^+e^-$ annihilation near the threshold region should show similar physical behavior. This dramatically different behavior of the Coulomb term $(\pi/v)\,\delta^{(2)}_v$ between conventional and PMC scale settings near the threshold region should be checked in QED.
\section{The QED process of the lepton pair production} \label{sec:3} Similar to quark pair production, the lepton pair production cross section for the QED process $e^{+}e^{-}\rightarrow\gamma^{*}\rightarrow l\bar{l}$ is expanded in the QED coupling constant $\alpha$. The cross section can also be divided into the non-Coulomb and Coulomb parts. As in Eq.(\ref{eqpairNNLO}), the corresponding perturbative coefficients for the lepton pair production cross section are~\cite{Czarnecki:1997vz, Hoang:1995ex, Hoang:1997sj}, \begin{eqnarray} \sigma^{(0)}=\frac{4}{3}\frac{\pi\,\alpha^2}{s}\,\frac{v\,(3-v^2)}{2}, \end{eqnarray} \begin{eqnarray} \delta^{(1)}_h=-4\,C_F,~\delta^{(1)}_{v}=\frac{C_F\,\pi}{2}, \end{eqnarray} \begin{eqnarray} \delta^{(2)}_{h,in}&=&{1\over36}\,C_F\,(-16(-11+\pi^2)\,T_R + C_F\,(351+6\pi^4 \nonumber\\ &&-36\zeta_3 + \pi^2\,(-70+48\ln2)) - 24\,C_F\,\pi^2\ln v), \nonumber\\ \delta^{(2)}_{h,n_f}&=&{11\,C_F\,T_R\over9}, \nonumber\\ \delta^{(2)}_{v,in}&=&-2\,C_F^2\,\pi, \nonumber\\ \delta^{(2)}_{v,n_f}&=&{1\over18}\,C_F\,\pi\,T_R\,(-5+6\ln(2v)), \nonumber\\ \delta^{(2)}_{v^2}&=&{C_F^2\,\pi^2\over12}. \end{eqnarray} The one-loop correction coefficients $\delta^{(1)}_h$ and $\delta^{(1)}_{v}$ and the two-loop correction coefficients $\delta^{(2)}_{h,n_f}$, $\delta^{(2)}_{v,n_f}$ and $\delta^{(2)}_{v^2}$ have the same form in QCD and QED with only some replacements: $C_A=3$, $C_F=4/3$ and $T_R=1/2$ in QCD and $C_A=0$, $C_F=1$ and $T_R=1$ in QED. By using the PMC, the vacuum polarization corrections can be absorbed into the QED running coupling: \begin{eqnarray} \alpha(Q) = \alpha\left[1 + \left({\alpha\over\pi}\right)\sum^{n_f}\limits_{i=1}{1\over3}\left(\ln\left({Q^2\over m^2_i}\right)-{5\over3}\right)\right], \end{eqnarray} where $m_i$ is the mass of the $i$-th light virtual lepton, which is far smaller than the final state lepton mass $m_l$. We then obtain \begin{eqnarray} \sigma &=& \sigma^{(0)}\left[1+\delta^{(1)}_h\,{\alpha(Q_h)\over\pi} + \delta^{(2)}_{h,\rm in}\,\left({\alpha(Q_h)\over\pi}\right)^2 \right. \nonumber\\ &&\left. + \left(\frac{\pi}{v}\right)\,\delta^{(1)}_v\,{\alpha(Q_v)\over\pi} + \left(\frac{\pi}{v}\right)\,\delta^{(2)}_{v,\rm in}\,\left({\alpha(Q_v)\over\pi}\right)^2\right. \nonumber\\ &&\left. + \left(\frac{\pi}{v}\right)^2\,\delta^{(2)}_{v^2}\,\left({\alpha(Q_v)\over\pi}\right)^2 + {\cal O}(\alpha^3)\right]. \label{QED:afPMC} \end{eqnarray} The resulting PMC scales can be written as \begin{eqnarray} Q_i = m_l\,\exp\left[{5\over6}+{3\over2}\,{\delta^{(2)}_{i,n_f}\over\delta^{(1)}_i}\right], \end{eqnarray} where $i=h$ and $v$ stand for the non-Coulomb and Coulomb corrections, respectively. Taking $C_A=0$, $C_F=1$ and $T_R=1$ for QED, the PMC scales are \begin{eqnarray} Q_h =e^{(3/8)}\,m_l \label{scaleQEDh} \end{eqnarray} for the non-Coulomb correction, and \begin{eqnarray} Q_v =2\,v\,m_l \label{scaleQEDb} \end{eqnarray} for the Coulomb correction. Since the scale $Q_h$ stems from the hard virtual photon corrections and $Q_v$ originates from the Coulomb rescattering, $Q_h$ is of order $m_l$ and $Q_v$ is of order $v\,m_l$. The scales show the same physical behavior from QCD to QED after using PMC scale setting. It is noted that the PMC scales in Eqs.(\ref{scaleQCDh}) and (\ref{scaleQCDb}) for QCD in the V-scheme coincide with the scales in Eqs.(\ref{scaleQEDh}) and (\ref{scaleQEDb}) for QED, respectively. This scale self-consistency shows that the PMC method in QCD agrees with the standard Gell-Mann-Low method~\cite{GellMann:1954fq} in QED.
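As a numerical illustration of these QED scales, a minimal sketch follows; the values of $\alpha$ and the light lepton masses are standard inputs quoted at illustrative precision, and the running coupling implements the one-loop formula above:

\begin{verbatim}
import numpy as np

alpha0, m_tau = 1/137.036, 1.777        # GeV
m_light = [0.000511, 0.10566]           # e and mu masses in the loop, GeV

Q_h = np.exp(3/8)*m_tau
print(Q_h)                              # ~2.59 GeV

def alpha_run(Q):                       # one-loop vacuum polarization sum
    s = sum(np.log(Q**2/m**2) - 5/3 for m in m_light)
    return alpha0*(1 + (alpha0/np.pi)*s/3)

print(1/alpha_run(Q_h))                 # slightly below 1/alpha0 = 137.04
\end{verbatim}

The first printout anticipates the value of $Q_h$ quoted for the $\tau$ lepton below.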
The V-scheme provides a natural scheme for the QCD process of quark pair production. In the following, we take the $\tau$ lepton pair production as an example to make a detailed analysis near the threshold region. Taking $m_\tau=1.777$ GeV~\cite{Tanabashi:2018oca}, we obtain the scale $Q_h = 2.59$ GeV, which is larger than $m_\tau$ for the non-Coulomb correction. For the Coulomb correction, as in the case of QCD, the scale becomes soft for $v\rightarrow0$ and $Q_v\rightarrow2m_l$ for $v\rightarrow1$. The PMC scales thus rigorously yield the correct physical behavior for the lepton pair production near the threshold region. \begin{figure}[htb] \centering \includegraphics[width=0.40\textwidth]{CoeQEDb2.eps} \caption{The Coulomb terms of the form $(\pi/v)\,\delta^{(2)}_{v}$ for the $\tau$ lepton pair production, where $\delta^{(2)}_v=(\delta^{(2)}_{v,in}+\delta^{(2)}_{v,n_f}\,n_f)$ is for conventional scale setting and $\delta^{(2)}_v=\delta^{(2)}_{v,\rm in}$ is for PMC scale setting. } \label{figCoeQEDb2} \end{figure} For the non-Coulomb correction, the $v$-dependent behavior of the coefficients $\delta^{(2)}_h$ using conventional and PMC scale settings is the same, as in the case of QCD. For the Coulomb correction, the high-order Coulomb term of the form $(\pi/v)^2\,\delta^{(2)}_{v^2}$ is not finite for $v\rightarrow0$, either before or after using the PMC. The Coulomb terms of the form $(\pi/v)\,\delta^{(2)}_{v}$ using conventional and PMC scale settings are presented in Fig.(\ref{figCoeQEDb2}). It is noted that, differently from the case of QCD, when the lepton velocity $v\rightarrow0$, the Coulomb terms are $(\pi/v)\delta^{(2)}_v\rightarrow-\infty$ due to the presence of the term $\ln v/v$ using conventional scale setting, and the term $-1/v$ using PMC scale setting. Multiplying by the $v$-factor in the LO cross section $\sigma^{(0)}$, the Coulomb term for $v\rightarrow0$ is $(\sigma^{(0)}\,(\pi/v)\,\delta^{(2)}_v)\rightarrow-\infty$ using conventional scale setting, and is finite using PMC scale setting. Thus, we can see from Figs.(\ref{figCoedetab2}), (\ref{figCoedetabV2}) and (\ref{figCoeQEDb2}) that after using the PMC, the behavior of the coefficients $\delta^{(2)}_v$ is dramatically changed; the coefficients $\delta^{(2)}_v$ in the threshold region are not finite using conventional scale setting, and are finite using PMC scale setting for both QCD and QED. \section{Summary} \label{sec:4} Heavy fermion pair production in $e^+e^-$ annihilation is a fundamental process in the SM. However, the conventional procedure of simply setting the renormalization scale as $\mu_r=m_f$ violates the physical behavior of the reaction and leads to unreliable predictions near the threshold region. In contrast, the PMC scale-setting method provides a self-consistent analysis, and reveals the correct physical behavior of the scale for the heavy fermion pair production near the threshold region, both in QCD and QED. \begin{itemize} \item It is remarkable that two distinctly different scales are determined for the heavy fermion pair production near the threshold region using the PMC. The scale determined for the hard virtual correction is of order the fermion mass $m_f$; the scale determined for the Coulomb rescattering is of order $v\,m_f$, which becomes soft for $v\rightarrow0$.
Thus, PMC scale-setting provides a rigorous method for setting unambiguously the renormalization scale as a function of the quark velocity $v$, reflecting the virtuality of the propagating gluons (photons) for the QCD (QED) processes. \item For the non-Coulomb correction of the fermion pair production, the contributions will be suppressed in the threshold region. For the Coulomb correction, the contribution in the threshold region is not finite using conventional scale setting. A resummation of the logarithmically enhanced terms is thus required. After using PMC scale setting, the logarithmic term $\ln(v)$ vanishes in the coefficient $\delta^{(2)}_v$ in both QCD and QED, and thus the coefficient $\delta^{(2)}_v$ is finite in the threshold region. \item The V-scheme provides a natural scheme for the QCD calculation for the quark pair production. After converting the QCD calculation from the $\overline{\rm MS}$ scheme to the V-scheme, the resulting PMC predictions in the Abelian limit are consistent with the results of QED. The scales are $Q_h =e^{(3/8)}\,m_f$ for the hard virtual correction and $Q_v =2\,v\,m_f$ for the Coulomb rescattering for both QCD and QED. The PMC scales for QCD and QED are identical after applying the relation between PMC scales: $Q^2_{\rm QCD}/Q^2_{\rm QED}=e^{-5/3}$; this factor converts the scale underlying predictions in the $\overline{\rm MS}$ scheme used in QCD to the scale of the V-scheme conventionally used in QED~\cite{Brodsky:1994eh}. We emphasize that the predictions based on the conventional scale-setting method are incorrect when applied to the Abelian theory. The renormalization scale in QED can be set unambiguously by using the Gell-Mann-Low method. The PMC scale-setting method in QCD reduces correctly in the Abelian limit $N_C\rightarrow0$ to the Gell-Mann-Low method. This consistency provides rigorous support for the PMC scale-setting method. \end{itemize} \hspace{1cm} {\bf Acknowledgements}: S. Q. W. thanks the SLAC theory group for kind hospitality. L. D. G. wants to thank the SLAC theory group for its kind hospitality and support. This work was supported in part by the Natural Science Foundation of China under Grants No. 11625520, No. 11705033, and No. 11905056; by the Project of Guizhou Provincial Department under Grant No. KY[2017]067; and by the Department of Energy Contract No. DE-AC02-76SF00515. SLAC-PUB-17512.
\section{Introduction} The possibility to confine a plasma into a finite region of space necessarily requires the application of external magnetic fields able to produce the requested balance of the fundamental forces acting on the plasma itself \cite{landau}. However, the achievement of a steady equilibrium for a fusion relevant laboratory plasma is far from being available in experimental operations, both for the limited time duration of the discharge and for the unavoidable plasma instabilities \cite{wesson,biskamp}. Therefore, in the context of tokamak machines we can consider as equilibrium a magnetic configuration which lives for a time much longer than the typical inverse instability rates and, on the other hand, sufficiently short to prevent the observation of significant changes in the global distribution of the magnetic and internal energy of the plasma. Historically (see the original studies in \cite{grad,shafranov}) the basic magnetostatic equation underlying the equilibria consists of the balance between the pressure gradient and the $\vec{j}\wedge\vec{B}$ force, without considering any contribution from velocity fields and, as a consequence, from the mass density. This scenario was appropriate for the pioneering experiments on plasma confinement and also for many later machines, but, since the early nineties, it became clear \cite{hassam93} that tokamak plasmas could be endowed with both toroidal and poloidal rotation fields (known as \emph{spontaneous rotation}), even reaching values comparable to the sound speed \cite{rice07}. The necessity to include velocity fields in the description of a tokamak equilibrium is increasingly relevant to achieve a satisfactory level of predictivity. In the next generation large sized tokamaks like ITER \cite{iter18}, JT-60SA \cite{giruzzi19}, DTT \cite{dtt19}, but also in present medium sized tokamak experiments, for instance MAST-U \cite{harrison19} and TCV \cite{coda17}, the request to include toroidal and poloidal velocity fields is a consequence of the spontaneous rotation phenomenon cited above, but also of the interaction of the plasma with additional heating systems, in particular when injection of hot neutral beams is utilized. Furthermore, the operation of the current machines in the H-mode regime seems to be favoured by transport barriers due to the radial electric field which lives in the equilibrium by virtue of the presence of poloidal velocity fields \cite{burrell97}. Studies to include the velocity contributions in tokamak equilibria are present in the literature, see for instance \cite{maschke,ogilvie,guazzotto05,greci}, and are mainly focused on the possibility to include toroidal motions of the plasma, by making use of the concepts of generalized pressure or Bernoulli functions. For more recent systematic approaches see \cite{farengo,guazzotto21}; moreover, a specific semi-analytical technique to investigate the influence on equilibria of, on the one hand, resistive diffusion and, on the other hand, the toroidal velocity field, has been considered in \cite{egse} and \cite{dpmarxiv}, respectively. In the present letter, we construct an equilibrium configuration of a plasma endowed simultaneously with toroidal and poloidal velocities, arriving at a specific reformulation of the standard Grad--Shafranov equation.
We base our analysis on two main assumptions: i) the poloidal velocity field follows the corresponding poloidal magnetic lines; ii) the plasma mass density depends on the magnetic flux function, or equivalently, the fluid can be regarded as incompressible. The obtained Grad--Shafranov-like equation has the peculiarity of being nonlinear in the poloidal magnetic field component, because the magnetic pressure gradient remains, together with the thermodynamical pressure gradient, in the force balance fixing the equilibrium. The implementation in a tokamak equilibrium configuration immediately outlines the expected deviation of the magnetic surfaces from the isobaric ones. As a consequence, three different regions of the plasma confinement can be identified: i) a region which is within the separatrix and the isobar of zero pressure (the most confined region); ii) the region which is out of the isobar of vanishing pressure, but which is still within the separatrix (a lower confinement condition); iii) the usual scrape--off layer region (where no real confinement exists). However, in general, the separatrix and the zero pressure region intersect each other and therefore more hybrid situations can take place, like the possibility for the plasma constituents to cross the isobar of vanishing pressure along the magnetic surfaces, and therefore, in principle, via parallel transport. However, the most interesting result of our analysis emerges when we implement the present scenario in a TCV-like configuration in the presence of a magnetic profile endowed with two (up and down) X-points \cite{tcvlike}. We constructed an equilibrium for such a double null profile and observed the formation of a closed line of zero pressure which encircles the two X-points. The presence of these two plasma lobes suggests that, in the presence of velocity fields, the plasma around the null is better confined than in the absence of any motion. This has plausible implications for the heat and particle transport from the X-point towards the divertor. In other words, the ``private region'' (\emph{i.e.} the region enclosed by the divertor surface and the magnetic configuration outer legs) turns out to be divided into two parts: one inside a lobe of zero pressure and one fully equivalent to the region commonly observed in the scrape--off layer. The existence of these symmetric (up and down) minor lobes can have an important role in protecting the divertor from excessive power transfers from the plasma, and it should be further investigated from many points of view. \section{Basic equations}\label{sec1} We consider the equations describing a steady ideal MHD fluid, characterized by a mass density $\rho$, a pressure $p$, a velocity field $\vec{v}$ and embedded in a magnetic field $\vec{B}$. We write the mass conservation equation and the ideal steady momentum conservation equation as \begin{align} \vec{\nabla}\cdot \left( \rho \vec{v}\right) = 0 \,, \label{equid1} \\ \rho \vec{v}\cdot \vec{\nabla}\vec{v} = - \vec{\nabla} \left(p + \frac{B^2}{2\upmu_0}\right) + \frac{1}{\upmu_0}\vec{B}\cdot \vec{\nabla}\vec{B} \,, \label{equid2} \end{align} where in the latter we separated the contributions of the magnetic pressure and tension. Combining the electron force balance, $\vec{E} = \vec{v}\,\wedge\vec{B}$, with the irrotational character of the static electric field, $\vec{\nabla}\wedge\vec{E}=0$, we get the equation \begin{equation} \vec{\nabla}\wedge \left( \vec{v}\,\wedge \vec{B}\right) = 0 \,.
\label{equid3} \end{equation} Finally, the temperature $T$ verifies the stationary equation \begin{equation} \vec{v}\cdot \vec{\nabla}T + \frac{2}{3}T\vec{\nabla}\cdot \vec{v} = 0 \,. \label{equid4} \end{equation} This configurational set is sufficient to properly characterize a plasma equilibrium, which we will specialize to axial symmetry. \subsection{Axisymmetric equilibrium}\label{sec1.1} We now consider axial symmetry in cylindrical coordinates $\{ r,\phi ,z\}$, so that all partial derivatives $\partial_{\phi}(...)$ vanish. Under this hypothesis, Eq.(\ref{equid1}) has the form \begin{equation} \frac{1}{r}\partial_r\left(\rho rv_r\right) + \partial_z\left(\rho v_z\right) = 0 \,, \label{equid5} \end{equation} which admits the solution \begin{equation} \rho\vec{v} = -\frac{1}{r}\partial_z\Theta \hat{e}_r + \rho\omega r\hat{e}_{\phi} + \frac{1}{r}\partial_r\Theta \hat{e}_z \,, \label{equid6} \end{equation} $\Theta$ and $\omega$ being generic functions, the latter denoting the angular velocity of the plasma. We now consider a magnetic field having the form \begin{equation} \vec{B} = -\frac{1}{r}\partial_z\psi \,\hat{e}_r + rK(\psi ) \,\hat{e}_{\phi} + \frac{1}{r}\partial_r\psi \,\hat{e}_z \,, \label{equid7} \end{equation} where $\psi$ is the magnetic flux function (times $1/2\pi$) and $K$ denotes a generic function. Since, in axial symmetry, the steady toroidal component of the electric field $E_{\phi}$ must vanish, according to Eqs.(\ref{equid6}) and (\ref{equid7}) we get \begin{equation} E_{\phi} = - \frac{1}{\rho r^2}\left( \partial_z\Theta\partial_r\psi - \partial_r\Theta\partial_z\psi \right) = 0 \,, \label{equid8} \end{equation} which implies the relation $\Theta = \Theta (\psi)$. Hence, comparing Eqs.(\ref{equid6}) and (\ref{equid7}), we immediately see the validity of the following relation: \begin{equation} \rho \vec{v}_p\equiv \rho \left( v_r\hat{e}_r + v_z\hat{e}_z\right) = \frac{d\Theta}{d\psi}\vec{B}_p \,, \label{equid9} \end{equation} where $\vec{v}_p$ and $\vec{B}_p$ denote the poloidal components of the velocity and the magnetic field, respectively. We now adopt the simplifying assumption $\rho = \rho (\psi )$, which implies, according to Eq.(\ref{equid1}), the two basic relations \begin{equation} \vec{v}_p\cdot \vec{\nabla}\rho = 0 \, ; \quad \vec{\nabla}\cdot \vec{v} = 0 \,. \label{equid10} \end{equation} It is immediate to recognize that we also have \begin{equation} \rho \vec{v}_p\cdot \vec{\nabla}\vec{v}_p = \frac{1}{\rho}\left(\frac{d\Theta}{d\psi}\right)^2 \vec{B}_p\cdot \vec{\nabla}\vec{B}_p \,. \label{equid11} \end{equation} Thus, the poloidal magnetic tension can be cancelled in Eq.(\ref{equid2}) by imposing the condition \begin{equation} \frac{d\Theta}{d\psi} = \pm \sqrt{\frac{\rho (\psi )}{\upmu_0}} \,. \label{equid12} \end{equation} This condition also allows one to cancel the nonlinear poloidal advection term from the steady momentum equation. Eq.(\ref{equid4}) for the plasma temperature, taking (\ref{equid10}) into account, gives \begin{equation} \vec{v}_p\cdot \vec{\nabla}T = 0 \, \Rightarrow T = T(\psi ) \,. \label{equid13} \end{equation} Analogously, the toroidal component of Eq.(\ref{equid3}) (the poloidal one is now an identity) yields \begin{equation} \partial_r\omega \partial_z\psi - \partial _z\omega\partial_r\psi = 0 \, \Rightarrow \omega = \omega (\psi) \,, \label{equid14} \end{equation} which expresses the corotation theorem \cite{ferraro37}.
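The structure of the axisymmetric ansatz can be checked symbolically. The following sympy sketch verifies that the poloidal parts of (\ref{equid6}) and (\ref{equid7}) satisfy Eq.(\ref{equid5}) and $\vec{\nabla}\cdot\vec{B}=0$ identically, and that $E_{\phi}$ in Eq.(\ref{equid8}) (up to the overall $1/\rho$ factor) vanishes once $\Theta=\Theta(\psi)$:

\begin{verbatim}
import sympy as sp

r, z = sp.symbols('r z', positive=True)
psi   = sp.Function('psi')(r, z)
Theta = sp.Function('Theta')(r, z)

# poloidal parts of rho*v, Eq. (equid6), and of B, Eq. (equid7)
rhov_r, rhov_z = -Theta.diff(z)/r, Theta.diff(r)/r
B_r,    B_z    = -psi.diff(z)/r,   psi.diff(r)/r

print(sp.simplify((r*rhov_r).diff(r)/r + rhov_z.diff(z)))  # 0: Eq. (equid5)
print(sp.simplify((r*B_r).diff(r)/r + B_z.diff(z)))        # 0: div B = 0

# E_phi vanishes identically for Theta = f(psi) by the chain rule
Th = sp.Function('f')(psi)
E_phi = -(Th.diff(z)*psi.diff(r) - Th.diff(r)*psi.diff(z))/r**2
print(sp.simplify(E_phi))                                  # 0
\end{verbatim}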
\subsection{Momentum conservation equation}\label{sec2} Now we analyze the momentum conservation equation (\ref{equid2}), starting from its toroidal component: \begin{equation} \partial_z\psi \left(\mp\frac{\omega}{r}\sqrt{\frac{\rho}{\upmu_0}}+\frac{K}{\upmu_0 r}\right) = -\rho\omega^2 r+\frac{rK^2}{\upmu_0} \,, \label{equid15} \end{equation} where we used Eq.(\ref{equid12}). Both sides of Eq.(\ref{equid15}) factorize through the combination $(K \mp \omega\sqrt{\upmu_0\rho})$; since the relation must hold for a generic flux function, this common factor has to vanish, and we find the simple relation for $K(\psi)$: \begin{equation} K(\psi ) = \upmu_0 \omega \frac{d\Theta}{d\psi} = \pm \omega \sqrt{\upmu_0 \rho} \,. \label{equid16} \end{equation} Finally, we study the poloidal component of Eq.(\ref{equid2}), that is \begin{equation} -r\left(\rho\omega^2-\frac{2K^2}{\upmu_0} \right) \hat{e}_r = -\vec{\nabla}\left( p + \frac{B_p^2}{2\upmu_0}\right) - \frac{r^2}{2\upmu_0}\frac{dK^2}{d\psi}\vec{\nabla}\psi \,, \label{equid17} \end{equation} which, after some algebra and making use of Eq.(\ref{equid16}), can be rewritten in the following convenient form: \begin{equation} \vec{\nabla}\left( p + \frac{B_p^2}{2\upmu_0} + \rho\frac{\omega^2r^2}{2} \right) = 0 \,. \label{equid18} \end{equation} This equation clearly implies the existence of a constant quantity equal to the expression in brackets, which we label $E$. Its value can be determined considering the magnetic axis, \emph{i.e.} the point $A(r_A,z_A)$ where the magnetic flux function takes its maximum value and where its derivatives (and hence $\vec{B}_p$) vanish. Hence we write the expression for $E$, along with the final equation for the plasma equilibrium, as: \begin{align} E \equiv p_A + \rho_A \frac{\omega_A^2r_A^2}{2}\,, \label{equid19} \\ \left(\partial_r\psi\right)^2 +\left(\partial_z\psi\right)^2 = 2 \upmu_0 r^2 \left[ E - p(r,z) - \rho \frac{\omega^2 r^2}{2} \right] \,, \label{equid20} \end{align} where we introduced the notation $f_A=f(r_A,z_A)$ for the quantities calculated at the magnetic axis. Eq.(\ref{equid20}) is the configurational equation which allows one to determine the equilibrium features of the considered plasma in the presence of matter flows. \section{Tokamak relevant solution}\label{sec3} We now search for a solution of the equilibrium which can be relevant in the context of a tokamak device. As a first step, in order to work with dimensionless variables, let us introduce the following normalizations and definitions: \begin{align} r_n = \frac{r}{r_A} \,,\,\, z_n = \frac{z}{r_A} \,,\,\, \omega_n = \frac{\omega}{\omega_A} \,,\,\, \rho_n = \frac{\rho}{\rho_A} \,,\,\, B_n = \frac{B}{B_A} \,, \nonumber \\ \psi_n = \frac{\psi}{B_A r_A^2} \,,\,\, p_n = 2\upmu_0 \frac{p}{B_A^2} \,,\,\, \chi_A \equiv \frac{\upmu_0 \rho_A \omega_A^2 r_A^2}{B_A^2} \,. \nonumber \end{align} Using these variables, and dropping all subscripts $n$ from the notation for simplicity, Eq.(\ref{equid20}) can be recast as: \begin{equation} \left(\partial_r\psi\right)^2 +\left(\partial_z\psi\right)^2 = r^2 \left[ p_A - p(r,z) + \chi_A (1 - \rho \omega^2 r^2 ) \right] \,. \label{equid21} \end{equation} The usual approach to solving the Grad--Shafranov equation analytically is to assign a specific functional form for the source terms on the right--hand side (here $p$, $\rho$ and $\omega$), and then to solve the differential equation for the unknown function $\psi$. This method is natural when the equilibrium equation is linear in the magnetic flux $\psi$, and the source terms are functions of $\psi$ itself, allowing for suitable linearizations of the equation.
In the present analysis, neither of these conditions holds; therefore, we look for a more convenient procedure, appropriate to evaluate the influence of the velocity field on the pressure profile. If we choose the pressure $p$ as the unknown function, we can rewrite Eq.(\ref{equid21}) as: \begin{equation} p(r,z) = p_A + \chi_A (1 - \rho \omega^2 r^2 ) - \frac{\left(\partial_r\psi\right)^2 +\left(\partial_z\psi\right)^2}{r^2} \,. \label{equid22} \end{equation} Now, if the magnetic flux function is known, we can compute its derivatives and plug them into the right--hand side; then, after assuming a reasonable functional form for $\rho$ and $\omega$, the pressure is readily obtained and its morphology can be studied. One way to obtain the function $\psi$ is to solve an equivalent equilibrium problem in the absence of plasma motion. In particular, we consider the well--known Solov'ev configuration, governed by \begin{equation} \Delta^*\psi + S_1 r^2 + S_2 = 0 \,, \label{equid23} \end{equation} where $S_1$ and $S_2$ are constants linked to the plasma pressure and toroidal magnetic field \cite{solovev}. Hence we can write $\psi = \psi_0 - S_1 r^4/8 - S_2 z^2/2$, \emph{i.e.} the sum of the homogeneous and a particular solution, where $\psi_0$ satisfies $\Delta^*\psi_0 = 0$. Let us consider the following homogeneous solution for the up--down symmetric case \cite{egse,dpmarxiv}: \begin{equation} \psi_0(r,z) = r \sum_{i=1}^N \left[c_i I_1(r k_i) + d_i K_1(r k_i) \right] \cos(k_i z)\,, \end{equation} where $I_1$, $K_1$ denote the modified Bessel functions of order 1. $\{k_i\}$ is a set of $N$ arbitrary wavenumbers, which can roughly be taken of order $\pi/\kappa$, $\kappa$ being the elongation of the plasma configuration. Then, the constants $c_i$, $d_i$ are determined by assigning a sufficient number of boundary points along the plasma magnetic separatrix, where $\psi=\psi_B$ (some constant value), and solving the resulting set of algebraic equations (this step is performed through a dedicated Mathematica code). Once the function $\psi$ is known, we need to fix the dependence of the plasma density and rotation frequency. At this point, an important consideration is in order. According to Eq.(\ref{equid16}), the toroidal magnetic field $B_{\phi} = rK(\psi)$ is proportional to $\sqrt{\rho} \omega$. In tokamaks, the plasma equilibrium solution within the separatrix is necessarily linked to the vacuum solution of the surrounding region (the scrape--off layer) by matching boundary conditions. In the vacuum region, the magnetic field is essentially equal to the one generated by the external machine coils, thus it is generally different from zero. We conclude that in the present scenario $\rho$ and $\omega$ cannot vanish on the plasma magnetic separatrix, since this would imply a discontinuous geometry of the magnetic field lines. In other words, our solution is associated with density and rotation profiles which are non-vanishing along the magnetic separatrix: the details related to such a behaviour are not investigated further here, although they can be taken as a starting point to study H-mode plasma regimes, which are usually characterized by profiles with pedestals in the proximity of the separatrix.
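To make the boundary--value fit for the constants $c_i$, $d_i$ described above concrete, we sketch below a minimal numerical transcription of the procedure (in our work this step is performed by a dedicated Mathematica code; the Python version here is purely illustrative, and the separatrix shape, the Solov'ev constants, the elongation and the wavenumbers are placeholder values). \begin{verbatim}
# Minimal sketch (illustrative stand-in for the dedicated Mathematica code)
# of the fit fixing c_i, d_i in the homogeneous Solov'ev solution
#   psi0(r,z) = r * sum_i [c_i I1(k_i r) + d_i K1(k_i r)] cos(k_i z),
# with psi = psi0 - S1 r^4/8 - S2 z^2/2 equal to psi_B on the separatrix.
import numpy as np
from scipy.special import i1, k1

S1, S2, psi_B = 1.0, 0.5, 0.0            # placeholder Solov'ev constants
kappa = 1.6                              # assumed elongation
N = 3
k = np.pi / kappa * np.arange(1, N + 1)  # wavenumbers of order pi/kappa

# 2N boundary points (r_j, z_j) sampled on a toy up-down symmetric separatrix
theta = np.linspace(0.0, np.pi, 2 * N)
rb = 1.0 + 0.3 * np.cos(theta)
zb = kappa * 0.3 * np.sin(theta)

# Linear system: psi0(r_j, z_j) = psi_B + S1 r_j^4/8 + S2 z_j^2/2
A = np.zeros((2 * N, 2 * N))
for j, (r, z) in enumerate(zip(rb, zb)):
    A[j, :N] = r * i1(k * r) * np.cos(k * z)   # columns multiplying c_i
    A[j, N:] = r * k1(k * r) * np.cos(k * z)   # columns multiplying d_i
b = psi_B + S1 * rb**4 / 8.0 + S2 * zb**2 / 2.0

coeff, *_ = np.linalg.lstsq(A, b, rcond=None)  # c_i = coeff[:N], d_i = coeff[N:]

def psi(r, z):
    """Full flux function: homogeneous part plus the particular solution."""
    psi0 = r * np.sum((coeff[:N] * i1(k * r)
                       + coeff[N:] * k1(k * r)) * np.cos(k * z))
    return psi0 - S1 * r**4 / 8.0 - S2 * z**2 / 2.0
\end{verbatim} Once $\psi$ is available on a grid, Eq.(\ref{equid22}) directly yields the pressure morphology for any assigned $\rho(\psi)$ and $\omega(\psi)$.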
Let us consider the following normalized expressions: \begin{equation} \rho(\psi) = \frac{\rho_0 + \psi^2}{\rho_0 + 1} \,,\,\, \omega(\psi) = \frac{\omega_0 + \psi^2}{\omega_0 + 1} \,, \label{equid24} \end{equation} where the constants $\rho_0$ and $\omega_0$ can be assigned to control the pedestal height. In order to introduce the main features of our solution, let us show two illustrative cases, where we reproduce two generic plasma configurations in the absence of X-points (limited plasma), characterized by positive and negative triangularity, respectively (Figs.\ref{fig1} and \ref{fig2}). We emphasize the expected behaviour of the separation between surfaces of constant $\psi$ and $p$, due to plasma motion. \begin{figure}[ht] \centering \begin{minipage}[b]{0.48\linewidth} \includegraphics[width=0.91\linewidth]{mathematica/fig1c}\vspace{-1pt} \end{minipage} \quad \begin{minipage}[b]{0.45\linewidth} \includegraphics[width=0.9\linewidth]{mathematica/fig1a}\vspace{5pt}\\ \includegraphics[width=0.9\linewidth]{mathematica/fig1b} \end{minipage} \caption{(Left) Plasma magnetic separatrix (solid black) and isobar of null pressure (dashed blue) of a generic limiter plasma scenario with triangularity $\delta=0.35$. (Right) Profiles of normalized plasma density $\rho_n$ (top) and rotation velocity $r_n \omega_n$ (bottom) along the plasma equator.} \label{fig1} \end{figure} \begin{figure}[ht] \centering \begin{minipage}[b]{0.48\linewidth} \includegraphics[width=0.91\linewidth]{mathematica/fig2c}\vspace{-1pt} \end{minipage} \quad \begin{minipage}[b]{0.45\linewidth} \includegraphics[width=0.9\linewidth]{mathematica/fig2a}\vspace{5pt}\\ \includegraphics[width=0.9\linewidth]{mathematica/fig2b} \end{minipage} \caption{(Left) Plasma magnetic separatrix (solid black) and isobar of null pressure (dashed blue) of a generic limiter plasma scenario with triangularity $\delta=-0.35$. (Right) Profiles of normalized plasma density $\rho_n$ (top) and rotation velocity $r_n \omega_n$ (bottom) along the plasma equator.} \label{fig2} \end{figure} The situation becomes interesting when we study a plasma scenario compatible with the TCV machine regime of operation \cite{tcvlike}. In particular, we assign a double--null magnetic geometry with high positive triangularity, with its X-points close to the machine first wall (Fig.\ref{fig3}). \begin{figure}[ht] \centering \begin{minipage}[b]{0.48\linewidth} \includegraphics[width=0.91\linewidth]{mathematica/fig3c}\vspace{-1pt} \end{minipage} \quad \begin{minipage}[b]{0.45\linewidth} \includegraphics[width=0.9\linewidth]{mathematica/fig3a}\vspace{5pt}\\ \includegraphics[width=0.9\linewidth]{mathematica/fig3b} \end{minipage} \caption{(Left) Plasma magnetic separatrix (solid black) and isobar of null pressure (dashed blue) of a TCV-like equilibrium in the presence of two (symmetric) X-points. (Right) Profiles of normalized plasma density $\rho_n$ (top) and rotation velocity $r_n \omega_n$ (bottom) along the plasma equator.} \label{fig3} \end{figure} Here the most striking feature is the behaviour of the curve of null pressure, which, departing from the magnetic separatrix, encircles the X-points, defining three distinct plasma confinement regions: \begin{enumerate} \item The region close to the plasma magnetic axis where $\psi > \psi_B$, $p > 0$ is qualitatively unaffected with respect to the static case, and corresponds to the usual plasma confinement condition. 
\item The region inside the magnetic separatrix close to the inner and outer plasma edges, where $\psi > \psi_B$ but which lies outside of the positive pressure region, seemingly determining a deterioration of the plasma confinement due to the absence of a pressure gradient in the force balance. \item The region enclosed in the $p>0$ surface but outside of the magnetic separatrix, and especially its intersection with the ``private region'' of the plasma, that is, the region defined by the divertor surface and the magnetic legs outside of the main plasma region. \end{enumerate} Concerning this latter case, our solution shows how, in this particular regime of toroidal and poloidal plasma motion, a confining effect due to the plasma pressure lines is acting in the private region. Due to this enhanced confinement, the macroscopic effect would be that of relieving the particle and power exhaust of the plasma towards the divertor, with possible useful applications in upcoming devices where the management of the power output will be a main concern. \section{Concluding remarks} We constructed a new type of plasma equilibrium in axial symmetry, based on the assumption that the poloidal velocity and magnetic flux lines coincide along the configuration. Furthermore, we considered a plasma mass density dependent only on the magnetic flux function, basically assuming an incompressible plasma. In this scenario, we arrived at a generalization of the Grad--Shafranov equation \cite{landau,grad,shafranov}, with the peculiar feature that it turns out to be nonlinear in the gradient of the magnetic flux function. This can be explained by the fact that we were able to simultaneously cancel the nonlinear poloidal velocity term and the magnetic tension contribution. Thus, the resulting equilibrium equation contains the magnetic pressure only, from which the aforementioned nonlinearity arises. We then applied the obtained axisymmetric configuration to a tokamak-relevant situation and, as expected, we saw that the presence of velocity fields induces a deviation of the isobaric surfaces (in particular the one at zero pressure) from the morphology of the closed magnetic surfaces (in particular the separatrix). This fact is not surprising, but it was a relevant achievement to consider the implications of such a discrepancy between the separatrix and the zero pressure surface with respect to the presence of X-points in the magnetic profile. In fact, analyzing configurations close to the operational conditions of the TCV device, we could observe a very interesting phenomenon, consisting in the inclusion of the X-points of a double--null scenario inside the surface of vanishing pressure. More specifically, such a surface envelops these two symmetric (up and down) points, shielding the wall of the machine from power transfers to a degree that should be quantified by further studies. The private region is consequently divided into two parts: the part close to the divertor retains the usual morphology of the scrape--off layer, while the portion closer to the X-point acquires a pressure barrier, since inside the surface $p=0$ the thermodynamical pressure supports the plasma confinement.
This result can impact the operation of a tokamak with diverted plasma, especially medium-sized ones, due to the presence of higher velocity fields, since the possibility to reach such configurations with ``protected X-points'' could significantly reduce the heat and particle fluxes towards the divertor, favouring a better control of the power exhaust during the discharges.
\section{Introduction} \label{intro} The characterization of the flavor conversions for neutrinos emitted by a stellar collapse is a field of intense activity. In particular, the flavor transformation probabilities in supernovae (SNe) not only depend on the matter background~\cite{Matt,Dighe:1999bi}, but also on the neutrino fluxes themselves: neutrino-neutrino interactions provide a nonlinear term in the equations of motion~\cite{Pantaleone:1992eq, Sigl:1992fn,McKellar:1992ja} that causes collective flavor transformations~\cite{Qian:1994wh,Samuel:1993uw, Kostelecky:1993dm, Kostelecky:1995dt, Samuel:1996ri, Pastor:2001iu, Wong:2002fa, Abazajian:2002qx, Pastor:2002we, Sawyer:2004ai, Sawyer:2005jk}. Only recently~\cite{Duan:2005cp,Duan:2006an,Hannestad:2006nj} has it been fully appreciated that in the SN context these collective effects give rise to qualitatively new phenomena (see, e.g.,~\cite{Duan:2010bg} for a recent review). The main consequence of this unusual type of flavor transitions is an exchange of the spectrum of the electron species $\nu_e$ (${\bar\nu}_e$) with the non-electron ones $\nu_x$ (${\bar\nu}_x$) in certain energy intervals. These flavor exchanges are called ``swaps'', marked by the ``splits'', which are the boundary features at the edges of each swap interval~\cite{Duan:2006an,Duan:2010bg, Fogli:2007bk,Fogli:2008pt, Raffelt:2007cb, Raffelt:2007xt, Duan:2007bt, Duan:2008za, Gava:2008rp, Gava:2009pj,Dasgupta:2009mg,Friedland:2010sc,Dasgupta:2010cd}. The location and the number of these splits, as well as their dependence on the neutrino mass hierarchy, are crucially dependent on the flux ordering among different neutrino species~\cite{Fogli:2009rd,Choubey:2010up}. In this context, one of the main complications in the simulation of the flavor evolution is that the flux of neutrinos emitted from a supernova core is far from isotropic. The current-current nature of the weak-interaction Hamiltonian implies that the interaction energy between neutrinos of momenta ${\bf p}$ and ${\bf q}$ is proportional to $(1-{\bf v}_{\bf p} \cdot {\bf v}_{\bf q})$, where ${\bf v}_{\bf p}$ is the neutrino velocity~\cite{Qian:1994wh,Pantaleone:1992xh}. In a non-isotropic medium this velocity-dependent term would not average to zero, producing a different refractive index for neutrinos propagating on different trajectories. This is the origin of the so-called ``multi-angle effects''~\cite{Duan:2006an}, which hinder the maintenance of the coherent oscillation behavior for different neutrino modes~\cite{Raffelt:2007yz,Fogli:2007bk,EstebanPretel:2007ec,Sawyer:2008zs}. In~\cite{Raffelt:2007yz} it has been shown that in a dense neutrino gas initially composed of only $\nu_e$ and $\overline\nu_e$ with equal fluxes, multi-angle effects would rapidly lead to flavor decoherence, resulting in flux equilibration among electron and non-electron (anti)neutrino species. On the other hand, in the presence of relevant flavor asymmetries between $\nu_e$ and ${\overline \nu}_e$, multi-angle effects can be suppressed. In particular, during the early SN accretion phase, one expects as neutrino flux ordering $\Phi_{\nu_e}\gg \Phi_{\overline\nu_e}\gg \Phi_{\nu_x}= \Phi_{\overline\nu_x}$~\cite{Raffelt:2003en, Fischer:2009af, Huedepohl:2009wh}, defined in terms of the total neutrino number fluxes $\Phi_\nu$ for the different flavors.
This case would practically correspond to neutrino spectra with a \emph{single crossing} point at $E \to \infty$ (where $F_{\nu_e}= F_{\nu_x}$, and $F_{\overline\nu_e}= F_{\overline\nu_x}$) since $F_{\nu_e}(E) > F_{\nu_x}(E)$ for all the relevant energies (and analogously for $\bar\nu$). For spectra with a single crossing, it has been shown in~\cite{EstebanPretel:2007ec} that the presence of significant flavor asymmetries between $\nu_e$ and $\overline\nu_e$ fluxes guarantees the synchronization of different angular modes at low radii ($r\lesssim 100$~km), so that essentially nothing happens close to the neutrinosphere because the in-medium mixing is extremely small. Therefore, the possible onset of multi-angle effects is delayed until after the synchronization phase. Then, the flavor evolution is adiabatic enough to produce spectral splits and swaps, but not enough to allow the multi-angle instability to grow and produce significant decoherence effects. Therefore, the resultant neutrino flavor conversions would be described by an effective ``quasi single-angle'' behavior. In this case, the self-induced spectral swaps and splits would be only marginally smeared by multi-angle effects, as explicitly shown in~\cite{Duan:2006an,Fogli:2007bk,Fogli:2008pt}. This reassuring result has been taken for granted in most of the further studies, which typically adopted the averaged single-angle approximation. However, this nice picture does not represent the end of the story for multi-angle effects.\footnote{We mention that recently multi-angle effects have been included also in the study of the flavor evolution of the $\nu_e$ neutronization burst in O-Ne-Mg supernovae. We address the interested reader to~\cite{Cherry:2010yc}.} A different result has been recently shown in~\cite{Duan:2010bf}, where multi-angle effects are explored assuming a flux ordering of the type $\Phi_{\nu_x} \gtrsim \Phi_{\nu_e} \gtrsim \Phi_{\overline\nu_e}$, possible during the SN cooling phase, when one expects a moderate flavor hierarchy among the different species, so that a ``cross-over'' between non-electron and electron species is possible~\cite{Raffelt:2003en, Fischer:2009af, Huedepohl:2009wh}. This case would correspond to neutrino spectra with \emph{multiple crossing} points, i.e. with $F_{\nu_e}(E) > F_{\nu_x}(E)$ at lower energies, and $F_{\nu_e}(E) < F_{\nu_x}(E)$ at higher energies (and analogously for $\bar\nu$). For such spectra, it has been shown in~\cite{Raffelt:2008hr} that the synchronization is not a stable solution for a neutrino gas in the presence of a large neutrino density. Therefore, collective flavor conversions would be possible at low radii in the single-angle scheme~\cite{Raffelt:2008hr}, in a region where one would have naively expected synchronization. However, it has been shown in~\cite{Duan:2010bf} that the presence of a large dispersion in the neutrino-neutrino refractive index, induced by multi-angle effects, seems to block the development of these collective flavor conversions close to the neutrinosphere. This recent result extends the finding obtained with a toy model in~\cite{Raffelt:2008hr}. The delay of the self-induced flavor conversions for this case is also visible in the multi-angle simulations in~\cite{animations}. So, it is apparent that multi-angle effects are relevant not only for fluxes with small flavor asymmetries, where they trigger a quick flavor \emph{decoherence}, but also in cases of spectra with multiple crossing points, where multi-angle effects can \emph{suppress} flavor conversions at low radii.
Triggered by the contrasting impact of the multi-angle effects for fluxes typical of the accretion and cooling phase, we take a closer look at the dependence of these effects on the neutrino flux ordering. The plan of our work is as follows. In Sec.~II we introduce our supernova flux models, and describe the equations for the flavor conversions in the multi-angle and single-angle case. In Sec.~III we show and explain our numerical results for the single-angle and multi-angle flavor evolution for some representative choices of SN neutrino fluxes. In particular, we select three cases corresponding respectively to $a)$ a single-crossed neutrino spectrum with $\Phi_{\nu_e}\gg \Phi_{\overline\nu_e}\gg \Phi_{\nu_x}$, producing a ``quasi single-angle'' flavor evolution, $b)$ a multiple-crossed spectrum with $\Phi_{\nu_x} \gtrsim \Phi_{\nu_e} \gtrsim \Phi_{\overline\nu_e}$, where single-angle and multi-angle evolutions give significantly different final neutrino spectra, $c)$ small flavor asymmetries, i.e. $\Phi_{\nu_e} \approx \Phi_{\overline\nu_e}\approx\Phi_{\nu_x}$, where the multi-angle suppression is small, and multi-angle decoherence produces a partial flavor equilibration among the different species. Finally, in Sec.~IV we draw inferences from our results and summarize. Technical aspects are discussed in the Appendix. \section{Supernova neutrino fluxes and equations of motion} \subsection{Supernova flux models} In the presence of neutrino-neutrino interactions the flavor conversions for SN neutrinos are described by non-linear equations. Therefore, SN neutrino oscillations will crucially depend on the initial neutrino fluxes. We define $F_{\nu_\alpha}$ as the number flux of a given neutrino species $\nu_{\alpha}$ emitted with energy $E$ in any direction at the neutrinosphere. In a supernova $\nu_e$ and ${\bar\nu}_e$ are distinguished from other flavors due to their charged-current interactions. The $\nu_\mu$, $\nu_\tau$ and their antiparticles, on the other hand, are produced at practically identical rates. Following the standard terminology~\cite{Dasgupta:2007ws}, we define the two relevant non-electron flavor states as $\nu_{x,y} = \cos \theta_{23}\nu_\mu \mp \sin \theta_{23}\nu_\tau$, where $\theta_{23}\simeq\pi/4$ is the atmospheric mixing angle. Since the initial $\nu_x$ and $\nu_y$ fluxes are identical, the primary neutrino fluxes are best expressed in terms of $\nu_e$, ${\bar\nu}_e$ and $\nu_x$. For half-isotropic emission, the original number fluxes of these three relevant SN $\nu$ species are given by~\cite{Fogli:2007bk} \begin{equation} \label{Ybeta} F_{\nu_\alpha}^0(E)= {\Phi^0_{{\nu_\alpha}}}\,\varphi_{\nu_\alpha}(E)\ , \end{equation} where \begin{equation} \Phi_{{\nu_\alpha}}^0 = \frac{1}{4 \pi^2 R^2}\frac{L_{{\nu_\alpha}}}{\langle E_{\nu_\alpha}\rangle} \end{equation} is the total number flux at the neutrinosphere radius $R$, which in our numerical examples we take to be $R=10$~km. Here $L_{{\nu_\alpha}}$ is the neutrino luminosity and $\langle E_{\nu_\alpha}\rangle$ the neutrino average energy. The function $\varphi_{\nu_\alpha}(E)$ is the normalized neutrino spectrum ($\int dE \; \varphi_{\nu_\alpha}=1$) and parametrized as~\cite{Keil:2002in}: \begin{equation} \label{varphi} \varphi_{\nu_\alpha}(E)=\frac{\beta^\beta}{ \Gamma(\beta)}\frac{E^{\beta-1}}{\langle E_{\nu_\alpha} \rangle^{\beta}} e^{-\beta E/\langle E_{\nu_\alpha}\rangle}\; , \end{equation} where $\beta$ is a spectral parameter, and $\Gamma(\beta)$ is the Euler gamma function.
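As a sanity check of Eq.~(\ref{varphi}), the following snippet (purely illustrative, not part of our simulation codes) verifies numerically that $\varphi_{\nu_\alpha}$ integrates to unity and reproduces the assigned $\langle E_{\nu_\alpha}\rangle$ for the average energies adopted below: \begin{verbatim}
# Illustrative numerical check of the quasi-thermal spectrum parametrization:
# unit normalization and mean energy, for beta = 4.
import numpy as np
from scipy.special import gamma
from scipy.integrate import quad

beta = 4.0  # spectral parameter used in our numerical examples

def varphi(E, Eavg):
    """beta^beta/Gamma(beta) * E^(beta-1)/Eavg^beta * exp(-beta*E/Eavg)."""
    return (beta**beta / gamma(beta) * E**(beta - 1) / Eavg**beta
            * np.exp(-beta * E / Eavg))

for Eavg in (12.0, 15.0, 18.0):   # MeV, average energies adopted below
    norm, _ = quad(varphi, 0.0, np.inf, args=(Eavg,))
    mean, _ = quad(lambda E: E * varphi(E, Eavg), 0.0, np.inf)
    print(f"<E> = {Eavg:5.1f} MeV: norm = {norm:.6f}, mean = {mean:.3f} MeV")
# Each integral returns 1 and each mean returns <E>, as expected.
\end{verbatim}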
The values of the parameters are model dependent (e.g. see Fig.~3 in~\cite{Fogli:2009rd}). For our numerical illustrations, we choose \begin{equation} \label{Eave} (\langle E_{\nu_e} \rangle,\;\langle E_{\bar\nu_e} \rangle,\;\langle E_{\nu_x} \rangle ) = (12,\;15,\;18)\ \mathrm{MeV}\ , \end{equation} and $\beta=4$, within the admissible parameter ranges~\cite{Keil:2002in}. We will consider three representative cases for the ratios of the neutrino fluxes, namely \begin{eqnarray} \Phi^0_{\nu_e}\;:\;\Phi^0_{{\bar{\nu}}_e}\;:\;\Phi^0_{\nu_x} &=& 2.40\;:\;1.60\;:\;1.0 \,\ , \nonumber \\ \Phi^0_{\nu_e}\;:\;\Phi^0_{{\bar{\nu}}_e}\;:\;\Phi^0_{\nu_x} &=& 0.85\;:\;0.75\;:\;1.0 \,\ , \nonumber \\ \Phi^0_{\nu_e}\;:\;\Phi^0_{{\bar{\nu}}_e}\;:\;\Phi^0_{\nu_x} &=& 0.81\;:\;0.79\;:\;1.0 \,\ . \label{eq:cases} \end{eqnarray} The first case represents a flux ordering with a $\nu_e$ dominance, typical of the accretion phase, practically producing a single-crossed spectrum. The other two cases represent fluxes possible during the cooling phase, with a moderate $\nu_x$ dominance and different flavor asymmetries, producing a multiple-crossed spectrum. We will see how multi-angle effects have a different impact in these three cases. \subsection{Equations of motion} Mixed neutrinos are described by matrices of density $\rho_{\bf p}$ and ${\bar \rho}_{\bf p}$ for each (anti)neutrino mode. The diagonal entries are the usual occupation numbers whereas the off-diagonal terms encode phase information. We are studying the spatial evolution of the neutrino fluxes in a quasi-stationary situation. Therefore, the matrices $\rho_{\bf p}$ do not explicitly depend on time, so that the evolution reduces to the Liouville term involving only spatial derivatives. Moreover, we assume spherical symmetry so that the only relevant spatial variable is the radial coordinate $r$. In this case, the explicit form of the equations of motion (EoMs) has been obtained in~\cite{Sigl:1992fn,McKellar:1992ja,Hannestad:2006nj,EstebanPretel:2007ec,Dasgupta:2008cu,Duan:2008fd}: \begin{equation} \textrm{i} {\bf v}_{\bf p} \cdot {\nabla}_r \rho_{\bf p} = [{\sf H}_{\bf p}, \rho_{\bf p}] \,\ , \label{eomMA} \end{equation} where ${\bf v}_{\bf p}$ is the velocity and the Hamiltonian reads \begin{equation} {\sf H}_{\bf p} = {\sf\Omega}_{\bf p} + {\sf V}_{\rm MSW} + {\sf V}_{\nu\nu} \,\ . \label{eq:Ham} \end{equation} In a three-flavor scenario, the matrix of the vacuum oscillation frequencies for neutrinos is ${\sf \Omega}_{\bf p}= \textrm{diag}(m_1^2,m_2^2,m_3^2)/2|{\bf p}|$ in the mass basis. For antineutrinos ${\sf \Omega}_{\bf p} \to -{\sf \Omega}_{\bf p}$. It will prove convenient to cast the matrix of vacuum oscillation frequencies in its traceless form \begin{equation} {\sf \Omega}_\omega = -\frac{\omega}{\sqrt{3}} \,\ \lambda_8 - \alpha \frac{\omega}{2} \lambda_3 \end{equation} where $\lambda_3$ and $\lambda_8$ are the two diagonal Gell-Mann matrices, which read respectively~\cite{Dasgupta:2007ws} \begin{eqnarray} \lambda_3 &=& \textrm{diag}(1,-1,0) \nonumber \,\ , \\ \lambda_8 &=& \frac{1}{\sqrt{3}} \textrm{diag}(1,1,-2) \nonumber \,\ . \end{eqnarray} The vacuum oscillation frequency \begin{equation} \omega = \frac{\Delta m^2_{\rm atm}}{2E} \end{equation} is associated with the atmospheric mass-squared difference ${\Delta m^2_{\rm atm}} = m_3^2-m_{1,2}^2$.
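To fix the scales involved (a simple numerical aside, anticipating the mass-squared difference adopted below), in convenient units the vacuum oscillation frequency reads \begin{equation} \omega \simeq 0.51~\textrm{km}^{-1} \left(\frac{|\Delta m^2_{\rm atm}|}{2\times 10^{-3}~\textrm{eV}^2}\right) \left(\frac{10~\textrm{MeV}}{E}\right) \,\ , \nonumber \end{equation} which sets the ${\cal O}(0.1\div 1)$~km$^{-1}$ scale against which the neutrino-neutrino interaction strength will be compared in the following.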
The mass hierarchy parameter is \begin{equation} \alpha \equiv \textrm{sgn}(\Delta m^2_{\rm atm}) \frac{\Delta m^2_{\rm sol}}{{\Delta m^2_{\rm atm}}} \,\ , \end{equation} ${\Delta m^2_{\rm sol}} = m_2^2-m_{1}^2$ being the solar mass-squared difference. ${\Delta m^2_{\rm atm}}>0$ defines the normal mass hierarchy (NH), while ${\Delta m^2_{\rm atm}}<0$ the inverted hierarchy (IH). For the numerical illustrations, we take the neutrino mass-squared differences to be $|\Delta m^2_{\rm atm}|= 2\times 10^{-3}$~eV$^2$ and $\Delta m^2_{\rm sol}= 8\times 10^{-5}$~eV$^2$, close to their current best-fit values~\cite{GonzalezGarcia:2010er}. In SN neutrino flavor conversions, the parameters $(\Delta m^2_{\rm atm},\theta_{13})$ are responsible for conversions between $e-y$ states, while $(\Delta m^2_{\rm sol},\theta_{12})$ determine conversions between $e-x$ states~\cite{Dasgupta:2007ws}. The hierarchy $|\Delta m^2_{\rm atm}| \gg \Delta m^2_{\rm sol}$ implies the dominance of $e-y$ conversion effects over the $e-x$ ones. However, situations are possible in which one finds an interesting interplay between the ``atmospheric'' and ``solar'' sectors~\cite{Friedland:2010sc,Dasgupta:2010cd}, as we will show in the following. The matter effect in Eq.~(\ref{eq:Ham}), due to the background electron density $n_e$, in the weak interaction basis $(\nu_e,\nu_\mu,\nu_\tau)$ is represented by~\cite{Matt} \begin{equation} {\sf V}_{\rm MSW}=\sqrt{2}G_F n_e \textrm{diag}(1,0,0) \,\ . \end{equation} Except at very early times $(t \lesssim 300$~ms), when the effective electron density $n_e$ would become larger than the neutrino density $n_\nu$, suppressing the self-induced flavor conversions~\cite{EstebanPretel:2008ni}, one can account for matter effects in the region of collective oscillations just by choosing small (matter-suppressed) mixing angles~\cite{Duan:2005cp,EstebanPretel:2008ni}, which we take to be $\theta_{13}=\theta_{12}= 10^{-3}$. Matter effects in the region of collective oscillations (up to a few 100 km) also slightly modify the neutrino mass-squared differences. Therefore, we take the effective mass-squared differences $\Delta {\tilde m}^2_{\rm atm} = \Delta m^2_{\rm atm} \cos 2\theta_{13}\simeq \Delta m^2_{\rm atm}$ and $\Delta {\tilde m}^2_{\rm sol} = \Delta m^2_{\rm sol} \cos 2\theta_{12} \simeq 0.4\, \Delta m^2_{\rm sol}$~\cite{Duan:2008za,EstebanPretel:2008ni}. Mikheyev-Smirnov-Wolfenstein (MSW) conversions typically occur after collective effects have ceased~\cite{Fogli:2007bk, Dasgupta:2007ws}. Their effects then factorize and can be included separately~\cite{Dighe:1999bi}. Therefore, we neglect them in the following. Finally, the effective potential due to the neutrino-neutrino interactions is given by~\cite{Pantaleone:1992eq,Sigl:1992fn,Pantaleone:1992xh,McKellar:1992ja} \begin{equation} {\sf V}_{\nu \nu} = \sqrt{2} G_F\int \frac{\textrm{d}^3 {\bf q}}{(2\pi)^3}(\rho_{\bf q}- {\bar\rho}_{\bf q}) (1-{\bf v}_{\bf p}\cdot {\bf v}_{\bf q}) \,\ , \end{equation} where the factor $(1-{\bf v}_{\bf p}\cdot {\bf v}_{\bf q})$ implies \emph{multi-angle} effects for neutrinos moving on different trajectories~\cite{Duan:2006an}, as explained in Sec.~I. We will solve the EoM's in the multi-angle case and compare the behavior of the flavor evolution with the solution obtained in the \emph{single-angle} approximation~\cite{Duan:2006an}. The latter requires the occurrence of the self-maintained coherence of the neutrino ensemble, i.e.
that at a given location all the neutrino modes are aligned with each other, assuming they were aligned at the source~\cite{Dasgupta:2008cu}. In this case one obtains the following angle-averaged EoM for the different energy modes, classified in terms of the frequency $\omega$: \begin{equation} \textrm{i} \partial_r \rho_\omega = [{\sf H}_{{\omega}}, \rho_{\omega}] \,\ , \label{eomSA} \end{equation} where we have defined the ``reduced neutrino density matrix'' as~\cite{Duan:2008za} \begin{equation} \rho_\omega \sim \left\{ \begin{array}{cc} + \rho_{{\bf q}} & \textrm{if} \,\ \omega >0 \nonumber \\ - {\bar\rho}_{{\bf q}} & \textrm{if} \,\ \omega <0 \,\ , \nonumber \end{array} \right. \end{equation} whose diagonal components \begin{equation} \rho_{\omega(\alpha \alpha)} (r) = \frac{F_{\nu_\alpha}(\omega,r)}{F_\nu(\omega,r)} \,\ , \end{equation} are normalized to the sum of the fluxes of all the neutrino species $F_\nu(\omega,r) = F_{\nu_e}(E,r)+F_{\nu_x}(E,r)+F_{\nu_y}(\omega,r)$ (and analogously for antineutrinos). We clarify that when we use the $\omega$-variable, with the notation $F_\nu(\omega)$ we mean $F_\nu(E)\times dE/d\omega$. Neglecting the matter effects, the single-angle Hamiltonian reads~\cite{Dasgupta:2008cu} \begin{equation} {\sf H}_{\omega} = {\sf \Omega}_{{\omega}_r} + \mu^{\ast}_r \rho \,\ . \end{equation} In this equation the vacuum oscillation frequency $\omega$, when projected over the radial direction, becomes~\cite{Dasgupta:2008cu} \begin{equation} \omega_r = \frac{\omega}{\langle v_{r}\rangle} \,\ , \end{equation} where \begin{equation} \langle v_{r}\rangle = \frac{1}{2}\left[1 +\sqrt{1-\left(\frac{R}{r}\right)^2} \right] \end{equation} is the angle-averaged radial neutrino velocity. The modification of the vacuum oscillation frequencies is relevant only near the neutrinosphere; therefore, in the following we neglect it in our estimates. The neutrino-neutrino interaction term depends on the matrix of total density \begin{equation} \rho = \frac{1}{\Phi^0_\nu + \Phi^0_{\overline\nu}} \int_{-\infty}^{+\infty} d\omega F_\nu \rho_\omega \,\, \end{equation} normalized to the sum of the total neutrino and antineutrino fluxes at the neutrinosphere. The radial dependence of the neutrino-neutrino interaction strength can be written as~\cite{Fogli:2007bk,Dasgupta:2008cu} \begin{equation} \mu^\ast_r = \mu_R \frac{R^2}{2 r^2} C_r \,\ , \end{equation} where \begin{equation} \mu_R = 2 \pi \sqrt{2}G_F (\Phi_{\nu}^0+\Phi_{\bar\nu}^0) \,\ , \end{equation} represents the strength of the neutrino-neutrino potential at the neutrinosphere. The $r^{-2}$ scaling comes from the geometrical flux dilution, and the collinearity factor \begin{eqnarray} C_r &=& 4 \left[\frac{1-\sqrt{1-(R/r)^2}}{(R/r)^2}\right]^2 -1 \nonumber \\ & \approx & \frac{1}{2}\left(\frac{R}{r}\right)^2 \,\ \,\ \,\ \,\ \textrm{for} \,\ r\to\infty \,\ , \end{eqnarray} arises from the $(1-\cos\theta)$ structure of the neutrino-neutrino interaction. The asymptotic behavior for large $r$ agrees with what one obtains by considering that all neutrinos are launched at $45^{\circ}$ to the radial direction~\cite{EstebanPretel:2007ec}. The known decline of the neutrino-neutrino interaction strength, $\mu^\ast_r \sim r^{-4}$ for $r\gg R$, is evident~\cite{Pantaleone:1992eq}. The behavior of the neutrino ensemble at large densities in the single-angle case can be characterized in terms of the invariants of the system.
Namely, two \emph{lepton-numbers}, which for small in-medium mixing angles read in the flavor basis~\cite{Duan:2008za,Dasgupta:2008cd} \begin{eqnarray} {\cal L}_3 &=& \frac{1}{\Phi^0_\nu + \Phi^0_{\bar\nu}} \int_{-\infty}^{+\infty} d\omega F_\nu \textrm{Tr}(\rho_\omega \lambda_3)\nonumber \\ {\cal L}_8 &=& \frac{1}{\Phi^0_\nu + \Phi^0_{\bar\nu}} \int_{-\infty}^{+\infty} d\omega F_\nu \textrm{Tr}(\rho_\omega \lambda_8) \,\ , \nonumber \\ \label{eq:lepton} \end{eqnarray} and the \emph{effective energy} of the system~\cite{Raffelt:2007yz} \begin{equation} {\cal E} = -\frac{1}{\sqrt{3}}\textrm{Tr}(\tilde{\rho}\lambda_8) - \frac{\alpha}{2} \textrm{Tr}(\tilde{\rho}\lambda_3) +\frac{\mu^\ast_r}{2} \textrm{Tr}(\rho^2) \,\ , \label{eq:energy} \end{equation} where \begin{equation} \tilde{\rho} = \frac{1}{\Phi^0_\nu + \Phi^0_{\bar\nu}} \int_{-\infty}^{+\infty} d\omega \,\ \omega \,\ F_\nu \rho_\omega \,\ , \end{equation} is the first moment of the density matrix. \section{Multi-angle effects for different neutrino fluxes} In this Section we compare the results of the SN neutrino flavor evolution, obtained using the single-angle [Eq.~(\ref{eomSA})] and multi-angle [Eq.~(\ref{eomMA})] EoM's for the three representative SN flux models introduced in Eq.~(\ref{eq:cases}). For the sake of brevity, we will show only results of the supernova neutrino flavor evolution in the case of inverted neutrino mass hierarchy. We think that this case is more interesting, since possible three-flavor effects, associated with $\Delta m^2_{\rm sol}$, have recently been found in the single-angle scheme for SN neutrino fluxes with multiple crossing points~\cite{Friedland:2010sc,Dasgupta:2010cd}. We will show that multi-angle effects can strongly suppress the impact of these three-flavor conversions. In our numerical simulations we fix the value of the neutrino-neutrino interaction strength at the neutrinosphere \begin{equation} \mu^\ast_r (R) = 2.1 \times 10^{6} \,\ \textrm{km}^{-1} \,\ , \end{equation} unless otherwise stated. This choice implies that the neutrino luminosities in the three different cases are (in units of $10^{51}$~erg/s) \begin{eqnarray} & & L_{\nu_e}= 2.40 \,\ , \,\ L_{{\bar\nu}_e}= 2.00 \,\ , \,\ L_{\nu_x}= 1.50 \,\ , \nonumber \\ & & L_{\nu_e}= 1.20 \,\ , \,\ L_{{\bar\nu}_e}= 1.34 \,\ , \,\ L_{\nu_x}= 2.14 \,\ , \nonumber \\ & & L_{\nu_e}= 1.16 \,\ , \,\, L_{{\bar\nu}_e}= 1.41 \,\ , \,\ L_{\nu_x}= 2.14 \,\ . \nonumber \\ \end{eqnarray} In order to get stable results in our multi-angle simulations we typically use $\sim1500$ angular modes. Finally, to saturate self-induced oscillation effects, we integrate the EoM's up to $r=10^3$~km. Technical details are discussed in the Appendix: in the following we focus only on the results and their interpretation. \begin{figure}[!t] \begin{center} \includegraphics[width=\columnwidth]{fig1.eps} \end{center} \caption{Case with $\Phi^0_{\nu_e}: \Phi^0_{{\bar{\nu}}_e}:\Phi^0_{\nu_x} = 2.40:1.60:1.0$. Three-flavor evolution in inverted mass hierarchy for the \emph{single-angle} case for neutrinos (left panels) and antineutrinos (right panels). Upper panels: Initial energy spectra for $\nu_e$ (long-dashed curve) and $\nu_{x,y}$ (short-dashed curve) and for $\nu_e$ after collective oscillations (solid curve). Lower panels: probabilities $P_{ee}$ (solid red curve), $P_{ey}$ (dashed blue curve), $P_{ex}$ (dotted black curve).
\label{fig:1}} \end{figure} \begin{figure}[!t] \begin{center} \includegraphics[width=\columnwidth]{fig2.eps} \end{center} \caption{As in Fig.~1, but for the \emph{multi-angle} case. \label{fig:2}} \end{figure} \subsection{Spectrum with a single crossing} We start our investigation with the SN neutrino flux ordering $\Phi^0_{\nu_e}: \Phi^0_{{\bar{\nu}}_e}:\Phi^0_{\nu_x} = 2.40:1.60:1.0$, as representative of the accretion phase. This case has been the benchmark for most of the previous multi-angle studies (see, e.g.,~\cite{Duan:2006an,Fogli:2007bk}). As is known, in this case the dynamics can be reduced to the $e-y$ two-neutrino system associated with $(\Delta m^2_{\rm atm}, \theta_{13})$. Three-flavor effects in the $e-x$ sector, associated with ($\Delta m^2_{\rm sol}, \theta_{12}$), are negligible~\cite{Dasgupta:2007ws,Fogli:2008fj}. In Fig.~1 we represent the initial neutrino fluxes at the neutrinosphere for all the different species and the final electron (anti)neutrino fluxes after collective oscillations (at $r= 10^3$~km) in the single-angle approximation (upper panels) and the corresponding $P_{ee} = P(\nu_e\to\nu_e)$, $P_{ex} = P(\nu_e\to\nu_x)$ and $P_{ey} = P(\nu_e\to\nu_y)$ conversion probabilities (lower panels). Neutrinos are shown in the left panels and antineutrinos in the right panels. The corresponding results for the multi-angle case are given in Fig.~2. We stress that, except in the high-energy tails, the original neutrino spectra in the case under study always have an excess of electron (anti)neutrinos over the non-electron species, i.e. $F_{\nu_e} > F_{\nu_x}$ and $F_{{\overline\nu}_e} > F_{{\overline\nu}_x}$. In the frequency variable $-\infty <\omega<+\infty$, the crossing point at $E\to \infty$ ($\omega=0$) and the other two at $E\gtrsim 20$~MeV appear so narrowly spaced that the neutrino spectra superficially appear to have only a \emph{single crossing} point at $\omega=0$, where $F_{\nu_e} = F_{\nu_x}=0$ and $F_{{\overline\nu}_e} = F_{{\overline\nu}_x}=0$. We address the interested reader to Fig.~3 of~\cite{Dasgupta:2009mg} and to the related discussion of a neutrino spectrum effectively behaving as a single-crossed one. We will show how this property is crucial in characterizing the flavor evolution. Concerning the neutrino fluxes after self-induced oscillations, in the single-angle case one finds a swap between $\nu_e$ and $\nu_y$ spectra above $E\simeq 10$~MeV, producing the typical split in the final neutrino spectra. For antineutrinos, the swap between ${\bar\nu}_e$ and ${\bar\nu}_y$ is almost complete, the splitting energy $E\simeq 2$~MeV being very low. Neglecting this low-energy feature~\cite{Fogli:2008pt}, the position of the $\nu_e$ split can be calculated using the conservation of the lepton number ${\cal L}_8$ in Eq.~(\ref{eq:lepton}) (see, e.g.,~\cite{Raffelt:2007cb,Raffelt:2007xt,Duan:2007bt}). In the multi-angle case, the swap features remain unchanged, except for the smearing of the low-energy antineutrino spectral split, shown also in~\cite{Fogli:2008pt}. In Figs.~3 and 4 we represent the radial evolution of the diagonal elements of the density matrix $\rho_{ee}$, $\rho_{yy}$, $\rho_{xx}$ for different energy modes for neutrinos (left panels) and antineutrinos (right panels). In particular, in the $\rho_{ee}$ panels the order of the energy modes is $E= 0.91, 5, 11, 43$~MeV going from the curve starting with the highest value to the lowest one. This order is reversed in the $\rho_{yy}$ and $\rho_{xx}$ panels.
Figure~3 represents the single-angle evolution, while Fig.~4 is for the multi-angle evolution, where the density matrix elements have been integrated over the angular distribution. Except for very low-energy antineutrino modes, we find that the evolution of the density matrix is rather similar in the single-angle and multi-angle case. We find the presence of synchronized oscillations~\cite{Kostelecky:1993dm,Hannestad:2006nj} up to $r \simeq 85$~km. Up to that radius, all the $\rho_{\omega}$ stay pinned to their original values. \begin{figure}[!t] \begin{center} \includegraphics[width=\columnwidth]{fig3_24sa.eps} \end{center} \caption{Case with $\Phi^0_{\nu_e}: \Phi^0_{{\bar{\nu}}_e}:\Phi^0_{\nu_x} = 2.40:1.60:1.0$. Three-flavor evolution in inverted mass hierarchy in the \emph{single-angle} case. Radial evolution of the diagonal components of the density matrix $\rho$ for neutrinos (left panels) and antineutrinos (right panels) for different energy modes. \label{fig:3}} \end{figure} \begin{figure}[!t] \begin{center} \includegraphics[width=\columnwidth]{fig3_24ma.eps} \end{center} \caption{As in Fig.~3, but for the \emph{multi-angle} case. \label{fig:4}} \end{figure} Significant flavor conversions start only after synchronization, when bipolar oscillations~\cite{Hannestad:2006nj} start to swap the flavor content of the system. The synchronization radius can be found through the condition~\cite{Fogli:2007bk} \begin{equation} \mu^\ast_r > \frac{4 \,\ \omega_{\rm s}}{\sqrt{3}} \frac{\textrm{Tr}(\sigma\lambda_8)}{[\textrm{Tr}(\rho \lambda_8)]^2} \,\ . \label{eq:synccond} \end{equation} Using the alignment approximation for the two blocks of neutrinos and antineutrinos, the synchronization frequency for $e-y$ conversions reads~\cite{Fogli:2007bk} \begin{eqnarray} \omega_{\rm s} &=& \frac{\int_{0}^{+\infty} d{\omega} \,\ \omega (F^0_{\nu_e}(\omega)-F^0_{\nu_y}(\omega))} {2 (\Phi^0_{\nu_e}-\Phi^0_{{\nu}_y})} \nonumber \\ &+& \frac{\int_{0}^{+\infty} d{\omega} \,\ \omega (F^0_{{\bar\nu}_e}(\omega)-F^0_{{\bar\nu}_y}(\omega))} {2 (\Phi^0_{{\bar\nu}_e}-\Phi^0_{{\bar\nu}_y})} \,\ \nonumber \\ &=& \frac{2\Delta m^2_{\rm atm}}{3} \bigg[\frac{ L_{\nu_e}/\langle E_{\nu_{e}}\rangle^2 - L_{\nu_x} /\langle E_{\nu_{x}}\rangle^2} {L_{\nu_e}/\langle E_{\nu_{e}}\rangle - L_{\nu_x} /\langle E_{\nu_{x}}\rangle} \,\ \nonumber \\ &+& \frac{ L_{\overline\nu_e}/\langle E_{\overline\nu_{e}}\rangle^2 - L_{\nu_x} /\langle E_{\nu_{x}}\rangle^2} {L_{\overline\nu_e}/\langle E_{\overline\nu_{e}}\rangle - L_{\nu_x} /\langle E_{\nu_{x}}\rangle} \bigg] \,\ \nonumber \\ &\simeq& 0.68 \,\ \textrm{km}^{-1} \,\ , \nonumber \end{eqnarray} for our input fluxes. Here the function $\sigma$ is defined as \begin{displaymath} \sigma = \frac{1}{\Phi^0_\nu + \Phi^0_{\overline\nu}} \int_{-\infty}^{+\infty} d\omega \,\ \textrm{sgn}(\omega) F_\nu {\rho}_{\omega} \,\ .
\nonumber \end{displaymath} Then we have \begin{eqnarray} & & \textrm{Tr}(\sigma\lambda_8) = \frac{\rho_{ee} +{\bar\rho}_{ee}- 2 \rho_{xx}}{\sqrt{3}} = \nonumber \\ &=& \frac{1}{\sqrt{3}}\frac{L_{\nu_e}\langle E_{\overline\nu_{e}} \rangle \langle E_{\nu_x} \rangle + L_{\overline\nu_e} \langle E_{\nu_{e}} \rangle \langle E_{\nu_x} \rangle -2 L_{\nu_x} \langle E_{\overline\nu_{e}} \rangle \langle E_{\nu_{e}} \rangle} {L_{\nu_e} \langle E_{\nu_x} \rangle \langle E_{\overline\nu_{e}} \rangle + L_{\overline\nu_e} \langle E_{\nu_{e}}\rangle \langle E_{\nu_x} \rangle + 4 L_{\nu_x} \langle E_{\nu_{e}}\rangle \langle E_{\overline\nu_{e}} \rangle} \,\ \nonumber \\ &\simeq& 0.144 \,\ , \nonumber \end{eqnarray} and \begin{eqnarray} & & \textrm{Tr}(\rho\lambda_8) = \frac{\rho_{ee}-\bar\rho_{ee}}{\sqrt{3}} = \nonumber \\ &=& \frac{1}{\sqrt{3}} \frac{\langle E_{\nu_x} \rangle [L_{\nu_e} \langle E_{\overline\nu_{e}} \rangle - L_{\overline\nu_e} \langle E_{\nu_{e}}\rangle ]}{L_{\nu_e} \langle E_{\nu_x} \rangle \langle E_{\overline\nu_{e}} \rangle + L_{\overline\nu_e} \langle E_{\nu_{e}}\rangle \langle E_{\nu_x} \rangle + 4 L_{\nu_x} \langle E_{\nu_{e}}\rangle \langle E_{\overline\nu_{e}} \rangle} \,\ \nonumber \\ &\simeq& 0.06 \,\ . \nonumber \end{eqnarray} These numbers imply that bipolar conversions would start when $\mu^{\ast}_r \simeq 100 \,\ \omega_{\rm s}$, i.e. at $r \simeq 85$~km, as observed in Fig.~3. We stress that the presence of synchronization at low radii in this case is crucially related to the original spectrum with a single crossing point. As discussed in~\cite{Raffelt:2008hr}, energy conservation at large neutrino densities (i.e. when $\mu^\ast_r \to \infty$) implies that $\rho^2$ and thus $\rho$ are conserved [see Eq.~(\ref{eq:energy})], so that $\rho$ behaves as a collective object. If $\rho$ is conserved, energy conservation implies that also $\textrm{Tr}(\tilde{\rho}\lambda_8)$ has to be conserved (neglecting the subleading three-flavor effects associated with $\Delta m^2_{\rm sol})$. For a single-crossed spectrum, the quantity $\textrm{Tr}(\tilde{\rho}\lambda_8) = [(\tilde{\rho}_{ee}-\tilde{\rho}_{yy})+ (\tilde{\bar\rho}_{ee}-\tilde{\bar\rho}_{yy})]/\sqrt{3}$ is maximal~\cite{Dasgupta:2009mg}. Therefore, moving any energy mode $\rho_\omega$ from its initial value would make the energy of the system larger. In this situation, each $\rho_\omega$ must remain pinned to the global density matrix $\rho$. Therefore, only the synchronization among different modes is possible~\cite{Raffelt:2008hr}. Since during the synchronization phase the different $\rho_\omega$ remain aligned with their original values, multi-angle effects are suppressed. Then, during the phase of the spectral swapping, the spectrum near the crossing point acts like an inverted pendulum~\cite{Dasgupta:2009mg}. The swap sweeps through the spectrum on each side of the crossing, and the modes at the edge of the swap precess at an average oscillation frequency ${\kappa}\simeq \Delta m^2_{\rm atm}/2E \simeq 0.5$~km$^{-1}$ for a typical energy $E \simeq 10$~MeV in the region of the swap. Since the length scale $l_{\mu} \equiv |d \ln \mu^{\ast}_r(r)/dr|^{-1} = r/4$ is larger than ${\kappa}^{-1}$, the evolution is adiabatic concerning the swapping dynamics~\cite{Friedland:2010sc,Dasgupta:2010cd}. However, the time-scale of multi-angle effects is determined by the bipolar oscillation frequency $\omega_{\rm H} \sim \sqrt{2 \omega \mu^{\ast}_r} \sim r^{-2}$~\cite{Raffelt:2007yz} that decreases faster than $l_{\mu}^{-1}$.
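The estimate of the synchronization radius quoted above can be reproduced with a few lines of code (an illustrative sketch, not our production code: it takes $\omega_{\rm s}=0.68$~km$^{-1}$ and $\mu^\ast_r(R) = 2.1\times 10^6$~km$^{-1}$ from the text, together with the luminosities and mean energies of our first flux model, and solves the synchronization condition Eq.~(\ref{eq:synccond}) for $r$): \begin{verbatim}
# Illustrative cross-check of the synchronization-radius estimate for the
# 2.40:1.60:1.0 flux ordering: evaluate Tr(sigma lam8) and Tr(rho lam8),
# then solve mu*(r) = 4 w_s Tr(sigma lam8) / (sqrt(3) [Tr(rho lam8)]^2).
import numpy as np
from scipy.optimize import brentq

L = {"nue": 2.40, "anue": 2.00, "nux": 1.50}   # luminosities, 10^51 erg/s
E = {"nue": 12.0, "anue": 15.0, "nux": 18.0}   # mean energies, MeV
R = 10.0                                       # neutrinosphere radius, km
mu_star_R = 2.1e6                              # mu*(R), km^-1, as in the text
w_s = 0.68                                     # km^-1, synch. frequency

den = (L["nue"] * E["nux"] * E["anue"] + L["anue"] * E["nue"] * E["nux"]
       + 4.0 * L["nux"] * E["nue"] * E["anue"])
tr_sigma = (L["nue"] * E["anue"] * E["nux"] + L["anue"] * E["nue"] * E["nux"]
            - 2.0 * L["nux"] * E["anue"] * E["nue"]) / den / np.sqrt(3.0)
tr_rho = E["nux"] * (L["nue"] * E["anue"]
                     - L["anue"] * E["nue"]) / den / np.sqrt(3.0)

def mu_star(r):
    """mu*(r) = mu_R (R^2/2r^2) C_r, rescaled so that mu*(R) = mu_star_R."""
    x = (R / r) ** 2
    C = 4.0 * ((1.0 - np.sqrt(1.0 - x)) / x) ** 2 - 1.0   # C_R = 3 at r = R
    return mu_star_R * (x / 2.0) * C / 1.5

threshold = 4.0 * w_s * tr_sigma / (np.sqrt(3.0) * tr_rho**2)  # ~68 km^-1
r_sync = brentq(lambda r: mu_star(r) - threshold, 10.001, 1.0e3)
print(f"Tr(sigma l8)={tr_sigma:.3f}, Tr(rho l8)={tr_rho:.3f}, "
      f"r_sync={r_sync:.0f} km")
# Returns Tr(sigma l8) ~ 0.144, Tr(rho l8) ~ 0.06 and r_sync ~ 85 km,
# matching the onset of bipolar conversions observed in Fig. 3.
\end{verbatim}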
Qualitatively, the effect of the synchronization is to postpone conversions to large radii, where multi-angle effects would require more ``time'' to develop since $\mu^{\ast}_r$ is smaller, and the relatively fast decrease of $\mu^{\ast}_r$ would reduce the adiabaticity for multi-angle effects. As a consequence, these do not grow significantly. This is qualitatively the origin of the so-called ``quasi single-angle'' behavior~\cite{EstebanPretel:2007ec}, observed for neutrino flux orderings typical of the accretion phase. We have explicitly checked that, artificially modifying the adiabaticity of the evolution, i.e. choosing a very slow decrease of $\mu^{\ast}_r$, multi-angle decoherence becomes unavoidable also in the case we are considering. Therefore, as suggested in~\cite{EstebanPretel:2007ec}, the absence of multi-angle decoherence during the accretion phase seems due to a lucky conspiracy of different time-scales. Unfortunately, a complete theory of these effects has not been developed yet. Therefore, their interpretation is mostly based on the experience gained through numerical observations. \subsection{Spectrum with multiple crossings} We pass now to the case $\Phi^0_{\nu_e}:\Phi^0_{{\bar{\nu}}_e}:\Phi^0_{\nu_x} =0.85:0.75:1.0$, representative of recent simulation results for the neutrino flux ordering during the cooling phase~\cite{Raffelt:2003en}. This case has been widely studied in the single-angle approximation in~\cite{Dasgupta:2009mg,Friedland:2010sc,Dasgupta:2010cd} and corresponds to spectra that in the $\omega$ variable present well-separated multiple crossings, around which multiple spectral splits can arise (see Fig.~2 of~\cite{Dasgupta:2009mg}). Moreover, in this case peculiar three-flavor effects, associated with conversions between $e$ and $x$ states, have been recently discussed~\cite{Friedland:2010sc,Dasgupta:2010cd}. Multi-angle simulations in this case have shown a smearing of the splitting features~\cite{Dasgupta:2009mg,Duan:2010bf} and a suppression of the flavor evolution at low radii~\cite{Duan:2010bf,animations}. In Fig.~5 we show the initial (anti)neutrino spectra for all the flavors and the final electron (anti)neutrino ones for the single-angle case (upper panels) and the corresponding conversion probabilities at the end of the flavor evolution (lower panels). Multi-angle results are shown in Fig.~6. Starting with the single-angle case, we see that both $\nu_e \leftrightarrow \nu_y$ and $\bar\nu_e \leftrightarrow \bar\nu_y$ swaps appear at intermediate energies $5$ MeV $\lesssim E \lesssim 20$ MeV. Moreover, for $E \gtrsim 25$ MeV, there are additional $\nu_e \leftrightarrow \nu_x$ and $\bar\nu_e \leftrightarrow \bar\nu_x$ swaps. As a result of this complex dynamics, in the single-angle case the oscillated $\nu_e$ spectrum shows a single split, the one at low energies, producing the swap with $\nu_y$ after the split, while the high-energy split is canceled by the further swap between $\nu_e$ and $\nu_x$. We observe that in the antineutrino sector the spectral swaps are not complete, due to smaller spectral differences between ${\bar\nu}_e$ and ${\bar\nu}_x$ that lead to a less adiabatic evolution, as explained in~\cite{Dasgupta:2010cd}. Passing to the multi-angle case, the differences from the single-angle case in the final neutrino spectra and in the conversion probabilities are significant. In general, all the splitting features are less pronounced than in the single-angle case.
Starting with the neutrino case, we see that the low-energy spectral split is smeared. More remarkably, the high-energy swap between $\nu_e$ and $\nu_x$ is strongly suppressed. As a consequence, the final neutrino spectra keep track only of the swap between $\nu_e$ and $\nu_y$, which is now less sharp than in the single-angle case. This effect would produce a high-energy splitting feature in the final $\nu_e$ spectrum at $E\simeq 25$~MeV, not present in the single-angle three-flavor calculations. For antineutrinos, the impact of multi-angle effects is even stronger: flavor transformations are significantly suppressed, except in the energy range close to the crossing point of the spectra, where two-flavor ${\bar\nu}_e \to {\bar\nu}_y$ transformations take place. \begin{figure}[!t] \begin{center} \includegraphics[width=\columnwidth]{fig5_85sa.eps} \end{center} \caption{Case with $\Phi^0_{\nu_e}:\Phi^0_{{\bar{\nu}}_e}:\Phi^0_{\nu_x} =0.85:0.75:1.0$. Three-flavor evolution in the \emph{single-angle} case for neutrinos (left panels) and antineutrinos (right panels). Upper panels: Initial energy spectra for $\nu_e$ (long-dashed curve) and $\nu_{x,y}$ (short-dashed curve) and for $\nu_e$ after collective oscillations (solid curve). Lower panels: probabilities $P_{ee}$ (solid red curve), $P_{ey}$ (dashed blue curve), $P_{ex}$ (dotted black curve). \label{fig:5}} \end{figure} \begin{figure}[!t] \begin{center} \includegraphics[width=\columnwidth]{fig6_85ma.eps} \end{center} \caption{As in Fig.~5, but for the \emph{multi-angle} case. \label{fig:6}} \end{figure} To explain the above observations, we represent the radial evolution of the diagonal elements of the density matrix $\rho_{ee}$, $\rho_{yy}$, $\rho_{xx}$ for different energy modes, for neutrinos (left panels) and antineutrinos (right panels) in Figs.~7 and 8. Figure~7 represents the single-angle case, while Fig.~8 represents the multi-angle case. In this latter case, the density matrix elements have been integrated over the angular distribution. In the $\rho_{ee}$ panels the order of the energy modes is $E= 3.3, 5.7, 19, 30$~MeV going from the curve starting with the highest value to the lowest one. This order is reversed in the $\rho_{yy}$ and $\rho_{xx}$ panels. Starting with the single-angle case, we see that, differently from the case studied in Sec.~III~A, flavor conversions are possible at low radii ($r\gtrsim 30$~km) in a region where we would have expected synchronization. The effect of the small in-medium mixing is only to logarithmically delay the onset of the flavor conversions. As explained in~\cite{Raffelt:2008hr}, the crucial difference with respect to the previous case is that, applying the energy conservation [Eq.~(\ref{eq:energy})] to the case of multiple-crossed spectra, it is possible to conserve $\textrm{Tr}(\tilde{\rho}\lambda_8)$ and $\alpha \textrm{Tr}(\tilde{\rho}\lambda_3)$ by flipping parts of the original neutrino spectra around the crossing points. Therefore, the synchronization behavior found in the case of single-crossed spectra is not necessarily stable in this case. Indeed, the possibility of a novel form of flavor conversions at large $\mu^\ast_r$, in terms of a self-induced \emph{parametric resonance} that would destabilize the synchronization, has been discussed in~\cite{Raffelt:2008hr}. The origin of this effect is that, since each $\rho_{\omega}$ would precess around the Hamiltonian ${\sf H}_\omega$ with a frequency $\mu^\ast_r$, ${\sf H}_\omega$ itself must vibrate with the same frequency.
In this situation, for large $\mu^{\ast}_r$ the total density matrix $\rho$ continues to behave as a collective object, but is not static and itself vibrates with a frequency $\mu^\ast_r$. For single-crossed spectra, the energy conservation prevents any sizable change in the different $\rho_{\omega}$; therefore, the synchronized behavior prevails. Conversely, in the presence of multiple-crossed neutrino spectra, the different $\rho_\omega$'s do not remain aligned with the global ${\rho}$ and start to librate relative to each other. We also note that in this case three-flavor effects, associated with $(\Delta m^2_{\rm sol},\theta_{12})$, are present. In particular, from the figure, one realizes that collective librations first start in the $e-y$ system (at $r\gtrsim 30$~km), and then trigger the $e-x$ conversions (at $r\gtrsim 60$~km). The explanation of this dynamics has been recently given in~\cite{Dasgupta:2010cd}, to which we direct the interested reader. Passing to the multi-angle case of Fig.~8, we find that now the flavor conversions at low radii are suppressed. This effect has been recently described in~\cite{Duan:2010bf}, even if there the connection between this behavior and the multi-angle suppression of the parametric resonance was not made. This effect has been pointed out in~\cite{Raffelt:2008hr} for generic neutrino gases with half-isotropic neutrino distributions, in cases where the parametric resonance was found in the single-angle scheme. The point is that in the multi-angle case, for large $\mu^\ast_r$ the different $\rho_{\omega}$'s would precess with different velocities whose spread, due to multi-angle effects, is much larger than $\omega$. As a consequence of this dispersion, there cannot exist a collective parametric resonance for all the neutrino modes. Therefore, the collective behavior found in the idealized single-angle case is ``fragile'' and easily suppressed by multi-angle effects in the realistic anisotropic supernova environment. In this case, as long as multi-angle effects are dominant, they would suppress any flavor conversion at small radii. As discussed in~\cite{Duan:2010bf}, the role of multi-angle effects in this case is analogous to the one that they have in the presence of a large matter term ($n_e \gg n_\nu$)~\cite{EstebanPretel:2008ni}. There, multi-angle effects introduce a significant dispersion for the matter potential encountered by neutrinos on different trajectories, which once more would suppress self-induced conversions. \begin{figure}[!t] \begin{center} \includegraphics[width=\columnwidth]{fig7_85sa.eps} \end{center} \caption{Case with $\Phi^0_{\nu_e}:\Phi^0_{{\bar{\nu}}_e}:\Phi^0_{\nu_x} =0.85:0.75:1.0$. Three-flavor evolution in inverted mass hierarchy in the \emph{single-angle} case. Radial evolution of the diagonal components of the density matrix $\rho$ for neutrinos (left panels) and antineutrinos (right panels) for different energy modes. \label{fig:7}} \end{figure} \begin{figure}[!t] \begin{center} \includegraphics[width=\columnwidth]{fig8_85ma.eps} \end{center} \caption{As in Fig.~7, but for the \emph{multi-angle} case. \label{fig:8}} \end{figure} In the presence of multi-angle suppression, we expect that flavor conversions would start only when neutrino-neutrino interactions are not strong enough to maintain the collective behavior of $\rho$~\cite{Duan:2010bf}.
Neutrino modes would remain frozen at their initial conditions until the neutrino-neutrino interaction $\mu^\ast_r$ becomes comparable with the averaged vacuum oscillation frequency of the neutrino ensemble~\cite{Dasgupta:2007ws} \begin{equation} \langle \omega \rangle = \frac{\textrm{Tr}({\tilde\rho} \lambda_8)}{\textrm{Tr}(\rho \lambda_8)} = \frac{\tilde{\rho}_{ee}+{\tilde{\bar{\rho}}}_{ee}-2\tilde{\rho}_{yy}}{\rho_{ee}-\bar\rho_{ee}} \,\ , \end{equation} where for our input spectra \begin{eqnarray} & &\tilde{\rho}_{ee}+{\tilde{\bar{\rho}}}_{ee}-2\tilde{\rho}_{yy} \nonumber \\ &=&\frac{2 \Delta m^2_{\rm atm}}{3} \frac{ L_{\nu_e}/\langle E_{\nu_{e}}\rangle^2 + L_{\overline\nu_e}/\langle E_{\overline\nu_{e}} \rangle^2 -2 L_{\nu_x}/\langle E_{\nu_{x}}\rangle^2 }{L_{\nu_e}/\langle E_{\nu_{e}}\rangle + L_{\overline\nu_e}/\langle E_{\overline\nu_{e}} \rangle + 4 L_{\nu_x}/\langle E_{\nu_{x}}\rangle} \,\ \nonumber \\ &\simeq& 1.1 \times 10^{-2} \,\ \textrm{km}^{-1} \,\ , \nonumber \end{eqnarray} and \begin{eqnarray} & & \rho_{ee}-{\bar\rho}_{ee} \nonumber \\ &=& \frac{L_{\nu_e}\langle E_{\overline\nu_{e}} \rangle \langle E_{\nu_x} \rangle + L_{\overline\nu_e} \langle E_{\nu_{e}} \rangle \langle E_{\nu_x} \rangle -2 L_{\nu_x} \langle E_{\overline\nu_{e}} \rangle \langle E_{\nu_{e}} \rangle} {L_{\nu_e} \langle E_{\nu_x} \rangle \langle E_{\overline\nu_{e}} \rangle + L_{\overline\nu_e} \langle E_{\nu_{e}}\rangle \langle E_{\nu_x} \rangle + 4 L_{\nu_x} \langle E_{\nu_{e}}\rangle \langle E_{\overline\nu_{e}} \rangle} \,\ \nonumber \\ &\simeq & 1.6 \times 10^{-2} \,\ , \end{eqnarray} which numerically leads to $\langle \omega \rangle \simeq 0.68$~km$^{-1}$. We expect flavor conversions to start in the $e-y$ sector when \begin{equation} \langle \omega \rangle \simeq \sqrt{3} \mu^\ast_r \textrm{Tr}(\rho \lambda_8) \,\ , \label{eq:multisup} \end{equation} which corresponds to $\mu^\ast_r \simeq 60 \,\ \langle \omega \rangle$. For our input spectra, we obtain an onset radius $r_{\rm ons} \simeq 92$~km, in qualitative agreement with the simulation of Fig.~8. We note that the onset radius of the conversions in this case is numerically very similar to the one of Sec.~III~A, even though the conditions that determine it are different, as is the subsequent flavor evolution. In particular, we expect that at large radii the evolution in this case would be less adiabatic than in the case of Sec.~III~A. As explained in~\cite{Dasgupta:2009mg}, in the final phases of the swapping dynamics, the neutrino and antineutrino spectra evolve quite independently, and the precession frequencies of the two blocks are not governed by a common $\mu^{\ast}_r$, but by individual $\mu^{\ast}_r$'s proportional to the flux differences $|F_{\nu_e}-F_{\nu_x}|$ ($|F_{{\bar\nu}_e}-F_{{\bar\nu}_x}|$). They behave essentially as two uncoupled oscillators, because the neutrino-neutrino interaction $\mu^{\ast}_r$ is now smaller than the frequency difference of the two blocks. Since in this case the flux differences are smaller than in the previous example of Sec.~III~A, the effective precession frequencies are smaller, resulting in a less adiabatic evolution. This would explain the more pronounced smearing of the sharp splitting features observed in this case in the multi-angle simulations. \begin{figure}[!t] \begin{center} \includegraphics[width=\columnwidth]{fig9_81sa.eps} \end{center} \caption{Case with $\Phi^0_{\nu_e}:\Phi^0_{{\bar{\nu}}_e}:\Phi^0_{\nu_x} =0.81:0.79:1.0$.
Three-flavor evolution in the \emph{single-angle} case for neutrinos (left panels) and antineutrinos (right panels). Upper panels: Initial energy spectra for $\nu_e$ (long-dashed curve) and $\nu_{x,y}$ (short-dashed curve) and for $\nu_e$ after collective oscillations (solid curve). Lower panels: probabilities $P_{ee}$ (solid red curve), $P_{ey}$ (dashed blue curve), $P_{ex}$ (dotted black curve). \label{fig:9}} \end{figure} \begin{figure}[!t] \begin{center} \includegraphics[width=\columnwidth]{fig10_81ma.eps} \end{center} \caption{As in Fig.~9, but for the \emph{multi-angle} case. \label{fig:10}} \end{figure} The effect of the multi-angle delay is that flavor conversions start in a region in which the neutrino-neutrino interaction strength is weaker than in the corresponding single-angle case. The typical time scale on which the off-diagonal components of the density matrix grow is given by the bipolar period which, for $e-y$ transitions, is $\tau_H \sim (2\omega \mu^{\ast}_r)^{-1/2}$~\cite{Hannestad:2006nj}. The delay of the conversions means that they start at a smaller $\mu^{\ast}_r$, where they require more time to develop. Moreover, moving to larger $r$ makes the evolution less adiabatic, resulting in weaker flavor conversions than in the single-angle case. This would explain the suppression of the $e-x$ conversions. In this regard, we recall that $e-x$ conversions are triggered by the dominant $e-y$ conversions, via the $\theta_{13}$ coupling between the two sectors~\cite{Dasgupta:2010cd}. Due to the suppression of $e-y$ conversions, the $e-x$ transitions would also be delayed. The typical time scale on which the off-diagonal components in the $e-x$ sector grow is $\tau_L \sim (\alpha\omega \mu^{\ast}_r)^{-1/2}$. The hierarchy between the solar and atmospheric mass-squared differences implies that $\tau_L\simeq 8 \tau_H$. As a consequence of this slower growth, the multi-angle delay is enough to make the $e-x$ conversions too slow to be effective. The suppression of conversions between ${\nu}_e$ and ${\nu}_x$ explains the appearance of the high-energy spectral split in $\nu_e$ observed in the multi-angle case of Fig.~6. Concerning antineutrinos, we realize that not only $e-x$, but also $e-y$ conversions are strongly inhibited. We associate this behavior with the stronger violation of adiabaticity in the antineutrino sector, due to the smaller spectral differences between ${\bar\nu}_e$ and ${\bar\nu_x}$~\cite{Dasgupta:2010cd}. We stress that the adiabaticity plays a crucial role in determining the impact of the multi-angle suppression on the flavor evolution. Indeed, we explicitly checked that, for the same flux ordering, increasing the adiabaticity by raising the neutrino-neutrino potential by a factor of five makes the multi-angle suppression less dramatic. In particular, $e-x$ conversions reappear. Moreover, the suppression of flavor oscillations in the antineutrino sector is also relieved. This result agrees with the case shown in~\cite{Duan:2010bf}, where the choice of neutrino luminosities higher than the ones we are considering as benchmark allows the multi-angle suppression to be strong only at low radii, while the final spectra are similar to the ones obtained in the single-angle case. \begin{figure}[!t] \begin{center} \includegraphics[width=\columnwidth]{fig11_81sa.eps} \end{center} \caption{Case with $\Phi^0_{\nu_e}: \Phi^0_{{\bar{\nu}}_e}: \Phi^0_{\nu_x} = 0.81:0.79:1.0$. Three-flavor evolution in the inverted mass hierarchy in the single-angle case.
Radial evolution of the diagonal components of the density matrix $\rho$ for neutrinos (left panels) and antineutrinos (right panels) for different energy modes. \label{fig:11}} \end{figure} \begin{figure}[!t] \begin{center} \includegraphics[width=\columnwidth]{fig12_81ma.eps} \end{center} \caption{As in Fig.~11, but for the \emph{multi-angle} case. \label{fig:12}} \end{figure} \subsection{Spectrum with small flavor asymmetries} Finally, we consider the case $\Phi^0_{\nu_e}: \Phi^0_{{\bar{\nu}}_e}: \Phi^0_{\nu_x} = 0.81:0.79:1.0$, which is intended to represent a flux ordering possible at late times, when the asymmetries between $\nu_e$ and $\overline\nu_e$ can become small~\cite{Huedepohl:2009wh,Fischer:2009af}. The small-asymmetry case has been pointed out as representative of flavor decoherence associated with multi-angle effects~\cite{Raffelt:2007yz,EstebanPretel:2007ec}. In Fig.~9 we show the initial (anti)neutrino spectra for all the flavors and the final electron (anti)neutrino ones for the single-angle case (upper panels), together with the corresponding conversion probabilities at the end of the flavor evolution (lower panels). Multi-angle results are shown in Fig.~10. The splitting features in the single-angle case are similar to the ones observed in Fig.~5 of Sec.~III~B. However, the multi-angle results are dramatically different. We observe that neutrino and antineutrino spectra tend toward flavor equilibration, with the different conversion probabilities settling around $1/3$. In Figs.~11 and 12 we show the radial evolution of the diagonal elements of the density matrix $\rho_{ee}$, $\rho_{yy}$, $\rho_{xx}$ for different energy modes for neutrinos (left panels) and antineutrinos (right panels). In the $\rho_{ee}$ panels the order of the energy modes is $E= 2.5, 5, 19, 30$~MeV, going from the curve starting at the highest value to the one starting at the lowest. This order is reversed in the $\rho_{yy}$ and $\rho_{xx}$ panels. Figure~11 shows the single-angle case, while Fig.~12 shows the multi-angle case. The difference between the flavor evolution in the two cases is striking. In particular, multi-angle suppression blocks $e$-$y$ flavor conversions until $r\simeq 40$~km, as predicted by Eq.~(\ref{eq:multisup}). Then, since conversions start at small $r$, where the evolution is more adiabatic than in the case of Sec.~III~B, $e$-$x$ oscillations also have a chance to develop at $r\gtrsim 60$~km. The stronger adiabaticity in this case also allows multi-angle effects enough time to develop before the neutrino-neutrino interaction term becomes small. The multi-angle effects then smear the flavor conversions, producing a tendency towards a three-flavor decoherence of the ensemble in both the $\nu$ and ${\overline\nu}$ sectors. \begin{table*}[!t] \begin{center} \caption{Summary of multi-angle effects, $3\nu$ effects and spectral splits for different SN neutrino fluxes.} \begin{tabular}{cccc} \hline \hline Initial spectral pattern & Multi-angle effects & $\Delta m^2_{\rm sol}$-effects & Spectral splits \vbox to12pt{}\\ \hline Single crossing & marginal & absent & robust \vbox to12pt{} \\ \hline Multiple crossings & relevant & present/absent & smeared \vbox to12pt{} \\ \hline Small flavor asymmetries & strong & present & washed-out \vbox to12pt{} \\ \hline \end{tabular} \label{tab:nuebar-effects} \end{center} \end{table*} \section{Conclusions} We have explored the dependence of multi-angle effects in self-induced supernova neutrino oscillations on the original neutrino fluxes.
Most of the previous studies~\cite{Duan:2006an,Fogli:2007bk,Fogli:2008pt,EstebanPretel:2007ec} focused on neutrino fluxes typical of the accretion phase, with a pronounced $\nu_e$ excess, \emph{de facto} behaving like spectra with a single crossing. In this situation, the synchronization of the different angular modes at low radii prevails over multi-angle effects. Then, when flavor conversions start, these are adiabatic enough to produce the spectral swaps and splits, but not enough to allow multi-angle decoherence to emerge. The result is the known ``quasi single-angle'' evolution. However, one has to be cautious in generalizing this reassuring result. In this context, we have shown that multi-angle effects can produce significant deviations in the flavor evolution with respect to the three-flavor single-angle case, for neutrino fluxes with a moderate flavor hierarchy and a $\nu_x$ excess, as possible during the supernova cooling phase. In this situation, the presence of multiple crossing points in the original neutrino spectra destabilizes the synchronization at large neutrino densities~\cite{Raffelt:2008hr}. In the absence of synchronization, collective conversions would be possible at low radii in the single-angle scheme. However, multi-angle effects introduce a large dispersion in the neutrino-neutrino potential, which prevents any possible collective flavor conversion at low radii. As a consequence, in the multi-angle scheme there is a significant delay of the onset of the flavor conversions, as recently observed~\cite{Duan:2010bf}. We have shown that this multi-angle delay can produce dramatic changes, not only in the deepest supernova regions, as shown in~\cite{Duan:2010bf}, but also in the final oscillated neutrino spectra. In particular, the multi-angle suppression can be so strong as to allow the onset of the flavor evolution only at a large radius, where the evolution is less adiabatic. Depending on the violation of adiabaticity, in the inverted mass hierarchy there could be a suppression of the three-flavor effects associated with the solar sector. This would dramatically change the pattern of swaps and splits in the final neutrino spectra. Finally, if the flavor asymmetries between $\nu_e$ and ${\overline \nu}_e$ are very small, the multi-angle suppression occurs only close to the neutrinosphere. In this situation, the stronger adiabaticity of the evolution allows multi-angle effects to act efficiently also after the onset of the conversions, tending to establish a three-flavor equilibration in both the neutrino and antineutrino sectors. In Table~I we summarize our results on the role of multi-angle effects, $3\nu$ effects and spectral splits for different SN neutrino fluxes. In this work we have explicitly shown numerical results only for the case of the inverted neutrino mass hierarchy. However, we have checked that the impact of multi-angle effects is qualitatively similar for the normal hierarchy case. Our numerical explorations show that self-induced flavor transformations of supernova neutrinos during the cooling phase are a continuous source of surprises. The richness of the phenomenology in the presence of neutrino spectra with multiple crossing points was first realized in~\cite{Dasgupta:2009mg}, with the discovery of the possibility of multiple spectral splits in both mass hierarchies.
Then, it was realized that for these neutrino fluxes three-flavor effects can play a significant role in the inverted mass hierarchy, changing the splitting pattern expected from two-flavor calculations~\cite{Dasgupta:2010cd,Friedland:2010sc}. Now, we have shown that multi-angle effects are also crucial in characterizing the flavor evolution in this case, and could potentially suppress the three-flavor effects. In general, self-induced flavor conversions for spectra with multiple crossing points challenge most of the naive expectations on which the original picture of collective supernova neutrino conversions was based: low-radii synchronization, and a subleading role of multi-angle and three-flavor effects. The discovery of these new effects adds additional layers of complication to the simulation of the flavor evolution of supernova neutrinos. In particular, our result shows that during the cooling phase three-flavor multi-angle simulations are crucial to obtain a correct result. Multi-angle effects should also be taken into account to assess the impact of collective neutrino oscillations on the r-process nucleosynthesis in supernovae, as recently investigated in~\cite{Duan:2010af}. The impact of the multi-angle effects will crucially depend on different SN inputs: neutrino luminosities and flavor asymmetries, neutrinosphere radius, etc. Since all these quantities change significantly during the neutrino emission, one would expect time-dependent effects. In this regard, the possibility of detecting signatures of these effects in the next galactic supernova neutrino burst~\cite{Choubey:2010up} motivates further analytical and numerical investigations. \subsection*{Appendix} We discuss here a few technical aspects of the multi-angle numerical simulations, which we performed on our local computer facility (with Fortran 77 codes running on a Linux cluster with 48 processors and 128~GB of shared RAM). Equation~(\ref{eomMA}), after discretization, provides a set of $16\times N_E \times N_u$ ordinary differential equations in $r$, where $N_E$ and $N_u$ are the numbers of points sampling the (anti)neutrino energy $E$ and the emission angle, respectively. In particular, we find it convenient to label the neutrino angular modes in terms of the variable~\cite{EstebanPretel:2007ec} \begin{equation} u = \sin^2 \theta_R \,\ , \end{equation} where $\theta_R$ is the zenith angle at the neutrinosphere $r = R$ of a given mode relative to the radial direction. With this choice, the parameter $u$ is fixed for every neutrino trajectory. We have then performed our simulations of the three-flavor neutrino evolution using a Runge-Kutta integration routine taken from the CERNLIB libraries~\cite{cernlib}. We fixed the numerical tolerance of the integrator at the level of $10^{-6}$ and increased the number of sampling points in angle and energy until we reached a stable numerical result. In this situation we estimate a numerical (fractional) accuracy of our results better than $10^{-2}$. In order to have a clear energy resolution of the spectral splits we took $N_E= 10^2$ energy points, equally distributed in the range $E \in [0.1,80]$~MeV. The number of angular modes is also a crucial choice, since it is well known that a sparse sampling in angle can lead to numerical artifacts that would destroy the collective behavior of the neutrino self-induced conversions~\cite{EstebanPretel:2007ec}. Typically, numerical stability requires $N_u = {\mathcal O}(10^3)$ angular modes.
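For concreteness, the bookkeeping behind these runs, together with two of the estimates quoted in Sec.~III, can be cross-checked with the minimal Python sketch below. It uses only numbers quoted in the text, except for the mass-squared differences entering the ratio $\alpha$, which are representative values assumed here rather than inputs of our simulations.

```python
import math

# Size of the ODE system integrated in r (numbers quoted in this Appendix):
# 8 SU(3) components for neutrinos plus 8 for antineutrinos per (E, u) mode.
n_comp, N_E, N_u = 16, 100, 1000
print(f"coupled ODEs: {n_comp * N_E * N_u:.1e}")      # ~1.6e+06

# Average vacuum oscillation frequency quoted in Sec. III B:
print(f"<omega> = {1.1e-2 / 1.6e-2:.2f} km^-1")       # ~0.69, quoted as 0.68

# Ratio of the bipolar growth times, tau_L/tau_H = sqrt(2/alpha), with
# alpha = Dm2_sol / Dm2_atm (representative values, assumed here).
alpha = 7.5e-5 / 2.4e-3
print(f"tau_L/tau_H = {math.sqrt(2.0 / alpha):.1f}")  # ~8
```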
Having reached stable numerical simulations, we are confident in the accuracy of the results obtained in this work. Moreover, we have been able to reproduce previous results presented in the literature for cases similar to the ones we are investigating (see, e.g.,~\cite{Fogli:2007bk,Duan:2010bf}). \begin{acknowledgments} We thank Sovan Chakraborty, Basudeb Dasgupta and Georg Raffelt for reading the manuscript and for comments on it. The work of A.M. was supported by the German Science Foundation (DFG) within the Collaborative Research Center 676 ``Particles, Strings and the Early Universe''. \end{acknowledgments}
\section{Introduction} \label{sec:intro} The CERN Large Hadron Collider (LHC) is designed to provide proton collisions at an energy of 7 TeV per beam and heavy-ion collisions at the equivalent magnetic rigidity~\cite{lhcdesignV1}. It consists of eight arcs and eight straight Insertion Regions (IRs). The accelerated beams collide in four of the eight IRs, where the particle-physics experiments are located. The baseline design of the LHC includes operation with proton and lead-ion beams ($^{208}$Pb$^{82+}$~), but other beam particle species have also been considered \cite{YR_WG5_2018, Schaumann:IPAC2018-MOPMF039}. The Physics Beyond Colliders initiative \cite{alemany2019summary} is a dedicated research campaign steered by CERN, focused on exploring alternative options to colliders for future particle-physics experiments. One of the projects under consideration is the Gamma Factory (GF), a design study for a novel type of light source \cite{krasny2015gamma}. The goal of the study group is to explore the possibilities of using the LHC beams for creating high-intensity, high-energy photon beams that can serve many applications, spanning from atomic to high-energy physics. The concept relies on using partially stripped ion (PSI) beams in the LHC as a driver. PSI retain one or more bound electrons. To produce photons, the remaining electrons in PSI are excited using a laser. The energy of the photons emitted during the spontaneous de-excitation of the excited atomic states is proportional to the square of the Lorentz factor of the ion beam, which allows photon energies of up to \unit[400]{MeV} in the LHC. While the proof-of-principle experiment for the GF is proposed to operate at the Super Proton Synchrotron (SPS) at a beam energy of \unit[450]{Z GeV}~\cite{Krasny:2690736}, the ultimate implementation is intended to operate at the LHC's top energy of \unit[7]{Z TeV}. The GF depends on the acceleration and storage of PSI beams in CERN's accelerator complex. PSI at varying states of ionisation are routinely used at CERN during the different stages of acceleration of the typical lead or argon beams in the injectors \cite{lhcdesignV3,kuchler14,Arduini:308372}. However, the LHC had never been used to accelerate PSI beams. In 2018, the first operational tests with PSI beams in the LHC were performed with the goal of studying the beam lifetime and characterising the beam losses for such beams~\cite{Schaumann_2019_ipac_part_strip_ions,Schaumann:2670544}. During one dedicated machine development (MD) session, PSI beams were successfully injected, accelerated, and stored. At the same time, it was found that the current LHC configuration poses a critical limitation on PSI operation. The collimation-system efficiency recorded during this test was found to be orders of magnitude worse than in standard operation and hence prohibitively low, effectively imposing an intensity limit. The worst losses were localised in the dispersion suppressor (DS) of the betatron-cleaning insertion. These findings clearly call into question the overall feasibility of operating the LHC with PSI beams of sufficient intensities for a future GF facility. In this article we study the underlying physical processes responsible for the worsening in collimation efficiency with PSI beams and investigate possible mitigation measures. In Section~\ref{sec:lhc-recap} we provide a brief recap of the LHC operation, collimation, and beam loss limitations.
Section~\ref{sec:experiments} describes the experiments performed in the LHC using PSI beams, as well as the encountered limitations. Section~\ref{sec:simulations} presents the interpretation of the observed losses with PSI beams and then describes the available simulation tools for the case of partially stripped ions. In the same section, the measured loss maps are compared against results from simulations. Finally, in Section \ref{sec:mitigations}, different mitigation strategies for the limitations found are outlined and investigated, including a new DS collimator, crystal collimation, and an orbit bump. \section{LHC collimation and beam losses} \label{sec:lhc-recap} \subsection{LHC collimation system} In the LHC, the proton-beam stored energy in Run~2 (2015-2018) at \unit[6.5]{TeV} exceeded \unit[300]{MJ}, approaching the design value of \unit[362]{MJ} planned for the operation at \unit[7]{TeV}. Uncontrolled losses of even a tiny fraction of the full beam could cause a superconducting magnet to quench or even cause material damage to exposed accelerator components. Therefore, a multi-stage collimation system is installed. It consists of more than 100 movable devices, installed to provide beam cleaning and passive protection against beam losses during regular operation and accidents~\cite{lhcdesignV1,assmann05chamonix,assmann06,bruce14_PRSTAB_sixtr,bruce15_PRSTAB_betaStar,valentino17_PRSTAB}. The betatron-halo cleaning is done by a three-stage collimator hierarchy in IR7, while off-momentum cleaning is done in IR3 by a similar system. Collimators for local triplet-magnet protection are located in all experimental IRs (IR1, IR2, IR5 and IR8) and, in addition, collimators for physics-debris cleaning are installed at the high-luminosity experiments in IR1 and IR5. In the three-stage collimation system of the LHC, the primary collimators (TCP) are the devices closest to the beam in the whole machine, and their purpose is to intercept any halo particles drifting out to large amplitudes~\cite{redaelli_coll}. Particles that are not absorbed by the TCPs, but scattered to larger amplitudes, should be caught by the secondary collimators (TCS), which are designed to intercept the secondary beam halo. As a third stage, shower-absorber collimators (TCLAs) are in place to intercept the tertiary halo and the products of hadronic showers leaking from the TCSs. Additional tertiary collimators (TCT) are placed around the experimental insertions. Figure~\ref{fig:collimators-around-LHC} illustrates the collimator locations around the LHC. It is important to note that beam-halo particles interacting with the TCPs are not always deflected onto the TCSs; in some cases they can escape the collimation insertion and complete further revolutions around the machine before being disposed of at collimators. Beam particles with momentum offsets escaping the collimation section can be lost on the cold aperture in the dispersion suppressor immediately downstream, where the rising dispersion affects their trajectories~\cite{bruce14ipac_DS_coll}. The DS in IR7 is thus the main bottleneck for beam-halo losses in the LHC, and the amount of local losses in the DS may impose limitations on the total achievable intensity, in particular for heavy-ion operation~\cite{epac2004,hermes16_ion_quench_test,hermes16_nim}. \begin{figure}[t] \centering \includegraphics[width=\columnwidth]{collimation_system_lhc.pdf} \caption{Conceptual sketch of the layout of LHC collimators around the ring.
Collimators are located mainly in IR3 and IR7, but also protect the experiments, the beam dump, and the injection regions.} \label{fig:collimators-around-LHC} \end{figure} \subsection{Beam loss measurements and loss maps} In order to prevent quenches and damage of sensitive equipment, beam loss monitors (BLMs) are installed around the LHC ring~\cite{blmSystem,holzer05,holzer08a}. They trigger a beam dump if the local losses exceed a pre-defined threshold. The LHC BLM system uses about 4000 ionization chambers, placed outside the cryostat on cold magnets and on other key equipment such as collimators. The BLM system provides a measurement of the electromagnetic and hadronic showers resulting from nearby impacts of beam losses. The BLMs provide data at a time resolution down to \unit[40]{$\mu$s}, i.e. around half the beam revolution time. In order to validate any operational configuration in the LHC, including the optical configuration and the collimator positions, validation tests called loss maps (LMs) are performed. During these tests a safe, low-intensity beam is artificially excited to create losses, while the BLMs provide a continuous measurement of the loss distribution around the ring. An example of a loss map from standard operation with $^{208}$Pb$^{82+}$~ ions can be seen in the top graph of Fig.~\ref{fig:example-loss-map}. Loss maps are carried out at all stages of the LHC operational cycle. Firstly, the beams are injected into the machine at \unit[450]{GeV}, at the so-called injection plateau. After the injection of bunches is finished, the energy of the two beams is increased during the energy ramp, which lasts about 20 minutes and during which the first optics changes occur. Once at top energy, the optics is adjusted in order to achieve a lower $\beta$-function at the collision points, which is called the squeeze. Then, the beams are put in collision, and this part of operation is called physics. \begin{figure*}[t] \centering \includegraphics[width=\textwidth]{two_lms_wide2.pdf} \caption{LHC loss maps, indicating losses along the ring circumference. Black bars represent losses intercepted at the collimators. Blue regions represent the parts of the ring where superconducting magnets are located, and the red regions correspond to the warm part of the ring. The top plot shows a loss map using a regular $^{208}$Pb$^{82+}$~ ion beam at \unit[6.37] {Z TeV} and the bottom plot shows a loss map with PSI ($^{208}$Pb$^{81+}$~) taken at \unit[6.5] {Q TeV}, where $Z$ is the ion atomic number and $Q=Z-1$ is the ion charge. } \label{fig:example-loss-map} \end{figure*} \section{PSI beam tests in the LHC} \label{sec:experiments} \subsection{Experimental setup} The collimation system in the LHC has been designed and optimised for proton operation, and it is important to evaluate its performance for any other species considered for operation. The first run with PSI beams in the LHC was performed during MD studies in July 2018 \cite{Schaumann_2019_ipac_part_strip_ions,Schaumann:2670544}, where $^{208}$Pb$^{81+}$~ ions with one electron left were injected and stored in the LHC. While the main objective of this experiment was to demonstrate that PSI beams can be stored in the LHC with a good lifetime, it also gave the opportunity for first tests of the PSI collimation process. The beam provided by the injectors for this experiment consisted of two $^{208}$Pb$^{81+}$~ bunches spaced by \unit[200]{ns} per SPS injection into the LHC.
Each bunch featured an intensity of up to about $1.1\times10^{10}$ charges, or $1.3\times10^{8}$ ions. Because of machine-protection requirements, the total circulating beam intensity had to stay below $3\times10^{11}$ charges during the experiment. Table~\ref{tab:paramaters} lists all the beam and machine parameters used during the test. In Table~\ref{tab:summary} we list the detailed settings of the collimators used for the experiment and later for the simulation. \begin{table}[h] \begin{center} \caption{Collection of beam and machine parameters used in the experiments with $^{208}$Pb$^{81+}$~ and for the simulated beam in the SixTrack-FLUKA setup. The HL--LHC beam parameters are taken from Ref.~\cite{bruce20_HL_ion_report}.} \begin{tabular}{l c c } \hline \hline Parameter & LHC MD & HL--LHC \\ \hline Ion & $^{208}_{82}$Pb$^{81+}$ & $^{208}_{82}$Pb$^{81+}$ \\ Equiv. proton beam energy [TeV]& 6.5 & 7\\ PSI beam energy [TeV]& 526.5 \footnote{This is \unit[6.5]{TeV}$\times Q$, where $Q=Z-1$ is the charge number of the ion.}& 567 \\ Proton beam emittance [$\mu$m]& 3.5 & 2.5 \\ Ion beam emittance [$\mu$m] & 1.39 & 1.00 \\ Bunch population [Pb ions $\times10^{8}$]& 0.9 \footnote{The first fill during the MD featured $1.3\times 10^8$ ions.}& 1.8 \\ Number of bunches & 6 & 1240 \\ \hline $\beta^*$ in IP1/2/5/8 (Top energy) [m] & 1/1/1/1.5 & -- \\ \hline \hline \end{tabular} \label{tab:paramaters} \end{center} \end{table} The cleaning performance for $^{208}$Pb$^{81+}$~ was tested through dedicated loss maps at injection and at top energy. The latter case can be seen in Fig.~\ref{fig:example-loss-map} (bottom graph) together with the loss pattern obtained for $^{208}$Pb$^{82+}$~ beams (top graph). Losses are normalized by the signal recorded at the primary collimator with the highest losses. Both at injection and at top energy, severe losses are observed in the DS of IR7 with PSI beams. These losses turned out to be a real operational limitation: a beam dump was triggered two minutes after reaching top energy in the first fill. At the time, only 24 low-intensity bunches were stored, and the beam was dumped because of too high losses around $s=$\unit[20410]{m}, i.e. in the DS of IR7. In the second and last fill of the experiment, the number of bunches was reduced to six and the intensity per bunch was reduced to about $0.75\times10^{10}$ charges. This beam could successfully be accelerated to \unit[6.5]{Q TeV} and stored for about two hours~\cite{Schaumann_2019_ipac_part_strip_ions,Schaumann:2670544}. Still, the losses reached around 60\% of the dump-threshold level. \begin{table}[h] \begin{center} \caption{\label{tab:summary} Summary of the collimator settings at top-energy optics as used in the 2018 MD, as well as for the HL--LHC collision optics. The settings are given in units of the nominal one-sigma beam size, $\sigma _N$, expressed with respect to a proton-equivalent emittance of $\varepsilon=3.5\mu$m.
} \begin{tabular}{ l r r } \hline \hline Collimator name & LHC & HL--LHC \\ & [$\sigma_N$] & [$\sigma_N$] \\ \hline \multicolumn{3}{l}{Betatron cleaning IR7}\\ TCP & 5.0 & 5.0 \\ TCSG & 6.5 & 6.5 \\ TCLA & 10.0& 10.0 \\ TCLD &n/a &14.0 \\ \hline \multicolumn{3}{l}{Momentum cleaning IR3}\\ TCP & 15.0 & 15.0 \\ TCSG & 18.0 & 18.0 \\ TCLA & 20.0& 20.0 \\ \hline \multicolumn{3}{l}{Experimental areas}\\ TCT IR1/2/5 & 15 & 10 \\ TCT IR8 & 15 & 15 \\ \hline \hline \end{tabular} \end{center} \end{table} \subsection{Measured cleaning performance of PSI beams} In the second fill, the collimation performance at \mbox{\unit[6.5]{Q TeV}} was tested through loss maps. The worst losses were observed for Beam~1 in the horizontal plane (B1H), and the measured loss map for this case can be seen in the bottom graph of Fig.~\ref{fig:example-loss-map}, showing the full-ring loss pattern, and in the bottom graph of Fig.~\ref{fig:psi-lossmaps-b1h} (IR7 zoom). The peak losses were recorded by the BLMs located around the quadrupole magnet in cell 11, at position $s\approx$\unit[20430]{m} from the center of IR1 (see the local dispersion plotted in Fig.~\ref{fig:psi-lossmaps-b1h}). Note that Beam~2 did not reach top energy; therefore, there are no data for it \cite{Schaumann:2670544}. The performance of the collimation system with PSI beams is significantly worse than for proton or fully stripped ion beams \cite{bruce14_PRSTAB_sixtr,hermes16_nim}. The recorded magnitude of the highest losses on the cold aperture of the DS, normalized to the intensity impacting on the TCP, is about four orders of magnitude larger than for standard proton operation, and about two orders of magnitude larger than for standard $^{208}$Pb$^{82+}$~ operation, as can be seen in Fig.~\ref{fig:psi-lossmaps-b1h}. The BLM signal is also about four times larger at the peak in the DS than on the collimators; however, this does not mean that primary losses occur in the DS. Instead, this is due to the fact that the BLM response per locally lost particle is different at the two locations, because of the local geometry and materials. \begin{figure}[ht] \centering \includegraphics[width=\columnwidth]{two_lms_zoom_dispersion.pdf} \caption{Loss map for Beam~1 in the horizontal plane, recorded during a standard $^{208}$Pb$^{82+}$~ fill (top) and during the experiments with $^{208}$Pb$^{81+}$~ beams that were carried out in the LHC in 2018 at top energy (bottom). The PSI case shows a visible excess of the BLM signal around $s=$\unit[20430]{m}. The machine lattice and the horizontal dispersion generated locally from the TCP are shown at the very top of the figure.} \label{fig:psi-lossmaps-b1h} \end{figure} \section{Simulations of PSI beam losses in the LHC} \label{sec:simulations} \subsection{Process of DS losses through stripping action} We hypothesize that the loss pattern in Fig.~\ref{fig:psi-lossmaps-b1h} can be explained by the stripping action of the collimators in combination with the increasing value of the dispersion in the DS (see the top plot of Fig.~\ref{fig:psi-lossmaps-b1h}). When passing through the TCP, partially stripped $^{208}$Pb$^{81+}$~ ions from the beam halo may lose their electron. In this stripping process, they do not experience an angular deflection sufficient to be intercepted by the TCSs. The resulting fully stripped $^{208}$Pb$^{82+}$~ ions have energies very close to nominal, i.e.
\unit[7]{$Z$ TeV}, but have an altered charge-to-mass ratio and thus a magnetic rigidity that differs from that of the circulating beam by about 1.2~\%. If they escape the betatron-cleaning insertion, the dispersion in the DS can push their trajectories onto the cold aperture. The simulation tools used for this study do not support the tracking of partially stripped ions. Therefore, a simplified simulation setup was used, assuming a 100\% stripping efficiency, i.e. PSI immediately lose their electron when they impact on a primary collimator, such that any ions escaping the collimators are fully stripped. This assumption comes from an analysis of the mean free path (MFP) for the stripping process. The MFP was calculated to be \unit[0.04]{mm} for $^{208}$Pb$^{81+}$~ ions in the carbon-fibre-carbon material that the 0.6~m-long primary collimators are made of, using the methods in Ref.~\cite{Tolstikhina_2018}. The MFP can be compared to the distance traveled in the material for the different impact parameters, i.e. the distance between the collimator edge and the impact point, where we assume that any impact takes place at the phase with maximum amplitude in phase space. This determines the impact angle. In this study we have considered impact parameter values of \unit[0.1]{$\mu$m}, \unit[1]{$\mu$m} and \unit[10]{$\mu$m} (as in Ref.~\cite{bruce14_PRSTAB_sixtr}), and we obtained traversal distances of \unit[4.618]{mm}, \unit[46.18]{mm}, and \unit[461.8]{mm}, respectively. For each impact parameter value in the given range, the distance traveled inside the collimator material is at least two orders of magnitude larger than the MFP reported earlier. Therefore, it is a very good approximation to assume a full stripping of any PSI that approaches the TCPs. \subsection{Trajectories of fully stripped ions} To test the theory of the stripping action of the collimators as the explanation for the observed losses, we performed tracking simulations in MAD-X \cite{herr04,madx} of fully stripped ions emerging from the horizontal TCP. The trajectories of the fully stripped $^{208}$Pb$^{82+}$~ ions were calculated with an effective $\Delta p/p=-1/82$, originating at one of the TCP jaws. Those trajectories were tracked through the betatron-cleaning insertion and the downstream DS, where the point at which they are intercepted by the aperture was calculated. In this simplified study, we do not assume any angular kicks at the TCP, but instead that the particles are at the phase of maximum horizontal excursion in phase space, as would be the case when they first hit the TCP after a slow diffusion process. A selected range of trajectories with the physical aperture overlaid is shown in Fig.~\ref{fig:psi-lhc-experiment-fluka}. As seen in the figure, the MAD-X trajectories of $^{208}$Pb$^{82+}$~ escaping the TCP jaws bypass all downstream collimators and travel directly to the DS. We observe a calculated loss position very similar to the one measured in the machine (see Fig.~\ref{fig:psi-lossmaps-b1h}). This result strengthens the hypothesis on the origin of the large losses in the DS. Furthermore, it shows that, since the loss mechanism involves dispersion, the loss location is relatively constant regardless of which TCP and jaw caused the stripping. \begin{figure}[pt!] \centering \includegraphics[width=\linewidth]{comparison_three.pdf} \caption{The top plot shows trajectories of fully stripped off-rigidity $^{208}$Pb$^{82+}$~ ions escaping the B1 TCPs, as calculated with MAD-X.
The four trajectories depicted correspond to starting positions at both jaws of the horizontal and vertical TCPs. The middle plot shows the loss pattern simulated for PSI beams using the SixTrack--FLUKA simulations. The bottom plot shows the measured B1H loss map with $^{208}$Pb$^{81+}$~ beams at top energy. } \label{fig:psi-lhc-experiment-fluka} \end{figure} \subsection{Simulations of the LHC experiment} The next step to further investigate the stripping effect of the TCPs was to perform an integrated beam-tracking and collimator-interaction simulation. Using the coupling between the SixTrack \cite{schmidt94, sixtrack-web,bruce14_PRSTAB_sixtr} and FLUKA \cite{fluka14, Battistoni:2015epi} codes, described further in Refs.~\cite{mereghetti13_ipac,skordis18_tracking_workshop,pascal-thesis}, the aim was first to reproduce the measurement results and second to validate the proposed mitigation solutions (see Section~\ref{sec:mitigations}). These simulations track beam-halo particles through the magnetic lattice using SixTrack, accounting for a detailed aperture model to infer beam losses. When a particle hits a collimator, the particle-matter interaction is simulated using FLUKA, and any surviving particles that return to the beam vacuum are sent back to SixTrack for further magnetic tracking. Similar simulations have been extensively compared to measurements in previous publications for protons~\cite{bruce14_PRSTAB_sixtr,auchmann15_PRSTAB,bruce17_NIM_beta40cm,bruce19_PRAB_beam-halo_backgrounds_ATLAS} and Pb ions~\cite{hermes16_nim,hermes16_ipac_coupling}. The simulation was started at the upstream edge of the horizontal TCP, tracking $^{208}$Pb$^{82+}$~ ions with the electron already stripped, but in a machine configuration with the magnetic rigidity adjusted for $^{208}$Pb$^{81+}$~, i.e. \unit[526.5]{TeV}. To simulate the experiment we used $5\times10^5$ macro-particles hitting a collimator with an impact parameter of \unit[1]{$\mu$m} (to remain consistent with the regular $^{208}$Pb$^{82+}$~ simulations). The cleaning-hierarchy setup was reproduced as in the experiment, and it is listed in detail in Table~\ref{tab:summary}. Figure~\ref{fig:psi-lhc-experiment-fluka} shows the simulated loss distribution for the LHC configuration used during the PSI machine test, including the same optics and collimator settings. One can see the very good qualitative agreement between the measured loss map and the simulated one, with the highest cold loss peak at the aperture-impact location predicted with MAD-X \cite{madx}, which was used to estimate the single-pass trajectory for off-momentum particles. While the agreement on the peak losses (see Fig.~\ref{fig:psi-lhc-experiment-fluka}) is visible in the DS region around $s=\unit[20400]{m}$, some differences may be noticed in the TCP/TCS region ($s=\unit[19800-19900]{m}$). This apparent discrepancy can be explained by the fact that a full quantitative comparison in Fig.~\ref{fig:psi-lhc-experiment-fluka} cannot be made, since the BLM measurement is sensitive to the secondary shower particles that emerge outside of the impacted elements, while our simulations show the number of primary nuclei impacting on the aperture or disintegrating on the collimators. The measured loss pattern is also affected by cross-talk between nearby BLMs, where any BLM intercepts the shower not only from the element at which it is placed, but also from losses on nearby upstream elements.
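Independently of such shower effects, the stripping picture underlying these simulations can be checked with simple arithmetic. The sketch below uses only numbers quoted above; the effective grazing angle is inferred from the quoted impact-parameter/path-length pairs and is an assumption of the sketch, not an independent machine parameter.

```python
# Full-stripping check: path length in the TCP jaw vs. stripping mean free path.
mfp = 0.04e-3                        # stripping MFP of 208Pb81+ in the CFC jaw [m]
theta = 0.1e-6 / 4.618e-3            # grazing angle inferred from the quoted numbers [rad]

for b in (0.1e-6, 1.0e-6, 10.0e-6):  # impact parameters considered in the text [m]
    d = b / theta                    # straight-line path inside the jaw material
    print(f"b = {b * 1e6:4.1f} um -> path = {d * 1e3:7.2f} mm = {d / mfp:5.0f} MFPs")
# Even the shallowest impact traverses >100 stripping lengths, i.e. full stripping.

# Rigidity offset of the stripped ion (B*rho ~ p/Q at fixed momentum):
Z = 82
print(f"delta(B rho)/(B rho) = {(Z - 1) / Z - 1:+.2%}")  # ~ -1.2%, as quoted
```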
Previous studies for protons have shown that the agreement improves dramatically when a further simulation of the shower development and the energy deposition in the BLMs is performed~\cite{bruce14_PRSTAB_sixtr, auchmann15_PRSTAB}. Although the experiment was only performed for Beam~1, a simulation for Beam~2 was also carried out. A loss peak similar to that of Beam~1 was found in the simulation results. However, due to small asymmetries of the optics functions with respect to Beam~1, the Beam~2 loss peak is found further downstream. Figure~\ref{fig:psi-fluka-lhc-beam2} shows the result of the simulation for Beam~2. The agreement of the main loss position of fully stripped Pb ions, given by the trajectories and tracking simulations in Fig.~\ref{fig:psi-lhc-experiment-fluka}, with the measured one is a strong indirect demonstration that the stripping process is the main source of the observed worsening in cleaning performance. \begin{figure}[t] \centering \includegraphics[width=\columnwidth]{lhc_beam2_simulation.pdf} \caption{SixTrack--FLUKA simulation results for Beam~2. The presence of excess losses in the first cell after the dispersion suppressor (around $s=$\unit[7100]{m}) is visible, as for the case of Beam~1 (see Fig.~\ref{fig:psi-lhc-experiment-fluka}).} \label{fig:psi-fluka-lhc-beam2} \end{figure} \section{Mitigation techniques} \label{sec:mitigations} \subsection{Dispersion suppressor collimators} The High Luminosity LHC (HL--LHC)~\cite{hl-lhc-tech-design} will start operating in 2027 and will push forward the luminosity frontier, increasing the LHC integrated luminosity by about a factor of 10. Several hardware upgrades will be implemented for the collimation system of the HL--LHC. Among other upgrades not relevant for this work, new dispersion suppressor collimators -- called TCLD -- will be installed to protect the downstream DS region~\cite{redaelli14_chamonix}. During the current long-shutdown period (LS2, in 2019-2021) it is planned to install one TCLD collimator per beam in the DS regions around IR7. The primary purpose of those collimators is to intercept dispersive losses coming from the betatron-collimation insertion, and studies show that their presence can reduce the losses in the DS for both proton and ion beams \cite{hl-lhc-tech-design,bruce14ipac_DS_coll,hermes15_ipac}. To make space for the TCLD, a standard 8.33\,$\mathrm{T}$, 15~m-long main dipole will be replaced by two shorter, 5~m-long 11\,$\mathrm{T}$ dipoles, based on the Nb$_3$Sn superconductor, with the collimator assembly in the middle \cite{Zlobin:2019ven}. The planned location for the TCLD installation is at the longitudinal position $s$=\unit[20310]{m}; see Fig.~\ref{fig:hllhc-tcld-in-trajectories-and-losses}. Since the purpose of the TCLD is to catch dispersive losses, we investigate in detail whether it could potentially be used to intercept the fully stripped $^{208}$Pb$^{82+}$~ ions during PSI operation. A plot of the $^{208}$Pb$^{82+}$~ trajectories after electron stripping at the TCPs is shown in the top graph of Fig.~\ref{fig:hllhc-tcld-in-trajectories-and-losses}. The longitudinal position of the TCLD is indicated by the black line, showing also its expected operational opening of 14\,$\sigma$. The aperture of the fixed layout elements is also shown. It can be seen that the TCLD can indeed intercept the fully stripped $^{208}$Pb$^{82+}$~ ions before they reach the cold aperture of the DS magnets. Complete loss-map simulations were performed to confirm this finding more quantitatively. The collimator settings of Table~\ref{tab:summary} were considered.
The simulated loss distribution is shown in the middle and bottom graphs of Fig.~\ref{fig:hllhc-tcld-in-trajectories-and-losses}, for the cases with and without the TCLD, respectively. The TCLD efficiently catches the fully stripped ions, reducing the downstream cold losses by about four orders of magnitude. It is also noticeable that the TCLD becomes (in the case of operation with $^{208}$Pb$^{81+}$~) the collimator with the highest fraction of the absorbed energy. \begin{figure}[t] \centering \includegraphics[width=\linewidth]{hllhc_tcld_3.pdf} \caption{\label{fig:hllhc-tcld-in-trajectories-and-losses}The top figure shows trajectories calculated with MAD-X of fully stripped off-rigidity $^{208}$Pb$^{82+}$~ ions escaping the TCPs, together with the aperture model and the TCLD with a 14\,$\sigma$ opening. The middle plot shows the SixTrack-FLUKA coupling simulation of the loss map for HL--LHC version 1.2, including the TCLD at 14~$\sigma$ in cell 9. The bottom plot shows the simulated loss map for the same optics but without the TCLD, resulting in a loss pattern similar to the one observed in the LHC.} \end{figure} Another option involving dispersion-suppressor collimators would be to install one additional unit in the empty connection cryostat. The added value of that solution would be that no additional \unit[11]{T} magnets are needed and that the absorber would sit closer to the peak losses. Moreover, it is potentially possible to orchestrate the settings in such a way that the losses are split between the two collimators. A detailed study of this aspect, however, goes beyond the scope of this paper. \subsection{Power load on the \unit[11]{T} coil and the TCLD collimator} Even though the TCLD effectively intercepts the losses of fully stripped ions, the risk of quenching the downstream magnets must be assessed by taking into account the energy leaking out of the TCLD. To estimate this risk, we assume a quench limit of \unit[70]{mW/cm$^3$} for the \unit[11]{T} dipole~\cite{bottura18}, and a \unit[0.2]{h} minimum beam lifetime, according to the design specification of the HL--LHC collimation system. We also assume the maximum intensity for the $^{208}$Pb$^{81+}$~ beams to be the same as considered for $^{208}$Pb$^{82+}$~ in the HL--LHC, namely $2.2\times10^{11}$ ions~\cite{bruce20_HL_ion_report, hl-lhc-tech-design}. These assumptions correspond to a loss rate of about $3.1\times10^8$ ions per second on the TCP, carrying a total power of about 28~kW~\cite{hl-lhc-tech-design}. Furthermore, we assume that each $^{208}$Pb$^{82+}$~ ion impacting on the TCLD causes an energy deposition of \unit[$5\times10^{-7}$]{mJ/cm$^3$} in the coils of the downstream magnet \cite{anton-et-all-on-TCLD-losses}. This number is extracted from an energy-deposition study with FLUKA of betatron losses during standard $^{208}$Pb$^{82+}$~ operation. However, it should be noted that this is a very approximate number, since the impact distribution on the TCLD will be different for PSI losses. Therefore, for a more detailed assessment, specific energy-deposition simulations should be repeated for this case. The total maximum $^{208}$Pb$^{81+}$~ beam intensity that is acceptable without quenching can then be calculated to be $3\times 10^{11}$~Pb ions, which is beyond the baseline total Pb intensity for the ion runs in the HL--LHC. Therefore, it is not expected that the total PSI intensity will be severely limited by the shower on the downstream 11~T magnet.
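The chain of assumptions behind this estimate can be reproduced with the back-of-the-envelope sketch below. It is not a substitute for dedicated FLUKA energy-deposition simulations; the 30\% share of the losses reaching the TCLD is anticipated from the power-load estimate given below.

```python
e = 1.602e-19                  # J per eV
E_ion = 567e12 * e             # energy of one 208Pb81+ ion at top energy [J]

N, tau = 2.2e11, 0.2 * 3600.0  # assumed beam intensity [ions] and minimum lifetime [s]
rate = N / tau
print(f"loss rate  = {rate:.1e} ions/s")            # ~3.1e+08, as quoted
print(f"loss power = {rate * E_ion / 1e3:.0f} kW")  # ~28 kW, as quoted

quench = 70e-3                 # quench limit of the 11 T dipole coils [W/cm^3]
edep = 5e-7 * 1e-3             # energy deposition per ion impacting the TCLD [J/cm^3]
f_tcld = 0.3                   # share of the losses reaching the TCLD (estimate below)
print(f"max intensity = {quench / edep / f_tcld * tau:.1e} ions")  # ~3e+11
```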
To refine the intensity estimate, energy-deposition simulations should be performed using the realistic PSI impact distribution on the TCLD. From Fig.~\ref{fig:hllhc-tcld-in-trajectories-and-losses}, the power load on the TCLD during a 0.2~h beam-lifetime drop is estimated at about 30~\% of the total beam losses, i.e. about \unit[9]{kW}. A simple scaling of previous studies on other tungsten collimators~\cite{TCT_fede} shows that the TCLD risks reaching a peak temperature of 150~$^\circ$C or more, and that it could temporarily deform by a few hundred microns, although without any permanent damage to the collimator. Therefore, we do not expect severe limitations on total beam intensities below the assumed baseline. However, a more detailed assessment, including further simulations of the energy deposition and of the thermo-mechanical response of the TCLD, is needed to draw a firm conclusion. In addition to the 0.2~h beam-lifetime scenario, the impact of steady-state losses with a 1~h beam lifetime could also be studied, as done for the standard HL--LHC running scenario~\cite{hl-lhc-tech-design}. Since the TCLDs are planned to be installed in LS2, LHC Run~3, starting in 2021, will provide a unique opportunity to perform dedicated tests of the cleaning efficiency with $^{208}$Pb$^{81+}$~ ion beams with the final DS layouts. \subsection{Alternative mitigation measures} In case of problems with the TCLD collimators, or if further mitigation were needed, other alleviation techniques could be envisaged. We discuss these concepts here, although without detailed simulations. Crystal collimation is a novel technique being investigated for the LHC and HL--LHC \cite{daniele-thesis,scandale16,Mirarchi2017_crystals}. A bent silicon crystal is used instead of the amorphous carbon TCP in the standard collimation setup. Beam-halo particles incident on the crystal can enter a channeling regime, in which their trajectories are guided by the potential between crystalline planes. The angular deflection achieved by channeling in the crystal is much larger than the deflection achieved by scattering in an amorphous material, and beam-halo particles can be directed onto a single massive absorber with a large impact parameter, reducing the need for additional secondary collimators and absorbers. While the interaction of PSI with the crystal is currently not well characterised, crystal collimation has shown promise for improving the cleaning efficiency with heavy-ion beams~\cite{D'Andrea:2678781}, and it is proposed to study whether it can also alleviate the losses for PSI beams. If the impacting ions could be channelled by the crystal even after their electron has been stripped, they could be steered onto the absorber as in the standard crystal collimation scheme. This concept requires thorough experimental feasibility studies before being relied upon. A crystal test-stand installation is available in the LHC for collimation studies \cite{Mirarchi2017_crystals}, and it is planned to test this with PSI beams when they are available again in the LHC. As the stripping action of the collimators produces a secondary beam of similar particles, another mitigation strategy may involve an orbit bump to shift the losses to a less sensitive location. Orbit bumps have been successfully employed for the case of secondary beams created in the collisions between $^{208}$Pb$^{82+}$~ beams, formed by the process of Bound-Free Pair Production (BFPP)~\cite{klein01,prl07,prstabBFPP09,schaumann16_md_BFPPquench,jowett16_ipac_bfpp}.
The bump is used to shift the impact location of the secondary beam on the mechanical aperture out of the main dipole and into an empty connection cryostat, so that no impacts occur directly on superconducting magnets. A similar strategy can be considered for PSI secondary beams. A local closed-orbit bump could be used to move the loss location of the fully stripped $^{208}$Pb$^{82+}$~ beam, presently around $s$=\unit[20400]{m}, to the nearby empty connection cryostat. However, a rather large bump amplitude of about 6.5~mm would be needed to move the losses away from the quadrupole, and it needs to be studied whether such a bump is operationally feasible. It should also be noted that the total peak power of the local losses on the connection cryostat during PSI operation would be much higher than the power of the BFPP losses that were successfully handled in the LHC. The peak collimation losses are assumed to be transient and to last only for a few seconds, but steady collimation losses during standard operation might also cause limitations. Therefore, to conclude on the feasibility of this mitigation strategy, the power deposition in the connection cryostat and the downstream magnets needs to be studied in further detail for different scenarios of losses on the TCPs. \section{Conclusions and outlook} Partially stripped $^{208}$Pb$^{81+}$~ ions with one electron left were injected, accelerated, and stored in the LHC for the first time. The results of the first measurement of the collimation performance with those beams at the LHC show that the cleaning efficiency is prohibitively low for high-intensity operation, due to very high localized losses in the dispersion suppressor. We have presented studies showing that the likely reason for the poor collimation performance is the stripping action of the primary collimators, which causes nearly every ion that touches the collimator to lose its electron. If an ion does not fragment in the collimator, it re-enters the beam with a higher charge, causing it to be lost in the dispersion suppressor, where the dispersion rises. For a detailed investigation of the observed losses, tracking simulations were performed. The simulated impact location of the fully stripped $^{208}$Pb$^{82+}$~ ions, as well as the finding that this loss location is by far the dominant one, are in excellent agreement with the measurements. This demonstrates that the stripping action of the material in the primary collimators is indeed very likely responsible for the observed loss peaks, which have never been observed before at this magnitude with other particle species. The simulations were extended for the machine layout including the upgrades foreseen for the HL--LHC. Several mitigation strategies are under consideration for reducing the losses in the dispersion suppressor and thus increasing the intensity reach of partially stripped ion operation. The use of a new TCLD collimator, which is scheduled to be installed before the next LHC run, was identified as the most promising option, as it could efficiently intercept the fully stripped $^{208}$Pb$^{82+}$~ ions before they reach the cold magnets. A preliminary estimate extrapolated from energy-deposition studies for $^{208}$Pb$^{81+}$~ indicates that the magnet downstream of the TCLD is not likely to quench under the assumptions of beam lifetime and total beam intensity made for the $^{208}$Pb$^{82+}$~ beam, even with a potential full HL--LHC PSI beam.
On the other hand, there is a risk that the TCLD itself could suffer significant deformations, potentially limiting the maximum intensity to a factor of a few below the nominal HL--LHC design intensity for $^{208}$Pb$^{82+}$~. However, additional comprehensive energy-deposition studies should be carried out to better quantify the beam-intensity limit for safe operation of both the magnets and the TCLD. The non-standard beam particle type and the additional physical interactions that PSI can undergo are not supported by the available simulation tools. An active effort is directed at extending the existing simulation frameworks and at developing new ones that would enable future studies of various PSI collimation aspects. Two other mitigation techniques were discussed: crystal collimation and dedicated orbit bumps. Both options, alone or in combination with other mitigation techniques, may be considered useful in case the TCLD turns out not to be sufficient with regard to the quench limits of the nearby magnets, or if the load on the TCLD itself is too high. \begin{acknowledgments} The authors would like to thank the LHC operations crew for their help during the measurements. Additionally, the authors thank F.~Carra for the detailed and instructive comments on the tungsten-collimator power-deposition loads. \end{acknowledgments}
\section{Introduction} \label{sec:introduction} The standard model of particle physics describes neutrinos as neutral, spin-$1/2$, massless fermions, organized into three families with three different flavours: $\nu_e, \nu_\mu, \nu_\tau$. It has been observed, however, that neutrinos can change their flavour as they propagate through space. This phenomenon, known as neutrino oscillations, implies that neutrinos are massive. Measurements of neutrino oscillations from laboratory experiments have allowed us to estimate the mass-squared differences among the different neutrino mass eigenstates to be \citep{Fogli,Tortola}: \begin{eqnarray} \Delta m^2_{12}&=&7.5\times10^{-5} ~{\rm eV}^2\\ |\Delta m^2_{23}|&=&2.3\times10^{-3}~{\rm eV}^2, \end{eqnarray} which implies that at least two of the three neutrino families are massive. A lower bound on the sum of the neutrino masses can be set from the above measurements: $M_\nu\equiv\sum_i m_{\nu_i}\gtrsim0.06$ eV. Unfortunately, the above constraints do not allow us to determine which neutrino is the lightest, or whether it is massless or massive. This gives rise to two different \textit{hierarchies}: a normal hierarchy in which $0\leqslant m_1 < m_2 < m_3$, and an inverted hierarchy where $0\leqslant m_3 < m_1 < m_2$. The fact that neutrinos are massive is one of the clearest indications of physics beyond the standard model of particle physics, and so two of the most important questions in modern physics are: (a) what are the masses of the neutrinos, and (b) which hierarchy do they conform to? Answering these questions with laboratory experiments is extremely challenging. For instance, the current bound on the mass of the electron anti-neutrino from the KATRIN\footnote{\url{https://www.katrin.kit.edu/}} experiment is $m(\bar{\nu}_e) < 2.3$ eV \citep{Kraus_2005}, and is expected to improve to $m(\bar{\nu}_e) < 0.2$ eV in the coming years. On the other hand, tight upper limits on the neutrino masses have already been obtained by using cosmological observables such as the anisotropies in the cosmic microwave background (CMB), the clustering of galaxies, the abundance of galaxy clusters, the distortion in the shape of galaxies by weak lensing, the Ly$\alpha$ forest, etc. \citep{Hannestad_2003,Reid,Thomas,Swanson,Saito_2010,2011APh....35..177A,dePutter,Xia2012,WiggleZ,Zhao2012,Costanzi,Basse,Planck_2013,Costanzi_2014,Costanzi_2014b,Wyman_2013,Battye_2013,Hamann_2013,Beutler_2014,Giusarma_2014,Palanque-Delabrouille:2014jca,Planck_2015,ades}. These constraints arise because massive neutrinos delay the time of matter-radiation equality and slow down the growth of matter perturbations on small scales. At linear order, the resulting effects on the CMB and matter power spectrum are well known and understood, making cosmological observables extremely useful tools for putting upper limits on the sum of the neutrino masses. Currently, the tightest upper limit on the sum of the neutrino masses comes from combining data from the CMB, baryonic acoustic oscillations (BAO) and the Ly$\alpha$ forest: $M_\nu<0.12$ eV ($95\%$ CL) \citep{ades}. These limits also have strong implications for particle physics experiments such as searches for neutrinoless double beta decay \citep{delloro}. A new cosmological observable has recently been proposed that is expected to play an important role in future cosmology: 21cm intensity mapping \citep{Bharadwaj_2001A, Bharadwaj_2001B, Battye:2004re,McQuinn_2006, Chang_2008,Loeb_Wyithe_2008, Bull_2015}.
The idea of this technique is to measure the integrated 21cm emission from unresolved galaxies by performing a low angular resolution survey \citep{Santos_2015}. Since neutral hydrogen (HI) is a tracer of the underlying matter distribution of the Universe on large scales, the HI power spectrum is expected to follow the shape of the matter power spectrum, but with a different amplitude (the HI bias). This should allow tight constraints to be placed on cosmological parameters through measurements of the power spectrum of the 21cm field \citep{Bull_2015}. Intensity mapping therefore constitutes a promising new cosmological observable that can be used to constrain the neutrino masses \citep{Loeb_Wyithe_2008,2008PhRvD..78f5009P, Tegmark_2008, Metcalf_2010, 2011APh....35..177A, Oyama_2012,Shimabukuro_2014}. In order to do that, however, one needs to understand how the 21cm power spectrum is affected by the presence of massive neutrinos. The aim of this paper is to investigate the signatures left by massive neutrinos on the 21cm power spectrum in the post-reionization universe, in both the linear and fully non-linear regimes. We begin by studying the effects that massive neutrinos have on the spatial distribution of neutral hydrogen in real space. We do this by running hydrodynamic simulations with massless and massive neutrinos. We investigate how the presence of massive neutrinos affects the HI abundance and clustering properties and, ultimately, the signatures left by neutrinos in the 21cm power spectrum. We also forecast the constraints that the future Square Kilometre Array (SKA) radio telescope will place on the sum of the neutrino masses. We do this using the Fisher matrix formalism, where the spatial distribution of neutral hydrogen is modeled using hydrodynamic simulations at redshifts $3\leqslant z \leqslant 5.5$ (a redshift range where SKA1-LOW will collect data), and with a simple analytic model at redshifts $z\leqslant3$ (the redshift range covered by SKA1-MID). Our approach is conservative in the sense that: (i) at $z>3$ we compare models of the HI distribution using four different methods; and (ii) we embed the information from the 21cm power spectrum in the Fisher matrix forecasts in a conservative way. This paper is organized as follows. In Sec. \ref{sec:N-body} we describe the set of hydrodynamic simulations carried out for this work. Our simulations do not account for two crucial processes needed to properly model the spatial distribution of neutral hydrogen: HI self-shielding, and the formation of molecular hydrogen. We correct the outputs of our simulations {\it a posteriori} to account for these effects, and describe the four different methods we use to achieve this in Sec. \ref{sec:HI}. We also describe the method we use to model the spatial distribution of neutral hydrogen at redshifts $z\leqslant3$, which is not covered by our hydrodynamic simulations. In Sec. \ref{sec:neutrinos_effects} we investigate the effect of massive neutrinos on the abundance and spatial distribution of neutral hydrogen. We present our forecasts on the neutrino masses in Sec. \ref{sec:forecasts} and, finally, draw the main conclusions of this paper in Sec. \ref{sec:conclusions}.
\section{Hydrodynamic simulations} \label{sec:N-body} \begin{table*}[t] \begin{center} {\renewcommand{\arraystretch}{1.5} \resizebox{16cm}{!}{ \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|} \hline Name & Box & $\Omega_{\rm cdm}$ & $\Omega_{\rm b}$ & $\Omega_\nu$ & $\Omega_\Lambda$ & $\Omega_k$ & $h$ & $n_s$ & $10^{9}A_s$ & $\sigma_{8,0}$\\ & ($h^{-1}\rm{Mpc}$) & & & & & & & & & \\ \hline \hline $\mathcal{F}$ & 50 & $0.2685$ & $0.049$ & $0.0$ & $0.6825$ & 0 & $0.67$ & $0.9624$ & $2.13$ & $0.834$\\ \hline $\mathcal{\nu}^{+}$ & 50 & $0.2685$ & $0.049$ & $0.007075$ & $0.675425$ & 0 & $0.67$ & $0.9624$ & $2.13$ & $0.778$\\ $\mathcal{\nu}_{\rm m}^{+}$ & 50 & $0.261425$ & $0.049$ & $0.007075$ & $0.6825$ & 0 & $0.67$ & $0.9624$ & $2.13$ & $0.764$\\ $\mathcal{\nu}_{\rm m}^{++}$ & 50 & $0.25435$ & $0.049$ & $0.01415$ & $0.6825$ & 0 & $0.67$ & $0.9624$ & $2.13$ & $0.693$\\ \hline $\mathcal{C}^{+}$ & 50 & $0.287$ & $0.049$ & $0.0$ & $0.664$ & 0 & $0.67$ & $0.9624$ & $2.13$ & $0.868$\\ $\mathcal{C}^{-}$ & 50 & $0.25$ & $0.049$ & $0.0$ & $0.701$ & 0 & $0.67$ & $0.9624$ & $2.13$ & $0.797$\\ \hline $\mathcal{B}^{+}$ & 50 & $0.2685$ & $0.055$ & $0.0$ & $0.6765$ & 0 & $0.67$ & $0.9624$ & $2.13$ & $0.816$\\ $\mathcal{B}^{-}$ & 50 & $0.2685$ & $0.043$ & $0.0$ & $0.6885$ & 0 & $0.67$ & $0.9624$ & $2.13$ & $0.853$\\ \hline $\mathcal{H}^{+}$ & 50 & $0.2685$ & $0.049$ & $0.0$ & $0.6825$ & 0 & $0.71$ & $0.9624$ & $2.13$ & $0.886$\\ $\mathcal{H}^-$ & 50 & $0.2685$ & $0.049$ & $0.0$ & $0.6825$ & 0 & $0.63$ & $0.9624$ & $2.13$ & $0.777$\\ \hline $\mathcal{N}^+$ & 50 & $0.2685$ & $0.049$ & $0.0$ & $0.6825$ & 0 & $0.67$ & $1.0009$ & $2.13$ & $0.846$\\ $\mathcal{N}^-$ & 50 & $0.2685$ & $0.049$ & $0.0$ & $0.6825$ & 0 & $0.67$ & $0.9239$ & $2.13$ & $0.822$\\ \hline $\mathcal{A}^+$ & 50 & $0.2685$ & $0.049$ & $0.0$ & $0.6825$ & 0 & $0.67$ & $0.9624$ & $2.45$ & $0.894$\\ $\mathcal{A}^-$ & 50 & $0.2685$ & $0.049$ & $0.0$ & $0.6825$ & 0 & $0.67$ & $0.9624$ & $1.81$ & $0.769$\\ \hline \end{tabular} }} \end{center} \caption{Summary of our simulation suite. The simulation name indicates the parameter that has been varied with respect to the fiducial, $\mathcal{F}$, model. The superscript, $^+/^-$, designates whether the variation is positive/negative. The simulations $\nu_{\rm m}^+$ and $\nu_{\rm m}^{++}$ are massive neutrino simulations with the same value of $\Omega_{\rm m}$ as the fiducial model.} \label{tab_sims} \end{table*} We model the spatial distribution of neutral hydrogen by running high-resolution hydrodynamic simulations in 14 different cosmological models. The values of the cosmological parameters of our fiducial model are $\Omega_{\rm m}=0.3175$, $\Omega_{\rm cdm}=0.2685$, $\Omega_{\rm b}=0.049$, $\Omega_\nu=0$, $\Omega_\Lambda=0.6825$, $h=0.6711$, $n_s=0.9624$ and $\sigma_8=0.834$, in excellent agreement with the latest results from Planck \citep{Planck_2015}. In all simulations we have assumed a flat cosmology, and therefore the value of $\Omega_\Lambda$ is set to $1-\Omega_{\rm m}$, where $\Omega_{\rm m}=\Omega_{\rm cdm}+\Omega_{\rm b}+\Omega_\nu$ and $\Omega_\nu h^2\cong M_\nu/(94.1~{\rm eV})$. A summary of our simulation suite is shown in Table \ref{tab_sims}. The simulations can be split into two different groups. On the one hand, we have simulations in which the value of one of the parameters, $\Omega_{\rm cdm}$, $\Omega_{\rm b}$, $\Omega_\nu$, $h$, $n_s$, $A_s$, is varied (with respect to its value in the fiducial model) while the values of the other parameters are kept fixed.
We use these simulations in our Fisher matrix analysis to investigate degeneracies between cosmological parameters, and to forecast the constraints that the SKA will place on the neutrino masses. The simulations belonging to this group are $(\mathcal{F}, \nu^+, \mathcal{C}^+, \mathcal{C}^-, \mathcal{B}^+, \mathcal{B}^-, \mathcal{H}^+, \mathcal{H}^-, \mathcal{N}^+, \mathcal{N}^-, \mathcal{A}^+, \mathcal{A}^-)$. On the other hand, we have simulations in which we vary the value of $\Omega_\nu$, but keep $\Omega_{\rm m}$ fixed. This is the most natural choice for investigating the effect of massive neutrinos, since we assume that a fraction of the total matter content of the Universe is made up of neutrinos. We have run one simulation with $M_\nu=0.3$ eV, and another with $M_\nu=0.6$ eV. Even though these neutrino masses are ruled out by the most recent constraints that combine CMB, BAO and Ly$\alpha$-forest data, our purpose in this paper is to investigate the impact of neutrino masses on the HI spatial distribution. Note that for a realistic sum of neutrino masses, the effect will be very small and may be completely hidden by sample variance in our simulations. Therefore, we decided to run simulations with neutrino masses higher than current bounds to properly resolve the effects of neutrinos on the HI distribution. For the neutrino mass sums used in our simulations, 0.3 eV and 0.6 eV, the three mass eigenstates are almost perfectly degenerate, so there is no need to distinguish between the three families. We use these simulations to investigate the impact of massive neutrinos on the matter and HI spatial distributions. The simulations belonging to this group are $(\mathcal{F},\nu_{\rm m}^+, \nu_{\rm m}^{++})$. In each simulation we follow the evolution of $512^3$ CDM and $512^3$ baryon particles (plus $512^3$ neutrino particles for simulations with $\Omega_\nu>0$) in a periodic box of size $50$ comoving $h^{-1}$Mpc, down to redshift 3. For each simulation we save snapshots at redshifts 5.5, 5, 4.5, 4, 3.5 and 3. Note that evolving our simulations down to $z=0$, for all the cosmological models considered in this paper, would be extremely computationally expensive, so at redshifts lower than $z=3$ we model the spatial distribution of neutral hydrogen using a simple analytic model described in Sec. \ref{subsec:low-z}. The simulations were run using the TreePM+SPH code {\sc GADGET-III} \citep{Springel_2005}. They incorporate radiative cooling by hydrogen and helium, as well as heating by a uniform UV background. Both the cooling routine and the UV background have been modified to obtain the desired thermal history, which corresponds to the reference model of \cite{viel13}, which has been shown to provide a good fit to the statistical properties of the transmitted Lyman-$\alpha$ flux. In our simulations, hydrogen reionization takes place\footnote{Note that this reionization redshift is in slight tension with the latest Planck results \citep{Planck_2015, Mitra_2015}. We do not expect our conclusions to be affected by this, however, since we use the same reionization history for all models, and are only interested in studying relative effects.} at $z\sim12$, and the temperature-density relation for the low-density IGM, $T = T_0(z)(1 + \delta)^{\gamma(z)-1}$, has $\gamma(z) = 1.3$ and $T_0(z = 2.4, 3, 4) = (16500, 15000, 10000)$ K.
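As a concrete illustration of the parameter choices in Table \ref{tab_sims}, the relation $\Omega_\nu h^2\cong M_\nu/(94.1~{\rm eV})$ and the flatness condition fully determine the composition of the massive neutrino runs. The following minimal Python sketch (ours, not part of the simulation pipeline) reproduces, up to rounding, the $\Omega_\nu$ and $\Omega_{\rm cdm}$ values of the $\nu_{\rm m}^{+}$ and $\nu_{\rm m}^{++}$ models:
\begin{verbatim}
# Minimal sketch (not the simulation pipeline): derive Omega_nu from
# M_nu via Omega_nu * h^2 = M_nu / (94.1 eV), then lower Omega_cdm so
# that Omega_m stays at its fiducial value, as in the nu_m^+/++ runs.
h = 0.6711
Omega_m, Omega_b = 0.3175, 0.049

for M_nu in (0.3, 0.6):                       # total neutrino mass in eV
    Omega_nu = M_nu / 94.1 / h**2
    Omega_cdm = Omega_m - Omega_b - Omega_nu  # Omega_m held fixed
    Omega_L = 1.0 - Omega_m                   # flatness: Omega_k = 0
    print(M_nu, round(Omega_nu, 6), round(Omega_cdm, 6), Omega_L)
\end{verbatim}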
For every simulation we generate 5000 quasar mock spectra at the snapshot redshifts, and tune the strength of the UV background to reproduce the observed mean transmitted flux of the Lyman-$\alpha$ forest; this information is needed for two of the HI modeling methods (the pseudo-RT 1 and pseudo-RT 2 methods, described below). Star formation is modeled using the multi-phase effective model of \cite{Springel-Hernquist_2003}. The simulation initial conditions are generated at $z=99$ using the Zel'dovich approximation. We compute the transfer functions of the different components using CAMB \citep{CAMB}. In simulations with massive neutrinos, the initial conditions were generated taking into account the scale-dependent growth present in those cosmological models. We note that the random seeds used to generate the initial conditions are the same in all simulations. We identify dark matter halos using both the Friends-of-Friends (FoF) algorithm \citep{FoF}, with linking length parameter $b=0.2$, and {\sc SUBFIND} \citep{Subfind,Dolag_2009}. We require a minimum of 32 CDM particles in a FoF halo for it to be identified. \begin{table*}[t] \centering{ {\renewcommand{\arraystretch}{1.8} \begin{tabular}{|l|cccc|} \hline & \multicolumn{4}{c|}{\bf{Method}} \\ & Pseudo-RT 1 & Pseudo-RT 2 & Halo-based 1 & Halo-based 2 \\ & (fiducial method) & & &\\ \hline HI self-shielding & \large{\ding{52}} & \large{\ding{52}} & \large{\ding{52}} & \large{\ding{52}}\\ Molecular hydrogen & \large{\ding{52}} & \large{\ding{52}} & \large{\ding{56}} & \large{\ding{56}}\\ Modeling $M_{\rm HI}(M,z)$ function & \large{\ding{56}} & \large{\ding{56}} & \large{\ding{52}} & \large{\ding{52}}\\ HI assigned to all gas particles & \large{\ding{52}} & \large{\ding{52}} & \large{\ding{56}} & \large{\ding{56}}\\ HI assigned only to gas particles in halos & \large{\ding{56}} & \large{\ding{56}} & \large{\ding{52}} & \large{\ding{52}}\\ Value of $\Omega_{\rm HI}(z)$ fixed a priori & \large{\ding{56}} & \large{\ding{56}} & \large{\ding{52}} & \large{\ding{52}}\\ Reference work & \cite{Rahmati_2013} & \cite{Dave_2013} & \cite{Bagla_2010} & This work\\ \hline \end{tabular} }} \caption{Differences between the four methods used to model the spatial distribution of neutral hydrogen.} \label{tbl:HI_methods} \end{table*} \section{HI distribution} \label{sec:HI} Our hydrodynamic simulations do not take into account two crucial physical processes needed to properly simulate the spatial distribution of neutral hydrogen: the formation of molecular hydrogen (H$_2$), and HI self-shielding. In this section we describe the various methods we use to correct for these two processes. We consider four methods here: two models (pseudo-RT 1 and pseudo-RT 2) that aim to mimic the result of a full radiative transfer calculation\footnote{In both pseudo-RT methods, we only account for the radiation from the UV background and do not consider radiation from local sources \citep{Miralda-Escude_2005, Schaye_2006, Rahmati_sources}. We are interested only in studying relative differences here, rather than absolute quantities, and do not expect our conclusions to change by neglecting the radiation from local sources.}, and two (halo-based 1 and halo-based 2) that were constructed by taking into account the fact that all HI should be within dark matter halos \citep{Villaescusa-Navarro_2014a}. The aim of the halo-based models is to provide the shape and amplitude of the function $M_{\rm HI}(M,z)$ (see subsection \ref{subsec:low-z} for further details).
We summarize the main features of each method in Table \ref{tbl:HI_methods}. In this section, we also explain the way we model the spatial distribution of neutral hydrogen at redshifts $z<3$. This redshift range is not covered by our simulations, but is needed to forecast the constraints that the SKA will set on the neutrino masses. \subsection{Pseudo-RT 1 (fiducial model)} In the first \textit{pseudo-radiative transfer} method, neutral hydrogen is assigned to every gas particle in the simulation. The HI self-shielding correction is modeled using the fitting formula of \cite{Rahmati_2013} (see their appendix A), which was obtained by performing radiative transfer calculations on top of hydrodynamic simulations. This correction states that the photo-ionization rate seen by a particular gas particle is a function of its density. The HI masses obtained in this way are further corrected to account for the presence of molecular hydrogen, which is only assigned to star-forming particles. We assume that the ratio of molecular to neutral hydrogen scales with the pressure, $P$, as \begin{equation} \frac{\Sigma_{\rm H_2}}{\Sigma_{\rm HI}}=\left(\frac{P}{P_0}\right)^{\alpha}~, \label{Things_Rmol} \end{equation} where $\Sigma_{\rm H_2}$ and $\Sigma_{\rm HI}$ are the molecular and neutral hydrogen surface densities, respectively. In our analysis we use $(P_0,\alpha)=(3.5\times10^4~{\rm cm^{-3}K}, 0.5)$, where the value of $\alpha$ differs from the value measured by Blitz \& Rosolowsky \citep{Blitz_2006}, $\alpha=0.92$. The reason for this choice is purely phenomenological -- using $\alpha=0.5$, we obtain much better agreement with the abundance of absorbers with large column densities than with $\alpha=0.92$. We use the above procedure as our fiducial method to model the spatial distribution of neutral hydrogen in the various simulated cosmologies, and to investigate the effects induced by massive neutrinos on the spatial distribution of neutral hydrogen (see Sec. \ref{sec:neutrinos_effects}). To study the robustness of our SKA neutrino mass forecasts, we also model the distribution of HI using three other methods, which we describe in the following subsections. \subsection{Pseudo-RT 2} The HI self-shielding correction can also be implemented using a method proposed by \cite{Dave_2013}, who report good agreement between this method and the full radiative transfer simulations of \cite{Faucher-Giguere_2010}. The method is also able to reproduce several observations, such as the HI mass function (HIMF) at $z=0$ \citep{Haynes_2011}. We briefly describe the method here, but refer the reader to \cite{Dave_2013} for further details. For every gas particle in the simulation, the HI fraction in photo-ionization equilibrium is computed by taking into account the strength of the UV background and the physical properties of the particle, such as its density and temperature. Then, the HI column density of every gas particle is computed by integrating the SPH kernel from the radius of the particle up to the radius (if one exists), $r_{\rm thres}$, at which the column density reaches a given threshold, which we set to $10^{17.3}~{\rm cm^{-2}}$. The method implements the HI self-shielding correction by assuming that $90\%$ of the hydrogen between $r=0$ and $r=r_{\rm thres}$ is fully neutral. Finally, the presence of molecular hydrogen is accounted for using the procedure described in the previous subsection.
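In code, the molecular correction of Eq.~\ref{Things_Rmol} amounts to a per-particle partition of the (self-shielding-corrected) neutral mass. The following minimal Python sketch shows this step; the array names and toy values are ours, for illustration, and are not taken from the simulation code:
\begin{verbatim}
import numpy as np

# Split the neutral hydrogen mass of each particle into atomic (HI)
# and molecular (H2) parts using Sigma_H2/Sigma_HI = (P/P0)^alpha,
# applied only to star-forming particles. P in cm^-3 K.
P0, ALPHA = 3.5e4, 0.5   # values adopted in the text

def split_molecular(m_neutral, pressure, star_forming):
    R = np.where(star_forming, (pressure / P0)**ALPHA, 0.0)  # H2/HI
    m_HI = m_neutral / (1.0 + R)     # atomic fraction = 1/(1+R)
    return m_HI, m_neutral - m_HI    # (HI mass, H2 mass)

# toy usage: three particles, only the last one star-forming
m_HI, m_H2 = split_molecular(np.ones(3),
                             np.array([1e3, 1e4, 1e5]),
                             np.array([False, False, True]))
\end{verbatim}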
\subsection{Halo-based 1} \label{subsec:Bagla} In contrast with the two methods above, where HI is assigned to each individual gas particle in the simulation according to its physical properties, the methods described in this and the next subsection are designed to assign HI to dark matter halos (see subsection \ref{subsec:low-z} for further details). These methods assume that a dark matter halo of mass $M$ at redshift $z$ hosts an amount of HI given by the deterministic function $M_{\rm HI}(M,z)$. \cite{Bagla_2010} proposed a parametrization of the function $M_{\rm HI}(M,z)$ as \begin{equation} M_{\rm HI}(M,z) = \left\{ \begin{array}{l l} f_3(z)\frac{M}{1+M/M_{\rm max}(z)} & \quad \text{if $M_{\rm min}(z)\leqslant M$} \\ 0 & \quad \text{otherwise,}\\ \end{array} \right. \label{M_HI_Bagla3} \end{equation} where the values of the parameters $M_{\rm min}(z)$ and $M_{\rm max}(z)$ correspond to halos with circular velocities equal to $v_{\rm min}=30$ km/s and $v_{\rm max}=200$ km/s at redshift $z$, respectively. The value of $f_3(z)$ is tuned to reproduce the HI density parameter $\Omega_{\rm HI}(z)$, defined as the ratio of the comoving density of neutral hydrogen at redshift $z$ to the critical density at $z=0$. For the redshifts covered by our simulations ($3\leqslant z \leqslant5.5$), we assume that the value of $\Omega_{\rm HI}(z)$ does not depend on redshift, and that its value is equal to $10^{-3}$, for both the halo-based 1 and halo-based 2 methods. This is in excellent agreement with observations. In practice, this method works as follows: from a simulation snapshot we identify all the dark matter halos. The total HI mass residing in a particular halo of mass $M$ at redshift $z$ is computed from Eq.~\ref{M_HI_Bagla3}. Finally, the HI mass within the halo is distributed according to some HI density profile, $\rho_{\rm HI}(r|M,z)$. We model the last step by splitting the total HI mass in a given halo equally amongst all the gas particles belonging to it. In \cite{Villaescusa-Navarro_2014a} it was shown that the above method is capable of reproducing the damped Lyman-$\alpha$ absorber (DLA) column density distribution function extremely well at redshifts $z\sim[2-4]$. In \cite{Padmanabhan_2015} the authors showed that it also reproduces the HI bias at $z=0$ and the product $b_{\rm HI}(z) \times \Omega_{\rm HI}(z)$ at $z\simeq0.8$ from 21cm observations. \subsection{Halo-based 2} While the halo-based 1 model is capable of reproducing many observables, it fails to reproduce one: the bias of the DLAs at $z\simeq2.3$ recently measured by the BOSS collaboration, $b_{\rm DLA}=(2.17\pm0.20)\beta_F^{0.22}$ \citep{Font_2012}, where $\beta_F$ is the Ly$\alpha$ forest redshift distortion parameter, whose value is of order 1. Here we propose a simple model that is capable of reproducing this observable as well. We propose a functional form \begin{equation} M_{\rm HI}(M,z) = \left\{ \begin{array}{l l} f_4(z)M & \quad \text{if $M_{\rm min}(z)\leqslant M$} \\ 0 & \quad \text{otherwise,}\\ \end{array} \right. \label{M_HI_Paco} \end{equation} where $M_{\rm min}(z)$ is chosen to be the mass of dark matter halos with circular velocities equal to $v_{\rm min}=62$ km/s at redshift $z$, and $f_4(z)$ is a parameter whose value is set to reproduce the value of $\Omega_{\rm HI}(z)$.
This simple model predicts a value of the HI bias at $z=2.3$ (see Eq.~\ref{eq:bias_HI}) equal to 2.15, in perfect agreement with the observational measurements.\footnote{Note that we are making the assumption that the bias of the DLAs is equal to the bias of the HI. This assumption is reasonable since the amount of HI in Lyman limit systems and in the Ly$\alpha$ forest is only of order $\sim10\%$.} \subsection{HI at $z<3$} \label{subsec:low-z} Running all of our high-resolution hydrodynamic simulations down to $z=0$ is infeasible given the computational resources we have access to. On the other hand, the redshift range $0\leqslant z < 3$ may be important when forecasting the constraints on the neutrino masses that will be achievable with the SKA. We therefore model the spatial distribution of neutral hydrogen at $z<3$ using a simple analytic model. In \cite{Villaescusa-Navarro_2014a} it was shown that the fraction of HI outside dark matter halos in the post-reionization era is negligible. One can therefore make the assumption that all HI resides in dark matter halos. Under this assumption, and following the spirit of the halo model \citep{Cooray_Sheth_2002}, we can then predict the shape and amplitude of the HI power spectrum in real space, given the following ingredients: the halo mass function, $n(M,z)$, the halo bias, $b(M,z)$, the linear matter power spectrum, $P_{\rm m}^{\rm lin}(k)$, and the functions $M_{\rm HI}(M,z)$ and $\rho_{\rm HI}(r|M,z)$, which represent the average HI mass and HI density profile of a dark matter halo of mass $M$ at redshift $z$. On large, linear scales the HI power spectrum in real space does not depend on the $\rho_{\rm HI}(r|M,z)$ function, but only on $M_{\rm HI}(M,z)$, and it is given by \begin{equation} P_{\rm HI}(k,z)=b_{\rm HI}^2(z)P_{\rm m}(k,z)~, \end{equation} where the HI bias, $b_{\rm HI}(z)$, is given by \begin{equation} b_{\rm HI}(z)=\frac{\int_0^\infty n(M,z)b(M,z)M_{\rm HI}(M,z)dM}{\int_0^\infty n(M,z)M_{\rm HI}(M,z)dM}~. \label{eq:bias_HI} \end{equation} The 21cm power spectrum is given by \begin{eqnarray} P_{\rm 21cm}(k,z)&=&\overline{\delta T_b}^2(z)b_{\rm HI}^2(z)\left(1+\frac{2}{3}\beta(z)+\frac{1}{5}\beta^2(z)\right)\\ \nonumber &&\times ~P_{\rm m}(k,z), \label{eq:P21} \end{eqnarray} where $\beta$ is the redshift-space distortion parameter, given by $\beta(z)=f(z)/b_{\rm HI}(z)$, with $f(z)$ the growth rate at redshift $z$. The factor in parentheses arises from the Kaiser formula \citep{Kaiser_1987}. The value of $\overline{\delta T_b}(z)$ is given by \begin{equation} \overline{\delta T_b}(z) = 189\left(\frac{H_0(1+z)^2}{H(z)}\right)\Omega_{\rm HI}(z)h~{\rm mK}~, \label{eq:delta_Tb} \end{equation} where $H(z)$ and $H_0$ are the values of the Hubble parameter at redshift $z$ and today, respectively, and $h$ is the value of $H_0$ in units of $100~{\rm km~s^{-1} Mpc^{-1}}$. $\Omega_{\rm HI}(z)$ likewise does not depend on $\rho_{\rm HI}(r|M,z)$, but only on the function $M_{\rm HI}(M,z)$: \begin{equation} \Omega_{\rm HI}(z)=\frac{1}{\rho_{\rm c, 0}}\int_0^\infty n(M,z)M_{\rm HI}(M,z)dM~. \label{eq:Omega_HI} \end{equation} The above equations make clear the central role played by the function $M_{\rm HI}(M,z)$ in studies of 21cm intensity mapping in the post-reionization era. Modeling this function is therefore all we need to predict the shape and amplitude of the HI/21cm power spectrum on linear scales.
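These ingredients are straightforward to assemble numerically. The sketch below (ours, with externally supplied tables for $n(M,z)$, $b(M,z)$ and $P_{\rm m}(k,z)$, e.g. from CAMB and halofit) combines the parametrization of Eq.~\ref{M_HI_Bagla3} with Eqs.~\ref{eq:bias_HI}--\ref{eq:Omega_HI}:
\begin{verbatim}
import numpy as np

# Minimal sketch of the z < 3 halo-model pipeline. M is a halo mass
# grid (h^-1 Msun) with tabulated mass function n(M) and bias b(M);
# Pm is the matter power spectrum on a k-grid. Inputs are placeholders.

def M_HI_bagla(M, f3, Mmin, Mmax):
    """Halo-based 1 prescription, Eq. (M_HI_Bagla3)."""
    return np.where(M >= Mmin, f3 * M / (1.0 + M / Mmax), 0.0)

def hi_bias(M, n, b, MHI):
    """HI-mass-weighted halo bias, Eq. (eq:bias_HI)."""
    w = n * MHI
    return np.trapz(b * w, M) / np.trapz(w, M)

def omega_hi(M, n, MHI, rho_crit0):
    """Comoving HI density over rho_crit(z=0), Eq. (eq:Omega_HI)."""
    return np.trapz(n * MHI, M) / rho_crit0

def p21_linear(Pm, z, bHI, f_growth, OmHI, H0_over_Hz, h):
    """Linear 21cm monopole, Eqs. (eq:P21) and (eq:delta_Tb)."""
    dTb = 189.0 * H0_over_Hz * (1.0 + z)**2 * OmHI * h   # mK
    beta = f_growth / bHI
    return dTb**2 * bHI**2 * (1 + 2*beta/3 + beta**2/5) * Pm
\end{verbatim}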
We use the above equations to model the 21cm power spectrum at redshifts $z<3$, with $M_{\rm HI}(M,z)$ given by the halo-based 1 prescription \citep{Bagla_2010}. As discussed in Sec. \ref{subsec:Bagla}, the halo-based 1 model has three free parameters: $M_{\rm min}(z)$, $M_{\rm max}(z)$ and $f_3(z)$. While the values of the parameters $M_{\rm min}(z)$ and $M_{\rm max}(z)$ are chosen to correspond to halos with circular velocities of 30 km/s and 200 km/s, the value of $f_3(z)$ is fixed by requiring that $\Omega_{\rm HI}(z)$ reproduces the observational measurements. Since at $z<3$ observations disfavor models with constant $\Omega_{\rm HI}$, we follow \cite{Crighton_2015} and assume a redshift dependence $\Omega_{\rm HI}(z)=4\times10^{-4}(1+z)^{0.6}$, which fully determines the value of $f_3(z)$. For the halo mass function and halo bias we use the Sheth \& Tormen \citep{ST} and Sheth, Mo \& Tormen \citep{SMT} models, respectively. We use the matter power spectrum from halofit \citep{Halofit_2012} to evaluate $P_{\rm m}(k,z)$ for a given cosmological model, and compute the HI/21cm power spectra at redshifts $z=\{0, 0.25, 0.5, 0.75, 1, 1.25, 1.5, 1.75, 2, 2.5\}$. In cosmologies with massive neutrinos we use the CDM+baryon field, instead of the total matter density field, to compute the 21cm power spectrum (last term on the r.h.s. of Eq.~\ref{eq:P21}), and to evaluate the halo mass function and halo bias \citep{Ichiki-Takada,Castorina_2014}. We now briefly discuss the differences between applying the halo-based 1 model to our hydrodynamic simulations and the formalism we have described in this subsection. By putting the HI in the gas particles belonging to the dark matter halos of our simulations, we not only model the function $M_{\rm HI}(M,z)$, but also the HI density profile within halos, $\rho_{\rm HI}(r|M,z)$, which is ignored in the method above. The above formalism also implicitly assumes a scale-independent bias, and that the redshift-space distortions are accounted for by the Kaiser formula. The fully non-linear clustering and redshift-space distortions are, however, taken into account by placing the HI in the simulations. On large, linear scales both methods should give the same results, while on small scales the method described in this section will break down. As such, we conclude by emphasizing that the above methodology is limited to linear scales, i.e. scales on which the HI bias is constant and redshift-space distortions can be accounted for by the Kaiser formula. \section{Effect of massive neutrinos} \label{sec:neutrinos_effects} Here we study the effects induced by massive neutrinos on the spatial distribution of neutral hydrogen. We investigate how the HI abundance and clustering properties are affected by the presence of massive neutrinos by comparing the distribution of neutral hydrogen in the simulations $\nu_{\rm m}^+$ ($M_\nu\!=\!0.3$ eV) and $\nu_{\rm m}^{++}$ ($M_\nu\!=\!0.6$ eV) to that of the fiducial simulation, $\mathcal{F}$ ($M_\nu\!=\!0.0$ eV). In Fig. \ref{fig:Image} we show the spatial distribution of neutral hydrogen (top row), matter (middle row) and gas (bottom row) in a cosmology with massless neutrinos (left, simulation $\mathcal{F}$) and with $M_\nu=0.6$ eV neutrinos (right, simulation $\nu_{\rm m}^{++}$). The images have been created by taking a slice of $2~h^{-1}$Mpc width. The spatial distribution of (total) matter is shown over the whole box (i.e.
in a slice of $50\times50\times2~(h^{-1}{\rm Mpc})^3$), while the gas and HI images display a zoom over the region marked with a red square. As can be seen, the differences in the spatial distribution of matter, gas, and (in particular) neutral hydrogen between the two models are very small. \begin{figure*} \begin{center} \includegraphics[width=0.9\textwidth]{Image.png} \caption{Impact of massive neutrinos on the spatial distribution of neutral hydrogen (upper row), total matter (middle row) and gas (bottom row) at $z=3$. Panels on the left show the results for a massless neutrino cosmology, while panels on the right are for a cosmological model with $M_\nu=0.6$ eV neutrinos. The middle panels display the spatial distribution of matter in a slice of $50\times50\times2~(h^{-1}{\rm Mpc})^3$, while the top and bottom panels show a zoom into the region marked with a red square (the width of those slices is also $2~h^{-1}$Mpc).} \label{fig:Image} \end{center} \end{figure*} \subsection{HI abundance} \label{subsec:HI abundance} \begin{figure*} \begin{center} \includegraphics[width=0.95\textwidth]{M_HI.pdf}\\ \includegraphics[width=0.95\textwidth]{HI_CDDF.pdf}\\ \caption{\textit{Upper row:} Function $M_{\rm HI}(M)$ at redshifts $z=3$ (left), $z=4$ (middle) and $z=5$ (right) for the cosmological models with massless neutrinos (black), $M_\nu=0.3$ eV (magenta) and $M_\nu=0.6$ eV (green), when the HI is modeled using the pseudo-RT 1 method. For each dark matter halo we have computed the HI mass within it; the lines represent the running median, and error bars show the scatter around the mean. The bottom panels display the results normalized by the $M_{\rm HI}(M)$ function of the massless neutrino model. \textit{Bottom row:} Same as above, but for the HI column density distribution function. The observational measurements are from \cite{Noterdaeme_2012} ($z=[2-3.5]$) and \cite{Zafar_2013} (using the whole redshift range: $z=[1.5-5.0]$) for the results at $z=3$, and from \cite{Crighton_2015} ($z=[3.5-5.4]$) and \cite{Zafar_2013} (using only the redshift range $z=[3.1-5.0]$) for the plots at $z=4$ and $z=5$.} \label{fig:HI_abundance} \end{center} \end{figure*} We now investigate how the presence of massive neutrinos impacts the function $M_{\rm HI}(M,z)$. Modeling the HI distribution with the pseudo-RT 1 method, we computed the HI mass within each dark matter halo in the simulations with massless and massive neutrinos. In the upper row of Fig. \ref{fig:HI_abundance} we show the results at redshifts $z=3,4,5$. For a fixed dark matter halo mass, we find that halos in the massless and massive neutrino models contain the same HI mass to well within one standard deviation, although halos in the massive neutrino cosmologies do tend to host a slightly higher HI mass. The HI mass excess with respect to the massless neutrino case is $\sim7\%$ for the cosmology with 0.6 eV neutrinos, decreasing to $\sim3\%$ for the 0.3 eV cosmology, with a very weak dependence on redshift. Our results show that the HI mass excess is not uniform in mass: the most massive halos host a higher fraction of HI compared with the low-mass halos. The function $M_{\rm HI}(M,z)$ presents a cut-off at low masses, which can be seen clearly at $z=3$. This cut-off is not physical, but is due to the resolution of our simulations. In order to explore the physical cut-off, arising from the fact that a minimum gas density and length is required to have self-shielded HI, simulations with higher resolution are needed. We leave this for future work.
For the mass range accessible in our simulations, we find that the $M_{\rm HI}(M,z)$ function can be fitted by a power law of the form $M_{\rm HI}(M,z)=\left[M/M_0(z)\right]^{\alpha(z)}$. In Table \ref{tbl:M_HI} we show the best-fit values of $M_0$ and $\alpha$ for the three different cosmologies at redshifts $z=3,4,5$. The value of the slope, $\alpha$, increases with redshift for all of the models, while at fixed redshift $\alpha$ increases with the sum of the neutrino masses. This reflects a well known property: at a given redshift, the spatial distribution of matter on small scales in a cosmology with massive neutrinos is effectively younger (has grown less) than its massless neutrino counterpart \citep{Marulli_2011,Villaescusa-Navarro_2013,Castorina_2014,Costanzi_2014, Villaescusa-Navarro_2014,Massara_2014, Massara_2015}. \begin{table}[t] \centering{ {\renewcommand{\arraystretch}{1.6} \begin{tabular}{|c|c|cc|} \hline $M_\nu$ & ~~$z$~~ & $M_0$ & $\alpha$ \\ (eV) & & ($h^{-1}M_\odot$) & \\ \hline \multirow{3}{*}{0.0} & 3 & $0.012\pm0.003$ & $0.699\pm0.006$ \\ & 4 & $0.015\pm0.003$ & $0.712\pm0.005$\\ & 5 & $0.032\pm0.008$ & $0.740\pm0.006$\\ \hline \multirow{3}{*}{0.3} & 3 & $0.015\pm0.005$ & $0.706\pm0.007$ \\ & 4 & $0.031\pm0.008$ & $0.732\pm0.006$\\ & 5 & $0.033\pm0.006$ & $0.743\pm0.005$\\ \hline \multirow{3}{*}{0.6} & 3 & $0.016\pm0.005$ & $0.708\pm0.007$ \\ & 4 & $0.027\pm0.006$ & $0.728\pm0.006$\\ & 5 & $0.061\pm0.016$ & $0.760\pm0.007$\\ \hline \end{tabular} }} \caption{Best-fit parameters for $M_{\rm HI}(M,z)=(M/M_0)^\alpha$ as a function of redshift and cosmology.}\vspace{-1.5em} \label{tbl:M_HI} \end{table} Fig.~\ref{fig:Omega_HI} shows $\Omega_{\rm HI}(z)$ in each cosmology. We find that our fiducial model reproduces the values of $\Omega_{\rm HI}(z)$ obtained from the observations of \cite{Songaila_2010, Noterdaeme_2012, Crighton_2015} extremely well in the redshift range covered by our simulations. The redshift dependence of the function $\Omega_{\rm HI}(z)$ is very weak, although it is slightly more pronounced with increasing neutrino mass. \begin{figure}[t] \includegraphics[width=\columnwidth]{Omega_HI.pdf} \caption{Redshift evolution of $\Omega_{\rm HI}$ for cosmologies with massless neutrinos (black), $M_\nu=0.3$ eV (magenta) and $M_\nu=0.6$ eV (green), when the HI is modeled using the pseudo-RT 1 method. Values obtained from the observations of \cite{Noterdaeme_2012, Crighton_2015, Songaila_2010} are shown with red, blue, and orange markers, respectively.} \label{fig:Omega_HI} \end{figure} For a fixed redshift, the value of $\Omega_{\rm HI}(z)$ decreases as the sum of the neutrino masses increases. This result can be understood from Eq.~\ref{eq:Omega_HI}, taking into account the fact that $M_{\rm HI}(M,z)$ barely changes between cosmologies with massive and massless neutrinos. The decrement in $\Omega_{\rm HI}(z)$ is therefore due to the suppression in the abundance of halos that the presence of massive neutrinos induces \citep{Brandbyge_2010, Marulli_2011, Villaescusa-Navarro_2013, Castorina_2014, Ichiki-Takada, LoVerde_2014, Roncarelli_2015, Castorina_2015}. Using our fitting function for $M_{\rm HI}(M,z)$ from Table \ref{tbl:M_HI} (with a cut-off at $M=2\times10^{9}~h^{-1}M_\odot$), we have checked that using the massive/massless neutrino halo mass functions reproduces the decrement in $\Omega_{\rm HI}(z)$ induced by neutrinos.
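The fit is trivial to evaluate; for instance, with the massless neutrino values of Table~\ref{tbl:M_HI}:
\begin{verbatim}
# Best-fit M_HI(M,z) = (M/M0)^alpha for M_nu = 0.0 eV; masses in
# h^-1 Msun, with (M0, alpha) copied from the table above.
FIT_MNU0 = {3: (0.012, 0.699), 4: (0.015, 0.712), 5: (0.032, 0.740)}

def M_HI_fit(M, z):
    M0, alpha = FIT_MNU0[z]
    return (M / M0)**alpha

print("%.3e" % M_HI_fit(1e11, 3))  # HI mass of a 1e11 halo at z=3
\end{verbatim}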
We have also computed the HI column density distribution function\footnote{See Appendix B of \cite{Villaescusa-Navarro_2014a} for a description of the procedure used to calculate this.} for the cosmologies with massless/massive neutrinos; the results are shown in the bottom row of Fig.~\ref{fig:HI_abundance}. At $z=3$ we compare our results with the measurements by \cite{Noterdaeme_2012}. While these have a mean redshift $\langle z \rangle=2.5$, they cover a redshift range $z=[2-3.5]$, and we assume no redshift evolution down to $z=3$. The abundance of DLAs in our simulations at $z=4$ and $z=5$ is compared against the recent measurements of \cite{Crighton_2015}, which have data in the redshift range $z=[3.5-5.4]$. The column density distribution function of sub-DLAs in our simulations is compared at $z=3$ against the measurements by \cite{Zafar_2013}, obtained using the whole redshift range $z=[1.5-5.0]$, while at $z=4$ and $z=5$ we only use data in the redshift range $z=[3.1-5.0]$. We find that our fiducial model reproduces the observed abundance of DLAs and sub-DLAs very well at all redshifts. In the bottom panels we display the ratio between the HI column density distribution functions of the models with massive and massless neutrinos. We find that massive neutrinos suppress the abundance of DLAs and sub-DLAs at all redshifts, although the effect is stronger at higher redshift. This is due to the lower value of $\Omega_{\rm HI}(z)$ in cosmologies with massive neutrinos. We conclude that, for a given mass, dark matter halos in cosmologies with massless and massive neutrinos host, on average, the same amount of neutral hydrogen. The suppression of the halo mass function induced by massive neutrinos decreases the total amount of neutral hydrogen in the Universe, given that only halos above a certain mass host HI. This manifests itself in a lower value of $\Omega_{\rm HI}(z)$, and in a deficit in the abundance of DLAs and sub-DLAs in cosmologies with massive neutrinos with respect to the massless neutrino model. \subsection{HI clustering} \label{subsec:HI_clustering} We now investigate the impact of massive neutrinos on the clustering properties of neutral hydrogen. We focus our attention on the HI power spectrum, $P_{\rm HI}(k,z)$, the HI bias, $b_{\rm HI}(k,z)$, and the 21cm power spectrum, $P_{\rm 21cm}(k,z)$. In the upper row of Fig.~\ref{fig:HI_clustering} we show the HI power spectrum for the models with $M_\nu=0.0, 0.3, 0.6$ eV neutrinos at redshifts $z=3,4,5$, when the HI is modeled using the pseudo-RT 1 method. Our results show that the HI is more strongly clustered in cosmologies with massive neutrinos, with the clustering of the neutral hydrogen increasing with the sum of the neutrino masses.\footnote{Note that neutral hydrogen is also more clustered in cosmologies with warm dark matter than in the corresponding models with standard cold dark matter, as shown in \cite{Carucci_2015}.} The bottom panels of that row show the ratios of the HI power spectra for the massive and massless neutrino cosmologies. We find that the increase in power in the massive neutrino cosmologies, relative to the fiducial model, is almost independent of scale. We find, however, that the differences in the HI clustering between models increase with redshift. At higher redshift, and for the $M_\nu=0.6$ eV model, the increase in power is also more scale-dependent than at lower redshift.
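For reference, the auto- and cross-power spectra discussed here (and the bias ratio used below) can be estimated from density contrast fields gridded onto $N^3$ cells. The following bare-bones FFT estimator with spherical $k$-binning is an illustration of the measurement, not the actual pipeline used in this work:
\begin{verbatim}
import numpy as np

# Cross-power spectrum of two density contrast fields on an N^3 grid
# in a box of side L (h^-1 Mpc). delta1 = delta2 gives an auto
# spectrum; P_HI-m / P_m gives the bias defined below.
def cross_power(delta1, delta2, L, nbins=20):
    N = delta1.shape[0]
    d1, d2 = np.fft.rfftn(delta1), np.fft.rfftn(delta2)
    P3d = (d1 * np.conj(d2)).real * L**3 / N**6   # FFT normalisation
    kf = 2.0 * np.pi / L                          # fundamental mode
    kx = np.fft.fftfreq(N, d=1.0/N) * kf
    kz = np.fft.rfftfreq(N, d=1.0/N) * kf
    kmag = np.sqrt(kx[:, None, None]**2 + kx[None, :, None]**2
                   + kz[None, None, :]**2)
    edges = np.linspace(kf, kmag.max(), nbins + 1)
    idx = np.digitize(kmag.ravel(), edges)
    cnt = np.bincount(idx, minlength=nbins + 2)[1:nbins + 1]
    ksum = np.bincount(idx, weights=kmag.ravel(),
                       minlength=nbins + 2)[1:nbins + 1]
    Psum = np.bincount(idx, weights=P3d.ravel(),
                       minlength=nbins + 2)[1:nbins + 1]
    good = cnt > 0
    return ksum[good] / cnt[good], Psum[good] / cnt[good]
\end{verbatim}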
We emphasize that the HI power spectrum, defined as $P_{\rm HI}(k,z)=\langle \delta_{\rm HI}(\vec{k},z) \delta^*_{\rm HI}(\vec{k},z) \rangle$, does not depend on the value of $\Omega_{\rm HI}(z)$, which, as we have seen above, is different for each cosmology. We can easily see why the HI is more strongly clustered in cosmologies with massive neutrinos if we take into account that, for a fixed dark matter halo mass, the halo bias increases with the sum of the neutrino masses \citep{Marulli_2011,Villaescusa-Navarro_2014,Castorina_2014}. This happens because, as we saw above, massive neutrinos induce a suppression of the halo mass function. Thus, for a given mass, halos in cosmologies with massive neutrinos are rarer, and therefore more biased (for a detailed description see \cite{Massara_2014}). Since $M_{\rm HI}(M,z)$ barely changes amongst the cosmologies (or, to be more precise, increases only slightly with the sum of the neutrino masses), we should expect the neutral hydrogen to be more clustered in cosmologies with massive neutrinos, as we find.\footnote{We are also implicitly assuming that the low-mass cut-off in the $M_{\rm HI}(M,z)$ function is the same in cosmologies with massless/massive neutrinos, which we believe is a good assumption given that this function barely changes amongst the cosmologies in the mass range covered by our simulations.} In the middle row of Fig.~\ref{fig:HI_clustering} we plot the HI bias, defined as $b_{\rm HI}(k,z)=P_{\rm HI-m}(k,z)/P_{\rm m}(k,z)$, where $P_{\rm HI-m}(k,z)$ is the HI-matter cross-power spectrum. As expected, the stronger clustering of HI in massive neutrino cosmologies is reflected in a higher HI bias with respect to the fiducial, massless neutrino model. On large scales, the HI bias increases by $\sim\!10\%$ for the model with $M_\nu=0.3$ eV neutrinos, and by $\sim\!30\%$ for $M_\nu=0.6$ eV, with respect to the $M_\nu=0.0$ eV cosmology, with a very weak dependence on redshift. On large scales, the HI bias is more scale-dependent at higher redshift, as already noted by \cite{Carucci_2015}. It is also worth pointing out that at $z=3$, while the HI bias is flat for wavenumbers $k\lesssim 1~h~{\rm Mpc}^{-1}$ in the $M_\nu = 0$ eV model, the models with massive neutrinos exhibit a HI bias that depends mildly on scale. There are two possible explanations for this: 1) the bias is higher in cosmologies with massive neutrinos, and therefore more scale-dependent; and 2) the HI bias, defined as above, is scale-dependent in massive neutrino cosmologies. The second option arises because it has recently been found that the halo bias in cosmologies with massive neutrinos is scale-dependent, even on very large scales \citep{Villaescusa-Navarro_2014,Castorina_2014}. In \cite{Castorina_2014} it was pointed out that if the halo bias in cosmologies with massive neutrinos is defined as $P_{\rm h-cb}(k)/P_{\rm cb}(k)$, where $_{\rm cb}$ denotes the cold dark matter plus baryon field, then it becomes scale-independent on large scales and universal. We have checked whether defining the HI bias as $b_{\rm HI}(k,z)=P_{\rm HI-cb}(k,z)/P_{\rm cb}(k,z)$ helps to decrease the scale-dependence of the bias at $z=3$ for the cosmologies with massive neutrinos. The middle-left panel of Fig. \ref{fig:HI_clustering} shows the result of defining the HI bias with respect to the CDM+baryon field (dashed green line).
We find that with this definition the amplitude of the HI bias decreases, because the CDM+baryon field is more strongly clustered than the total matter field in cosmologies with massive neutrinos. We do not, however, see a significant suppression of the scale-dependence. We therefore conclude that the HI bias is more scale-dependent in cosmologies with massive neutrinos because its value is higher \citep{Scoccimarro_2001, Cooray_Sheth_2002, Sefusatti_2005, Marin_2010}. \begin{figure*} \begin{center} \includegraphics[width=0.95\textwidth]{Pk_HI.pdf}\\ \includegraphics[width=0.95\textwidth]{Pk_HI_bias.pdf}\\ \includegraphics[width=0.95\textwidth]{Pk_21cm.pdf} \caption{\textit{Upper row:} Power spectrum of the neutral hydrogen field for the fiducial model with massless neutrinos (black line), the model with $M_\nu=0.3$ eV (magenta line) and the model with $M_\nu=0.6$ eV (green line) at $z=3$ (left), $z=4$ (middle), and $z=5$ (right), when the HI is modeled using the pseudo-RT 1 method. The bottom panels display the ratio of the different HI power spectra to the fiducial model. The dashed vertical line represents the Nyquist frequency of our power spectrum measurements. \textit{Middle row:} Same as above, but for the HI bias, computed as $b_{\rm HI}(k)=P_{\rm HI-m}(k)/P_{\rm m}(k)$, with $P_{\rm HI-m}(k)$ being the HI-matter cross-power spectrum. The green dashed line is the HI bias computed with respect to the CDM+baryon field (see text for details). \textit{Bottom row:} Same as above, but for the 21cm field.} \label{fig:HI_clustering} \end{center} \end{figure*} In the bottom row of Fig.~\ref{fig:HI_clustering} we show the 21cm power spectrum for the models with massless and massive neutrinos. The amplitude of the 21cm power spectrum is lower in massive neutrino cosmologies than in the massless neutrino model. While this result may seem surprising, since we have seen above that HI clustering {\it increases} with the neutrino masses, it is straightforward to understand if we take into account that \begin{equation} P_{\rm 21cm}(k)=\overline{\delta T_b}^2(z)P_{\rm HI}^s(k), \end{equation} where $P_{\rm HI}^s(k)$ denotes the HI power spectrum in redshift-space, and that $\overline{\delta T_b}(z)$ depends linearly on the value of $\Omega_{\rm HI}(z)$ (see Eq.~\ref{eq:delta_Tb}). In other words, the amplitude of the 21cm power spectrum depends not only on the HI clustering, but also on the value of $\Omega_{\rm HI}(z)$. Therefore, even though the HI is more clustered in cosmologies with massive neutrinos, the lower value of $\Omega_{\rm HI}(z)$ in those cosmologies drives the suppression of power we find in the 21cm power spectrum. In contrast with what happens for the HI power spectrum, the ratio of the 21cm power spectra in the massive and massless neutrino cosmologies exhibits a characteristic dependence on scale. On very small scales the ratio is flat, while it decreases around $k\sim\!2\!-\!3~h\,{\rm Mpc}^{-1}$. The ratio appears to reach a minimum around $k\!\sim\!0.3~h\,{\rm Mpc}^{-1}$, although more realizations would be needed to confirm that this is not an artefact of sample variance. We conclude that the HI is more strongly clustered in cosmologies with massive neutrinos. This is because, for a fixed halo mass, the halo bias and HI mass increase with the sum of the neutrino masses.
On the other hand, the amplitude of the 21cm power spectrum decreases as the neutrino masses increase because it depends on the total amount of neutral hydrogen, $\Omega_{\rm HI}(z)$, which is lower in massive neutrino cosmologies. \section{SKA forecasts} \label{sec:forecasts} In this section we present Fisher forecasts for the neutrino mass constraints that will be achievable with measurements of the 21cm power spectrum from future intensity mapping experiments. We consider two surveys, both with Phase 1 of the Square Kilometre Array (SKA): a wide and deep survey at low redshift ($0 \lesssim z \lesssim 3$), using the SKA1-MID array, and a narrow and deep survey at higher redshift ($3 \lesssim z \lesssim 6$) using SKA1-LOW. The forecasts for the latter use 21cm power spectra measured from our high-resolution hydrodynamic simulations, which extend into the non-linear regime, while the former use only linear modes from power spectra derived using halofit \citep{Halofit_2012}. In all cases we include a number of nuisance parameters related to the method used to model the spatial distribution of neutral hydrogen, and introduce conservative priors from contemporary experiments to break degeneracies between cosmological parameters. We also consider more pessimistic choices of instrumental parameters than given in the baseline designs of the SKA1 sub-arrays. The result is a conservative set of forecasts for the neutrino mass constraints that one can expect to obtain with the SKA. Nevertheless, a number of systematic effects could further degrade the constraints; we assess their likely importance at the end of this section. \subsection{Fisher matrix formalism} To produce our forecasts, we adopt the Fisher matrix formalism for 21cm surveys that was described in \cite{Bull_2015}. The Fisher matrix can be derived from a Gaussian expansion of the likelihood for a set of 21cm brightness temperature fluctuation maps about a fiducial cosmological model. For a single redshift bin, the Fisher matrix can be written as \begin{equation} \label{eq:fisher} F_{ij} = \frac{1}{2} \int \frac{d^3k}{(2\pi)^3} V_{\rm eff}(\vec{k}) \frac{\partial \log P_{\rm 21cm}(k)}{\partial \theta_i} \frac{\partial \log P_{\rm 21cm}(k)}{\partial \theta_j}, \end{equation} where $\{\theta\}$ is a set of cosmological and astrophysical parameters to be constrained or marginalised over, and we have neglected cosmological evolution within the redshift bin. The effective survey volume is given by \begin{equation} V_{\rm eff}(\vec{k}) = V_{\rm phys} \left ( \frac{P_{\rm 21cm}}{P_{\rm 21cm} + P_{N}} \right )^2, \end{equation} with $V_{\rm phys}$ the comoving volume of the redshift bin. The measured power spectrum is a combination of the cosmological 21cm brightness temperature power spectrum and the instrumental noise, $P_{\rm tot} = P_{\rm 21cm} + P_{N}$, where we have assumed the noise to be Gaussian and uncorrelated, as well as uncorrelated with the signal. These are reasonable assumptions in the absence of more detailed instrumental simulations, and if shot noise is negligible \citep[this should be the case; see][]{Santos:2015bsa}. The form of the noise power spectrum depends on the type of radio telescope used to observe the 21cm emission. 
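Before specifying $P_N$, note that once $P_{\rm 21cm}(k)$, $P_N(k)$ and the logarithmic derivatives are tabulated, Eq.~(\ref{eq:fisher}) reduces, for an isotropic $V_{\rm eff}$, to a one-dimensional integral per parameter pair. A minimal sketch (ours, not the actual forecasting code):
\begin{verbatim}
import numpy as np

# Single-bin Fisher matrix of Eq. (eq:fisher) for isotropic V_eff:
# F_ij = int dk k^2/(4 pi^2) V_eff(k) dlnP/dth_i dlnP/dth_j.
# dlogP has shape (n_params, n_k).
def fisher_single_bin(k, P21, PN, Vphys, dlogP):
    Veff = Vphys * (P21 / (P21 + PN))**2
    w = k**2 * Veff / (4.0 * np.pi**2)
    npar = dlogP.shape[0]
    F = np.empty((npar, npar))
    for i in range(npar):
        for j in range(npar):
            F[i, j] = np.trapz(w * dlogP[i] * dlogP[j], k)
    return F

def dlogP_central(P_plus, P_minus, step):
    """Central difference of ln P from a +/- simulation pair
    (the differencing scheme described below)."""
    return (np.log(P_plus) - np.log(P_minus)) / (2.0 * step)
\end{verbatim}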
The basic division is between {\it interferometers}, which coherently cross-correlate the signals from pairs of receivers to observe certain Fourier modes on the sky (with mode wavelength corresponding to the inverse separation of the receivers), and {\it autocorrelation experiments}, which construct sky maps pixel-by-pixel from the detected (autocorrelation) signals of individual receivers for many different pointings. The relative merits of the two types are discussed in \cite{Bull_2015}, where models for their noise power spectra are also derived. We simply quote them here. The basic expression is \begin{equation} \frac{P_{N}}{r^2 r_\nu} = \frac{T^2_{\rm sys} S_{\rm area}}{t_{\rm tot} \nu_{\rm 21cm}} B^{-1}_\parallel B^{-2}_\perp \mathcal{I}, \end{equation} where $T_{\rm sys}$ is the system temperature, $S_{\rm area}$ is the total survey area, $t_{\rm tot}$ is the total observing time, $\nu_{\rm 21cm}$ is the rest frame 21cm emission frequency ($\approx 1420$ MHz), and $r_\nu = c\,(1+z)^2 / H(z)$. The system temperature is approximately the sum of the sky temperature, $T_{\rm sky} \approx 60\, \mathrm{K}\, (\nu / 300\, {\rm MHz})^{-2.5}$, and the instrumental temperature, $T_{\rm inst}$. There is an effective beam in the radial (frequency) direction due to the frequency channel bandpass, which we model as \begin{equation} B_\parallel = \exp{\left (-\frac{(k_\parallel r_\nu \delta\nu)^2}{16\log 2\, \nu_{\rm 21cm}^2}\right )}, \nonumber \end{equation} where $\delta \nu$ is the channel bandwidth (assumed to be 100 kHz), and $k_\parallel$ is the Fourier wavenumber in the radial direction. The remaining factor of $B^{-2}_\perp \mathcal{I}$ describes the sensitivity as a function of angular scale, and is given by \begin{eqnarray} &{\rm FOV} \big / n(d) & ~~~~{\rm (interferometer)} \nonumber\\ &\exp\left( \frac{(k_\perp r\, \theta_{\rm B})^2}{16 \log 2} \right) \frac{1}{N_d N_b} & {\rm ~~~~(autocorrelation)}. \nonumber \end{eqnarray} For interferometers, the important factors are the instantaneous field of view (${\rm FOV}$) and the baseline density distribution, $n(d)$, where $d$ is the baseline length (related to the transverse Fourier wavenumber by $k_\perp r = 2\pi d / \lambda$). For autocorrelation experiments, we have assumed a Gaussian beam of FWHM $\theta_{\rm B}$ for each element of an array with $N_d$ dishes and $N_b$ beams per dish. The model for the 21cm (signal) power spectrum depends on the range of redshifts and physical scales in question. Our high-resolution simulations, described above, cover significantly non-linear scales at $3 \lesssim z \lesssim 5.5$, but cannot be extended into the linear regime or below $z=3$ due to resolution requirements. For $z \ge 3$, we use the pseudo-RT 1 model as our fiducial HI model. We take the 21cm power spectrum measured from the simulations at each redshift, and then extrapolate it to larger scales using the halofit matter power spectrum, multiplied by a (scale-independent) bias that matches the amplitude of the spectra at $k = 0.2\, h$Mpc$^{-1}$. These scales are sufficiently linear for the halofit power spectrum to be a good approximation, and the bias has become almost scale-independent by this $k$ value at all relevant redshifts, as shown in Fig.~\ref{fig:HI_clustering}. At redshifts $z < 3$, we restrict ourselves to approximately linear scales only, $k \le 0.2\, h$Mpc$^{-1}$.
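Collecting the noise expressions above, a minimal sketch of the autocorrelation-mode noise power spectrum is given below; consistent units are the caller's responsibility, and the function names are ours. We return to the signal model immediately afterwards.
\begin{verbatim}
import numpy as np

# Autocorrelation noise model assembled from the expressions above:
# P_N = r^2 r_nu T_sys^2 S_area / (t_tot nu_21) * B_par^-1
#       * exp((k_perp r theta_B)^2 / (16 ln2)) / (N_d N_b).
def T_sky(nu_MHz):
    return 60.0 * (nu_MHz / 300.0)**-2.5            # K

def P_noise_autocorr(kpar, kperp, r, r_nu, nu_MHz, T_inst, S_area,
                     t_tot, N_d, N_b, theta_B, dnu, nu21):
    Tsys = T_inst + T_sky(nu_MHz)                   # K
    c16 = 16.0 * np.log(2.0)
    B_par = np.exp(-(kpar * r_nu * dnu)**2 / (c16 * nu21**2))
    beam = np.exp((kperp * r * theta_B)**2 / c16)   # Gaussian beam
    return (r**2 * r_nu * Tsys**2 * S_area / (t_tot * nu21)
            * beam / (B_par * N_d * N_b))
\end{verbatim}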
The power spectra are calculated by multiplying the halofit matter power spectrum by a bias function and a brightness temperature model derived from the halo-based 1 prescription for the function $M_{\rm HI}(M,z)$; see Section \ref{subsec:low-z}. In all cases, we assume that only the monopole of the redshift-space 21cm power spectrum can be recovered. In principle, one could use the full redshift-space power spectrum, which would also allow constraints on the linear growth rate to be extracted, but we forgo this to keep our analysis conservative, and because the higher-order multipoles extracted from the high-redshift simulations are noisier than the monopole, making it more difficult to differentiate them reliably numerically. We calculate the derivatives of $\log P_{\rm 21cm}$ required by Eq.~(\ref{eq:fisher}) numerically, using central finite differences: $\partial f/\partial x \approx [f(x+\Delta) - f(x-\Delta)] / 2\Delta$, where $\Delta$ is the finite difference step size and all other parameters are held fixed at their fiducial values. While central differences require twice as many simulated spectra as forward differences, they are more accurate for sufficiently small step sizes. The step size chosen for each parameter can be read off from the list of simulations in Table \ref{tab_sims}. \subsection{Survey specifications} Most current and planned 21cm intensity mapping experiments are optimised either for detecting the baryon acoustic oscillations at $z \sim 1$, or for studying the epoch of reionization at $z \gtrsim 6$. While it is not a dedicated IM experiment, Phase 1 of the Square Kilometre Array (SKA) will be able to cover the entire redshift range of interest here -- from $z=0$ to $z \sim 6$ -- so we adopt it as the reference experiment for our forecasts. SKA1 will consist of two sub-arrays: SKA1-MID, a mid-frequency dish array to be situated in South Africa, and SKA1-LOW, an array of low-frequency dipole antennas that will be constructed in Western Australia. SKA1-MID will support receivers covering several different bands, spanning the frequency range from $350$ MHz up to $\sim14$ GHz. Only Bands 1 and 2 are relevant for detecting redshifted 21cm emission; these are currently planned to cover the ranges $350 - 1050$ MHz and $900 - 1670$ MHz, respectively. For simplicity, however, we will assume a single band from $350 - 1420$ MHz with the specifications of Band 1 (summarized in Table \ref{tbl:specs}). The MID array is expected to consist of approximately $200$ dishes of diameter $15$ m, primarily designed for use as an interferometer. It will be able to perform IM surveys much more efficiently if used in an autocorrelation mode, however \citep[assuming that technical issues such as the effects of correlated noise can be mitigated; see][]{Bull_2015, Santos:2015bsa}, so we assume an autocorrelation configuration in our forecasts. With this setup, a total survey area of $\sim\! 25,000$ deg$^2$ should be achievable for a 10,000 hour survey; we take these values as our defaults in what follows, but also study the effect of changing $S_{\rm area}$ in Section \ref{sec:surveydependence}. SKA1-LOW will cover the band $50 \le \nu \le 350$ MHz ($3 \le z \le 27$), making it one of the few proposed IM experiments that overlap with the redshifts of our simulations, $3 \lesssim z \lesssim 6$. (The Murchison Widefield Array\footnote{\url{http://www.mwatelescope.org/}} also partially covers this range, $80 \le \nu \le 300$ MHz, $3.7 \le z \le 16.8$.)
We use only the high frequency half of the band, and modify the maximum frequency slightly to $375$ MHz in our forecasts, to better fit with the redshift coverage of our simulations. We use a baseline distribution based on the description in \cite{dewdney2013ska1}. All other specifications are given in Table \ref{tbl:specs}. The instantaneous field of view of SKA1-LOW is 2.7 deg$^2$ at $z=3$, increasing to 6 deg$^2$ at $z=5$. As our default, we consider a deep 10,000 hour survey over 20 deg$^2$. This is comparable to what will be required to detect the 21cm power spectrum from the epoch of reionization (EoR). A proposed `deep' EoR survey with SKA1-LOW will perform $1000$ hour integrations in the band $50-200$ MHz at five separate pointings on the sky, for example, with each pointing covering $\sim\! 20$ deg$^2$ at the reference frequency 100 MHz \citep{AASKA_EOR}. These pointings cover a total survey area of $\sim 20$ deg$^2$ at the centre of our chosen SKA1-LOW band ($\approx 290$ MHz), so the surveys could be performed `commensally'. We assume double the total survey time, however; while the EoR survey may be limited to observing only on winter nights to mitigate ionospheric and foreground contamination \citep{AASKA_EOR}, these effects are less of a restriction at higher frequencies. The dependence of our results on $S_{\rm area}$ and $t_{\rm tot}$ is studied in Section \ref{sec:surveydependence}. \begin{table}[t] \centering{ {\renewcommand{\arraystretch}{1.6} \begin{tabular}{|l|cc|} \hline & {\bf SKA1-LOW} & {\bf SKA1-MID} \\ \hline $T_{\rm inst}$ [K] & $40 + 0.1 T_{\rm sky}$ & 28 \\ $N_d \times N_b$ & $911 \times 3$ & $190 \times 1$ \\ $\nu_{\rm min}$ [MHz] & 210 & 375 \\ $\nu_{\rm max}$ [MHz] & 375 & 1420 \\ $A_{\rm eff}(\nu_{\rm crit})$ [m$^2$] & 925 & 140 \\ $S_{\rm area}$ [deg$^2$] & 20 & 25,000 \\ $t_{\rm tot}$ [hrs] & 10,000 & 10,000 \\ \hline \multirow{3}{*}{$z$ bin edges} & \multicolumn{1}{l}{2.75, 3.25, 3.75,} & \multicolumn{1}{l|}{0, 0.125, 0.375, 0.625,} \\ & \multicolumn{1}{l}{4.25, 4.75, 5.25,} & \multicolumn{1}{l|}{0.875, 1.125, 1.375,} \\ & \multicolumn{1}{l}{5.75} & \multicolumn{1}{l|}{1.625, 1.875, 2.2, 2.8} \\ \hline \end{tabular} }} \caption{Array specifications assumed in our forecasts. More detailed specifications are given in \cite{Santos:2015bsa}.} \label{tbl:specs} \end{table} \subsection{Model uncertainties and nuisance parameters}\label{sec:nuisance} As mentioned above, the neutrino mass constraints depend on being able to accurately measure the shape and amplitude of the 21cm power spectrum, also as a function of redshift. While the power spectra derived from our simulations are self-consistent given a particular HI model, most models are calibrated off current, imperfect data, and can be inconsistent with one another. Our forecasts are therefore subject to a number of modelling uncertainties, which we attempt to take into account in this section. (We also explore the dependence of our forecasts on the assumed fiducial model in Section \ref{sec:hidependence}.) 
\begin{table*}[t] \centering{ {\renewcommand{\arraystretch}{1.6} \resizebox{17cm}{!}{ \begin{tabular}{|l|c|c|c|} \hline \multirow{3}{*}{\bf Massive neutrino datasets} & \multicolumn{3}{c|}{$\sigma(M_\nu)$ / eV (95\% CL)} \\ \cline{2-4} & ~~~~~~~~~~~~~~~~~~~~ & \multicolumn{1}{l|}{+ Planck CMB} & \multicolumn{1}{l|}{+ Planck CMB} \\ & & & \multicolumn{1}{l|}{+ Spectro-z} \\ \hline Planck $M_\nu$ & --- & 0.461 & 0.094 \\ \hline SKA1-LOW & 0.311 & 0.208 & 0.118 \\ SKA1-MID & 0.268 & 0.190 & 0.104 \\ SKA1-LOW + SKA1-MID & 0.183 & 0.145 & 0.082 \\ \hline SKA1-LOW + Planck $M_\nu$ & --- & 0.089 & 0.076 \\ SKA1-MID + Planck $M_\nu$ & --- & 0.071 & 0.065 \\ SKA1-LOW + SKA1-MID + Planck $M_\nu$ & --- & 0.067 & 0.058 \\ \hline \end{tabular} } }} \caption{Marginal 2$\sigma$ (95\% CL) constraints on the neutrino mass, for various combinations of surveys and prior information.} \label{tbl:marginal} \end{table*} An important but poorly-constrained contribution to the HI power spectrum is the HI bias, which is a function of both redshift and scale. To take into account the uncertainty associated with the bias model, we incorporate a simple template-based bias parametrisation into our signal model. This is constructed by first measuring the bias from the simulations as a function of scale and redshift, $b(z, k)$. We then introduce an amplitude, $A$, and shift parameter, $\alpha$, into the definition of the monopole of the redshift-space 21cm power spectrum in each redshift bin, such that \begin{eqnarray} P^{0}_{\rm 21cm}(z_i, k) &=& \overline{\delta T_b}^2(z_i) \left [ A_i\, b_{\rm fid}(z_i, \alpha_i k) \right ]^2 \\ &&\times \left [ 1 + \frac{2}{3}\beta_{\rm fid} + \frac{1}{5}\beta_{\rm fid}^2 \right ] P^{\rm fid}_{\rm m}(z_i, k). \nonumber \end{eqnarray} We marginalise over the amplitudes in all of our forecasts, but the shift parameters are only marginalised for the high-redshift survey; the low-redshift forecasts use only linear scales by construction, where the bias has no scale-dependence, so the $\{\alpha_i\}$ are unconstrained. This marginalisation procedure is pessimistic in that it assumes no prior constraints on the redshift evolution of the HI bias; $A_i$ is left completely free in each bin, while there are in fact some existing constraints on its value. The $A_i$ parameter also absorbs the uncertainty in other parameters that affect the normalization, such as $\beta$, which we do not attempt to constrain separately. We have assumed somewhat more about the scale dependence -- the overall shape of the function is fixed -- but by allowing the shift parameter (and thus the scale at which scale-dependence kicks in) to be free in each redshift bin, we are still being reasonably conservative with our model. \subsection{Prior information}\label{sec:prior} Without additional information about the cosmological model parameters, the Fisher matrix can suffer from degeneracies, severely degrading the forecast constraints. As well as presenting results for the SKA1-LOW and MID surveys alone, we also include Fisher matrix priors for two datasets that will be available contemporaneously with SKA1: a full-sky CMB experiment (Planck), and a future spectroscopic galaxy redshift survey (e.g. Euclid or DESI).
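As a compact reference for the bias parametrisation above, the monopole model can be transcribed directly into code. This is a minimal sketch only; the function name, the callable interface for $b_{\rm fid}$ and $P^{\rm fid}_{\rm m}$, and the scalar arguments are our own choices:
\begin{verbatim}
import numpy as np

def P0_21cm(k, dTb, A_i, alpha_i, b_fid, beta_fid, P_m_fid):
    """Monopole of the redshift-space 21cm power spectrum in one
    redshift bin, with amplitude A_i and shift alpha_i applied to the
    fiducial bias template b_fid(k). b_fid and P_m_fid are callables
    returning the fiducial bias and matter power spectrum at k."""
    kaiser = 1.0 + (2.0 / 3.0) * beta_fid + beta_fid**2 / 5.0
    return dTb**2 * (A_i * b_fid(alpha_i * k))**2 * kaiser * P_m_fid(k)
\end{verbatim}
Marginalising over $\{A_i, \alpha_i\}$ then simply corresponds to adding rows and columns for these parameters to the Fisher matrix.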
For the CMB prior, we used an approximate Fisher matrix based on the 2015 Planck-only temperature + polarisation constraints on a two-parameter ($M_\nu + N_{\rm eff}$) extension to $\Lambda$CDM.\footnote{We used the \texttt{base\_nnu\_mnu\_plikHM\_TT\_lowTEB} MCMC chains; $N_{\rm eff}$ is the effective number of relativistic species.} To construct it, we took the public Planck MCMC chains for this model, calculated the covariance matrix for all parameters of interest, and then inverted it. This procedure discards the non-Gaussian part of the posterior distribution, which enforces consistency with the Gaussianity assumption of Fisher forecasting but loses some information compared with the actual Planck constraints. Considering the other caveats and simplifications inherent in Fisher forecasting, and the approximate Gaussianity of most subspaces of the posterior distribution, this is a reasonable approximation. Note that this method results in a prior that has been implicitly marginalised over a large number of systematic and instrumental effects, such as foregrounds and calibration errors. We also marginalised over the effective number of relativistic degrees of freedom, $N_{\rm eff}$, assuming that only the CMB gives information on this parameter. For the galaxy redshift survey prior, we performed separate Fisher forecasts for a future spectroscopic survey of $\sim 6 \times 10^7$ galaxies over 15,000 deg$^2$ from $0.65 \lesssim z \lesssim 2.05$, as was also used in \cite{Bull_2015}. This is a similar specification to the planned Euclid mission, and has comparable properties to DESI. Following \cite{Amendola:2012ys}, we took the bias to be $b(z) = \sqrt{1 + z}$, marginalised as a free parameter in each redshift bin of width $\Delta z = 0.1$. We also assumed that the information from the broadband shape of the power spectrum and redshift-space distortions can be used, and marginalised over a non-linear damping of small scales in redshift-space, $\sigma_{\rm NL}$. Conservatively, we assumed that no information on the neutrino mass is obtained from the galaxy survey. When combining the galaxy survey Fisher matrix with the low-redshift SKA1-MID IM Fisher matrix, we also assumed that the surveys are independent, although in reality they both probe the same underlying matter density field. We do not expect the effect of partially-overlapping survey volumes to be large (and note that the SKA1-LOW survey volume has no overlap). With the priors included, we are therefore forecasting for the following parameter set: \begin{eqnarray} \{ h, \Omega_c, \Omega_b, A_s, n_s, M_\nu \}_{\rm base} + \{ A_i, \alpha_i \}_{\rm bias} + \{ b_j, \sigma_{\rm NL} \}_{\rm gal.} \nonumber \end{eqnarray} \subsection{Neutrino mass constraints} The results of our forecasts are summarized in Table \ref{tbl:marginal} and Figs.~\ref{fig:correlations} and \ref{fig:mnu_sig8}. All forecasted errors are quoted at the 2$\sigma$ (95\%) CL. \begin{figure*}[t] \includegraphics[width=2\columnwidth]{LOW_correlations.pdf} \includegraphics[width=2\columnwidth]{MID_correlations.pdf} \caption{Forecast 2$\sigma$ marginal constraints on $M_\nu$ and several cosmological parameters, for various combinations of surveys. 
SKA1-MID constrains the cosmological parameters significantly better than LOW, but the neutrino mass constraints are comparable.} \label{fig:correlations} \end{figure*} \begin{figure}[t] \includegraphics[width=\columnwidth]{mnu_sigma8.pdf} \caption{Forecast marginal 1- and 2$\sigma$ constraints on $M_\nu$ and $\sigma_8$, for SKA1-LOW and Planck, and for the combination of all experiments considered here. The combined constraint (red) is much better than the individual constraints because multiple parameter degeneracies are broken by combining the datasets. The lower bound, CMB+BAO+Ly$\alpha$, and Planck 2015 95\% limits are shown as vertical dashed lines from left to right respectively.} \label{fig:mnu_sig8} \end{figure} For our approximate Planck 2015 Fisher matrix, the constraint on $M_\nu$ is 0.46 eV, which is consistent with the actual Planck 2015 temperature + polarisation upper limit of 0.49 eV \citep{Planck_2015}.\footnote{These figures can be compared with pre-data release forecasts for Planck \citep[e.g.][]{2003PhRvL..91x1301K, Perotto:2006rj, Kitching:2008dp}, which predicted $\sigma(M_\nu) \sim 0.3 - 0.9$ eV (95\% CL).} Our forecasts suggest that intensity mapping surveys with SKA1-MID and LOW will be able to moderately surpass the Planck constraints, yielding $\sigma(M_\nu) \approx 0.3$ eV without the addition of any prior information to break the strong correlations that exist between some of the cosmological parameters for the IM surveys. This improves to $\approx 0.2$ eV when LOW and MID are combined. Adding a Planck prior on the cosmological parameters only (i.e. ignoring information on $M_\nu$ from the CMB data) improves the constraint slightly, reaching $\sim 0.15$ eV for MID + LOW, but it is the inclusion of spectroscopic redshift survey data that most strongly breaks the cosmological parameter degeneracies; for a Planck CMB + spectro-z prior, we expect $\sigma(M_\nu) \approx 0.08$ eV for SKA1-LOW + MID. Similar precision can be gained by combining either of the IM surveys with the full Planck data (including the neutrino mass constraint), however, without the need to add the spectroscopic galaxy survey: we obtain $\sigma(M_\nu) = 0.067$ eV for SKA1-LOW + MID + Planck. This is a result of the IM and CMB constraints having different correlation directions for the covariances between $M_\nu$ and certain cosmological parameters -- Fig.~\ref{fig:mnu_sig8} shows how the IM and CMB data have somewhat complementary correlation directions in the $M_\nu-\sigma_8$ plane, for example, and Fig.~\ref{fig:correlations} shows correlations with other parameters. This complementarity is also present in other types of large-scale structure survey \citep{Carbone:2010ik, 2011APh....35..177A}. Adding the spectroscopic survey data improves the constraints on other cosmological parameters somewhat, resulting in a slight further improvement to $\sigma(M_\nu) = 0.058$ eV for the combination of SKA1-LOW + MID + Planck. For a best-fit value of $M_\nu = 0.06$ eV (i.e. at the minimum bound), this final figure would be sufficient for a $\gtrsim \!\! 2\sigma$ detection of non-zero $M_\nu$ from cosmological data alone. 
This would be a significant improvement over the 95\% upper limit of $M_\nu < 0.17$ eV from the Planck 2015 release, which combines CMB temperature and polarisation data with a compilation of BAO constraints \citep{Planck_2015},\footnote{These are the \texttt{Planck TT,TE,EE+lowP+BAO} constraints; see Eq.~(54d) of \cite{Planck_2015}.} and the best current upper limit of $M_\nu < 0.12$ eV, from Planck CMB + BAO + Ly$\alpha$ data \citep{ades}. Note that an upper limit of $M_\nu<0.095$ eV would also be enough to rule out the inverted hierarchy \citep{LesgourguesPastor}. It has been shown before that various large-scale structure surveys, including IM surveys, can provide strong constraints on $M_\nu$ \citep[e.g.][]{2008PhRvD..78f5009P, 2008PhRvD..78b3529M, Carbone:2010ik, 2011APh....35..177A, Audren_2013, Font_2014, AASKA_EORCosmo, Sartoris_2015}. For example, \cite{2008PhRvD..78f5009P} found a similar constraint, $\sigma(M_\nu)=0.075$ eV at $1\sigma$, when forecasting for Planck + SKA at $z \gtrsim 8$ (during the epoch of reionization). \cite{Font_2014} forecast a significantly stronger constraint from Planck + DESI at lower redshift of $\sigma(M_\nu)=0.024$ eV, improving to $0.011$ eV when Euclid, LSST, and Lyman-$\alpha$ forest data are also included. These studies all used different instrumental and modelling assumptions, however, and marginalised over different cosmological and nuisance parameters, so a direct comparison is not possible. To put our result into context, we therefore also produced Fisher forecasts for the spectroscopic survey with $M_\nu$ now included as a parameter, finding $\sigma(M_\nu) = 0.060$ eV for the combination of spectro-z + Planck $M_\nu$. Up to the same forecasting assumptions, the combination of IM surveys here should therefore be competitive with future spectroscopic surveys like Euclid and DESI in terms of a neutrino mass measurement. Perhaps more significant is the forecast of $\sigma(M_\nu) = 0.089$ eV for SKA1-LOW + Planck. While not quite enough for a $2\sigma$ detection of $M_\nu = 0.06$ eV, it is nevertheless a strong constraint, using a completely different redshift range and observation technique to other planned surveys. This is despite our pessimistic analysis, which marginalises over the amplitude and scale-dependence of the bias in each redshift bin, and uses only the monopole of the redshift-space power spectrum. The redshift range we assumed for LOW ($z \sim 3 - 6$) has several advantages for matter power spectrum measurements -- non-linear effects are important only on significantly smaller scales than at lower redshifts, and radiative transfer processes that affect the 21cm signal at higher redshift, in the EoR \citep{AASKA_EORmodel}, are not present. An IM survey, piggy-backed on the deep EoR survey that will be performed by SKA1-LOW anyway, may therefore be an interesting prospect for a more `systematic-tolerant' neutrino mass measurement survey \citep[although foreground contamination and instrumental systematics are still serious issues that would need to be resolved; see e.g.][]{Alonso:2014dhk, AASKA_EORfg, AASKA_IMfg}. \begin{figure}[t] \includegraphics[width=\columnwidth]{mnu_sigma8_hitypes.pdf} \caption{Forecast 2$\sigma$ constraints on $M_\nu$ and $\sigma_8$ for SKA1-LOW + Planck, for the different methods used to model the HI (see Sec.~\ref{sec:HI}).
The largest ellipses are obtained by employing the pseudo-RT 1 method (green, solid line) and the pseudo-RT 2 method (grey, dashed line); the smallest are for the halo-based 1 method (red, solid) and halo-based 2 method (blue, dashed).} \label{fig:mnu_sig8_hitype} \end{figure} \subsection{Dependence on HI prescription and bias} \label{sec:hidependence} The various methods used to model the spatial distribution of HI discussed in Section \ref{sec:HI} give somewhat different predictions for the amplitude of the 21cm power spectrum, especially at high redshift. Clearly this can impact the $M_\nu$ constraints by (e.g.) changing the SNR of the power spectrum detection, so in this section we investigate the sensitivity of $\sigma(M_\nu)$ to this choice. We concentrate on the SKA1-LOW + Planck constraints, including the Planck $M_\nu$ information. There is a spread of $\sigma(M_\nu)$ values (at 95\% CL), ranging from 0.089 eV for the default pseudo-RT 1 method, to 0.097 eV (pseudo-RT 2), 0.080 eV (halo-based 1), and 0.083 eV (halo-based 2); corresponding forecasts for $M_\nu$ vs. $\sigma_8$ for each HI prescription are shown in Fig.~\ref{fig:mnu_sig8_hitype}. At low redshift, uncertainty in the bias amplitude parameters is a potential limiting factor for the neutrino mass constraint. Without any bias priors, we find that SKA1-MID (plus the Planck prior) should achieve $\sigma(M_\nu) = 0.071$ eV. A 1\% prior on the bias in all redshift bins improves this slightly to $\sigma(M_\nu) = 0.066$ eV, while a 0.1\% prior further reduces it to $0.058$ eV. A high-precision bias model is therefore of some use in improving the neutrino mass constraints at these redshifts, although the gains are too small to justify the difficulty of reaching this level of accuracy in practice. At high redshifts, the constraints are also relatively insensitive to the bias priors -- for the combination SKA1-LOW + Planck, $\sigma(M_\nu)$ improves to $0.082$ eV when a 1\% prior is applied to the $A_i$ parameters. An additional 1\% prior on the $\alpha_i$ parameters yields $\sigma(M_\nu) = 0.079$ eV. The low redshift forecasts use 21cm power spectra calculated from the \cite{Bagla_2010} HI model for the function $M_{\rm HI}(M,z)$, which has two free parameters -- $v_{\rm min}$ and $v_{\rm max}$ -- as described in Section \ref{subsec:low-z}. These effectively set the normalisation of the 21cm power spectrum, and are constrained by the need to reproduce the observed HI density evolution. They are nevertheless subject to some uncertainty, so we also check the effect of allowing them to be free parameters. We find that there are strong correlations between $v_{\rm min}$ and $v_{\rm max}$ and the bias parameters, to the point where a weak prior on one of the two (e.g. $\sigma(v_{\rm max}) \sim 100$ km\,s$^{-1}$) is needed to make the Fisher matrix invertible for SKA1-MID alone. Once this has been applied, however, there is essentially no correlation between the Bagla parameters and $M_\nu$, meaning that their effect on $\sigma(M_\nu)$ is limited to changing the signal-to-noise ratio of the 21cm power spectrum detection. \subsection{Dependence on survey parameters} \label{sec:surveydependence} \begin{figure}[t] \includegraphics[width=\columnwidth]{sigma_mnu_sarea.pdf} \caption{Forecast 2$\sigma$ constraints on $M_\nu$ as a function of survey area, for the SKA1-LOW and SKA1-MID surveys combined with Planck. The current best upper and lower limits on the sum of the neutrino masses are shown as dashed lines.
The fiducial survey areas are marked as crosses.} \label{fig:mnu_sarea} \end{figure} Finally, we check the dependence of our results on the assumed IM survey parameters. Fig.~\ref{fig:mnu_sarea} shows $\sigma(M_\nu)$ as a function of the total survey area, $S_{\rm area}$, for SKA1-LOW and MID, combined with the full Planck Fisher matrix. Fig.~\ref{fig:mnu_ttot} shows the same as a function of survey time, $t_{\rm tot}$. For a fixed survey time of $t_{\rm tot} = 10^4$ hours, the dependence on survey area is relatively mild for both surveys -- SKA1-LOW yields essentially the same constraints for survey areas up to $100$ deg$^2$. Similarly, the MID constraints improve relatively little above $S_{\rm area} \approx 2000$ deg$^2$. In both cases the fiducial survey areas are close to optimal for the chosen survey time. Fixing the survey areas to their fiducial values, we find a greater sensitivity to the chosen $t_{\rm tot}$. Survey times larger than the fiducial value of 10,000 hours are largely impractical, but would be required for the constraints for either survey to cross the minimum mass value ($0.06$ eV) at the $2\sigma$ level -- MID would require 30,000 hours to reach this limit, while LOW would require 100,000 hours. MID provides a useful improvement over the current best constraint for a minimum survey time of $t_{\rm tot} \gtrsim 600$ hours, while LOW requires $t_{\rm tot} \gtrsim 3000$ hours. Note that all survey times should be thought of as `effective' values, as foreground cleaning and other effects will increase the noise level, requiring an increase in survey time to compensate. \section{Summary and conclusions} \label{sec:conclusions} Neutrinos are one of the most enigmatic particles in nature. The standard model of particle physics describes them as massless particles, while observations of so-called neutrino oscillations imply that at least two of the three neutrino species must be massive. Massive neutrinos are therefore one of the clearest indications of physics beyond the standard model. As such, one of the most important currently-unanswered questions in modern physics is: what are the masses of the neutrinos? Constraints on the neutrino masses arising from laboratory experiments are not very tight, $m(\bar{\nu}_e) < 2.3$ eV \citep{Kraus_2005}, but are expected to shrink down to $m(\bar{\nu}_e) < 0.2$ eV in the coming years. On the other hand, constraints obtained by using cosmological observables are very tight: $\sum_i m_{\nu_i} <0.12$ eV ($95\%$ CL) \citep{ades}. This constraint is obtained by combining data from the CMB, BAO, and the Ly$\alpha$-forest \citep{palanque-data}. \begin{figure}[t] \includegraphics[width=\columnwidth]{sigma_mnu_ttot.pdf} \caption{Forecast marginal 1$\sigma$ constraint on $M_\nu$ as a function of survey time, for the same combination of surveys as in Fig.~\ref{fig:mnu_sarea}. The SKA1-LOW IM survey is taken to be 20 deg$^2$.} \label{fig:mnu_ttot} \end{figure} In the near future, 21cm intensity mapping observations in the post-reionization era will emerge as a powerful new cosmological tool \citep{Bull_2015}. These observations can also be used to place extremely tight constraints on the neutrino masses. In order to extract the maximum information from these surveys, one needs to understand, from the theory side, the impact that massive neutrinos have on the spatial distribution of neutral hydrogen, both at the linear and the fully non-linear level.
In this paper we study, for the first time, the detailed effects that massive neutrinos have on the neutral hydrogen spatial distribution in the post-reionization epoch, focusing our attention on their impact on the HI clustering and abundance. We do this by running hydrodynamical simulations with massless and massive neutrinos that cover the redshift range $3\leqslant z \leqslant5.5$. The output of our simulations is corrected {\it a posteriori} to account for HI self shielding and the formation of molecular hydrogen. We use four different methods to model those processes. In our fiducial method (pseudo-RT 1), the HI self-shielding is corrected by employing the fitting formula of \cite{Rahmati_2013}, while we use a phenomenological relation where the fraction of molecular hydrogen depends on the hydrodynamical pressure, based on the Blitz \& Rosolowsky and THINGS observations \citep{Blitz_2006, Leroy_2008}, to model the formation of molecular hydrogen. We find that neutral hydrogen is more clustered in cosmologies with massive neutrinos, although its abundance, $\Omega_{\rm HI}(z)$, is lower. These differences increase with the sum of the neutrino masses, and are mainly due to the impact that massive neutrinos have on the spatial distribution of matter, which is well known (see Appendix \ref{sec:appendixA}): they suppress the amplitude of the matter power spectrum, and thus the abundance of dark matter halos, on small scales. The differences in the spatial distribution of neutral hydrogen between the massless and massive neutrino cases are thus mainly due to the different cosmologies; massive neutrinos barely modify the function $M_{\rm HI}(M,z)$. That is, for a fixed dark matter halo mass, the HI mass in cosmologies with massive/massless neutrinos is very similar, although on average it increases slightly with the sum of the neutrino masses. We find that the $M_{\rm HI}(M,z)$ function can be well fitted by $M_{\rm HI}(M,z)=(M/M_0)^\alpha$ in the redshift and mass range covered by our simulations. The value of $\alpha$ decreases with redshift and, for a fixed redshift, increases with the sum of the neutrino masses. This reflects another well-known effect of massive neutrinos: in cosmologies with massive neutrinos, the spatial distribution of neutral hydrogen on small scales can be viewed as the spatial distribution of HI in the corresponding massless neutrino model at an earlier time. In terms of the HI column density distribution function, we find that our fiducial massless neutrino model reproduces observational measurements in the redshift range $3\leqslant z \leqslant 5$ very well. In cosmologies with massive neutrinos we find a deficit in the abundance of DLAs and sub-DLAs, which can be explained by the lower value of $\Omega_{\rm HI}(z)$ present in those cosmologies. As stated above, the HI is more strongly clustered in cosmologies with massive neutrinos. The reason is that $M_{\rm HI}(M,z)$ barely changes, but halos of the same mass are more clustered in massive neutrino cosmologies; this happens because massive neutrinos suppress the abundance of dark matter halos, so halos of a given mass are rarer, and hence more strongly clustered, in those models. Even though the HI is more clustered in massive neutrino cosmologies, we find that the amplitude of the 21cm power spectrum is lower in those models with respect to the model with massless neutrinos.
The reason for this is again that $\Omega_{\rm HI}(z)$ is lower in those models. We have forecasted the constraints that the future SKA telescope will place on the sum of the neutrino masses by means of 21cm intensity mapping observations in the post-reionization era. We did this using the Fisher matrix formalism, modelling the spatial distribution of neutral hydrogen in 14 different cosmological models from redshift $z=0$ to $z \approx 6$. At redshifts $z>3$ we use hydrodynamic simulations to model the spatial distribution of neutral hydrogen, while at $z<3$ we use a simple analytic model based on halofit. Our forecasts are summarised in Table \ref{tbl:marginal}. We find that with 10,000 hours of observations in a deep and narrow survey, SKA1-LOW, employing the interferometer mode, will be able to measure the sum of the neutrino masses with an error $\sigma(M_\nu)\simeq0.3$ eV (95\% CL), lower than the one obtained from CMB observations alone \citep{Planck_2015}. A slightly tighter constraint can be placed with 10,000 hours of observations with SKA1-MID, using single-dish (autocorrelation) mode for a wide 25,000 deg$^2$ survey. Combining the SKA results with CMB data, we forecast that the sum of the neutrino masses can be constrained with a much smaller error, as various degeneracies with cosmological parameters are broken. For instance, by using CMB constraints on cosmological parameters alone (i.e. neglecting information on $M_\nu$ from the CMB), we find that SKA1-LOW+CMB will be able to measure the sum of the neutrino masses with an error $\sigma(M_\nu)\simeq0.21$ eV (95\% CL), while the combination SKA1-MID + CMB will improve this to $\sigma(M_\nu)\simeq0.19$ eV (95\% CL). On the other hand, if we do include the information on $M_\nu$ contained in the CMB, we find that $\sigma(M_\nu)\simeq 0.09$ and $0.07$ eV for SKA1-LOW + CMB and SKA1-MID + CMB, respectively. Finally, by also including a prior on the cosmological parameters from a spectroscopic redshift survey such as Euclid or DESI, we see a modest further improvement. For instance, by combining data from the CMB (neglecting information on the sum of the neutrino masses), a spectro-z survey, and the SKA (either SKA1-LOW or SKA1-MID), we find that $\sigma(M_\nu)\simeq0.1$ eV. As expected, the tightest constraint on the sum of the neutrino masses is obtained by combining the full data (including the CMB $M_\nu$ constraint), a spectro-z survey, and data from both SKA1-LOW and SKA1-MID: $\sigma(M_\nu)=0.058$ eV (95\% CL), which represents a significant improvement over the current tightest bound arising from CMB+BAO+Ly$\alpha$-forest data of $M_\nu<0.12$ eV \citep{ades}. In order to check the robustness of our forecasts for SKA1-LOW, which were obtained by running high-resolution hydrodynamic simulations and modelling the HI using the pseudo-RT 1 method, we have repeated the analysis by modelling the spatial distribution of neutral hydrogen using three completely different methods. We find that our results are very stable against the model used to assign the HI, as can be seen in Fig.~\ref{fig:mnu_sig8_hitype}. Our forecasts depend only weakly on the assumed survey area (see Fig.~\ref{fig:mnu_sarea}), but are more sensitive to the total observation time available (Fig.~\ref{fig:mnu_ttot}). A 10,000 hour survey performed `commensally' with a deep EoR survey over 5 pointings on the sky with SKA1-LOW is a realistic prospect, however, and should be able to place strong constraints on the sum of the neutrino masses.
We conclude that massive neutrinos imprint distinctive signatures on the 21cm power spectrum in the post-reionization era, and that future 21cm intensity mapping surveys with (e.g.) the SKA will be able to place very tight constraints on the sum of the neutrino masses. \begin{acknowledgements} We thank Mario G. Santos for useful discussions. Simulations were performed on the COSMOS Consortium supercomputer within the DiRAC Facility jointly funded by STFC, the Large Facilities Capital Fund of BIS and the University of Cambridge, as well as the Darwin Supercomputer of the University of Cambridge High Performance Computing Service (\url{http://www.hpc.cam.ac.uk/}), provided by Dell Inc. using Strategic Research Infrastructure Funding from the Higher Education Funding Council for England. Part of the simulations and the post-processing has been carried out in the Zefiro cluster (Pisa, Italy). FVN and MV are supported by the ERC Starting Grant ``cosmoIGM'' under GA 257670 and partially supported by INFN IS PD51 ``INDARK''. PB is supported by ERC grant StG2010-257080. We acknowledge partial support from ``Consorzio per la Fisica -- Trieste''. \end{acknowledgements}
\section{Introduction} \setcounter{page}{1} Deep neural networks (DNNs) have seen enormous success in image and speech classification, natural language processing, and health sciences, among other big data driven applications in recent years. However, DNNs typically require hundreds of megabytes of memory storage for the trainable full-precision floating-point parameters, and billions of FLOPs (floating point operations) to make a single inference. This makes the deployment of DNNs on mobile devices a challenge. Considerable recent efforts have been devoted to the training of low precision (quantized) models for substantial memory savings and computation/power efficiency, while nearly maintaining the performance of full-precision networks. Most works to date are concerned with weight quantization (WQ) \cite{bc_15,twn_16,xnornet_16,lbwn_16,carreira1,BR_18}. In \cite{he2018relu}, He et al. theoretically justified the applicability of WQ models by investigating their expressive power. Some also studied activation function quantization (AQ) \cite{bnn_16,xnornet_16,Hubara2017QuantizedNN,halfwave_17,entropy_17,dorefa_16}, which utilizes an external process outside of the network training. This is different from WQ at 4 bit or under, which must be achieved through network training. Learning the activation function $\sigma$ as a parametrized family ($\sigma=\sigma(x,\alpha)$) and part of network training has been studied in \cite{paramrelu} for the parametric rectified linear unit, and was recently extended to uniform AQ in \cite{pact}. In uniform AQ, $\sigma(x,\alpha)$ is a step (or piecewise constant) function in $x$, and the parameter $\alpha$ determines the height and length of the steps. In terms of the partial derivative of $\sigma(x,\alpha)$ in $\alpha$, a two-valued proxy derivative of the parametric activation function (PACT) was proposed \cite{pact}, although we will present an almost everywhere (a.e.) exact one in this paper. \medskip The mathematical difficulty in training activation quantized networks is that the loss function becomes a piecewise constant function whose sampled stochastic gradient is a.e. zero, which is undesirable for back-propagation. A simple and effective way around this problem is to use a (generalized) straight-through (ST) estimator or the derivative of a related (sub)differentiable function \cite{hinton2012neural,bengio2013estimating,bnn_16,Hubara2017QuantizedNN} such as the clipped rectified linear unit (clipped ReLU) \cite{halfwave_17}. The idea of the ST estimator dates back to the perceptron algorithm \cite{rosenblatt1957perceptron,rosenblatt1962principles} proposed in the 1950s for learning single-layer perceptrons with binary output. For multi-layer networks with hard threshold activation (a.k.a. binary neuron), Hinton \cite{hinton2012neural} proposed to use the derivative of the identity function as a proxy in back-propagation or chain rule, similar to the perceptron algorithm. The proxy derivative used in the backward pass only was referred to as the straight-through estimator in \cite{bengio2013estimating}, and several variants of the ST estimator \cite{bnn_16,Hubara2017QuantizedNN,halfwave_17} have been proposed for handling quantized activation functions since then. A similar situation, where the derivative of a certain layer composited in the loss function is unavailable for back-propagation, has also been brought up by \cite{wang2018deep} recently while improving accuracies of DNNs by replacing the softmax classifier layer with an implicit weighted nonlocal Laplacian layer.
For the training of the latter, the derivative of a pre-trained fully-connected layer was used as a surrogate \cite{wang2018deep}. \medskip On the theoretical side, while the convergence of the single-layer perceptron algorithm has been extensively studied \cite{widrow199030,freund1999large}, there is almost no theoretical understanding of the unusual `gradient' output from the modified chain rule based on the ST estimator. Since this unusual `gradient' is certainly not the gradient of the objective function, a question naturally arises: \emph{how does it correlate with the objective function?} One of the contributions in this paper is to answer this question. Our main contributions are threefold: \begin{enumerate} \item Firstly, we introduce the notion of coarse derivative and cast the early ST estimators or proxy partial derivatives of $\sigma(x,\alpha)$ in $\alpha$, including the two-valued PACT of \cite{pact}, as examples. The coarse derivative is non-unique. We propose a three-valued coarse partial derivative of the quantized activation function $\sigma(x,\alpha)$ in $\alpha$ that can outperform the two-valued one \cite{pact} in network training. We find that, unlike the partial derivative $\frac{\partial \sigma}{\partial x}(x,\alpha)$, which vanishes a.e., the a.e. partial derivative of $\sigma(x,\alpha)$ in $\alpha$ is actually multi-valued (piecewise constant). Surprisingly, this a.e. accurate derivative is empirically less useful than the coarse ones in fully quantized network training. \\ \item Secondly, we propose a novel accelerated training algorithm for fully quantized networks, termed blended coarse gradient descent method (BCGD). Instead of correcting the current full precision weights with the coarse gradient at their quantized values, as in the popular BinaryConnect scheme \cite{bc_15,bnn_16,xnornet_16,twn_16,dorefa_16,halfwave_17,Goldstein_17,BR_18}, the BCGD weight update applies the coarse gradient correction to a suitable average of the full precision weights and their quantization. We shall show that BCGD satisfies the sufficient descent property for objectives with Lipschitz gradients, while BinaryConnect does not unless an approximate orthogonality condition holds for the iterates \cite{BR_18}. \\ \item Our third contribution is the mathematical analysis of coarse gradient descent for a two-layer network with binarized ReLU activation function and i.i.d. unit Gaussian data. We provide an explicit form of the coarse gradient based on the proxy derivative of the regular ReLU, and show that when there are infinitely many training data, the negative expected coarse gradient gives a descent direction for minimizing the expected training loss. Moreover, we prove that a normalized coarse gradient descent algorithm only converges to either a global minimum or a potential spurious local minimum. This answers the question raised above. \end{enumerate} \medskip The rest of the paper is organized as follows. In section 2, we discuss the concept of coarse derivative and give examples for quantized activation functions. In section 3, we present the joint weight and activation quantization problem, and the BCGD algorithm, which satisfies the sufficient descent property. For readers' convenience, we also review formulas on 1-bit, 2-bit and 4-bit weight quantization used later in our numerical experiments. In section 4, we give details of fully quantized network training, including the disparate learning rates on the weights and $\alpha$.
We illustrate the enhanced validation accuracies of BCGD over BinaryConnect, and of the 3-valued coarse $\alpha$ partial derivative of $\sigma$ over the 2-valued and a.e. $\alpha$ partial derivatives, in the case of 4-bit activation and (1,2,4)-bit weights, on the CIFAR-10 image dataset. We show top-1 and top-5 validation accuracies of ResNet-18 with all convolutional layers quantized at 1-bit weight/4-bit activation (1W4A), 4-bit weight/4-bit activation (4W4A), and 4-bit weight/8-bit activation (4W8A), using 3-valued and 2-valued $\alpha$ partial derivatives. The 3-valued $\alpha$ partial derivative outperforms the two-valued one by a larger margin in the low bit regime. The accuracies degrade gracefully from 4W8A to 1W4A while all the convolutional layers are quantized. The 4W8A accuracies with either the 3-valued or the 2-valued $\alpha$ partial derivatives are within 1\% of those of the full precision network. If the first and last convolutional layers are in full precision, our top-1 (top-5) accuracy of ResNet-18 at 1W4A with the 3-valued coarse $\alpha$-derivative is 4.7\% (3\%) higher than that of HWGQ \cite{halfwave_17} on the ImageNet dataset. This is in part due to the value of the parameter $\alpha$ being learned without any statistical assumption. \bigskip \noindent{\bf Notations.} $\|\cdot\|$ denotes the Euclidean norm of a vector or the spectral norm of a matrix; $\|\cdot\|_\infty$ denotes the $\ell_\infty$-norm. $\mathbf{0}\in\mathbb{R}^n$ represents the vector of zeros, whereas $\mathbf{1}\in\mathbb{R}^n$ the vector of all ones. We denote vectors by bold small letters and matrices by bold capital ones. For any $\mathbf{w}, \, \mathbf{z}\in\mathbb{R}^n$, $\mathbf{w}^\top\mathbf{z} = \langle \mathbf{w}, \mathbf{z} \rangle = \sum_{i} w_i z_i$ is their inner product. $\mathbf{w}\odot\mathbf{z}$ denotes the Hadamard product whose $i$-th entry is given by $(\mathbf{w}\odot\mathbf{z})_i = w_iz_i$. \section{Activation Quantization}\label{sec:activ} In a network with quantized activation, given a training sample of input $\mathbf{Z}$ and label $u$, the associated sample loss is a composite function of the form: \begin{equation}\label{sLoss} \ell(\mathbf{w}, \bm{\alpha} ; \{ \mathbf{Z}, u \}) := \ell(\mathbf{w}_l*\sigma(\mathbf{w}_{l-1}*\cdots \mathbf{w}_2*\sigma(\mathbf{w}_1*\mathbf{Z}, \alpha_1) \cdots, \alpha_{l-1} ); \, u), \end{equation} where $\mathbf{w}_j$ contains the weights in the $j$-th linear (fully-connected or convolutional) layer, `$*$' denotes either matrix-vector product or convolution operation; reshaping is necessary to avoid mismatch in dimensions. The $j$-th quantized ReLU $\sigma(\mathbf{x}_j,\alpha_j)$ acts element-wise on the vector/tensor $\mathbf{x}_j$ output from the previous linear layer, and is parameterized by a trainable scalar $\alpha_j>0$ known as the resolution. For practical hardware-level implementation, we are most interested in uniform quantization: \begin{equation}\label{eq:qrelu} \sigma\left(x,\alpha\right) = \begin{cases} 0, \quad & \mathrm{if}\quad x \leq 0,\\ k\alpha, \quad & \mathrm{if}\quad \left(k-1\right)\alpha < x \leq k\alpha, \; k = 1, 2, \dots, 2^{b_a}-1,\\ \left(2^{b_a}-1\right)\alpha,\quad & \mathrm{if}\quad x > \left(2^{b_a}-1\right)\alpha, \end{cases} \end{equation} where $x$ is the scalar input, $\alpha>0$ the resolution, $b_a \in \mathbb{Z}_+$ the bit-width of activation and $k$ the quantization level. For example, in 4-bit activation quantization (4A), we have $b_a = 4$ and $2^{b_a} = 16$ quantization levels including the zero.
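The quantized ReLU (\ref{eq:qrelu}) is simple to implement; the following is a minimal numerical sketch (ours, not part of the paper's training code):
\begin{verbatim}
import numpy as np

def quant_relu(x, alpha, b_a=4):
    """Uniform b_a-bit quantized ReLU of Eq. (qrelu): x <= 0 maps to 0,
    (k-1)*alpha < x <= k*alpha maps to k*alpha, and the output
    saturates at (2^b_a - 1)*alpha."""
    m = 2 ** b_a - 1                    # number of nonzero levels
    return np.clip(np.ceil(x / alpha), 0, m) * alpha

# b_a = 2, alpha = 0.5: inputs land on the 4 levels {0, 0.5, 1.0, 1.5}
print(quant_relu(np.array([-0.3, 0.2, 0.6, 2.4]), alpha=0.5, b_a=2))
# expected: [0.  0.5 1.  1.5]
\end{verbatim}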
\medskip Given $N$ training samples, we train the network with quantized ReLU by solving the following empirical risk minimization \begin{equation}\label{eq:training} \min_{\mathbf{w},\boldsymbol\alpha}\, f(\mathbf{w},\boldsymbol\alpha) := \frac{1}{N}\sum_{i=1}^N \; \ell(\mathbf{w}, \bm{\alpha} ; \{ \mathbf{Z}^{(i)},u^{(i)} \}). \end{equation} In a gradient-based training framework, one needs to evaluate the gradient of the sample loss (\ref{sLoss}) using the so-called back-propagation (a.k.a. chain rule), which involves the computation of the partial derivatives $\frac{\partial \sigma}{\partial x}$ and $\frac{\partial \sigma}{\partial \alpha}$. Evidently, the partial derivative of $\sigma\left(x,\alpha\right)$ in $x$ is almost everywhere (a.e.) zero. After composition, this results in a.e. zero gradient of $\ell$ with respect to (w.r.t.) $\{\mathbf{w}_j\}_{j=1}^{l-1}$ and $\{\alpha_j\}_{j=1}^{l-2}$ in (\ref{sLoss}), causing their updates to become stagnant. To see this, we abstract the partial gradients $\frac{\partial \ell}{\partial \mathbf{w}_{l-1}}$ and $\frac{\partial \ell}{\partial \alpha_{l-2}}$, for instance, through the chain rule as follows: $$ \frac{\partial \ell}{\partial \mathbf{w}_{l-1}}(\mathbf{w},\bm{\alpha}; \{\mathbf{Z},u\}) = \sigma(\mathbf{x}_{l-2},\alpha_{l-2}) \circ \frac{\partial\sigma}{\partial x}(\mathbf{x}_{l-1},\alpha_{l-1})\circ \mathbf{w}_l^\top \circ \nabla\ell(\mathbf{x}_l; u) $$ and $$ \frac{\partial \ell}{\partial \alpha_{l-2}}(\mathbf{w},\bm{\alpha}; \{\mathbf{Z},u\}) = \frac{\partial \sigma}{\partial \alpha}(\mathbf{x}_{l-2},\alpha_{l-2}) \circ \mathbf{w}_{l-1}^\top \circ \frac{\partial\sigma}{\partial x}(\mathbf{x}_{l-1},\alpha_{l-1})\circ \mathbf{w}_l^\top \circ \nabla\ell(\mathbf{x}_l; u), $$ where we recursively define $\mathbf{x}_1 = \mathbf{w}_1*\mathbf{Z}$, and $\mathbf{x}_j = \mathbf{w}_j*\sigma(\mathbf{x}_{j-1}, \alpha_{j-1})$ for $j\geq2$ as the output from the $j$-th linear layer, and `$\circ$' denotes some sort of proper composition in the chain rule. It is clear that the two partial gradients are zero a.e. because of the term $\frac{\partial\sigma}{\partial x}(\mathbf{x}_{l-1},\alpha_{l-1})$. In fact, the automatic differentiation embedded in deep learning platforms such as PyTorch \cite{pytorch} would produce precisely zero gradients. \medskip To get around this, we use a proxy derivative or so-called ST estimator for back-propagation. By overloading the notation `$\approx$', we denote the proxy derivative by \begin{align*} \frac{\partial \sigma}{\partial x}\left(x,\alpha\right) \approx \begin{cases} 0, \quad & \mathrm{if}\quad x \leq 0,\\ 1, \quad & \mathrm{if}\quad 0 < x \leq \left(2^{b_a}-1\right)\alpha,\\ 0,\quad & \mathrm{if}\quad x > \left(2^{b_a}-1\right)\alpha. \end{cases} \end{align*} The proxy partial derivative has a non-zero value in the middle to reflect the overall variation of $\sigma$, which can be viewed as the derivative of the large scale (step-back) view of $\sigma$ in $x$, or the derivative of the clipped ReLU \cite{halfwave_17}: \begin{equation}\label{eq:crelu} \tilde{\sigma}(x,\alpha) = \begin{cases} 0, \quad & \mathrm{if}\quad x \leq 0,\\ x, \quad & \mathrm{if}\quad 0 < x \leq (2^{b_a}-1)\alpha, \\ \left(2^{b_a}-1\right)\alpha,\quad & \mathrm{if}\quad x > \left(2^{b_a}-1\right)\alpha.
\end{cases} \end{equation} \begin{figure}[ht] \centering \begin{tabular}{cc} \includegraphics[width=0.48\textwidth]{qrelu.png} \includegraphics[width=0.48\textwidth]{crelu.png} \end{tabular} \caption{Left: plot of 2-bit quantized ReLU $\sigma(x,\alpha)$ in $x$. Right: plot of the associated clipped ReLU $\tilde{\sigma}(x,\alpha)$ in $x$.}\label{fig:relu} \end{figure} On the other hand, we find the a.e. partial derivative of $\sigma(x,\alpha)$ w.r.t. $\alpha$ to be \begin{align*} \dfrac{\partial \sigma}{\partial \alpha}(x,\alpha) = \begin{cases} 0, \quad & \mathrm{if}\quad x \leq 0,\\ k, \quad & \mathrm{if}\quad \left(k-1\right)\alpha < x \leq k\alpha,\;\; k=1,2,\cdots, 2^{b_a} -1;\\ 2^{b_a}-1,\quad & \mathrm{if}\quad x > \left(2^{b_a}-1\right)\alpha. \end{cases} \end{align*} \noindent Surprisingly, this a.e. derivative is not the best in terms of accuracy or computational cost in training, as will be reported in section \ref{sec:exper}. We propose an empirical three-valued proxy partial derivative in $\alpha$ as follows \begin{align*} \dfrac{\partial \sigma}{\partial \alpha}(x,\alpha) \approx \begin{cases} 0, \quad & \mathrm{if}\quad x \leq 0,\\ 2^{(b_a-1)}, \quad & \mathrm{if}\quad 0 < x \leq \left(2^{b_a}-1\right)\alpha,\\ 2^{b_a}-1,\quad & \mathrm{if}\quad x > \left(2^{b_a}-1\right)\alpha. \end{cases} \end{align*} The middle value $2^{b_a-1}$ is the arithmetic mean of the intermediate $k$ values of the a.e. partial derivative above. Similarly, a coarser two-valued proxy, the same as PACT \cite{pact} (which was derived differently), follows by zeroing out all the nonzero values except their maximum: \begin{align*} \dfrac{\partial \sigma}{\partial \alpha}(x,\alpha) \approx \begin{cases} 0, \quad & \mathrm{if}\quad x \leq \left(2^{b_a}-1\right)\alpha,\\ 2^{b_a}-1,\quad & \mathrm{if}\quad x > \left(2^{b_a}-1\right)\alpha. \end{cases} \end{align*} This turns out to be exactly the partial derivative $\dfrac{\partial \tilde{\sigma}}{\partial \alpha}(x,\alpha)$ of the clipped ReLU defined in (\ref{eq:crelu}). \medskip We shall refer to the resultant composite `gradient' of $f$ through the modified chain rule and averaging as the coarse gradient. While given the name `gradient', we believe it is generally not the gradient of any smooth function. It, nevertheless, somehow exploits the essential information of the piecewise constant function $f$, and its negation provides a descent direction for the minimization. In section \ref{sec:analy}, we will validate this claim by examining a two-layer network with i.i.d. Gaussian data. We find that when there are an infinite number of training samples, the overall training loss $f$ (i.e., population loss) becomes pleasantly differentiable, with a non-trivial gradient that possesses certain Lipschitz continuity. More importantly, we shall show an example of the expected coarse gradient that provably forms an acute angle with the underlying true gradient of $f$ and only vanishes at the possible local minimizers of the original problem. \medskip During the training process, the vector $\bm{\alpha}$ (one component per activation layer) should be prevented from being either too small or too large. Due to the sensitivity of $\bm{\alpha}$, we propose a two-scale training and set the learning rate of $\bm{\alpha}$ to be the learning rate of the weights $\mathbf{w}$ multiplied by a rate factor far less than 1, which may be varied depending on network architectures. That rate factor effectively helps the quantized network converge steadily and prevents $\bm{\alpha}$ from vanishing.
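The forward and backward rules of this section can be packaged into a custom autograd function. The following PyTorch sketch is our own (the class name and interface are illustrative); it implements the quantized ReLU (\ref{eq:qrelu}) with the straight-through proxy in $x$ and the 3-valued coarse partial derivative in $\alpha$:
\begin{verbatim}
import torch

class QuantReLU(torch.autograd.Function):
    """Uniform quantized ReLU with straight-through x-derivative
    (clipped-ReLU proxy) and the 3-valued coarse alpha-derivative."""

    @staticmethod
    def forward(ctx, x, alpha, b_a=4):
        # alpha: positive 0-dim tensor (one resolution per layer)
        m = 2 ** b_a - 1                # number of nonzero levels
        ctx.save_for_backward(x, alpha)
        ctx.m, ctx.b_a = m, b_a
        return torch.clamp(torch.ceil(x / alpha), 0, m) * alpha

    @staticmethod
    def backward(ctx, g):
        x, alpha = ctx.saved_tensors
        m, b_a = ctx.m, ctx.b_a
        inside = (x > 0) & (x <= m * alpha)
        above = x > m * alpha
        # dsigma/dx ~ 1 on (0, m*alpha], 0 elsewhere
        g_x = g * inside.to(g.dtype)
        # 3-valued coarse dsigma/dalpha: 0, 2^(b_a-1), or 2^b_a - 1
        d_alpha = inside.to(g.dtype) * 2 ** (b_a - 1) \
                  + above.to(g.dtype) * m
        g_alpha = (g * d_alpha).sum()
        return g_x, g_alpha, None       # no gradient for b_a
\end{verbatim}
In use, the layer output is \texttt{y = QuantReLU.apply(x, alpha)}, with \texttt{alpha = torch.tensor(a0, requires\_grad=True)} updated by its own, much smaller learning rate, as described above.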
\section{Full Quantization} Imposing the quantized weights amounts to adding a discrete set-constraint $\mathbf{w}\in\mathcal{Q}$ to the optimization problem (\ref{eq:training}). Suppose $M$ is the total number of weights in the network. For commonly used $b_w$-bit layer-wise quantization, $\mathcal{Q}\subset\mathbb{R}^M$ takes the form of $\mathcal{Q}_1 \times \mathcal{Q}_2 \cdots\times \mathcal{Q}_l$, meaning that the weight tensor in the $j$-th linear layer is constrained in the form $\mathbf{w}_j = \delta_j \mathbf{q}_j \in\mathcal{Q}_j$ for some adjustable scaling factor $\delta_j>0$ shared by weights in the same layer. Each component of $\mathbf{q}_j$ is drawn from the quantization set given by $\{\pm 1\}$ for $b_w = 1$ (binarization) and $\{0,\pm 1,\cdots, \pm(2^{b_w-1} -1)\}$ for $b_w \geq 2$. This assumption on $\mathcal{Q}$ generalizes those of the 1-bit BWN \cite{xnornet_16} and the 2-bit TWN \cite{twn_16}. As such, the layer-wise weight and activation quantization problem here can be stated abstractly as follows \begin{equation} \min_{\mathbf{w},\boldsymbol\alpha}\, f(\mathbf{w},\boldsymbol\alpha) \;\; \mbox{subject to} \;\; \mathbf{w} \in \mathcal{Q}=\mathcal{Q}_1 \times \mathcal{Q}_2 \cdots\times \mathcal{Q}_l, \label{e1} \end{equation} where the training loss $f(\mathbf{w}, \bm{\alpha})$ is defined in (\ref{eq:training}). Unlike in activation quantization, one bit is used to represent the sign. For ease of presentation, we only consider the network-wise weight quantization throughout this section, i.e., weights across all the layers share the same (trainable) floating scaling factor $\delta>0$, or simply, $\mathcal{Q} = \mathbb{R}_+\times\{\pm 1\}^M$ for $b_w =1$ and $\mathcal{Q} = \mathbb{R}_+\times\left\{0,\pm 1, \dots, \pm(2^{b_w-1}-1)\right\}^M$ for $b_w\geq2$. \subsection{Weight Quantization} Given a float weight vector $\mathbf{w}_f$, the quantization of $\mathbf{w}_f$ is basically the following optimization problem for computing the projection of $\mathbf{w}_f$ onto set $\mathcal{Q}$ \begin{equation}\label{eq:proj} \mathrm{proj}_\mathcal{Q}(\mathbf{w}_f) := \arg\min_{\mathbf{w}\in\mathcal{Q}} \; \|\mathbf{w}- \mathbf{w}_f\|^2 . \end{equation} Note that $\mathcal{Q}$ is a non-convex set, so the solution of (\ref{eq:proj}) may not be unique. When $b_w=1$, we have the binarization problem \begin{equation}\label{eq:binary} \min_{\delta, \mathbf{q}} \; \|\delta \, \mathbf{q} -\mathbf{w}_f\|^2 \quad \mbox{subject to} \quad \delta>0, \; \mathbf{q}\in \left\{\pm1\right\}^M. \end{equation} For $b_w\geq 2$, the projection/quantization problem (\ref{eq:proj}) can be reformulated as \begin{equation}\label{eq:quant} \min_{\delta, \mathbf{q}} \; \|\delta \, \mathbf{q} -\mathbf{w}_f\|^2 \quad \mbox{subject to} \quad \delta>0, \; \mathbf{q}\in \left\{0,\pm1,\cdots, \pm (2^{b_w-1} -1)\right\}^M. \end{equation} It has been shown that the closed form (exact) solution of (\ref{eq:binary}) can be computed at $O(M)$ complexity for (1-bit) binarization \cite{xnornet_16}, and that of (\ref{eq:quant}) at $O(M\log(M))$ complexity for (2-bit) ternarization \cite{lbwn_16}. An empirical ternarizer of $O(M)$ complexity has also been proposed \cite{twn_16}. At wider bit-width $b_w\geq 3$, accurately solving (\ref{eq:quant}) becomes computationally intractable due to the combinatorial nature of the problem \cite{lbwn_16}. \medskip The problem (\ref{eq:quant}) is basically a constrained $K$-means clustering problem of 1-D points \cite{BR_18} with the centroids being $\delta$-spaced.
It in principle can be solved by a variant of the classical Lloyd's algorithm \cite{lloyd} via an alternating minimization procedure. It iterates between the assignment step ($\mathbf{q}$-update) and centroid step ($\delta$-update). In the $i$-th iteration, fixing the scaling factor $\delta^{i-1}$, each entry of $\mathbf{q}^{i}$ is chosen from the quantization set, so that $\delta^{i-1} \mathbf{q}^i$ is as close as possible to $\mathbf{w}_f$. In the $\delta$-update, the following quadratic problem $$ \min_{\delta\in\mathbb{R}} \; \|\, \delta \, \mathbf{q}^{i} - \mathbf{w}_f\|^2 $$ is solved by $\delta^{i} = \frac{(\mathbf{q}^{i})^\top\mathbf{w}_f}{\|\mathbf{q}^{i}\|^2}$. Since quantization (\ref{eq:proj}) is required in every iteration, to make this procedure practical, we just perform a single iteration of Lloyd's algorithm by empirically initializing $\delta$ to be $\frac{2}{2^{b_w} - 1} \|\mathbf{w}_f\|_\infty$, which is derived by setting $$ \frac{\delta}{2}\left((2^{b_w-1}-1) + 2^{b_w-1}\right) = \|\mathbf{w}_f\|_\infty. $$ This makes the large components in $\mathbf{w}_f$ well clustered. \medskip First introduced in \cite{bc_15} by Courbariaux et al., the BinaryConnect (BC) scheme has drawn much attention in training DNNs with quantized weights and regular ReLU. It can be summarized as $$ \mathbf{w}_f^{t+1} = \mathbf{w}_f^t - \eta \nabla f(\mathbf{w}^t), \; \mathbf{w}^{t+1} = \mathrm{proj}_\mathcal{Q}(\mathbf{w}_f^{t+1}), $$ where $\{\mathbf{w}^t\}$ denotes the sequence of the desired quantized weights, and $\{\mathbf{w}^t_f\}$ is an auxiliary sequence of floating weights. BC can be readily extended to the full quantization regime by including the update of $\bm{\alpha}^t$ and replacing the true gradient $\nabla f(\mathbf{w}^t)$ with the coarse gradients from section \ref{sec:activ}. With a subtle change to the standard projected gradient descent algorithm (PGD) \cite{combettes2015stochastic}, namely $$ \mathbf{w}_f^{t+1} = \mathbf{w}^t - \eta \nabla f(\mathbf{w}^t), \; \mathbf{w}^{t+1} = \mathrm{proj}_\mathcal{Q}(\mathbf{w}_f^{t+1}), $$ BC significantly outperforms PGD and effectively bypasses the spurious local minima in $\mathcal{Q}$ \cite{Goldstein_17}. An intuitive explanation is that the constraint set $\mathcal{Q}$ is basically a finite union of isolated one-dimensional subspaces (i.e., lines that pass through the origin) \cite{BR_18}. Since $\mathbf{w}_f^t$ is obtained near the projected point $\mathbf{w}^t$, the sequence $\{\mathbf{w}_f^t\}$ generated by PGD can easily get stuck in one of the line subspaces when updated with a small learning rate $\eta$; see Figure \ref{fig:alg} for graphical illustrations. \begin{figure}[ht] \centering \begin{tabular}{cc} \includegraphics[width=0.48\textwidth]{bc.png} \includegraphics[width=0.48\textwidth]{pgd.png} \end{tabular} \caption{The ternarization of two weights. The one-dimensional subspaces $\mathcal{L}_i$'s constitute the constraint set $\mathcal{Q} = \mathbb{R}_{+}\times\{0, \pm 1\}^2$. When updated with a small learning rate, BC keeps searching among the subspaces (left), whereas PGD can get stagnated in $\mathcal{L}_1$ (right). }\label{fig:alg} \end{figure} \subsection{Blended Gradient Descent and Sufficient Descent Property} Despite the superiority of BC over PGD, we point out a drawback in regard to its convergence. While Yin et al.
provided a convergence proof of the BC scheme in the recent paper \cite{BR_18}, their analysis hinges on an approximate orthogonality condition which may not hold in practice; see Lemma 4.4 and Theorem 4.10 of \cite{BR_18}. Suppose $f$ has an $L$-Lipschitz gradient\footnote{This assumption is valid for the population loss function; we refer readers to Lemma \ref{lem:lipschitz} in section \ref{sec:analy}.}. In light of the convergence proof in Theorem 4.10 of \cite{BR_18}, we have \begin{equation}\label{eq:desc} f(\mathbf{w}^{t+1})-f(\mathbf{w}^t)\leq - \frac{1}{2}\left(\frac{1}{\eta} ( \|\mathbf{w}^{t+1}-\mathbf{w}_f^t\|^2 - \|\mathbf{w}^t -\mathbf{w}_f^t\|^2 ) - L\|\mathbf{w}^{t+1}-\mathbf{w}^t\|^2\right). \end{equation} For the objective sequence $\{f(\mathbf{w}^t)\}$ to be monotonically decreasing and $\{\mathbf{w}^t\}$ converging to a critical point, it is crucial to have the sufficient descent property \cite{gilbert1992global} hold for a sufficiently small learning rate $\eta>0$: \begin{equation} f(\mathbf{w}^{t+1}) - f(\mathbf{w}^t) \leq - c \, \|\mathbf{w}^{t+1} -\mathbf{w}^{t}\|^2, \label{descent} \end{equation} for some constant $c>0$. \medskip Since $\mathbf{w}^{t} = \arg\min_{\mathbf{w}\in\mathcal{Q}} \|\mathbf{w} - \mathbf{w}_f^t\|^2$ and $\mathbf{w}^{t+1}\in\mathcal{Q}$, it holds in (\ref{eq:desc}) that $$ \frac{1}{\eta}(\|\mathbf{w}^{t+1}-\mathbf{w}_f^t\|^2 - \|\mathbf{w}^t -\mathbf{w}_f^t\|^2)\geq 0. $$ Due to non-convexity of the set $\mathcal{Q}$, the above term can be as small as zero even when $\mathbf{w}^t$ and $\mathbf{w}^{t+1}$ are distinct. So it is not guaranteed to dominate the right hand side of (\ref{eq:desc}). Consequently, given (\ref{eq:desc}), the inequality (\ref{descent}) does not necessarily hold. Without sufficient descent, even if $\{f(\mathbf{w}^t)\}$ converges, the iterates $\{\mathbf{w}^t\}$ may not converge well to a critical point. To fix this issue, we blend the ideas of PGD and BC, and propose the following blended gradient descent (BGD) \begin{equation}\label{bcgd} \mathbf{w}_f^{t+1} = (1-\rho)\mathbf{w}_f^t + \rho \mathbf{w}^t - \eta \nabla f(\mathbf{w}^t), \; \mathbf{w}^{t+1} = \mathrm{proj}_\mathcal{Q}(\mathbf{w}_f^{t+1}) \end{equation} for some blending parameter $\rho\ll 1$. In contrast, the blended gradient descent satisfies (\ref{descent}) for small enough $\eta$. \begin{prop}\label{prop:bgd} For $\rho\in(0,1)$, the BGD (\ref{bcgd}) satisfies $$ f(\mathbf{w}^{t+1})-f(\mathbf{w}^t)\leq - \frac{1}{2}\left(\frac{1-\rho}{\eta}(\|\mathbf{w}^{t+1}-\mathbf{w}_f^t\|^2 - \|\mathbf{w}^t -\mathbf{w}_f^t\|^2) + \left(\frac{\rho}{\eta}-L \right)\|\mathbf{w}^{t+1}-\mathbf{w}^t\|^2\right). $$ \end{prop} Choosing the learning rate $\eta$ small enough that $\rho/\eta \geq L + c$, inequality (\ref{descent}) then follows from the above proposition, which guarantees the convergence of (\ref{bcgd}) to a critical point by similar arguments to the proofs in \cite{BR_18}. \begin{cor} The blended gradient descent iteration (\ref{bcgd}) satisfies the sufficient descent property (\ref{descent}). \end{cor} \section{Experiments}\label{sec:exper} We tested BCGD, as summarized in Algorithm \ref{alg}, on the CIFAR-10 \cite{cifar_09} and ImageNet \cite{imagenet_09,imagnet_12} color image datasets. We coded up BCGD in the PyTorch platform \cite{pytorch}. In all experiments, we fix the blending factor in (\ref{bcgd}) to be $\rho = 10^{-5}$.
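The weight part of each iteration is compact. The following PyTorch-style sketch (ours; function names and the zero-division guard are our own simplifications, with a single network-wise scaling factor $\delta$) combines the one-step Lloyd quantization of section 3.1 with the blended update (\ref{bcgd}):
\begin{verbatim}
import torch

def quantize_weights(w_f, b_w=1):
    """Projection onto Q (section 3.1): closed form for b_w = 1; one
    Lloyd iteration (assignment + centroid update) for b_w >= 2, with
    delta initialized as 2/(2^b_w - 1) * ||w_f||_inf."""
    if b_w == 1:
        q = torch.where(w_f >= 0, torch.ones_like(w_f),
                        -torch.ones_like(w_f))
    else:
        m = 2 ** (b_w - 1) - 1
        delta0 = 2.0 * w_f.abs().max() / (2 ** b_w - 1)
        q = torch.clamp(torch.round(w_f / delta0), -m, m)
    # Centroid update: optimal delta = <q, w_f> / ||q||^2
    delta = (q * w_f).sum() / (q * q).sum().clamp(min=1e-12)
    return delta * q

def bgd_weight_update(w_f, w_q, coarse_grad, lr, rho=1e-5, b_w=1):
    """One blended gradient descent step (section 3.2)."""
    w_f = (1.0 - rho) * w_f + rho * w_q - lr * coarse_grad
    return w_f, quantize_weights(w_f, b_w)
\end{verbatim}
Setting $\rho=0$ recovers BC, while $\rho=1$ recovers PGD.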
All runs with quantization are warm started with a float pre-trained model, and the resolutions $\bm{\alpha}$ are initialized by $\frac{1}{2^{b_a} - 1}$ of the maximal values in the corresponding feature maps generated by a random mini-batch. The learning rate for weight $\mathbf{w}$ starts from $0.01$. Rate factor for the learning rate of $\bm{\alpha}$ is $0.01$, i.e., the learning rate for $\bm{\alpha}$ starts from $10^{-4}$. The decay factor for the learning rates is $0.1$. The weights $\mathbf{w}$ and resolutions $\bm{\alpha}$ are updated jointly. In addition, we used momentum and batch normalization \cite{bn_15} to promote training efficiency. We mainly compare the performances of the proposed BCGD and the state-of-the-art BC (adapted for full quantization) on \emph{layer-wise} quantization. The experiments were carried out on machines with 4 Nvidia GeForce GTX 1080 Ti GPUs. \begin{algorithm} \caption{One iteration of BCGD for full quantization}\label{alg} \textbf{Input}: mini-batch loss function $f_t(\mathbf{w},\bm{\alpha})$, blending parameter $\rho=10^{-5}$, learning rate $\eta_\mathbf{w}^t$ for the weights $\mathbf{w}$, learning rate $\eta_{\bm{\alpha}}^t$ for the resolutions $\bm{\alpha}$ of AQ (one component per activation layer). \\ \textbf{Do}: \begin{algorithmic} \STATE Evaluate the mini-batch coarse gradient $(\tilde{\nabla}_\mathbf{w} f_t, \tilde{\nabla}_{\bm{\alpha}} f_t)$ at $(\mathbf{w}^t, \bm{\alpha}^t)$ according to section 2. \STATE $\mathbf{w}_f^{t+1} = (1-\rho)\mathbf{w}_f^t + \rho\mathbf{w}^t - \eta^t_\mathbf{w} \tilde{\nabla}_\mathbf{w} f_t(\mathbf{w}^t,\bm{\alpha}^t)$ \quad $//$ blended gradient update for weights \STATE $\bm{\alpha}^{t+1} = \bm{\alpha}^t - \eta^t_{\bm{\alpha}} \tilde{\nabla}_{\bm{\alpha}} f_t(\mathbf{w}^t,\bm{\alpha}^t)$ \quad $//$ $\eta_{\bm{\alpha}}^t = 0.01\cdot \eta_\mathbf{w}^t$ \STATE $\mathbf{w}^{t+1} = \mathrm{proj}_\mathcal{Q}(\mathbf{w}_f^{t+1})$ \quad $//$ quantize the weights as per section 3.1 \end{algorithmic} \end{algorithm} The CIFAR-10 dataset consists of 60,000 $32\times32$ color images of 10 classes, with 6,000 images per class. There dataset is split into 50,000 training images and 10,000 test images. In the experiments, we used the testing images for validation. The mini-batch size was set to be $128$ and the models were trained for $200$ epochs with learning rate decaying at epoch 80 and 140. In addition, we used weight decay of $10^{-4}$ and momentum of $0.95$. The a.e derivative, 3-valued and 2-valued coarse derivatives of $\bm{\alpha}$ are compared on the VGG-11 \cite{vgg_14} and ResNet-20 \cite{resnet_15} architectures, and the results are listed in Tables \ref{tab:1}, \ref{tab:2} and \ref{tab:3}, respectively. It can be seen that the 3-valued coarse $\bm{\alpha}$ derivative gives the best overall performance in terms of accuracy. Figure \ref{fig:1} shows that in weight binarization, BCGD converges faster and better than BC. \begin{table}[ht] \centering \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline Network & Float & 32W4A & 1W4A & 2W4A & 4W4A \\ \hline VGG-11 + BC & \multirow{2}{*}{92.13} & \multirow{2}{*}{91.74} & 88.12 & 89.78 & {\bf 91.51} \\ \cline{1-1}\cline{4-6} VGG-11+BCGD & & & {\bf 88.74} & {\bf 90.08} & 91.38 \\ \hline ResNet-20 + BC & \multirow{2}{*}{92.41} & \multirow{2}{*}{91.90} & 89.23 & 90.89 & 91.53 \\ \cline{1-1}\cline{4-6} ResNet-20+BCGD & & & {\bf 90.10} & {\bf 91.15}& 91.56 \\ \hline \end{tabular} \caption{CIFAR-10 validation accuracies in \% with the a.e. 
$\bm{\alpha}$ derivative.} \label{tab:1} \end{table} \begin{figure}[ht] \centering \begin{tabular}{cc} \includegraphics[width=0.48\textwidth]{vgg11_single_4a1w.png} \includegraphics[width=0.48\textwidth]{res20_single_4a1w.png} \end{tabular} \caption{CIFAR-10 validation accuracies vs. epoch numbers with a.e. $\bm{\alpha}$ derivative and 1W4A quantization on VGG-11 (left) and ResNet-20 (right), with (orange) and without (blue) blending which speeds up training towards higher accuracies.}\label{fig:1} \end{figure} \begin{table}[ht] \centering \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline Network & Float & 32W4A & 1W4A & 2W4A & 4W4A \\ \hline VGG-11 + BC & \multirow{2}{*}{92.13} & \multirow{2}{*}{92.08} & 89.12 & 90.52 & {\bf 91.89} \\ \cline{1-1}\cline{4-6} VGG-11+BCGD & & & {\bf 89.59} & {\bf 90.71} & 91.70 \\ \hline ResNet-20 + BC & \multirow{2}{*}{92.41} & \multirow{2}{*}{92.14} & 89.37 & 91.02 & 91.71 \\ \cline{1-1}\cline{4-6} ResNet-20+BCGD & & & {\bf 90.05} & 91.03 & {\bf 91.97} \\ \hline \end{tabular} \caption{CIFAR-10 validation accuracies with the 3-valued $\bm{\alpha}$ derivative.} \label{tab:2} \end{table} \begin{table}[ht] \centering \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline Network & Float & 32W4A & 1W4A & 2W4A & 4W4A \\ \hline VGG-11 + BC & \multirow{2}{*}{92.13} & \multirow{2}{*}{91.66} & 88.50 & 89.99 & 91.31 \\ \cline{1-1}\cline{4-6} VGG-11+BCGD & & & {\bf 89.12} & 90.00 & 91.31 \\ \hline ResNet-20 + BC & \multirow{2}{*}{92.41} & \multirow{2}{*}{91.73} & 89.22 & 90.64 & 91.37 \\ \cline{1-1}\cline{4-6} ResNet-20+BCGD & & & {\bf 89.98} & {\bf 90.75} & {\bf 91.65} \\ \hline \end{tabular} \caption{CIFAR-10 validation accuracies with the 2-valued $\bm{\alpha}$ derivative (PACT \cite{pact}).} \label{tab:3} \end{table} ImageNet (ILSVRC12) dataset \cite{imagenet_09} is a benchmark for large-scale image classification task, which has $1.2$ million images for training and $50,000$ for validation of 1,000 categories. We set mini-batch size to $256$ and trained the models for 80 epochs with learning rate decaying at epoch 50 and 70. The weight decay of $10^{-5}$ and momentum of $0.9$ were used. The ResNet-18 accuracies 65.46\%/86.36\% at 1W4A in Table 4 outperformed HWGQ \cite{halfwave_17} where top-1/top-5 accuracies are 60.8\%/83.4\% with non-quantized first/last convolutional layers. The results in the Table 4 and Table 5 show that using the 3-valued coarse $\bm{\alpha}$ partial derivative appears more effective than the 2-valued as quantization bit precision is lowered. We also observe that the accuracies degrade gracefully from 4W8A to 1W4A for ResNet-18 while quantizing all convolutional layers. Again, BCGD converges much faster than BC towards higher accuracy as illustrated by Figure \ref{fig:2}. \begin{table}[ht] \centering \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{} & \multirow{2}{*}{Float} & \multicolumn{2}{c|}{1W4A} & \multicolumn{2}{c|}{4W4A} & \multicolumn{2}{c|}{4W8A} \\ \cline{3-8} & & 3 valued & 2 valued & 3 valued & 2 valued & 3 valued & 2 valued \\ \hline top-1 & 69.64 & 64.36/$65.46^*$ & 63.37/$64.57^*$ & 67.36 & 66.97 & 68.85 & 68.83 \\ \hline top-5 & 88.98 & 85.65/$86.36^*$ & 84.93/$85.75^*$ & 87.76 & 87.41 & 88.71 & 88.84\\ \hline \end{tabular} \caption{ImageNet validation accuracies with BCGD on ResNet-18. Starred accuracies are with first and last convolutional layers in float precision as in \cite{halfwave_17}. 
The accuracies are for quantized weights across all layers otherwise.} \label{tab:4} \end{table} \begin{table}[ht] \centering \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{} & \multirow{2}{*}{Float} & \multicolumn{2}{c|}{1W4A} & \multicolumn{2}{c|}{4W4A} & \multicolumn{2}{c|}{4W8A} \\ \cline{3-8} & & 3 valued & 2 valued & 3 valued & 2 valued & 3 valued & 2 valued \\ \hline top-1 & 73.27 & 68.43 & 67.51 & 70.81 & 70.01 & 72.07 & 72.18 \\ \hline top-5 & 91.43 & 88.29 & 87.72 & 90.00 & 89.49 & 90.71 & 90.73\\ \hline \end{tabular} \caption{ImageNet validation accuracies with BCGD on ResNet-34. The accuracies are for quantized weights across all layers.} \label{tab:5} \end{table} \begin{figure}[H] \centering \begin{tabular}{cc} \includegraphics[width=0.48\textwidth]{resnet18_4a1w_top1.png} \includegraphics[width=0.48\textwidth]{resnet18_4a1w_top5.png} \end{tabular} \caption{ImageNet validation accuracies (left: top-1, right: top-5) vs. number of epochs with 3-valued derivative on 1W4A quantization on ResNet-18 with (orange) and without (blue) blending which substantially speeds up training towards higher accuracies.}\label{fig:2} \end{figure} \medskip \section{Analysis of Coarse Gradient Descent for Activation Quantization} \label{sec:analy} As a proof of concept, we analyze a simple two-layer network with binarized ReLU activation. Let $\sigma$ be the binarized ReLU function, same as hard threshold activation \cite{hinton2012neural}, with the bit-width $b_a=1$ and the resolution $\alpha \equiv 1$ in (\ref{eq:qrelu}): \begin{equation*} \sigma(x) = \begin{cases} 0 \quad \mbox{if } x \leq 0, \\ 1 \quad \mbox{if } x >0 . \end{cases} \end{equation*} We define the training sample loss by $$ \ell(\mathbf{v},\mathbf{w}; \mathbf{Z}): = \frac{1}{2}\Big(\mathbf{v}^\top \sigma(\mathbf{Z}\mathbf{w}) - (\mathbf{v}^*)^\top \sigma(\mathbf{Z}\mathbf{w}^*)\Big)^2, $$ where $\mathbf{v}^*\in\mathbb{R}^m$ and $\mathbf{w}^*\in\mathbb{R}^n$ are the underlying (nonzero) teacher parameters in the second and first layers, respectively. Same as in the literature that analyze the conventional ReLU nets \cite{Lee,li2017convergence,tian2017analytical,brutzkus2017globally}, we assume the entries of $\mathbf{Z}\in\mathbb{R}^{m\times n}$ are i.i.d. sampled from the standard normal distribution $\mathcal{N}(0,1)$. Note that $\ell(\mathbf{v},\mathbf{w};\mathbf{Z}) = \ell(\mathbf{v},\mathbf{w}/c; \mathbf{Z})$ for any scalar $c>0$. Without loss of generality, we fix $\|\mathbf{w}^*\| = 1$. \subsection{Population Loss Minimization} Suppose we have $N$ independent training samples $\{\mathbf{Z}^{(1)},\dots, \mathbf{Z}^{(N)}\}$, then the associated empirical risk minimization reads \begin{equation}\label{eq:mean_model} \min_{\mathbf{v}\in\mathbb{R}^m,\mathbf{w}\in\mathbb{R}^n} \; \frac{1}{N}\sum_{i=1}^N \ell(\mathbf{v},\mathbf{w}; \mathbf{Z}^{(i)}). \end{equation} The major difficulty of analysis here is that the empirical risk function in (\ref{eq:mean_model}) is still piecewise constant and has a.e. zero partial $\mathbf{w}$ gradient. This issue can be resolved by instead considering the following population loss minimization \cite{li2017convergence,brutzkus2017globally,Lee,tian2017analytical}: \begin{equation}\label{eq:model} \min_{\mathbf{v}\in\mathbb{R}^m,\mathbf{w}\in\mathbb{R}^n} \; f(\mathbf{v}, \mathbf{w}) := \mathds{E}_\mathbf{Z} \left[\ell(\mathbf{v},\mathbf{w}; \mathbf{Z})\right]. 
\end{equation} Specifically, in the limit $N\to\infty$, the objective function $f$ becomes favorably smooth with non-trivial gradient. For nonzero vector $\mathbf{w}$, let us define the angle between $\mathbf{w}$ and $\mathbf{w}^*$ by $$ \theta(\mathbf{w},\mathbf{w}^*) := \arccos\Big(\frac{\mathbf{w}^\top\mathbf{w}^*}{\|\mathbf{w}\|\|\mathbf{w}^*\|}\Big) = \arccos\Big(\frac{\mathbf{w}^\top\mathbf{w}^*}{\|\mathbf{w}\|}\Big), $$ then we have \begin{lem}\label{lem:obj} If every entry of $\mathbf{Z}$ is i.i.d. sampled from $\mathcal{N}(0,1)$, $\|\mathbf{w}^*\|=1$, and $\|\mathbf{w}\|\neq 0$, then the population loss is \begin{equation}\label{eq:loss} f(\mathbf{v},\mathbf{w}) = \frac{1}{8}\left[\mathbf{v}^\top\big(\mathbf{I} + \mathbf{1}\1^\top \big) \mathbf{v} -2\mathbf{v}^\top \left( \left(1-\frac{2}{\pi}\theta(\mathbf{w},\mathbf{w}^*) \right)\mathbf{I} + \mathbf{1}\1^\top \right)\mathbf{v}^* + (\mathbf{v}^*)^\top \big(\mathbf{I} + \mathbf{1}\1^\top \big)\mathbf{v}^* \right]. \end{equation} Moreover, the gradients of $f(\mathbf{v},\mathbf{w})$ w.r.t. $\mathbf{v}$ and $\mathbf{w}$ are \begin{equation}\label{eq:grad_v} \frac{\partial f}{\partial \mathbf{v}}(\mathbf{v},\mathbf{w}) = \frac{1}{4}\big(\mathbf{I} + \mathbf{1}\1^\top \big) \mathbf{v} - \frac{1}{4}\left( \left(1-\frac{2}{\pi}\theta(\mathbf{w},\mathbf{w}^*) \right)\mathbf{I} + \mathbf{1}\1^\top \right)\mathbf{v}^* \end{equation} and \begin{equation}\label{eq:grad_w} \frac{\partial f}{\partial \mathbf{w}}(\mathbf{v},\mathbf{w}) = -\frac{\mathbf{v}^\top\mathbf{v}^*}{2\pi\|\mathbf{w}\|} \frac{\Big(\mathbf{I} - \frac{\mathbf{w}\w^\top}{\|\mathbf{w}\|^2}\Big)\mathbf{w}^*}{\Big\| \Big(\mathbf{I} - \frac{\mathbf{w}\w^\top}{\|\mathbf{w}\|^2}\Big)\mathbf{w}^*\Big\|}, \quad \mbox{for } \theta(\mathbf{w},\mathbf{w}^*)\in(0, \pi), \end{equation} respectively. \end{lem} \medskip When $\mathbf{w} \neq \mathbf{0}$, the possible (local) minimizers of problem (\ref{eq:model}) are located at \begin{enumerate} \item Stationary points where the gradients defined in (\ref{eq:grad_v}) and (\ref{eq:grad_w}) vanish simultaneously (which may not be possible), i.e., \begin{equation}\label{eq:critical} \mathbf{v}^\top\mathbf{v}^* = 0 \mbox{ and } \mathbf{v} = \big(\mathbf{I} + \mathbf{1}\1^\top \big)^{-1}\left( \left(1-\frac{2}{\pi}\theta(\mathbf{w},\mathbf{w}^*) \right)\mathbf{I} + \mathbf{1}\1^\top \right)\mathbf{v}^* . \end{equation} \item Non-differentiable points where $\theta(\mathbf{w},\mathbf{w}^*) = 0$ and $\mathbf{v} = \mathbf{v}^*$, or $\theta(\mathbf{w},\mathbf{w}^*) = \pi$ and $\mathbf{v} = \big(\mathbf{I} + \mathbf{1}\1^\top \big)^{-1}( \mathbf{1}\1^\top -\mathbf{I} )\mathbf{v}^*$. \end{enumerate} Among them, $\{(\mathbf{v},\mathbf{w}): \mathbf{v} = \mathbf{v}^*, \, \theta(\mathbf{w},\mathbf{w}^*) = 0\}$ are the global minimizers with $f(\mathbf{v},\mathbf{w}) = 0$. \begin{prop}\label{prop} If $(\mathbf{1}^\top\mathbf{v}^*)^2 < \frac{m+1}{2}\|\mathbf{v}^*\|^2$, then \begin{align*} \bigg\{(\mathbf{v},\mathbf{w}) \in\mathbb{R}^{m+n}: \mathbf{v} = (\mathbf{I} + \mathbf{1}\1^\top)^{-1} & \left(\frac{-(\mathbf{1}^\top\mathbf{v}^*)^2}{(m+1)\|\mathbf{v}^*\|^2- (\mathbf{1}^\top\mathbf{v}^*)^2}\mathbf{I} + \mathbf{1}\1^\top\right)\mathbf{v}^*, \\ & \qquad \qquad \qquad \theta(\mathbf{w},\mathbf{w}^*) = \frac{\pi}{2}\frac{(m+1)\|\mathbf{v}^*\|^2}{(m+1)\|\mathbf{v}^*\|^2 - (\mathbf{1}^\top\mathbf{v}^*)^2} \bigg\} \end{align*} gives the stationary points obeying (\ref{eq:critical}). Otherwise, problem (\ref{eq:model}) has no stationary points. 
\end{prop} The gradient of the population loss, $\left(\frac{\partial f}{\partial \mathbf{v}}, \, \frac{\partial f}{\partial \mathbf{w}}\right)(\mathbf{v},\mathbf{w})$, holds Lipschitz continuity under a boundedness condition. \begin{lem}\label{lem:lipschitz} For any $(\mathbf{v},\mathbf{w})$ and $(\tilde{\v},\tilde{\w})$ with $\min\{ \|\mathbf{w}\|, \, \|\tilde{\w}\|\} = c>0$ and $\max\{\|\mathbf{v}\|, \, \|\tilde{\v}\|\} = C$, there exists a constant $L>0$ depending on $c$ and $C$, such that $$ \left\|\left( \frac{\partial f}{\partial \mathbf{v}}, \frac{\partial f}{\partial \mathbf{w}} \right)(\mathbf{v},\mathbf{w}) - \left(\frac{\partial f}{\partial \mathbf{v}}, \frac{\partial f}{\partial \mathbf{w}} \right)(\tilde{\v},\tilde{\w}) \right\| \leq L \|(\mathbf{v}, \mathbf{w}) - (\tilde{\v}, \tilde{\w})\|. $$ \end{lem} \subsection{Convergence Analysis of Normalized Coarse Gradient Descent} The partial gradients $\frac{\partial f}{\partial \mathbf{v}}$ and $\frac{\partial f}{\partial \mathbf{w}}$, however, are not available in the training. What we really have access to are the expectations of the sample gradients, namely, $$\mathds{E}_\mathbf{Z}\left[\frac{\partial \ell}{\partial \mathbf{v}}(\mathbf{v},\mathbf{w};\mathbf{Z})\right] \mbox{ and } \mathds{E}_\mathbf{Z}\left[\frac{\partial \ell}{\partial \mathbf{w}}(\mathbf{v},\mathbf{w};\mathbf{Z})\right]. $$ If $\sigma$ was differentiable, then the back-propagation reads \begin{equation}\label{eq:bp_v} \frac{\partial \ell}{\partial \mathbf{v}}(\mathbf{v},\mathbf{w};\mathbf{Z}) = \sigma(\mathbf{Z}\mathbf{w})\Big(\mathbf{v}^\top \sigma(\mathbf{Z}\mathbf{w}) - (\mathbf{v}^*)^\top \sigma(\mathbf{Z}\mathbf{w}^*)\Big). \end{equation} and \begin{equation}\label{eq:bp_w} \frac{\partial \ell}{\partial \mathbf{w}}(\mathbf{v},\mathbf{w};\mathbf{Z}) = \mathbf{Z}^\top\big(\sigma^{\prime}(\mathbf{Z}\mathbf{w})\odot\mathbf{v}\big)\Big(\mathbf{v}^\top \sigma(\mathbf{Z}\mathbf{w}) - (\mathbf{v}^*)^\top \sigma(\mathbf{Z}\mathbf{w}^*)\Big). \end{equation} Now that $\sigma$ has zero derivative a.e., which makes (\ref{eq:bp_w}) inapplicable. We study the coarse gradient descent with $\sigma^{\prime}$ in (\ref{eq:bp_w}) being replaced by the (sub)derivative $\mu^{\prime}$ of regular ReLU $\mu(x):= \max(x,0)$. More precisely, we use the following surrogate of $\frac{\partial \ell}{\partial \mathbf{w}}(\mathbf{v},\mathbf{w};\mathbf{Z})$: \begin{equation}\label{eq:cbp_w} \mathbf{g}(\mathbf{v},\mathbf{w};\mathbf{Z}) = \mathbf{Z}^\top\big(\mu^{\prime}(\mathbf{Z}\mathbf{w})\odot\mathbf{v}\big)\Big(\mathbf{v}^\top \sigma(\mathbf{Z}\mathbf{w}) - (\mathbf{v}^*)^\top \sigma(\mathbf{Z}\mathbf{w}^*)\Big) \end{equation} with $\mu'(x) = \sigma(x)$, and consider the following coarse gradient descent with weight normalization: \begin{equation}\label{eq:iter} \begin{cases} \mathbf{v}^{t+1} = \mathbf{v}^t - \eta \mathds{E}_\mathbf{Z} \left[\frac{\partial \ell}{\partial \mathbf{v}}(\mathbf{v}^t,\mathbf{w}^t;\mathbf{Z})\right] \\ \mathbf{w}^{t+\frac{1}{2}} = \mathbf{w}^t - \eta \mathds{E}_\mathbf{Z} \left[\mathbf{g}(\mathbf{v}^t,\mathbf{w}^t; \mathbf{Z})\right] \\ \mathbf{w}^{t+1} = \frac{\mathbf{w}^{t+1/2}}{\left\|\mathbf{w}^{t+1/2}\right\|} \end{cases} \end{equation} \medskip \begin{lem} \label{lem:cgd} The expected gradient of $\ell(\mathbf{v},\mathbf{w};\mathbf{Z})$ w.r.t. 
$\mathbf{v}$ is \begin{equation}\label{eq:cgrad_v} \mathds{E}_\mathbf{Z} \left[\frac{\partial \ell}{\partial \mathbf{v}}(\mathbf{v},\mathbf{w};\mathbf{Z})\right] = \frac{\partial f}{\partial \mathbf{v}}(\mathbf{v},\mathbf{w}) = \frac{1}{4}\big(\mathbf{I} + \mathbf{1}\1^\top \big) \mathbf{v} - \frac{1}{4}\left( \left(1-\frac{2}{\pi}\theta(\mathbf{w},\mathbf{w}^*) \right)\mathbf{I} + \mathbf{1}\1^\top \right)\mathbf{v}^* . \end{equation} The expected coarse gradient w.r.t. $\mathbf{w}$ is \begin{equation}\label{eq:cgrad_w} \mathds{E}_\mathbf{Z} \Big[\mathbf{g}(\mathbf{v},\mathbf{w}; \mathbf{Z})\Big] = \frac{h(\mathbf{v},\mathbf{v}^*)}{2\sqrt{2\pi}}\frac{\mathbf{w}}{\|\mathbf{w}\|} - \cos\left(\frac{\theta(\mathbf{w},\mathbf{w}^*)}{2}\right)\frac{\mathbf{v}^\top\mathbf{v}^*}{\sqrt{2\pi}}\frac{\frac{\mathbf{w}}{\|\mathbf{w}\|} + \mathbf{w}^*}{\left\|\frac{\mathbf{w}}{\|\mathbf{w}\|} + \mathbf{w}^*\right\|},\footnote{We redefine the second term as $\mathbf{0}$ in the case $\theta(\mathbf{w},\mathbf{w}^*) = \pi$.} \end{equation} where $h(\mathbf{v},\mathbf{v}^*) = \|\mathbf{v}\|^2+ (\mathbf{1}^\top\mathbf{v})^2 - (\mathbf{1}^\top\mathbf{v})(\mathbf{1}^\top\mathbf{v}^*) + \mathbf{v}^\top\mathbf{v}^*$. In particular, $\mathds{E}_\mathbf{Z} \Big[\frac{\partial \ell}{\partial \mathbf{v}}(\mathbf{v},\mathbf{w}; \mathbf{Z})\Big]$ and $\mathds{E}_\mathbf{Z} \Big[\mathbf{g}(\mathbf{v},\mathbf{w}; \mathbf{Z})\Big]$ vanish simultaneously only in one of the following cases \begin{enumerate} \item (\ref{eq:critical}) is satisfied according to Proposition \ref{prop}. \item $\mathbf{v} = \mathbf{v}^*$, $\theta(\mathbf{w},\mathbf{w}^*)=0$, or $\mathbf{v} = (\mathbf{I} + \mathbf{1}\1^\top)^{-1}(\mathbf{1}\1^\top - \mathbf{I})\mathbf{v}^*$, $\theta(\mathbf{w},\mathbf{w}^*)=\pi$. \end{enumerate} \end{lem} \medskip What is interesting is that the coarse partial gradient $\mathds{E}_\mathbf{Z} \Big[\mathbf{g}(\mathbf{v},\mathbf{w}; \mathbf{Z})\Big]=\mathbf{0}$ is properly defined at global minimizers of the population loss minimization problem (\ref{eq:model}) with $\mathbf{v} = \mathbf{v}^*$, $\theta(\mathbf{w},\mathbf{w}^*)=0$, whereas the true gradient $\frac{\partial f}{\partial \mathbf{w}}(\mathbf{v},\mathbf{w})$ does not exist there. Our key finding is that the coarse gradient \emph{$\mathds{E}_\mathbf{Z} \Big[\mathbf{g}(\mathbf{v},\mathbf{w}; \mathbf{Z})\Big]$ has positive correlation with the true gradient $\frac{\partial f}{\partial \mathbf{w}}(\mathbf{v},\mathbf{w}) $, and consequently, $-\mathds{E}_\mathbf{Z} \Big[\mathbf{g}(\mathbf{v},\mathbf{w}; \mathbf{Z})\Big]$ together with $-\mathds{E}_\mathbf{Z} \left[\frac{\partial \ell}{\partial \mathbf{v}}(\mathbf{v},\mathbf{w};\mathbf{Z})\right]$ give a descent direction in algorithm (\ref{eq:iter}).} \begin{lem}\label{cor} If $\theta(\mathbf{w},\mathbf{w}^*)\in(0, \pi)$ , and $\|\mathbf{w}\|\neq 0$, then the inner product between the expected coarse and true gradients w.r.t. $\mathbf{w}$ is \begin{equation*} \left\langle \mathds{E}_\mathbf{Z} \Big[\mathbf{g}(\mathbf{v},\mathbf{w}; \mathbf{Z})\Big], \frac{\partial f}{\partial \mathbf{w}}(\mathbf{v},\mathbf{w}) \right\rangle = \frac{\sin\left(\theta(\mathbf{w},\mathbf{w}^*)\right)}{2(\sqrt{2\pi})^3\|\mathbf{w}\|}(\mathbf{v}^\top\mathbf{v}^*)^2 \geq 0. 
\end{equation*} \end{lem} Moreover, the following lemma asserts that $\mathds{E}_\mathbf{Z} \Big[\mathbf{g}(\mathbf{v},\mathbf{w}; \mathbf{Z})\Big]$ is sufficiently correlated with $\frac{\partial f}{\partial \mathbf{w}}(\mathbf{v},\mathbf{w})$, which will secure sufficient descent in objective values $\{f(\mathbf{v}^t,\mathbf{w}^t)\}$ and thus the convergence of $\{(\mathbf{v}^t,\mathbf{w}^t)\}$. \begin{lem}\label{lem:correlated} Suppose $\|\mathbf{w}\| = 1$ and $\|\mathbf{v}\|\leq C$. There exists a constant $A>0$ depending on $C$, such that $$ \left\|\mathds{E}_\mathbf{Z} \Big[\mathbf{g}(\mathbf{v},\mathbf{w}; \mathbf{Z})\Big] \right\|^2 \leq A\left(\left\|\frac{\partial f}{\partial \mathbf{v}}(\mathbf{v},\mathbf{w})\right\|^2 + \left\langle \mathds{E}_\mathbf{Z} \Big[\mathbf{g}(\mathbf{v},\mathbf{w}; \mathbf{Z})\Big], \frac{\partial f}{\partial \mathbf{w}}(\mathbf{v},\mathbf{w}) \right\rangle \right). $$ \end{lem} \medskip Equipped with Lemma \ref{lem:lipschitz} and Lemma \ref{lem:correlated}, we are able to show the convergence result of iteration (\ref{eq:iter}). \begin{thm}\label{thm} Given the initialization $(\mathbf{v}^0,\mathbf{w}^0)$ with $\|\mathbf{w}^0\|=1$, and let $\{(\mathbf{v}^t, \mathbf{w}^t)\}$ be the sequence generated by iteration (\ref{eq:iter}). There exists $\eta_0>0$, such that for any step size $\eta<\eta_0$, $\{f(\mathbf{v}^t,\mathbf{w}^t)\}$ is monotonically decreasing, and both $\left\|\mathds{E}_\mathbf{Z} \Big[\frac{\partial \ell}{\partial \mathbf{v}}(\mathbf{v}^t,\mathbf{w}^t; \mathbf{Z})\Big]\right\|$ and $\left\|\mathds{E}_\mathbf{Z} \Big[\mathbf{g}(\mathbf{v}^t,\mathbf{w}^t; \mathbf{Z})\Big]\right\|$ converge to 0, as $t\to\infty$. \end{thm} \begin{rem} Combining the treatment of \cite{Lee} for analyzing two-layer networks with regular ReLU and the positive correlation between $\mathds{E}_\mathbf{Z}\left[ \mathbf{g}(\mathbf{w},\mathbf{v};\mathbf{Z})\right]$ and $\frac{\partial f}{\partial \mathbf{w}}(\mathbf{v},\mathbf{w})$, one can further show that if the initialization $(\mathbf{v}^0,\mathbf{w}^0)$ satisfies $(\mathbf{v}^0)^\top\mathbf{v}^*>0$, $\theta(\mathbf{w}^0,\mathbf{w}^*)<\frac{\pi}{2}$ and $(\mathbf{1}^\top \mathbf{v}^*)(\mathbf{1}^\top \mathbf{v}^0)\leq (\mathbf{1}^\top\mathbf{v}^*)^2$, then $\{(\mathbf{v}^t,\mathbf{w}^t)\}$ converges to the global minimizer $(\mathbf{v}^*,\mathbf{w}^*)$. \end{rem} \section{Concluding Remarks} We introduced the concept of coarse gradient for activation quantization problem of DNNs, for which the a.e. gradient is inapplicable. Coarse gradient is generally not a gradient but an artificial ascent direction. We further proposed BCGD algorithm, for training fully quantized neural networks. The weight update of BCGD goes by coarse gradient correction of a weighted average of the float weights and their quantization, which yields sufficient descent in objective and thus acceleration. Our experiments demonstrated that BCGD is very effective for quantization at extremely low bit-width such as binarization. Finally, we analyzed the coarse gradient descent for a two-layer neural network model with Gaussian input data, and proved that the expected coarse gradient essentially correlates positively with the underlying true gradient. \bigskip \noindent{\bf Acknowledgements.} This work was partially supported by NSF grants DMS-1522383, IIS-1632935; ONR grant N00014-18-1-2527, AFOSR grant FA9550-18-0167, DOE grant DE-SC0013839 and STROBE STC NSF grant DMR-1548924. 
\section*{Conflict of Interest Statement} On behalf of all authors, the corresponding author states that there is no conflict of interest. \bibliographystyle{spmpsci}
1,116,691,498,840
arxiv
\section{Introduction} The $\alpha$ emission is one of the decay channels of the heavy nuclei. Measurements on the $\alpha$ decay can provide reliable information on the nuclear structure such as the ground-state energy, the ground-state half-life, the nuclear spin and parity, the nuclear deformation, the nuclear clustering, the shell effects and the nuclear interaction \cite{Re87,Ho91,Fi96,Lo98, Gar00,Au03,Gi03,Ho03,He04,Ga04,Se06,Lep07}. Measurements on the $\alpha$ decay are also used to identify new nuclides or new heavy elements since the $\alpha$ decay allows to extract clear and reliable information on the parent nucleus \cite{Og05,Hof07,Mo07}. Recently, the interest in the $\alpha$ decay has been renewed because of the development of radioactive beams and new detector technology under low temperature. The $\alpha$ decay process was first explained by Gamow and by Condon and Gurney in the 1920s \cite{Ga28,Co28} as a quantum-tunneling effect and is one of the first examples proving the need to use the quantum mechanics to describe the nuclear phenomena and its correctness. Later on, theoretical calculations were performed to predict the absolute $\alpha$ decay width, to extract nuclear structure information, and to pursue a microscopic understanding of the $\alpha$-decay phenomenon. These studies are based on various theoretical models such as the shell model, the fissionlike model, and the cluster model \cite{Po83,Br92,Va92,Bu93,Roy00,Gu02,Du02,Ba03,Zh06,Mo06,Re06,Po06,Pe07}. In the $\alpha$ cluster model, the decay constant $\lambda$ is the product of three terms: the cluster preformation probability $P_{0}$, the assault frequency $\nu_{0}$ and the barrier penetrability P. The $\alpha$ particle emission is a quantum tunnelling through the potential barrier leading from the mother nucleus to the two emitted fragments: the $\alpha$ particle and the daughter nucleus. The penetrability can be described successfully using the WKB approximation. The most important problem of the $\alpha$ decay is how to estimate the preformation probability $P_{0}$ that the $\alpha$ particle exists as a recognizable entity inside the nucleus before its emission. It is very dependent on the structure of the states of the mother and daughter nuclei and is a measure of the degree of resemblance between the initial state of the mother nucleus and the final state of the daughter nucleus plus the $\alpha$ particle. Within an asymmetric fission picture the preformation factor P$_{0}$ has been taken as 1 in previous studies and that allows to reproduce the experimental $\alpha$ decay half-lives \cite{Roy00,Zh06} when the experimental $Q_{\alpha}$ values are used. There are, however, still small differences between the calculated and experimental values and these discrepancies may be used to determine the $\alpha$-preformation probability. The $\alpha$ preformation factor is very important from the viewpoint of the nuclear structure. Numerous studies of the $\alpha$ decay have been concentrated on this problem \cite{Ma60,Ar74,Fl76,Ga83,Du86,Re88,Ro98,Ha00}. There are also several approaches to calculate the formation amplitude : the shell model, the BCS method \cite{del96}, the hybrid (shell model+$\alpha$-cluster) model \cite{var92} etc. The effect of continuum states on the $\alpha$ decay is known to be very large \cite{del96}. One needs therefore very large shell model basis to obtain the experimental values of the $\alpha$-preformation probability. 
The hybrid model \cite{var92}, which treats a large shell model basis up to the continuum states through the wavefunction of the spatially localized $\alpha$ cluster, explains well the experimental decay width. In these calculations the wave function is necessary, and it is not easy to extend the approaches for nuclei including more nucleons outside the double-magic core. In this contribution, the preformation factor is extracted from the experimental $\alpha$ decay half-life $T_{\alpha}$ and the penetration probability is obtained from the WKB approximation. The potential barrier is determined within a generalized liquid drop model (GLDM) including the proximity effects between the $\alpha$ particle and the daughter nucleus and adjusted to reproduce the experimental Q$_{\alpha}$ \cite{Roy00,Zh06}. The paper is organized as follows. In Sec. 2, the methods for calculating the assault frequencies, the penetrability of the even-even nuclei and the preformation factor are presented. The calculated results are shown and discussed in Sec. 3. In the last section, the conclusions are given and future works are proposed. \section{Methods} The $\alpha$ decay constant is defined as \begin{equation}\label{lamda1} \lambda=P_{0}\nu_{0}P. \end{equation} Imagining the $\alpha$ particle moving back and forth inside the nucleus with a velocity $v ~=~\sqrt{\frac{2E_{\alpha}}{M}} ~$, it presents itself at the barrier with a frequency : \begin{eqnarray}\label{mu} \nu_{0} ~=~\left(\frac{1}{2R}\sqrt{\frac{2E_{\alpha}}{M}}\right)~. \label{orig_nu} \end{eqnarray} R is the radius of the parent nucleus given by \begin{equation}\label{radii} R_i=(1.28A_i^{1/3}-0.76+0.8A_i^{-1/3}) \ \textrm{fm}, \end{equation} and E$_{\alpha}$ is the energy of the alpha particle, corrected for recoil; $M$ being its mass. The penetration probability P is calculated within the WKB approximation. The potential barrier governing the $\alpha$ emission is determined within the GLDM, including the volume, surface, Coulomb and proximity energies \cite{Roy85}: \begin{equation}\label{etot} E ~= ~E_{V}+E_{S}+E_{C}+E_{\text{Prox}} ~. \end{equation} When the nuclei are separated: \begin{equation}\label{ev} E_{V}=-15.494\left \lbrack (1-1.8I_1^2)A_1+(1-1.8I_2^2)A_2\right \rbrack \ \textrm{MeV}, \end{equation} \begin{equation}\label{es} E_{S}=17.9439\left \lbrack(1-2.6I_1^2)A_1^{2/3}+(1-2.6I_2^2)A_2^{2/3} \right \rbrack \ \textrm{MeV}, \end{equation} \begin{equation}\label{ec} E_{C}=0.6e^2Z_1^2/R_1+0.6e^2Z_2^2/R_2+e^2Z_1Z_2/r, \end{equation} where $A_i$, $Z_i$, $R_i$ and $I_i$ are the mass numbers, charge numbers, radii and relative neutron excesses of the two nuclei. $r$ is the distance between the mass centres. The radii $R_{i}$ are given by Eq. (\ref{radii}). For one-body shapes, the surface and Coulomb energies are defined as \cite{Roy85,Roy00}: \begin{equation}\label{esone} E_{S}=17.9439(1-2.6I^2)A^{2/3}(S/4\pi R_0^2) \ \textrm{MeV}, \end{equation} \begin{equation}\label{econe} E_{C}=0.6e^2(Z^2/R_0) \times 0.5\int (V(\theta)/V_0)(R(\theta)/R_0)^3 \sin \theta d \theta. \end{equation} $S$ is the surface of the one-body deformed nucleus. $V(\theta )$ is the electrostatic potential at the surface and $V_0$ the surface potential of the sphere. The surface energy results from the effects of the surface tension forces in a half space. When there are nucleons in regard in a neck or a gap between separated fragments an additional term called proximity energy must be added to take into account the effects of the nuclear forces between the close surfaces. 
This term is essential to describe smoothly the one-body to two-body transition and to obtain reasonable fusion barrier heights. It moves the barrier top to an external position and strongly decreases the pure Coulomb barrier. \begin{equation} E_{\text{Prox}}(r)=2\gamma \int _{h_{\text{min}}} ^{h_{\text{max}}} \Phi \left \lbrack D(r,h)/b\right \rbrack 2 \pi hdh, \end{equation} where $h$ is the distance varying from the neck radius or zero to the height of the neck border. $D$ is the distance between the surfaces in regard and $b=0.99$~fm the surface width. $\Phi$ is the proximity function. The surface parameter $\gamma$ is the geometric mean between the surface parameters of the two fragments. The combination of the GLDM and of a quasi-molecular shape sequence has allowed to reproduce the fusion barrier heights and radii, the fission and the $\alpha$ and cluster radioactivity data. The barrier penetrability $P$ is calculated within the action integral \begin{equation}\label{penetrability} P ~= ~exp[-\frac{2}{\hbar}\int_{R_{\text{in}}}^{R_{\text{out}}}\sqrt{2B(r)(E(r)-E(sphere))}] ~. \end{equation} The deformation energy (relatively to the sphere energy) is small till the rupture point between the fragments \cite{Roy00} and the two following approximations may be used : $R_{\text{in}}=R_{d}+R_{\alpha}$ and $B(r)=\mu$ where $\mu$ is the reduced mass. $R_{out}$ is simply e$^{2}$Z$_{d}$Z$_{\alpha}$/Q$_{\alpha}$. The decay constant may be deduced from the experimental $\alpha$ decay half-life $T_{\alpha}$ by \begin{equation}\label{lamda2} \lambda ~ = ~ \frac{ln2}{T_{\alpha}}. \end{equation} The preformation factor $P_{0}$ of an $\alpha$ cluster inside the mother nucleus can be estimated inserting Eq.(\ref{mu}), Eq.(\ref{penetrability}) and Eq.(\ref{lamda2}) in Eq.(\ref{lamda1}). \section{Results and discussions} The values of the decay constant $\lambda$, the velocity, the assault frequency $\nu_{0}$, the penetrability P and the preformation factor P$_{0}$ are given for the even-even Pb, Po, Rn, Ra, Th and U isotopes in the tables 1 and 2. In the simple picture of an $\alpha$ particle moving with a kinetic energy E$_{\alpha}$ inside the nucleus, the order of magnitude of the $\alpha$ velocity $v$ is 1.0$\times$10$^{22}$ fm/s for all considered nuclei. The order of magnitude of the assault frequency $\nu_{0}$ calculated by Eq.(\ref{lamda1}) is 1.0$\times$10$^{21}s^{-1}$ both in the table 1 and the table 2. The variations of $v$ and $\nu_{0}$ are small. In the previous study relative to the $\alpha$ decay half-lives using the GLDM \cite{Roy00}, the assault frequency $\nu_{0}$ is fixed as 1.0$\times$10$^{20}$, which is smaller by one order of magnitude than the value obtained in the present calculation, meaning that an averaged preformation factor of 0.1 has been included in the previous $\alpha$ decay study. So the previous calculations are consistent quantitatively with the experimental data in general. A fixed value of the assault frequency cannot describe the detailed features of the nuclear structure and the last column of the tables 1 and 2 displays the extracted preformation factor P$_{0}$ when the Eq. (\ref{mu}) is assumed. To study the correlation between the preformation factors and the structure properties, the preformation factors of Pb, Po, Rn, Ra, Th, and U isotopes are given as a function of the neutron number N in Fig.\ref{Fig1}. 
The preformation factors of the isotopes generally decrease with increasing neutron number up to the spherical shell closure N=126, where the minimum of P$_{0}$ occurs, and then they increase quickly with the neutron number. The values of the preformation factors is always minimum for the Po, Rn, Ra, Th, and U isotopes when N=126. The preformation factors of N=126 isotones are shown in Fig. \ref{Fig2}. The preformation factors increase when the proton number goes away from the magic number Z=82. The maximum values of the preformation factor for the five elements, which correspond to the most neutron-rich nucleus from Po to U isotopes respectively, are presented in Fig.\ref{Fig3}. In general, the values increase with the proton number. From $^{232}$Th to $^{238}$U, the value of the preformation factor decreases apparently, even if the proton number Z is far from the magic number Z=82. The neutron number of $^{238}$U (N=146) is closer to the submagic neutron number N=152 than that of the nucleus $^{232}$Th (N=142); so the preformation factor of $^{232}$Th is larger than that of the nucleus $^{238}$U. As a conclusion the shell closure effects play the key role in the $\alpha$ formation mechanism. The more the nucleon number is close to the magic numbers, the more the formation of the $\alpha$ cluster inside the mother nucleus is difficult. The ground-state spin and parity of even-even nuclei is 0$^{+}$ and the $\alpha$ decay of even-even nuclei mainly proceeds to the ground state of the daughter nucleus. Though the parent nucleus may also decay to the excited states of the daughter nucleus, the probability is very small and it can be neglected for a systematic study of penetrability. The calculated penetrabilities are given in the seventh column in the tables 1 and 2. The values of the penetrability vary in a wide range from 10$^{-39}$ to 10$^{-14}$ for the even-even isotopes of Pb, Po, Rn, Ra, Th, and U, implying that the penetrability plays the key role to determine the $\alpha$ decay half-lives. The values of log$_{10}$(P) are plotted in Fig. \ref{Fig4} for the even-even Pb, Po, Rn, Ra, Th, and U isotopes. They decrease with the increasing neutron number N for the Pb, Po, Rn, Ra, and Th isotopes before the magic number N=126. For the U isotopes, the experimental data of Q$_{\alpha}$ and T$_{\alpha}$ are not supplied. The penetrability values increase sharply after N=126 and reach the maximum at N=128, then they decrease again quickly for Po, Rn, Ra, Th, and U isotopes. For the Pb isotopes the experimental data of Q$_{\alpha}$ and T$_{\alpha}$ are not supplied after N=128. For the nuclei with two neutrons outside the closed shell, the $\alpha$ particle emission is easier than that of the other nuclei of the same isotopes. The closed shell structures play also the key role for the penetrability mechanism. The more the nucleon number is far away from the nucleon magic numbers, the more the penetrability of the $\alpha$ cluster from mother nucleus becomes larger. The values of the $\alpha$ decay constant $\lambda$ deduced from the experimental half-lives are shown in the fourth column of the tables 1 and 2 for the even-even isotopes of Pb, Po, Rn, Ra, Th, and U. In order to clarify the relations between the decay constant $\lambda$, the preformation factor P$_{0}$, the assault frequency $\nu_{0}$ and the penetrability P, the values of log$_{10}$(P$_{0}$), log$_{10}$($\nu_{0}$), log$_{10}$(P), and log$_{10}(\lambda)$ are displayed as functions of the neutron number N for Po isotopes in the Fig. 
\ref{Fig5}. The curve shape of log$_{10}(\lambda)$ is the same as that of log$_{10}$(P), confirming that the $\alpha$ decay half-lives are mainly decided by the penetrability. log$_{10}$($\nu_{0}$) decreases smoothly before N=126, increases from N=126 to N=128, and decreases again with the increasing of the neutron number, indicating that the assault frequency of $\alpha$ particle can also reflect the shell effects, even if log$_{10}$(P$_{0}$) exhibits the microscopic nuclear structure more clearly. \begin{table*}[h] \label{table1} \caption{Characteristics of the $\alpha$ formation and penetration for the even-even Pb, Po and Rn isotopes. The first column indicates the mother nucleus. The second and third columns correspond, respectively, to the experimental Q$_{\alpha}$ and log$_{10}$[T$_{\alpha}$(s)]. The fourth column is the decay constant $\lambda$. The fifth and sixth columns correspond, respectively, to the velocity and frequency of the $\alpha$ cluster inside the mother nucleus. The seventh column is the penetration probability and the last column gives the preformation factor extracted from Eq. (\ref{lamda1}) and the data of this table.} \begin{ruledtabular} \begin{tabular}{llllllll} $Nuclei$& Q$_{\alpha}$ [MeV]&log$_{10}$(T$_{\alpha}$)&$\lambda [ s^{-1} ]$ &$v [ fm/s ]$&$\nu_{0} [ s^{-1} ]$ & P &P$_{0}$\\ \hline $^{178}_{82}$Pb & 7.790 & -3.64 & 3.012 $\times 10^{3}$ & 1.913 $\times 10^{22}$& 1.453$\times 10^{21}$ & 4.097 $\times 10^{-17}$& 0.0506 \\ $^{180}_{82}$Pb & 7.415 & -2.30 & 1.386 $\times 10^{2}$ & 1.816 $\times 10^{22}$& 1.415$\times 10^{21}$ & 3.098 $\times 10^{-18}$ & 0.0317 \\ $^{182}_{82}$Pb & 7.080 & -1.26 & 1.261 $\times 10^{1}$ & 1.823 $\times 10^{22}$& 1.347$\times 10^{21}$ & 2.558 $\times 10^{-19}$ & 0.0359 \\ $^{184}_{82}$Pb & 6.774 & -0.21 & 1.137 $\times 10^{0}$ & 1.784 $\times 10^{22}$& 1.339$\times 10^{21}$ & 2.262 $\times 10^{-20}$ & 0.0375 \\ $^{186}_{82}$Pb & 6.470 & 1.08 & 5.779 $\times 10^{-2}$& 1.743 $\times 10^{22}$& 1.303$\times 10^{21}$ & 1.845 $\times 10^{-21}$ & 0.0240 \\ $^{188}_{82}$Pb & 6.109 & 2.43 & 2.569 $\times 10^{-3}$& 1.694 $\times 10^{22}$& 1.262$\times 10^{21}$ & 6.264 $\times 10^{-23}$ & 0.0325 \\ $^{190}_{82}$Pb & 5.697 & 4.26 & 3.853 $\times 10^{-5}$& 1.636 $\times 10^{22}$& 1.214$\times 10^{21}$ & 8.784 $\times 10^{-25}$ & 0.0361 \\ $^{192}_{82}$Pb & 5.221 & 6.56 & 1.927 $\times 10^{-7}$& 1.566 $\times 10^{22}$& 1.157$\times 10^{21}$ & 3.237 $\times 10^{-27}$ & 0.0514 \\ $^{194}_{82}$Pb & 4.738 & 9.99 & 7.077$\times 10^{-11}$& 1.491 $\times 10^{22}$& 1.098$\times 10^{21}$ & 5.278 $\times 10^{-30}$ & 0.0122 \\ $^{210}_{82}$Pb & 3.790 & 16.57 & 1.866$\times 10^{-17}$& 1.334 $\times 10^{22}$& 0.959$\times 10^{21}$ & 6.676 $\times 10^{-37}$ & 0.0293 \\ $^{188}_{84}$Po & 8.087 & -3.40 & 1.733 $\times 10^{3}$ & 1.950 $\times 10^{22}$& 1.452$\times 10^{21}$ & 8.129 $\times 10^{-17}$ & 0.0147 \\ $^{190}_{84}$Po & 7.693 & -2.60 & 2.772 $\times 10^{2}$ & 1.902 $\times 10^{22}$& 1.411$\times 10^{21}$ & 5.907 $\times 10^{-18}$ & 0.0333 \\ $^{192}_{84}$Po & 7.319 & -1.54 & 2.392 $\times 10^{1}$ & 1.855 $\times 10^{22}$& 1.371$\times 10^{21}$ & 3.909 $\times 10^{-19}$ & 0.0446 \\ $^{194}_{84}$Po & 6.990 & -0.41 & 1.782 $\times 10^{0}$ & 1.813 $\times 10^{22}$& 1.335$\times 10^{21}$ & 3.249 $\times 10^{-20}$ & 0.0410 \\ $^{196}_{84}$Po & 6.660 & 0.76 & 1.205 $\times 10^{-1}$& 1.770 $\times 10^{22}$& 1.299$\times 10^{21}$ & 2.041 $\times 10^{-21}$ & 0.0455 \\ $^{198}_{84}$Po & 6.310 & 2.18 & 4.580 $\times 10^{-3}$& 1.722 $\times 10^{22}$& 
1.259$\times 10^{21}$ & 8.283 $\times 10^{-23}$ & 0.0439 \\ $^{200}_{84}$Po & 5.980 & 3.79 & 1.124 $\times 10^{-4}$& 1.677 $\times 10^{22}$& 1.222$\times 10^{21}$ & 3.066 $\times 10^{-24}$ & 0.0300 \\ $^{202}_{84}$Po & 5.700 & 5.13 & 5.138 $\times 10^{-6}$& 1.637 $\times 10^{22}$& 1.188$\times 10^{21}$ & 1.721 $\times 10^{-25}$ & 0.0251 \\ $^{204}_{84}$Po & 5.480 & 6.28 & 3.638 $\times 10^{-7}$& 1.605 $\times 10^{22}$& 1.161$\times 10^{21}$ & 1.376 $\times 10^{-26}$ & 0.0228 \\ $^{206}_{84}$Po & 5.330 & 7.15 & 4.907 $\times 10^{-8}$& 1.583 $\times 10^{22}$& 1.141$\times 10^{21}$ & 2.254 $\times 10^{-27}$ & 0.0191 \\ $^{208}_{84}$Po & 5.220 & 7.97 & 7.427 $\times 10^{-9}$& 1.567 $\times 10^{22}$& 1.126$\times 10^{21}$ & 5.727 $\times 10^{-28}$ & 0.0115 \\ $^{210}_{84}$Po & 5.407 & 7.08 & 5.765 $\times 10^{-8}$& 1.596 $\times 10^{22}$& 1.142$\times 10^{21}$ & 7.615 $\times 10^{-27}$ & 0.0065 \\ $^{212}_{84}$Po & 8.950 & -6.52 & 2.295 $\times 10^{6}$ & 2.054 $\times 10^{22}$& 1.466$\times 10^{21}$ & 4.598 $\times 10^{-14}$ & 0.0341 \\ $^{214}_{84}$Po & 7.830 & -3.87 & 5.138 $\times 10^{3}$ & 1.921 $\times 10^{22}$& 1.366$\times 10^{21}$ & 4.309 $\times 10^{-17}$ & 0.0873 \\ $^{216}_{84}$Po & 6.900 & -0.82 & 4.580 $\times 10^{0}$ & 1.803 $\times 10^{22}$& 1.278$\times 10^{21}$ & 3.670 $\times 10^{-20}$ & 0.0976 \\ $^{218}_{84}$Po & 6.110 & 2.27 & 3.722 $\times 10^{-3}$& 1.696 $\times 10^{22}$& 1.199$\times 10^{21}$ & 2.844 $\times 10^{-23}$ & 0.1092 \\ $^{198}_{86}$Rn & 7.349 & -1.19 & 1.066 $\times 10^{1}$ & 1.859 $\times 10^{22}$& 1.360$\times 10^{21}$ & 9.928 $\times 10^{-20}$ & 0.0790 \\ $^{200}_{86}$Rn & 7.040 & 0.0 & 6.931 $\times 10^{-1}$& 1.820 $\times 10^{22}$& 1.326$\times 10^{21}$ & 8.491 $\times 10^{-21}$ & 0.0616 \\ $^{202}_{86}$Rn & 6.770 & 1.06 & 6.037 $\times 10^{-2}$& 1.785 $\times 10^{22}$& 1.296$\times 10^{21}$ & 9.713 $\times 10^{-22}$ & 0.0480 \\ $^{204}_{86}$Rn & 6.550 & 2.00 & 6.931 $\times 10^{-3}$& 1.755 $\times 10^{22}$& 1.270$\times 10^{21}$ & 1.375 $\times 10^{-22}$ & 0.0397 \\ $^{206}_{86}$Rn & 6.380 & 2.71 & 1.352 $\times 10^{-3}$& 1.733 $\times 10^{22}$& 1.249$\times 10^{21}$ & 2.830 $\times 10^{-23}$ & 0.0382 \\ $^{208}_{86}$Rn & 6.260 & 3.34 & 3.168 $\times 10^{-4}$& 1.716 $\times 10^{22}$& 1.233$\times 10^{21}$ & 9.051 $\times 10^{-24}$ & 0.0284 \\ $^{210}_{86}$Rn & 6.160 & 3.96 & 7.600 $\times 10^{-5}$& 1.703 $\times 10^{22}$& 1.219$\times 10^{21}$ & 3.957 $\times 10^{-24}$ & 0.0158 \\ $^{212}_{86}$Rn & 6.380 & 3.17 & 4.686 $\times 10^{-4}$& 1.733 $\times 10^{22}$& 1.237$\times 10^{21}$ & 3.810 $\times 10^{-23}$ & 0.0099 \\ $^{214}_{86}$Rn & 9.210 & -6.57 & 2.575 $\times 10^{6}$ & 2.084 $\times 10^{22}$& 1.482$\times 10^{21}$ & 4.350 $\times 10^{-14}$ & 0.0399 \\ $^{216}_{86}$Rn & 8.200 & -4.35 & 1.552 $\times 10^{4}$ & 1.966 $\times 10^{22}$& 1.393$\times 10^{21}$ & 1.011 $\times 10^{-16}$ & 0.1101 \\ $^{218}_{86}$Rn & 7.260 & -1.45 & 1.854 $\times 10^{1}$ & 1.850 $\times 10^{22}$& 1.307$\times 10^{21}$ & 1.226 $\times 10^{-19}$ & 0.1220 \\ $^{220}_{86}$Rn & 6.400 & 1.75 & 1.233 $\times 10^{-2}$& 1.736 $\times 10^{22}$& 1.223$\times 10^{21}$ & 6.470 $\times 10^{-23}$ & 0.1558 \\ $^{222}_{86}$Rn & 5.590 & 5.52 & 2.093 $\times 10^{-6}$& 1.733 $\times 10^{22}$& 1.237$\times 10^{21}$ & 1.055 $\times 10^{-26}$ & 0.1743 \\ \end{tabular} \end{ruledtabular} \end{table*} \begin{table*}[h] \label{table2} \caption{Same as table 1, but for the even-even Ra, Th and U isotopes. 
} \begin{ruledtabular} \begin{tabular}{llllllll} $Nuclei$& Q$_{\alpha}$ [MeV]&log$_{10}$(T$_{\alpha}$)&$\lambda [ s^{-1} ]$&$v [ fm/s ]$&$\nu_{0} [ s^{-1} ]$ & P &P$_{0}$\\ \hline $^{202}_{88}$Ra & 8.020 & -2.58 & 2.666 $\times 10^{2}$ & 1.943 $\times 10^{22}$& 1.411$\times 10^{21}$ & 2.931 $\times 10^{-18}$ & 0.0645 \\ $^{204}_{88}$Ra & 7.636 & -1.23 & 1.174 $\times 10^{1}$ & 1.896 $\times 10^{22}$& 1.372$\times 10^{21}$ & 1.893 $\times 10^{-19}$ & 0.0452 \\ $^{206}_{88}$Ra & 7.42 & -0.62 & 2.890 $\times 10^{0}$ & 1.869 $\times 10^{22}$& 1.347$\times 10^{21}$ & 3.750 $\times 10^{-20}$ & 0.0572 \\ $^{208}_{88}$Ra & 7.27 & 0.15 & 4.907 $\times 10^{-1}$& 1.850 $\times 10^{22}$& 1.329$\times 10^{21}$ & 1.193 $\times 10^{-20}$ & 0.0310 \\ $^{210}_{88}$Ra & 7.16 & 0.56 & 1.909 $\times 10^{-1}$& 1.836 $\times 10^{22}$& 1.315$\times 10^{21}$ & 5.682 $\times 10^{-21}$ & 0.0256 \\ $^{212}_{88}$Ra & 7.03 & 1.15 & 4.907 $\times 10^{-2}$& 1.819 $\times 10^{22}$& 1.298$\times 10^{21}$ & 1.982 $\times 10^{-21}$ & 0.0191 \\ $^{214}_{88}$Ra & 7.27 & 0.40 & 2.759 $\times 10^{-1}$& 1.851 $\times 10^{22}$& 1.316$\times 10^{21}$ & 1.575 $\times 10^{-20}$ & 0.0133 \\ $^{216}_{88}$Ra & 9.53 & -6.74 & 3.809 $\times 10^{6}$ & 2.120 $\times 10^{22}$& 1.503$\times 10^{21}$ & 5.580 $\times 10^{-14}$ & 0.0454 \\ $^{218}_{88}$Ra & 8.55 & -4.59 & 2.697 $\times 10^{4}$ & 2.008 $\times 10^{22}$& 1.419$\times 10^{21}$ & 2.049 $\times 10^{-16}$ & 0.0928 \\ $^{220}_{88}$Ra & 7.60 & -1.74 & 3.809 $\times 10^{1}$ & 1.893 $\times 10^{22}$& 1.333$\times 10^{21}$ & 2.854 $\times 10^{-19}$ & 0.1001 \\ $^{222}_{88}$Ra & 6.68 & 1.59 & 1.782 $\times 10^{-2}$& 1.774 $\times 10^{22}$& 1.245$\times 10^{21}$ & 1.269 $\times 10^{-22}$ & 0.1127 \\ $^{224}_{88}$Ra & 5.79 & 5.53 & 2.046 $\times 10^{-6}$& 1.651 $\times 10^{22}$& 1.155$\times 10^{21}$ & 1.216 $\times 10^{-26}$ & 0.1457 \\ $^{226}_{88}$Ra & 4.87 & 10.73 & 1.291$\times 10^{-11}$& 1.514 $\times 10^{22}$& 1.056$\times 10^{21}$ & 5.861 $\times 10^{-32}$ & 0.2086 \\ $^{210}_{90}$Th & 8.053 & -1.77 & 4.082 $\times 10^{1}$ & 1.948 $\times 10^{22}$& 1.395$\times 10^{21}$ & 9.009 $\times 10^{-19}$ & 0.0325 \\ $^{212}_{90}$Th & 7.952 & -1.44 & 1.927 $\times 10^{1}$ & 1.935 $\times 10^{22}$& 1.381$\times 10^{21}$ & 4.616 $\times 10^{-19}$ & 0.0302 \\ $^{214}_{90}$Th & 7.83 & -1.00 & 6.931 $\times 10^{0}$ & 1.921 $\times 10^{22}$& 1.366$\times 10^{21}$ & 1.978 $\times 10^{-19}$ & 0.0257 \\ $^{216}_{90}$Th & 8.07 & -1.55 & 2.459 $\times 10^{1}$ & 1.950 $\times 10^{22}$& 1.382$\times 10^{21}$ & 1.256 $\times 10^{-18}$ & 0.0142 \\ $^{218}_{90}$Th & 9.85 & -7.00 & 6.931 $\times 10^{6}$ & 2.156 $\times 10^{22}$& 1.523$\times 10^{21}$ & 7.388 $\times 10^{-14}$ & 0.0616 \\ $^{220}_{90}$Th & 8.95 & -5.01 & 7.093 $\times 10^{4}$ & 2.055 $\times 10^{22}$& 1.447$\times 10^{21}$ & 5.102 $\times 10^{-16}$ & 0.0961 \\ $^{222}_{90}$Th & 8.13 & -2.55 & 2.459 $\times 10^{2}$ & 1.958 $\times 10^{22}$& 1.374$\times 10^{21}$ & 2.514 $\times 10^{-18}$ & 0.0712 \\ $^{224}_{90}$Th & 7.30 & 0.11 & 5.381 $\times 10^{-1}$& 1.855 $\times 10^{22}$& 1.298$\times 10^{21}$ & 4.512 $\times 10^{-21}$ & 0.0919 \\ $^{226}_{90}$Th & 6.45 & 3.39 & 2.824 $\times 10^{-4}$& 1.743 $\times 10^{22}$& 1.216$\times 10^{21}$ & 1.901 $\times 10^{-24}$ & 0.1221 \\ $^{228}_{90}$Th & 5.52 & 7.92 & 8.333 $\times 10^{-9}$& 1.612 $\times 10^{22}$& 1.121$\times 10^{21}$ & 5.626 $\times 10^{-29}$ & 0.1321 \\ $^{230}_{90}$Th & 4.77 & 12.51 & 2.142 $\times10^{-13}$& 1.498 $\times 10^{22}$& 1.038$\times 10^{21}$ & 1.203 $\times 10^{-33}$ & 
0.1714 \\ $^{232}_{90}$Th & 4.08 & 17.76 & 1.205$\times 10^{-18}$& 1.385 $\times 10^{22}$& 0.957$\times 10^{21}$ & 4.447 $\times 10^{-39}$ & 0.2831 \\ $^{218}_{92}$U & 8.773 & -3.29 & 1.358 $\times 10^{3}$ & 2.034 $\times 10^{22}$ & 1.437$\times 10^{21}$ & 3.036 $\times 10^{-17}$ & 0.0311 \\ $^{220}_{92}$U & 10.30 & -7.22 & 1.156 $\times 10^{7}$ & 2.205 $\times 10^{22}$ & 1.553$\times 10^{21}$ & 1.700 $\times 10^{-13}$ & 0.0438 \\ $^{224}_{92}$U & 8.620 & -3.15 & 9.904 $\times 10^{2}$ & 2.016 $\times 10^{22}$ & 1.411$\times 10^{21}$ & 1.389 $\times 10^{-17}$ & 0.0505 \\ $^{226}_{92}$U & 7.701 & -0.30 & 1.386 $\times 10^{0}$ & 1.906 $\times 10^{22}$ & 1.329$\times 10^{21}$ & 1.900 $\times 10^{-20}$ & 0.0549 \\ $^{228}_{92}$U & 6.80 & 2.76 & 1.205 $\times 10^{-3}$ & 1.790 $\times 10^{22}$ & 1.245$\times 10^{21}$ & 9.088 $\times 10^{-24}$ & 0.1065 \\ $^{230}_{92}$U & 5.99 & 6.43 & 2.575 $\times 10^{-7}$ & 1.680 $\times 10^{22}$ & 1.164$\times 10^{21}$ & 1.914 $\times 10^{-27}$ & 0.1155 \\ $^{232}_{92}$U & 5.41 & 9.52 & 2.093 $\times 10^{-10}$& 1.596 $\times 10^{22}$ & 1.103$\times 10^{21}$ & 1.360 $\times 10^{-30}$ & 0.1395 \\ $^{234}_{92}$U & 4.86 & 13.02 & 6.620 $\times 10^{-14}$& 1.512 $\times 10^{22}$ & 1.042$\times 10^{21}$ & 4.166 $\times 10^{-34}$ & 0.1525 \\ $^{236}_{92}$U & 4.57 & 14.99 & 7.093 $\times 10^{-16}$& 1.466 $\times 10^{22}$ & 1.007$\times 10^{21}$ & 3.976 $\times 10^{-36}$ & 0.1771 \\ $^{238}_{92}$U & 4.27 & 17.27 & 3.722 $\times 10^{-18}$& 1.417 $\times 10^{22}$ & 0.970$\times 10^{21}$ & 1.572 $\times 10^{-38}$ & 0.2440 \\ \end{tabular} \end{ruledtabular} \end{table*} \begin{figure*}[t] \includegraphics[scale=0.9]{fig1.eps} \caption{Extracted preformation factors for the even-even isotopes of Pb, Po, Rn, Ra, Th, and U.} \label{Fig1} \end{figure*} \begin{figure*}[t] \includegraphics[scale=0.8]{fig2.eps} \caption{Extracted preformation factors for the N=126 isotones of Po, Rn, Ra, Th, and U.} \label{Fig2} \end{figure*} \begin{figure*}[t] \includegraphics[scale=0.8]{fig3.eps} \caption{Extracted preformation factors for $^{218}$Po, $^{222}$Rn, $^{226}$Ra, $^{232}$Th, and $^{238}$U.} \label{Fig3} \end{figure*} \begin{figure*}[t] \includegraphics[scale=0.9]{fig4.eps} \caption{Logarithms of the penetrability for the even-even isotopes of Pb, Po, Rn, Ra, Th, and U.} \label{Fig4} \end{figure*} \begin{figure*}[t] \includegraphics[scale=0.9]{fig5.eps} \caption{Characteristics of the $\alpha$ formation and penetration for the even-even Po isotopes.} \label{Fig5} \end{figure*} It has been shown that the nuclear structure characteristics can be detected through the study of the $\alpha$ formation and penetrability in the above discussions. It is tempting to extend the study to the superheavy nuclei. Recently, isotopes of the elements 112, 113, 114, 115, 116 and 118 have been synthesized in fusion-evaporation reactions using $^{209}$Bi, $^{233,238}$U, $^{242,244}$Pu, $^{243}$Am, $^{245,248}$Cm and $^{249}$Cf targets with $^{48}$Ca and $^{70}$Zn beams \cite{Og05,Hof07,Mo07}. These recent experimental results have led to new theoretical studies on the $\alpha$ decay; for example within the relativistic mean field theory \cite{Sh05}, the DDM3Y interaction \cite{Ba03}, the GLDM \cite{Zh06,Roy08} and the Skyrme-Hartree-Fock mean field model \cite{Pe07}. All the above theoretical works are concentrated on the half-lives of these observed superheavy nuclei, and the results are reasonably consistent with the experimental data. 
The characteristics of the $\alpha$ formation and penetrability for the even-even superheavy nuclei are listed in the table 3. The preformation factors are distributed between 0.005 and 0.05 showing that the shell effect structures are also important features for the superheavy nuclei. The preformation factor of $^{294}118$ is 0.0056, which is smaller than that of the spherical nucleus $^{210}$Po, implying perhaps that the neutron number N=176 is a submagic number. In Ref. \cite{zh04}, it is also pointed out that for the oblate deformed chain of Z$=112$, the shell closure appears at N$=176$. Thus the preformation of $\alpha$ is not easy for these heaviest nuclei. The penetration probabilities P given in the seventh column of the table 3 always stay between 10$^{-20}$ and 10$^{-15}$. The penetration probabilities are relatively very large, while the range is narrow. So it is easy for the $\alpha$ particle to escape from these heaviest nuclei as soon as they form, confirming that the superheavy nuclei are weakly bound. \begin{table*}[h] \label{table3} \caption{Same as table 1, but for the even-even superheavy nuclei. } \begin{ruledtabular} \begin{tabular}{llllllll} $Nuclei$& Q$_{\alpha}$ [MeV]&T$_{\alpha}$&$\lambda$&$v [ fm/s ]$&$\nu_{0} [ s^{-1} ]$ & P &P$_{0}$ \\ \hline &&&&&&& \\ $^{294}$118 & 11.81 & 0.89 ms & 7.788 $\times 10^{2}$ & 2.365$\times 10^{22}$ & 1.502$\times 10^{21}$ & 9.304 $\times 10^{-17}$ & 0.0056 \\ &&&&&&& \\ $^{292}$116 & 10.66 & 18 ms & 3.851 $\times 10^{1}$ & 2.246$\times 10^{22}$ & 1.430$\times 10^{21}$ & 5.394 $\times 10^{-19}$ & 0.0499 \\ &&&&&&& \\ $^{290}$116 & 11.00 & 7.1 ms & 9.763 $\times 10^{1}$ & 2.282$\times 10^{22}$ & 1.457$\times 10^{21}$ & 3.813 $\times 10^{-18}$ & 0.0176 \\ &&&&&&& \\ $^{288}$114 & 10.09 & 0.80 s & 8.664 $\times 10^{-1}$ & 2.185$\times 10^{22}$ & 1.398$\times 10^{21}$ & 5.734 $\times 10^{-20}$ & 0.0108 \\ &&&&&&& \\ $^{286}$114 & 10.33 & 0.13 s & 5.331 $\times 10^{0}$ & 2.153$\times 10^{22}$ & 1.418$\times 10^{21}$ & 2.476 $\times 10^{-19}$ & 0.0152 \\ &&&&&&& \\ $^{270}$Ds & 11.20 & 0.1 ms & 6.931$\times 10^{3}$ & 2.302$\times 10^{22}$ & 1.507$\times 10^{21}$ & 3.486 $\times 10^{-16}$ & 0.0132\\ &&&&&&& \\ $^{266}$Hs & 10.34 & 2.3 ms & 3.014 $\times 10^{2}$ & 2.211$\times 10^{22}$ & 1.456$\times 10^{21}$ & 1.134 $\times 10^{-17}$ & 0.0183 \\ &&&&&&& \\ $^{264}$Hs & 10.58 & 0.25 ms & 2.759 $\times 10^{3}$ & 2.237$\times 10^{22}$ & 1.476$\times 10^{21}$ & 4.143 $\times 10^{-17}$ & 0.0451 \\ &&&&&&& \\ $^{266}$Sg & 8.76 & 33.88 s & 2.046 $\times 10^{-2}$ & 2.035$\times 10^{22}$ & 1.339$\times 10^{21}$ & 1.473 $\times 10^{-21}$ & 0.0104 \\ &&&&&&& \\ $^{260}$Sg & 9.93 & 8.51 ms & 8.144 $\times 10^{1}$ & 2.167$\times 10^{22}$ & 1.438$\times 10^{21}$ & 3.359 $\times 10^{-18}$ & 0.0169 \\ \end{tabular} \end{ruledtabular} \end{table*} The experimental but model-dependent preformation factors of the 180 even-even nuclei from Z=52 to Z=118 are displayed in Fig. \ref{Fig6}. In general, the formation factor decreases with the increase of the mass number A. It can be understand easily that the theoretical $\alpha$ decay GLDM half-lives are very slightly higher than the experimental data for the light nuclei and lower for the heaviest systems, when the preformation factors are neglected. The preformation factors decrease with mass number till about 210, then, the preformation factors jump nearly one order of magnitude, and finally decrease again, due to the neutron number N=126 in this mass region. 
\begin{figure*}[t] \includegraphics[scale=0.8]{fig6.eps} \caption{$\alpha$ preformation factors for 180 even-even nuclei (Z=52-118).} \label{Fig6} \end{figure*} \section{Conclusion and outlook} A global study of the $\alpha$ formation and penetration is presented for the even-even nuclei from Z=52 to Z=118. The decay constants are obtained from the experimental half-lives. The penetration probabilities are calculated from the WKB approximation and the penetration barriers are constructed with the GLDM. After using a classical method to estimate the assault frequencies the preformation factors are extracted systematically. The preformation factors are not constant for these nuclei, and show regular law for the studied isotopes. Clearly the closed shell structures play the key role for the preformation mechanism, and the more the nucleon number is close to the magic nucleon numbers, the more the preformation of $\alpha$ cluster is difficult inside the mother nucleus. The decay constant is the product of the preformation, the penetration, and the assault frequency. The penetrability determines mainly the behaviour of the decay constant. The study is extended to the even-even superheavy nuclei. The penetration probabilities are relatively very large and its range is narrow for the heaviest nuclei. So it is easy for the $\alpha$ particle to escape from these nuclei as soon as they are formed, confirming that the superheavy nuclei are weakly bound. Thus, it is interesting and timely to study the preformation factors microscopically and systematically. It is also desired to extend the study to the odd A and odd-odd nuclei and extract possibly analytical formulas of preformation factors, which should be functions of the nucleon numbers and the $\alpha$ decay energy. It would be also important to take into account the deformations, which may change the results by perhaps one order of magnitude. \section{Acknowledgments} H.F. Zhang is thankful to Prof. Bao-Qiu Chen, Zhong-Zhou Ren, En-Guang Zhao for valuable discussions. The work of H.F. Zhang was supported by the National Natural Science Foundation of China (10775061,10505016,10575119,10175074), the Knowledge Innovative Project of CAS (KJCX3-SYW-N2), the Major Prophase Research Project of Fundamental Research of the Ministry of Science and Technology of China (2007CB815004).
\section*{Content} \section*{Introduction} The issue of whether the formation of arrays of Ge quantum dots on the Si(001) surface is an equilibrium-driven or kinetically controlled process has remained unsolved since the very discovery of coherent GeSi and Ge islands by Eaglesham and Cerullo \cite{Eaglesham_Cerullo} and Mo {\it et al.} \cite{Mo}. Numerous articles supported the model of kinetically driven growth of Ge clusters whereas others gave theoretical and experimental evidence of the equilibrium nature of Ge quantum dot arrays (see, e.\,g., Refs.\,\cite{Kinetically_Suppressed_Ripening, Kastner, Island_growth, Voigt_Review, Tersoff_Tromp, LLL, Goldfarb_2006, Goldfarb_2005, Facet-105, Stability-Anealing}). In 1999, a detailed analysis of this long-standing (even at that time) issue was made by Shchukin and Bimberg in their now classical review cited in Ref.\,\cite{Schukin-Bimberg}, which covered a wide selection of various heteroepitaxial systems including Ge/Si(001) epitaxial films with Ge clusters. Additional discussions of this issue can be found in a brief review section included in Ref.\,\cite{classification} or in our recent review article that appeared in Ref.\,\cite{Yur_JNO}. This article presents an experimental study of this issue. We deposited Ge on Si(001) at room temperature and explored crystallization of the disordered Ge film as a result of a thermal treatment. This experiment, originally conceived as purely technological, gave somewhat unexpected results. First of all, it demonstrated that the Ge/Si(001) film formed under the conditions of a closed system consists of the usual patched wetting layer (WL) and large droplike clusters of Ge rather than huts or domes, which appear when a film is deposited in a flux of Ge atoms arriving on its surface. Additionally, we observed nucleation of the $2\times 1$ reconstructed phase (i.e. the ordered one) in the disordered Ge film deposited at room temperature. And finally, we detected a mixture of $c(4\times 2)$ and $p(2\times 2)$ reconstructions on the surface of the resultant Ge film annealed at high temperature (600\,\textcelsius), whereas the simultaneous presence of both these structures in comparable proportions on wetting layer patches is a distinctive feature of the low-temperature mode of the wetting layer growth ($<600$\,\textcelsius) \cite{initial_phase}. Now, we proceed to the detailed presentation of the obtained results.$^{\rm a}$ \section*{Equipment, methods and samples} Experiments were carried out using a specially built setup consisting of an ultrahigh-vacuum (UHV) molecular-beam epitaxy (MBE) chamber (Riber EVA~32) equipped with an RH20 reflected high-energy electron diffraction (RHEED) unit (Staib Instruments) and connected with a UHV STM chamber (GPI~300) \cite{CMOS-compatible-EMRS,VCIAN2011,STM_GPI-Proc}. The Ge deposition rate and the Ge coverage ($h_{\rm Ge}$) were measured by a film thickness monitor (Inficon Leybold-Heraeus XTC 751-001-G1) graduated in advance, with a quartz sensor installed in the MBE chamber. Samples were heated from the rear side by tantalum radiators. The temperature was monitored with a tungsten-rhenium thermocouple mounted in the vacuum near the rear side of the samples and graduated beforehand {\it in situ} against a specialized pyrometer (IMPAC IS 12-Si) which measured the Si sample temperature \textit{T} through a chamber window with an accuracy of \textpm $(0.003\,T$\,[{\textcelsius}] + 1)\,\textcelsius.
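As a worked example of this accuracy formula: at the annealing temperature of 600\,{\textcelsius} used below, the pyrometer uncertainty amounts to \textpm$(0.003\times 600+1)$\,\textcelsius\ $=$ \textpm$2.8$\,\textcelsius.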
The atmosphere composition in the MBE chamber was monitored using a mass-spectrometer residual gas analyzer (SRS RGA-200) before and during the process. Additional details concerning the used equipment can be found, e.\,g., in Refs.\,\cite{classification, CMOS-compatible-EMRS, VCIAN2011}. Details of the pre-growth treatments of Si wafers, which included wet chemical etching and oxide removal by short high-temperature annealing ($T\sim$~900\,\textcelsius), can be found in our previous articles \cite{our_Si(001)_en, stm-rheed-EMRS, phase_transition}. 7\,{\AA} thick Ge films ($h_{\rm Ge}= 7$\,{\AA}) were deposited at the rate of 0.15\,{\AA}/s directly on the clean Si(001) surface at room temperature. Then the samples were heated to 600{\,\textcelsius} at the maximum rate achievable with the used infrared heaters (0.24{\,\textcelsius}/s),$^{\rm b}$ annealed at this temperature for 5 minutes, and cooled to room temperature in the quenching mode \cite{hydrogenation_YUR} at the rate of 0.4{\,\textcelsius}/s. Afterwards the samples were moved into the STM chamber for the structural analysis. STM images were processed using the WSxM software \cite{WSxM}. RHEED patterns were monitored on a screen and recorded with a video camera during the whole cycle of Ge film deposition and heat treatment. \section*{Experimental data and their discussion} \subsection*{A structure of the initial Ge film} When Ge is deposited on the Si(001) surface at room temperature, the $2\times 1$ RHEED pattern of Si(001) evolves into the $1\times 1$ one with the gradually decaying \textonehalf-streaks as the thickness of the deposited Ge film ($h_{\rm Ge}$) increases. However, even at $h_{\rm Ge}= 7$\,{\AA}, the {\textonehalf}-reflexes are still observable in the diffraction pattern (Fig.\,\ref{fig:RHEED_before}\,a,\,b), which is a direct indication of the presence of $2\times 1$ reconstructed domains occupying a part of the film surface. At the same time, the visible diffuse scattering of electrons and the widening of the streaks indicate that the film is significantly disordered. So, we should conclude that an ordered $2\times 1$ reconstructed crystalline phase, although occupying a minor part of the surface, forms in the disordered Ge film deposited on Si(001) at room temperature. STM images obtained from the as-deposited Ge film demonstrate a highly disordered granular structure (Fig.\,\ref{fig:RHEED_before}\,c). The film thickness reaches 1\,nm due to its porosity. We failed to resolve the $2\times 1$ reconstructed islands or areas. However, this does not allow one to state that such reconstructed domains are absent, since the atomic resolution necessary for their observation was not reached on the mainly disordered surface of the granular (and porous) Ge film. The RHEED patterns themselves demonstrating such a reconstruction are sufficient evidence of its presence on the surface. \subsection*{A structure of the {Ge/Si(001)} film after annealing} \subsubsection*{The wetting layer} RHEED patterns of the Ge layer change drastically as a result of sample annealing (Fig.\,\ref{fig:RHEED_after}): streaks of the $2 \times 1$ structure as well as 3D features are now observed. The 3D pattern becomes visible during sample heating at a temperature somewhat higher than 500{\,\textcelsius}; the pronounced $2 \times 1$ structure appears, as has already been shown previously \cite{Yur_JNO,SPIE_QD-chains}, during cooling at temperatures below 600{\,\textcelsius}.
So, we can conclude that the disordered Ge film deposited at room temperature on Si(001) transforms as a result of annealing at 600{\,\textcelsius} into the ordered $2 \times 1$ reconstructed Ge layer. Furthermore, STM images of the resultant surface demonstrate typical pictures of the $M \times N$ reconstruction (Fig.\,\ref{fig:WL}): the surface is composed of patches bounded by the dimer-row vacancies and dimer vacancy lines \cite{Wu,Iwawaki_initial,Voigt,Wetting,Vailionis}. There is no visible difference between this wetting layer, obtained as a result of annealing of the disordered Ge film, and wetting layers grown by molecular beam epitaxy at different conditions, both in the low-temperature and high-temperature modes (Fig.\,\ref{fig:WL-360_600_650}). The only specific feature of this wetting layer is that, although formed at high temperature, it contains both $p(2\times 2)$ and $c(4\times 2)$ structures, whereas only (or nearly only) the latter is observed on wetting layers deposited at high temperatures, a mixture of the two reconstructions being a characteristic feature of wetting layers deposited at low temperatures \cite{Yur_JNO,SPIE_QD-chains, Growing_Ge_hut-structure,Nucleation_high-temperatures}. But in general this minor peculiarity does not seem to be very significant in this case. Thus, we can conclude from the above that formation of the Ge/Si(001) wetting layer is controlled by system evolution to thermodynamic equilibrium, i.e. it is an equilibrium system. \subsubsection*{Ge hillocks} The thickness of the Ge/Si(001) wetting layer is known to be equal to 4 monolayers, i.e. a little more than 5.5\,{\AA}. We deposited 7\,{\AA} of Ge, i.e. an excess of roughly 1.5\,{\AA}, or about one monolayer. Usually, if Ge is deposited at low temperatures (say, e.g., at {\em T}$_{\rm gr}$ = 360\,\textcelsius), at such coverages the excess Ge atoms form the well-recognizable structure of a hut array (Fig.\,\ref{fig:array-600_360C}\,a) \cite{Mo,CMOS-compatible-EMRS,Yur_JNO}; if Ge is deposited at 600\,{\textcelsius} (or in the high-temperature mode), the excess Ge starts to gather into cluster arrays at even lower coverages and forms pyramids on which split edges are often observed (Fig.\,\ref{fig:array-600_360C}\,b) \cite{VCIAN-2012, Prepyramid-to-pyramid, Morphological_evolution, Nucleation_high-temperatures}. The question is where the excess Ge deposited at room temperature flows during the heat treatment at 600\,{\textcelsius}. As we have observed, during this treatment Ge forms neither hut arrays nor pyramids or domes but gathers into large, partially faceted, droplike clusters (Fig.\,\ref{fig:clusters}). The lateral dimensions of these oval hillocks vary from about 100 to about 200\,nm, their heights vary from nearly 10 to more than 20\,nm, and their number density is (1.2 to 1.3)$\times 10^9$\,cm$^{-2}$. ``Pelerines'' of multiple incomplete facets seen around the apexes (Fig.\,\ref{fig:clusters}\,b), indicating that these clusters grow from top to bottom \cite{VCIAN-2012}, as well as split edges (Fig.\,\ref{fig:clusters}\,b,\,d), demonstrate that they share some features with the pyramids growing at the same temperature during molecular beam epitaxy (Fig.\,\ref{fig:array-600_360C}\,b). The presented observation allows us to state definitely that no huts (or domes) appear if a Ge/Si(001) film forms moving to equilibrium as a closed system (or, in fact, as an isolated system, since outflow and radiative inflow of energy are balanced).
Pyramid arrays (no wedges arise at high temperatures \cite{Nucleation_high-temperatures}) emerge only as a result of a process requiring Ge atoms to flow on the surface. So, this experiment unambiguously demonstrates that at least the growth of the high-temperature pyramids appearing at temperatures greater than 600\,{\textcelsius} is controlled by kinetics rather than by evolution to thermodynamic equilibrium; the equilibrium clusters are the oval ones. The reasoning which leads us to this conclusion is very simple. The question is about the driving force of Ge cluster formation: whether it is kinetics (random migration of atoms and their interactions) or thermodynamics (evolution of a system to equilibrium). The Ge/Si(001) isolated system comes to a thermodynamic equilibrium state for the given temperature as a result of isothermal annealing \cite{Ehrenfest-Afanassjewa_Thermodynamics}. If the pyramids were equilibrium (or thermodynamically driven) structures, they would appear as a result of annealing; in other words, if they do not appear, they are not equilibrium structures at high-temperature growth. We can conclude now that at high temperatures pyramids arise due to kinetics rather than thermodynamics, and thermodynamics would give rise to the oval clusters instead of the pyramids. It should be noted that we consider the oval mounds as equilibrium structures (or maybe close to equilibrium, since this does not matter for our conclusions) even though the annealing time may seem too short. We believe that this conclusion is correct because a process which would first result in the appearance of large oval mounds on the WL and then in their dissolution or transformation into some other structures seems very unlikely in a closed system. We can also hardly assume that the oval cluster formation is constrained by some other kinetic processes, different from those constraining the pyramid formation during MBE, because the closed system that we consider (which is an isolated system during the phase of isothermal annealing) should evolve to its thermodynamic equilibrium. Even if the oval mounds are not yet completely equilibrium formations, they are certainly on the way there; they may well become larger or differently faceted in equilibrium, but they cannot become pyramids similar to those appearing as a result of MBE. Now let us briefly consider pyramid formation in the process of MBE. There are two competing kinetic processes on the WL during MBE at the temperature of the described experiment: pyramid formation near the place of a Ge atom's ``landing'' on the WL, which requires a small migration length, and long-distance migration of the ``landed'' Ge atoms along the WL surface, which results in the appearance of oval clusters (whose number density is only about 10$^9$\,cm$^{-2}$). Probably, the latter process would be favorable at very low Ge deposition rates, approaching zero, when the system resembles the closed one and local supersaturation does not appear or has enough time to disappear due to adatom migration. At practical deposition rates, local supersaturation rises rapidly and Ge adatoms are incorporated by the WL; the WL patches, whose upper layers are unstrained \cite{Yur_JNO,SPIE_QD-chains,Growing_Ge_hut-structure}, can no longer consume Ge, and the strain would begin to grow unless Ge atoms started to form pyramids on the patches, as described by our model presented in Ref.\,\cite{Growing_Ge_hut-structure}.
So, kinetics governs the processes of nucleation and growth of the high-temperature pyramids; otherwise the excess Ge adatoms would be distributed into the observed oval clusters, as prescribed by thermodynamics. We should also note that, unlike the {SiGe} islands grown by {Ge} diffusion from a local source \cite{Annealing_SiGe_600C_Montalenti,Annealing_SiGe_600C_nanotechnology} at temperatures close or equal to the temperature of the discussed experiment, which also have oval shapes, although somewhat different from those described in this article, the oval mounds described by us evolve to equilibrium with the wetting layer. The islands studied in Refs.\,\cite{Annealing_SiGe_600C_Montalenti,Annealing_SiGe_600C_nanotechnology} arise and grow in a permanent flux of adatoms directionally migrating along the wetting layer from the Ge stripe (the source of Ge) to the Si surface free of Ge. Consequently, they never come to equilibrium with the wetting layer (although they grow under isothermal annealing), and their formation is controlled by kinetic processes, i.\,e. by diffusion and capture of diffusing atoms. \section*{Conclusion} In conclusion of the article, we would like to emphasize its main statements. First, we conclude that an ordered $2\times 1$ reconstructed crystalline phase forms in the disordered Ge film during its deposition on the Si(001) surface at room temperature. Second, as a result of annealing at 600{\,\textcelsius}, the disordered Ge film deposited at room temperature on Si(001) transforms into the ordered $2 \times 1$ reconstructed Ge wetting layer. Third, the resultant wetting layer surface is $M \times N$ reconstructed, i.e. it consists of patches bounded by the dimer-row vacancies and dimer vacancy lines; apexes of the patches are formed by both $p(2\times 2)$ and $c(4\times 2)$ structures. Fourth, huts (pyramids or wedges) or domes do not emerge if a Ge/Si(001) film is formed in a closed system by annealing at 600{\,\textcelsius} of a Ge layer deposited on Si(001) at room temperature; the excess Ge atoms from the initial Ge film are gathered into large partially faceted oval clusters. And finally, we have demonstrated that at least the growth of the pyramids appearing at temperatures greater than 600\,{\textcelsius} is controlled by kinetics rather than by evolution to thermodynamic equilibrium, whereas the wetting layer is an equilibrium structure. Consequently, we can conclude that these nonequilibrium, kinetically controlled Ge quantum dots grow on the equilibrium, thermodynamically controlled Ge/Si(001) wetting layer. Thus, the wetting layer could determine only the places of their nucleation and maybe the structure of their nuclei but cannot control their growth process; this inference completely agrees with the hut nucleation and growth scenarios proposed in our previous publications \cite{Yur_JNO, SPIE_QD-chains, Growing_Ge_hut-structure}. \begin{backmatter} \section*{Competing interests} The authors declare that they have no competing interests. \section*{Authors' contributions} MSS conceived of the study and designed it; he carried out the experiments and took part in discussions and interpretation of the results. LVA participated in the design of the study and carried out the experiments; she took part in discussions and interpretation of the results. VAY performed data analysis and interpreted the results; he drafted the manuscript. All authors read and approved the final manuscript. \section*{Acknowledgements} We thank Ms. N. V.
Kiryanova for her invaluable contribution to the arrangement of this research. We also thank Ms. L. M. Krylova for chemical treatments of the samples. Equipment of the Center for Collective Use of Scientific Equipment of GPI RAS was used for this study. We acknowledge the support of this research. \section*{Abbreviations} MBE: molecular beam epitaxy; RHEED: reflected high-energy electron diffraction; WL: wetting layer \section*{Endnotes} $^{\rm a}$A preprint of the current article is cited in Ref.\,\cite{Anneal@600C_arXiv}. $^{\rm b}$The heating process does not seem to be really adiabatic: its rate was insufficiently high, and we observed changes in the surface structure, which means that some changes in entropy took place. However, we performed heating as rapidly as we could to minimize its effect on the Ge layer. \bibliographystyle{bmc-mathphys}
\section{Introduction} Quantum gravity on three-dimensional anti-de Sitter ($AdS$) space was found in~\cite{BH86} to be holographically dual to a two-dimensional conformal field theory (CFT). In the spirit of this work, it was recently argued~\cite{GHSS0809} that the extremal four-dimensional Kerr black hole~\cite{BH9905}, for which the angular momentum $J$ saturates the regularity bound $J\leq GM^2$, is holographically dual to a chiral CFT in two dimensions. This Kerr/CFT correspondence was subsequently~\cite{HMNS0811} generalized to a similar correspondence for the extremal Kerr-Newman black hole as well as for its $AdS$ and $dS$ generalizations. We refer to~\cite{MS9702} for earlier work on a dual description of the Kerr black hole, and to~\cite{wake1,wake1a,wake2,wake2a,wake3,wake3a,wake3b} for further progress in the wake of~\cite{GHSS0809}. Excitations around the near-horizon extremal black holes can be controlled by imposing appropriate boundary conditions. To every consistent set of boundary conditions, there is an associated asymptotic symmetry group generated by the diffeomorphisms obeying the conditions. The conserved charge of an asymptotic symmetry is constructed as a surface integral and can be analyzed using the formalism of~\cite{BB0111,BC0708} based on~\cite{BH86,AD82} and discussed extensively in~\cite{Com0708}. An asymptotic symmetry transformation with vanishing conserved charge is rendered trivial. The near-horizon metrics of the extremal black holes of our interest all have an $SL(2,\mathbb{R})\times U(1)$ isometry group. In the studies~\cite{GHSS0809,HMNS0811} of the Kerr/CFT correspondence and its generalizations, the $SL(2,\mathbb{R})$ becomes trivial while the $U(1)$ is enhanced to a Virasoro algebra. This is in contrast to the situation in studies of the G\"odel black hole~\cite{CD0701} and warped $AdS_3$~\cite{CD0808} in which an $SL(2,\mathbb{R})$ isometry is enhanced to a Virasoro algebra. The boundary conditions imposed in~\cite{GHSS0809,HMNS0811} are relevant for describing the ground-state entropy of the extremal Kerr black hole or its generalization. A discussion of the microscopic origin of the Bekenstein-Hawking entropy~\cite{Bek73} for a class of black holes may be found in~\cite{SV9601} and references therein. Once constructed, boundary conditions enhancing the $SL(2,\mathbb{R})$ isometry of the extremal Kerr black hole, on the other hand, are speculated~\cite{GHSS0809} to be relevant for the understanding of the entropy of near-extremal fluctuations. It is an objective of the present work to devise such boundary conditions and to study the resulting asymptotic symmetry group. The proposed boundary conditions are isometry-preserving in the sense that the original exact $SL(2,\mathbb{R})$ isometries correspond to the global conformal transformations (generated by the Virasoro modes $\ell_n$, $n=-1,0,1$) of the dual two-dimensional CFT. Following the standard approach~\cite{BB0111,BC0708}, the conserved charges associated to these Virasoro-generating asymptotic Killing vectors are well-defined and non-vanishing, but yield a centreless Virasoro algebra. Challenges from so-called back-reaction effects are briefly indicated. After reviewing the construction in~\cite{GHSS0809}, we argue that there are infinitely many choices of boundary conditions yielding the same centrally-extended Virasoro algebra as the $U(1)$-enhanced one obtained in~\cite{GHSS0809}. 
We also show that there is a related class of boundary conditions giving rise to a centreless $U(1)$-enhanced Virasoro algebra. We then present a class of boundary conditions enhancing the $SL(2,\mathbb{R})$ isometries to a centreless Virasoro algebra. The corresponding asymptotic symmetry is generated by an unusual differential-operator realization of the Virasoro algebra. A well-defined central extension of the algebra generated by the associated conserved charges is not permitted in this context -- not even when ignoring back-reaction effects. Nor does it seem possible, within our approach, to construct boundary conditions resulting in two copies of the Virasoro algebra -- one enhancing the $U(1)$ isometry; the other enhancing the $SL(2,\mathbb{R})$ isometries. It certainly is possible, though, at the level of asymptotic Killing vectors, but the charges associated to at least one of the two Virasoro copies are ill-defined or simply vanish. \\[.2cm] While this work was being completed, the paper~\cite{MTY0907} on the Kerr/CFT correspondence appeared. It shares some of our objectives and has an overlap in approach, but is based on a particular choice of boundary conditions not considered here. As in our similar cases, the $SL(2,\mathbb{R})$ isometries of the extremal Kerr black hole are enhanced to a Virasoro algebra generated by a set of asymptotic Killing vectors. Contrary to our cases, one verifies that the associated conserved charges all vanish, thus rendering the corresponding asymptotic symmetries trivial. The subsequent analysis of finite-temperature effects and entropy in~\cite{MTY0907} is based on {\em quasi-local} charges~\cite{BY9209}. Analyses of that kind are not carried out here. A continuation of the work~\cite{MTY0907} can be found in~\cite{MTY0907b}. \section{Near-horizon extremal geometry} We are interested in the class~\cite{KLR0705} of extremal, stationary and rotationally symmetric four-dimensional black holes whose near-horizon metric and gauge field are of the form \bea d\sbar^2&=&\G(\theta)\Big(\!-r^2dt^2+\frac{dr^2}{r^2}+\al(\theta)d\theta^2\Big) +\g(\theta)\big(d\phi+krdt\big)^2\nn \Ab&=&f(\theta)\big(d\phi+krdt\big) \label{ds2} \eea where $\theta\in[0,\pi]$ and $\phi\in[0,2\pi)$. Among these is the extremal Kerr-Newman black hole, as well as its $AdS$ and $dS$ generalizations. The corresponding isometry group $SL(2,\mathbb{R})\times U(1)$ is generated by the Killing vectors \be \big\{\pa_t,\ t\pa_t-r\pa_r,\ \big(t^2+\frac{1}{r^2}\big)\pa_t -2tr\pa_r-\frac{2k}{r}\pa_\phi\big\}\cup\big\{\pa_\phi\big\} \label{iso} \ee In this work, we only consider the gravitational part but hope to discuss the gauge transformations elsewhere. The near-horizon extremal Kerr metric, in particular, is obtained by setting \be \G(\theta)=a^2(1+\cos^2\theta),\qquad \g(\theta)=\frac{4a^2\sin^2\theta}{1+\cos^2\theta},\qquad \al(\theta)=k=1 \label{Kerr} \ee and the ADM mass and angular momentum of the extremal Kerr black hole are given by \be M=\frac{a}{G},\qquad\quad J=\frac{a^2}{G} \ee The Kerr metric itself~\cite{Kerr63} describes a rotating black hole as a solution to the four-dimensional vacuum Einstein equations. \section{Asymptotic symmetry group} We are interested in fluctuations of the near-horizon geometry of the extremal black hole whose background metric $\gb_{\mu\nu}$ is defined in (\ref{ds2}). We denote the corresponding perturbation of this metric by $h_{\mu\nu}$.
Asymptotic symmetries are generated by the diffeomorphisms whose action on the metric generates metric fluctuations compatible with the chosen boundary conditions. We are thus looking for contravariant vector fields $\eta$ along which the Lie derivative of the metric is of the form \be \Lc_\eta \gb_{\mu\nu}\sim h_{\mu\nu} \label{Lgh} \ee The asymptotic symmetry group is generated by the set of these transformations modulo those whose charges, to be defined below, vanish. The boundary conditions should therefore be strong enough to ensure well-defined charges, yet weak enough to keep the charges non-zero. It is of interest to determine the window of suitable boundary conditions. Once the asymptotic group of a consistent set of boundary conditions has been determined, one may scan, by weakening or strengthening the conditions, for other boundary conditions yielding the same or a closely related asymptotic group. Generally speaking, a strengthening of the boundary conditions will disallow certain asymptotic symmetries but not allow new ones. A weakening of the boundary conditions, on the other hand, will typically allow new asymptotic symmetries but may render some charges ill-defined. A change in boundary conditions, strengthening some parts of the metric fluctuations but weakening others, may result in a different asymptotic symmetry group or in an equivalent group obtained as an enhancement of different exact isometries. To the asymptotic symmetry generator $\eta$ satisfying (\ref{Lgh}), one associates~\cite{BB0111,BC0708} the conserved charge \bea Q_\eta&=&\frac{1}{8\pi G}\int_{\pa\Sigma}\sqrt{-\gb}k_\eta[h;\gb] =\frac{1}{8\pi G}\int_{\pa\Sigma}\frac{\sqrt{-\gb}}{4}\eps_{\al\beta\mu\nu}d_\eta^{\mu\nu}[h;\gb] dx^\al\wedge dx^\beta\nn &=&\frac{1}{16\pi G}\int_0^\pi\int_0^{2\pi}\sqrt{-\gb} \big(d_{\eta}^{tr}[h;\gb]-d_\eta^{rt}[h;\gb]\big)d\phi d\theta \label{Q} \eea where \be d_\eta^{\mu\nu}[h;\gb] =\eta^\nu \Db^\mu h-\eta^\nu \Db_\sigma h^{\mu\sigma} +\eta_\sigma\Db^\nu h^{\mu\sigma} -h^{\nu\sigma}\Db_\sigma\eta^\mu+\frac{1}{2}h\Db^\nu\eta^\mu +\frac{1}{2}h^{\sigma\nu}\big(\Db^\mu\eta_\sigma+\Db_\sigma\eta^\mu\big) \label{dmunu} \ee and where $\pa\Sigma$ is the boundary of a three-dimensional spatial volume, ultimately near spatial infinity. Here, indices are lowered and raised using the background metric $\gb_{\mu\nu}$ and its inverse, $\Db_\mu$ denotes a background covariant derivative, while $h$ is defined as $h=\gb^{\mu\nu}h_{\mu\nu}$. For the charge to be well defined in the asymptotic limit, the underlying integral must be finite as $r\to\infty$. If the charge vanishes, the asymptotic symmetry is rendered trivial. The algebra generated by the set of well-defined charges is governed by the Dirac brackets computed~\cite{BB0111,BC0708} as \be \big\{Q_\eta,Q_{\hat\eta}\big\} =Q_{[\eta,\hat\eta]}+\frac{1}{8\pi G}\int_{\pa\Sigma}\sqrt{-\gb}k_{\eta}[\Lc_{\hat\eta}\gb;\gb] \label{QQ} \ee where the integral yields the eventual central extension.
\section{Kerr/CFT correspondence} For ease of comparison, we here use the global coordinates of the near-horizon extremal Kerr black hole employed in~\cite{GHSS0809}, in which the metric reads \be d\sbar^2=2GJ\Omega^2\Big(\!-(1+r^2)d\tau^2+\frac{dr^2}{1+r^2}+d\theta^2 +\Lambda^2\big(d\var+rd\tau\big)^2\Big) \label{ds2Kerr} \ee where \be \Omega^2=\Omega^2(\theta)=\frac{1+\cos^2\theta}{2},\qquad\qquad \La=\Lambda(\theta)=\frac{2\sin\theta}{1+\cos^2\theta} \ee In these coordinates, the rotational $U(1)$ isometry is generated by the Killing vector $\pa_\var$, while the $SL(2,\mathbb{R})$ isometries do not concern us here. Written in the ordered basis $\{\tau,r,\var,\theta\}$, the imposed boundary conditions read \be h_{\mu\nu}=\Oc\!\left(\!\!\begin{array}{cccc} r^2&r^{-2}&1&r^{-1} \\ &r^{-3}&r^{-1}&r^{-2} \\ &&1&r^{-1} \\ &&&r^{-1} \end{array} \!\!\right), \qquad\quad h_{\mu\nu}=h_{\nu\mu} \label{hKerr} \ee To preserve extremality of the Kerr black hole, one imposes the additional condition $Q_{\pa_\tau}=0$. The asymptotic Killing vectors are given by \be K_\eps =\Oc(r^{-3})\pa_\tau +\big(\!-r\eps'(\var)+\Oc(1)\big)\pa_r +\big(\eps(\var)+\Oc(r^{-2})\big)\pa_{\var} +\Oc(r^{-1})\pa_{\theta} \ee where $\eps(\var)$ is a smooth function; in addition, there is the `trivialized' $\pa_\tau$. The generators of the corresponding asymptotic symmetry read \be \xi=-r\eps'(\var)\pa_r+\eps(\var)\pa_{\var} \ee and form the centreless Virasoro algebra \be \big[\xi_{\eps},\xi_{\hat\eps}\big]=\xi_{\eps\hat\eps'-\eps'\hat\eps} \label{Virdiff} \ee The usual form of the Virasoro algebra is obtained by choosing an appropriate basis for the functions $\eps(\var)$ and $\hat\eps(\var)$. This symmetry is an enhancement of the exact $U(1)$ isometry generated by the Killing vector $\pa_\var$ of (\ref{ds2Kerr}) as the latter is recovered by setting $\eps(\var)=1$. The associated charges are computed using \be \sqrt{-\gb}k_\xi[h;\gb]=\left(\frac{1}{2}\eps'\La rh_{r\var} -\frac{1}{4}\eps\La\Big(\La^2\frac{h_{\tau\tau}}{r^2} +2r\pa_\var h_{r\var}+\big(\La^2+1\big)h_{\var\var}\Big)\right)d\var\wedge d\theta+\ldots \label{kxiKerr} \ee where we have introduced the shorthand notation $\eps=\eps(\var)$. The dots indicate that terms not contributing to the charge (\ref{Q}) have been omitted. With respect to the basis $\xi_n(\var)$, where $\eps_n(\var)=-e^{-in\var}$, one introduces the dimensionless quantum versions of the conserved charges \be L_n=\frac{1}{\hbar}\Big(Q_{\xi_n}+\frac{3J}{2}\delta_{n,0}\Big) \label{L} \ee After the usual substitution $\{.,.\}\to-\frac{i}{\hbar}[.,.]$ of Dirac brackets by quantum commutators, the quantum charge algebra is recognized as the centrally-extended Virasoro algebra~\cite{GHSS0809} \be \big[L_n,L_m\big]=(n-m)L_{n+m}+\frac{c}{12}n(n^2-1)\delta_{n+m,0},\qquad\quad c=\frac{12J}{\hbar} \label{VirKerr} \ee \subsection{Partial classification of $U(1)$-enhancing boundary conditions} The Lie derivative along $\xi$ of the background metric $\gb_{\mu\nu}$ is given by \be \Lc_\xi\gb_{\mu\nu}=-2GJ\Omega^2\left(\!\!\begin{array}{cccc} 2(\La^2-1)r^2\eps'&0&0&0 \\ 0&\frac{2}{(1+r^2)^2}\eps'&\frac{r}{1+r^2}\eps''&0 \\ 0&\frac{r}{1+r^2}\eps''&-2\La^2\eps'&0 \\ 0&0&0&0 \end{array} \!\!\right) \label{Liexigb} \ee and has only four (independent) non-trivial entries.
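As a quick cross-check of (\ref{Liexigb}), the following minimal \texttt{sympy} sketch (ours, purely illustrative) recomputes $\Lc_\xi\gb_{\mu\nu}=\xi^\sigma\pa_\sigma\gb_{\mu\nu}+\gb_{\sigma\nu}\pa_\mu\xi^\sigma+\gb_{\mu\sigma}\pa_\nu\xi^\sigma$; since $\xi$ has neither a $\theta$-component nor any $\theta$-dependence, the factors $2GJ\Omega^2$ and $\La$ may be treated as constant symbols \texttt{C} and \texttt{L} here.
\begin{verbatim}
import sympy as sp

# Coordinates in the ordered basis {tau, r, varphi, theta}; C stands for
# 2*G*J*Omega(theta)^2 and L for Lambda(theta), treated as constants.
tau, r, ph, th = sp.symbols('tau r varphi theta')
C, L = sp.symbols('C L', positive=True)
eps = sp.Function('epsilon')

x = [tau, r, ph, th]
g = sp.zeros(4, 4)
g[0, 0] = C * (-(1 + r**2) + L**2 * r**2)   # g_{tau tau}
g[0, 2] = g[2, 0] = C * L**2 * r            # g_{tau varphi}
g[1, 1] = C / (1 + r**2)                    # g_{rr}
g[2, 2] = C * L**2                          # g_{varphi varphi}
g[3, 3] = C                                 # g_{theta theta}

# xi = -r eps'(varphi) d_r + eps(varphi) d_varphi
xi = [sp.S(0), -r * sp.diff(eps(ph), ph), eps(ph), sp.S(0)]

lie = sp.zeros(4, 4)
for m in range(4):
    for n in range(4):
        e = sum(xi[s] * sp.diff(g[m, n], x[s]) for s in range(4))
        e += sum(g[s, n] * sp.diff(xi[s], x[m]) for s in range(4))
        e += sum(g[m, s] * sp.diff(xi[s], x[n]) for s in range(4))
        lie[m, n] = sp.simplify(e)

# Only the (tau,tau), (r,r), (r,varphi) and (varphi,varphi) entries
# survive, reproducing the matrix above with its overall factor -C.
sp.pprint(lie)
\end{verbatim}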
It is thus natural to ask if the boundary conditions (\ref{hKerr}) can be strengthened while maintaining the Virasoro algebra (with its non-trivial central charge (\ref{VirKerr})) as an enhancement of the $U(1)$ isometry generated by the Killing vector $\pa_\var$. The {\em strongest} such boundary conditions are \be h_{\mu\nu}=\left(\!\!\begin{array}{cccc} \Oc(r^2)&0&0&0 \\ 0&\Oc(r^{-4})&\Oc(r^{-1})&0 \\ 0&\Oc(r^{-1})&\Oc(1)&0 \\ 0&0&0&0 \end{array} \!\!\right) \label{hKerrStrongC} \ee and have asymptotic symmetries generated by $\xi$. The expression (\ref{kxiKerr}) remains and the Virasoro algebra generated by the associated charges $Q_\xi$ has the same central charge as above. The additional condition $Q_{\pa_\tau}=0$ is still imposed. Determining the {\em weakest} boundary conditions yielding a $U(1)$-enhanced Virasoro algebra is more delicate. As already indicated, new symmetries may arise. Even if the charges of the Virasoro algebra in question remain well-defined, the new symmetries may be ill-defined and thus render the boundary conditions inconsistent. It is, a priori, unclear if such ill-defined charges can be dealt with by imposing additional boundary conditions setting them equal to zero, as in the case $Q_{\pa_\tau}=0$ above. It is beyond the scope of the present work to classify such possibilities. Instead, we merely point out that the weakest boundary conditions keeping the Virasoro-generating charges {\em themselves} well-defined are of the form \be h_{\mu\nu}=\Oc\!\left(\!\!\begin{array}{cccc} r^2&r^{p_{\tau r}}&r&r \\ &r^{-2}&r^{-1}&r^{p_{r \theta}} \\ &&1&1 \\ &&&1 \end{array} \!\!\right), \qquad\quad h_{\mu\nu}=h_{\nu\mu} \label{hwea} \ee Here, $p_{\tau r}$ and $p_{r \theta}$ are real parameters bounded from above by the applicability, at infinity, of the linear theory assumed in the charge formula (\ref{Q})~\cite{BC0708}. Explicit values for the bounds are not discussed here, though. The general form of (\ref{hwea}) follows from a simple inspection of the $r$-powers in \bea \sqrt{-\gb}k_\xi[h;\gb]\big|_{d\var\wedge d\theta}&=& -\frac{\La^3\eps}{4(r^2+1)}h_{\tau\tau}+\frac{r\eps''+2r\La^4\eps-2(r^2+1)\La^2\eps\pa_r -2r\eps'\pa_\var}{4\La(r^2+1)}h_{\tau\var}\nn &-&\frac{r\eps'}{2(r^2+1)}\pa_\theta\big(\La h_{\tau\theta}\big) +\frac{\La^3}{4}(r^2+1)\eps h_{rr} +\frac{\La}{2}\big(r\eps'-r\eps\pa_\var+\eps\pa_t\big)h_{r\var}\nn &-&\frac{\La^2\big((\La^2+1)r^2+1\big)\eps-2\La^2r(r^2+1)\eps\pa_r-2r\eps'\pa_t}{4\La(r^2+1)} h_{\var\var}+\frac{r^2\eps'}{2(r^2+1)}\pa_{\theta}\big(\La h_{\var\theta}\big)\nn &-&\frac{\La}{4(r^2+1)}\big(\La^2(r^2+1)\eps-r^2\eps''+2r^2\eps'\pa_\var -2r\eps'\pa_t\big)h_{\theta\theta} \eea where subleading terms have been included to show that $p_{\tau r}$ and $p_{r \theta}$ are unaffected by this particular evaluation. Compared with (\ref{Liexigb}), we see that the contributions from $\Lc_\xi\gb_{\tau\tau}$, $\Lc_\xi\gb_{r\var}$ and $\Lc_\xi\gb_{\var\var}$ to the central charge are independent of the particular choice of boundary conditions considered here, while $\Lc_\xi\gb_{rr}$ does not contribute to any of them. The central charge (\ref{VirKerr}) is thus the same for all these choices. 
\subsubsection{Centreless Virasoro algebra} As we are about to demonstrate, one obtains a {\em centreless} Virasoro algebra as an enhancement of the $U(1)$ isometry by imposing boundary conditions $h_{\mu\nu}$ in one of the three `ranges' \be \left(\!\!\begin{array}{cccc} 0&0&\Oc(r)&0 \\ 0&0&0&0 \\ \Oc(r)&0&\Oc(1)&0 \\ 0&0&0&0 \end{array} \!\!\right) \ <\ \left(\!\!\begin{array}{cccc} h_{\tau\tau}&\Oc(r^{p_{\tau r}})&\Oc(r)&\Oc(r^{p_{\tau\theta}}) \\ &h_{rr}&h_{r\var}&\Oc(r^{p_{r\theta}}) \\ &&\Oc(1)&\Oc(r^{p_{\var\theta}}) \\ &&&\Oc(1) \end{array} \!\!\right),\qquad\quad h_{\mu\nu}=h_{\nu\mu} \ee where \bea &&h_{\tau\tau},\ h_{rr},\ h_{r\var}=\Oc(r),\ \Oc(r^{-2}),\ \Oc(r^{-1})\nn \mathrm{or}\ && h_{\tau\tau},\ h_{rr},\ h_{r\var}=\Oc(r^2),\ \Oc(r^{-5}),\ \Oc(r^{-1})\nn \mathrm{or}\ && h_{\tau\tau},\ h_{rr},\ h_{r\var}=\Oc(r^2),\ \Oc(r^{-2}),\ \Oc(r^{-2}) \eea in addition to $Q_{\pa_\tau}=0$. As in the discussion following (\ref{hwea}), the real parameters $p_{\tau r}$, $p_{\tau\theta}$, $p_{r\theta}$ and $p_{\var\theta}$ are bounded from above. Any one of the new conditions $h_{\tau\tau}=\Oc(r)$, $h_{r\var}=\Oc(r^{-2})$ or $h_{rr}=\Oc(r^{-5})$ reduces the symmetry generator $\xi$ from $\xi=-r\eps'(\var)\pa_r+\eps(\var)\pa_{\var}$ (as allowed by boundary conditions in the range from (\ref{hKerrStrongC}) to (\ref{hwea})) to $\xi=\eps(\var)\pa_{\var}$. Further conditions may have to be imposed to ensure finiteness of eventual charges different from the ones generated by the new asymptotic symmetry $\xi=\eps(\var)\pa_\var$, but this question is not addressed here. Along $\xi=\eps(\var)\pa_\var$, the Lie derivative of the background metric reads \be \Lc_{\xi}\gb_{\mu\nu}=2GJ\Omega^2\La^2\eps'\left(\!\!\begin{array}{cccc} 0&0&r&0 \\ 0&0&0&0 \\ r&0&2&0 \\ 0&0&0&0 \end{array} \!\!\right) \ee The associated conserved and central charges follow from \bea \!\!\!\!\!\!\sqrt{-\gb}k_\xi[h;\gb]\big|_{d\var\wedge d\theta}&=& -\frac{\La^3\eps}{4(r^2+1)}h_{\tau\tau} +\frac{\La\eps}{2}\Big(\frac{\La^2r}{r^2+1}-\pa_r\Big)h_{\tau\var} +\frac{\La^3}{4}(r^2+1)\eps h_{rr}\nn &+&\frac{\La}{4}\big(r\eps'-2r\eps\pa_\var+2\eps\pa_t\big)h_{r\var} -\frac{\La\eps}{4}\Big(\frac{\La^2r^2}{r^2+1}+1-2r\pa_r\Big)h_{\var\var} -\frac{\La^3}{4}\eps h_{\theta\theta} \eea and \be \frac{1}{8\pi G}\int_{\pa\Sigma}\sqrt{-\gb}k_{\xi}[\Lc_{\hat\xi}\gb;\gb] =-\frac{J}{\pi}\int_0^{2\pi}\eps(\var)\hat\eps'(\var)d\var \ee where $\xi=\eps(\var)\pa_\var$ and $\hat\xi=\hat\eps(\var)\pa_\var$. Using the same basis $\xi_n(\var)$ as above, but with \be L_n=\frac{1}{\hbar}\big(Q_{\xi_n}+J\delta_{n,0}\big) \label{L2} \ee the quantum charge algebra is recognized as the centreless Virasoro algebra \be \big[L_n,L_m\big]=(n-m)L_{n+m} \ee \section{Enhancing the $SL(2,\mathbb{R})$ isometries} Returning to the general near-horizon geometry of an extremal black hole whose background metric $\gb_{\mu\nu}$ is defined in (\ref{ds2}), we are now looking for fluctuations $h_{\mu\nu}$ of the background metric enhancing the $SL(2,\mathbb{R})$ isometries to a Virasoro algebra. That is, some of the generators of the asymptotic symmetry group should correspond to the generators of the $SL(2,\mathbb{R})$ isometries. Also, since these fluctuations are expected to be relevant for the description of {\em near-extremal} perturbations, we refrain from imposing $Q_{\pa_t}=0$. Aside from this {\em weakening} of the boundary conditions, it is natural to expect otherwise {\em stronger} boundary conditions than the ones enhancing the $U(1)$ isometry.
To select such boundary conditions, we reverse-engineer the problem. First, though, we note that supplementing the boundary conditions (\ref{hKerr}) with the condition $Q_{\pa_\tau}=0$ corresponds to restricting to solutions with vanishing energy. This zero-energy condition was imposed in~\cite{GHSS0809} not only to preserve extremality and study the ground states of the Kerr black hole, but also to ensure finiteness of the associated conserved charges. It was subsequently argued in~\cite{wake3a} that this additional condition actually follows from the boundary conditions (\ref{hKerr}). It was also argued that so-called ``back-reaction effects'' at orders higher than linear could impose vanishing conditions, known as ``linearization-stability constraints''\!, on seemingly well-defined charges. In particular, boundary conditions admitting near-extremal perturbations of the near-horizon extremal Kerr geometry preserving any of the $SL(2,\mathbb{R})$ isometries are believed to back-react so strongly that the Kerr asymptotics would break down. Despite these assertions, we find it worthwhile to continue our `linear' analysis of $SL(2,\mathbb{R})$-enhancing boundary conditions of the near-horizon geometry (\ref{ds2}). Thus, we now consider one of the simplest possible sets of asymptotic Killing vectors generating a Virasoro algebra whose `global' subalgebra (generated by $\ell_n$, $n=-1,0,1$) corresponds to the $SL(2,\mathbb{R})$ isometries, namely \be K_\eps=\big[\eps(t)+\frac{\eps''(t)}{2r^2}+\Oc(r^{-4})\big]\pa_t +\big[-r\eps'(t)+\Oc(r^{-1})\big]\pa_r +\big[-\frac{k\eps''(t)}{r}+\Oc(r^{-3})\big]\pa_{\phi} +\big[\Oc(r^{-2})\big]\pa_\theta \label{xidiff} \ee Here, $\eps(t)$ is a smooth function and it follows that \be \big[K_\eps,K_{\hat\eps}\big]=K_{\eps\hat\eps'-\eps'\hat\eps} \ee The associated asymptotic symmetry generator is given by the contravariant vector field \be \kappa_\eps=\big(\eps(t)+\frac{\eps''(t)}{2r^2}\big)\pa_t -r\eps'(t)\pa_r-\frac{k\eps''(t)}{r}\pa_{\phi} \label{kappa} \ee and the three $SL(2,\mathbb{R})$ generators in (\ref{iso}) follow by setting $\eps(t)=t^{n+1}$, $n=-1,0,1$. The Lie derivative along $\kappa_\eps$ of the background metric is given by \be \Lc_{\kappa_\eps}\gb_{\mu\nu}=-\eps'''\left(\!\!\begin{array}{cccc} \G+k^2\g&0&\frac{k\g}{2r}&0 \\ 0&0&0&0 \\ \frac{k\g}{2r}&0&0&0 \\ 0&0&0&0 \end{array} \!\!\right) \ee here written in the ordered basis $\{t,r,\phi,\theta\}$ and in terms of the shorthands \be \G=\G(\theta),\qquad\g=\g(\theta),\qquad\al=\al(\theta) \ee A set of boundary conditions compatible with these diffeomorphisms is \be h_{\mu\nu}=\Oc\!\left(\!\!\begin{array}{cccc} 1&r^{-3}&r^{-1}&r^{-2} \\ &r^{-4}&r^{-3}&r^{-3} \\ &&r^{-2}&r^{-2} \\ &&&r^{-2} \end{array} \!\!\right), \qquad\quad h_{\mu\nu}=h_{\nu\mu} \label{hnaive} \ee There are several problems with this construction. First, the associated conserved charges $Q_{\kappa_{\eps}}$ vanish, thereby rendering the asymptotic symmetries trivial. Second, the asymptotic symmetry generators (\ref{kappa}) do not quite form a Virasoro algebra as we have \be [\kappa_\eps,\kappa_{\hat\eps}]=\kappa_{\eps\hat\eps'-\eps'\hat\eps} +\frac{\eps''(t)\hat\eps'''(t)-\eps'''(t)\hat\eps''(t)}{4r^4}\big(\pa_t-2kr\pa_\phi\big) \label{comm} \ee A proper differential-operator realization of the Virasoro algebra containing the $SL(2,\mathbb{R})$ isometries (\ref{iso}) {\em does} exist, though, and is discussed in the following.
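The anomalous term in (\ref{comm}) can be verified symbolically. The following \texttt{sympy} sketch (again ours, purely illustrative) computes the commutator $[\kappa_\eps,\kappa_{\hat\eps}]^\mu=\kappa_\eps^\nu\pa_\nu\kappa_{\hat\eps}^\mu-\kappa_{\hat\eps}^\nu\pa_\nu\kappa_\eps^\mu$ in the ordered basis $\{t,r,\phi\}$ (the $\theta$-component vanishes identically) and confirms that the difference from $\kappa_{\eps\hat\eps'-\eps'\hat\eps}$ is exactly the displayed term.
\begin{verbatim}
import sympy as sp

t, r, k = sp.symbols('t r k')
eps, heps = sp.Function('epsilon'), sp.Function('hatepsilon')

def kappa(f):
    # kappa_f = (f + f''/(2 r^2)) d_t - r f' d_r - (k f''/r) d_phi
    return [f + sp.diff(f, t, 2) / (2 * r**2),
            -r * sp.diff(f, t),
            -k * sp.diff(f, t, 2) / r]

def bracket(X, Y):
    # [X, Y]^mu = X^nu d_nu Y^mu - Y^nu d_nu X^mu; no phi-dependence
    return [sum(X[n] * sp.diff(Y[m], c) - Y[n] * sp.diff(X[m], c)
                for n, c in enumerate((t, r))) for m in range(3)]

X, Y = kappa(eps(t)), kappa(heps(t))
f = eps(t) * sp.diff(heps(t), t) - sp.diff(eps(t), t) * heps(t)

a = (sp.diff(eps(t), t, 2) * sp.diff(heps(t), t, 3)
     - sp.diff(eps(t), t, 3) * sp.diff(heps(t), t, 2)) / (4 * r**4)
anomaly = [a, sp.S(0), -2 * k * r * a]

print([sp.simplify(bracket(X, Y)[m] - kappa(f)[m] - anomaly[m])
       for m in range(3)])   # -> [0, 0, 0]
\end{verbatim}
Note that for $\eps(t)=t^{n+1}$ with $n=-1,0,1$ all third derivatives vanish, so the anomalous term drops out and the exact $SL(2,\mathbb{R})$ brackets are recovered.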
The issue with the charges is subsequently addressed and resolved, and we are ultimately left with a well-defined and non-vanishing set of conserved charges realizing the Virasoro algebra. As we will see, the symmetry generators $\kappa_\eps$ (\ref{kappa}) differ from the proper Virasoro generators by diffeomorphisms rendered trivial by their vanishing charges. \subsection{Asymptotic Virasoro symmetry} \label{SecVir} For every smooth function $\eps(t)$, we introduce the contravariant vector field \be \xi=\xi^\mu\pa_\mu=\cosh\cdot\eps(t)\pa_t-r^2\sinh\cdot\eps(t)\pa_r-k\sinh\cdot\eps'(t)\pa_\phi \label{xi} \ee where \bea &&\cosh\cdot\eps(t):=\cosh\big(\frac{\pa_t}{r}\big)\eps(t) =\sum_{n=0}^{\infty}\frac{1}{(2n)!r^{2n}}\eps^{(2n)}(t)\nn &&\sinh\cdot\eps(t):=\sinh\big(\frac{\pa_t}{r}\big)\eps(t) =\sum_{n=0}^{\infty}\frac{1}{(2n+1)!r^{2n+1}}\eps^{(2n+1)}(t) \eea with the $n$'th derivative of $\eps(t)$ denoted by $\eps^{(n)}(t)$. We note that \be \pa_t\xi^r=r^4\pa_r\xi^t,\qquad\quad \pa_t\xi^t=r^2\pa_r\big(\frac{\xi^r}{r^2}\big),\qquad\quad \xi^\phi=\frac{k}{r^2}\pa_t\xi^r \ee After a bit of algebra, one verifies that the vectors (\ref{xi}) satisfy (\ref{Virdiff}), thus providing a somewhat unusual differential-operator realization of the centreless Virasoro algebra. With $\xi$ as the candidate for the generator of the corresponding asymptotic symmetry, we now continue the reverse-engineered selection of suitable $SL(2,\mathbb{R})$-enhancing boundary conditions. Along $\xi$, the Lie derivative of the background metric is worked out to be \be \Lc_{\xi}\gb_{\mu\nu}=\!\left(\!\!\begin{array}{cccc} 2r\big(k\g \pa_t\xi^\phi-(\G-k^2\g)(\xi^r+r\pa_t\xi^t)\big)& k\g r\big(kr\pa_r\xi^t+\pa_r\xi^\phi\big)& \g\big(k(\xi^r+r\pa_t\xi^t)+\pa_t\xi^\phi\big)&0 \\ k\g r\big(kr\pa_r\xi^t+\pa_r\xi^\phi\big)& \frac{2\G}{r}\pa_r\big(\frac{\xi^r}{r}\big)& \g\big(kr\pa_r\xi^t+\pa_r\xi^\phi\big)&0 \\ \g\big(k(\xi^r+r\pa_t\xi^t)+\pa_t\xi^\phi\big)& \g\big(kr\pa_r\xi^t+\pa_r\xi^\phi\big)&0&0 \\ 0&0&0&0 \end{array} \!\!\right) \ee Its non-vanishing components can be written as \bea \Lc_\xi\gb_{tt}&=&-4\sum_{n=0}^{\infty}r^{-2n}\frac{n+1}{(2n+3)!}\big(\G+2(n+1)k^2\g\big) \eps^{(2n+3)}(t)\nn \Lc_\xi\gb_{tr}=\Lc_\xi\gb_{rt} &=&2k^2\g\sum_{n=0}^{\infty}r^{-3-2n}\frac{n+1}{(2n+3)!}\eps^{(2n+4)}(t)\nn \Lc_\xi\gb_{t\phi}=\Lc_\xi\gb_{\phi t} &=&-4k\g\sum_{n=0}^{\infty}r^{-1-2n}\frac{(n+1)^2}{(2n+3)!}\eps^{(2n+3)}(t)\nn \Lc_\xi\gb_{rr}&=&4\G\sum_{n=0}^{\infty}r^{-4-2n}\frac{n+1}{(2n+3)!}\eps^{(2n+3)}(t)\nn \Lc_\xi\gb_{r\phi}=\Lc_\xi\gb_{\phi r} &=&2k\g\sum_{n=0}^{\infty}r^{-4-2n}\frac{n+1}{(2n+3)!}\eps^{(2n+4)}(t) \eea and it is observed that \be \Lc_\xi\gb_{tt}+r^4\Lc_\xi\gb_{rr}=2kr\Lc_\xi\gb_{t\phi},\qquad\quad \Lc_\xi\gb_{tr}=kr\Lc_\xi\gb_{r\phi} \label{Lgid} \ee It is straightforward, albeit rather tedious, to compute the relevant part of the integrand in the surface integral (\ref{Q}) defining $Q_\xi$, and we find \bea \sqrt{-\gb}k_\xi[h;\gb]\big|_{d\phi\wedge d\theta}&=&\sqrt{\frac{\al}{4\g\G}}\Big( \frac{k^2\g^2}{2\G}\pa_r(r\xi^t)\big(r^3h_{rr}-\frac{h_{tt}}{r}\big) +H_{t\phi}+H_{r\phi}+H_{\phi\phi}+H_{\theta\theta}\Big)\nn &+&\frac{1}{2r^2}\pa_\theta\Big(\sqrt{\frac{\g}{\al\G}}\big(r^4\xi^t h_{r\theta} +\xi^r(h_{t\theta}-krh_{\phi\theta})\big)\Big) +\pa_\phi \Phi \label{2h} \eea where \bea H_{t\phi}&=&\Big(\frac{k^3\g^2}{\G}\pa_r(r\xi^t)+\frac{\g}{2r}\pa_r(r\xi^\phi)\Big)h_{t\phi} -k\g r\pa_r(r\xi^t)\pa_r h_{t\phi}\nn H_{r\phi}&=&-\frac{\g}{2}\Big(kr^2\pa_r\big(\frac{\xi^r}{r}\big)+\pa_t\xi^\phi\Big)h_{r\phi} +k\g 
r\pa_r(r\xi^t)\pa_t h_{r\phi}\nn H_{\phi\phi}&=&\Big(\big(1+\frac{k^2\g}{2\G}\big)(\G-k^2\g)r\pa_r(r\xi^t) -\frac{k\g}{2}\pa_r(r\xi^\phi)\Big)h_{\phi\phi}-\frac{\G\xi^r}{r^2}\pa_t h_{\phi\phi}\nn &-&\big(\G\xi^t-k^2\g\pa_r(r\xi^t)\big)r^2\pa_r h_{\phi\phi}\nn H_{\theta\theta}&=&\frac{\g}{\al}\Big(\big(1-\frac{k^2\g}{2\G}\big)r\pa_r(r\xi^t)h_{\theta\theta} -\frac{\xi^r}{r^2}\pa_t h_{\theta\theta}-r^2\xi^t\pa_r h_{\theta\theta}\Big) \eea and \be \Phi=\sqrt{\frac{\al}{4\g\G}}\Big(\frac{\G}{r^2}\xi^r h_{t\phi} +\big(\G\xi^t-k^2\g\pa_r(r\xi^t)\big)r^2h_{r\phi}+\frac{k\g\xi^r}{\al r}h_{\theta\theta}\Big) \ee It is emphasized that these expressions are valid for all $r$, and we note that (\ref{2h}) is independent of $h_{tr}=h_{rt}$. The total $\phi$-derivative $\pa_\phi\Phi$ can be ignored in the surface integral (\ref{Q}). \subsection{Boundary conditions} We initially require the boundary conditions to be of the form \be h_{\mu\nu}=\Oc(r^{p_{\mu\nu}}), \qquad\quad h_{\mu\nu}=h_{\nu\mu} \label{hO} \ee where we allow $p_{\mu\nu}=-\infty$ for some coordinates, as for several entries in (\ref{hKerrStrongC}), for example. The {\em strongest} such conditions compatible with the Virasoro symmetry generator $\xi$ (\ref{xi}) are given by \be h_{\mu\nu}=\Oc\!\left(\!\!\begin{array}{cccc} 1&r^{-3}&r^{-1}&0 \\ &r^{-4}&r^{-4}&0 \\ &&0&0 \\ &&&0 \end{array} \!\!\right), \qquad\quad h_{\mu\nu}=h_{\nu\mu} \label{hs} \ee but one verifies that the associated conserved charges vanish. The {\em weakest} boundary conditions (\ref{hO}), compatible with $\xi$ and yielding well-defined charges $Q_\xi$, are given by \be h_{\mu\nu}=\Oc\!\left(\!\!\begin{array}{cccc} r&r^{p_{tr}}&1&r \\ &r^{-3}&r^{-1}&r^{-2} \\ &&r^{-1}&1 \\ &&&r^{-1} \end{array} \!\!\right), \qquad\quad p_{tr}\geq-3, \qquad\quad h_{\mu\nu}=h_{\nu\mu} \label{hw} \ee These bounds on the asymptotic boundary conditions follow from analyzing the leading terms in (\ref{2h}) given by \bea \sqrt{-\gb}k_\xi[h;\gb]\big|_{d\phi\wedge d\theta}&=&\eps\sqrt{\frac{\al}{4\g\G}}\Big( \frac{k^2\g^2}{2\G}\big(r^3 h_{rr}-\frac{h_{tt}}{r}\big)+\frac{k^3\g^2}{\G}h_{t\phi} +k\g r(\pa_t h_{r\phi}-\pa_r h_{t\phi})\nn &&+(\G-k^2\g)\big((1+\frac{k^2\g}{2\G})rh_{\phi\phi} -r^2\pa_r h_{\phi\phi}\big) +\frac{\g}{\al}\big(1-\frac{k^2\g}{2\G}\big)rh_{\theta\theta} -\frac{\g r^2}{\al}\pa_r h_{\theta\theta}\Big)\nn &+&\frac{1}{2}\pa_\theta\Big(\sqrt{\frac{\g}{\al\G}}\big(\eps r^2 h_{r\theta} -\eps'\big(\frac{h_{t\theta}}{r}-kh_{\phi\theta}\big)\big)\Big) +\ldots \eea where the total $\phi$-derivatives have been ignored. Including these derivatives might prompt one to strengthen the allowed fluctuation $h_{r\phi}$ unnecessarily from $\Oc(r^{-1})$ to $\Oc(r^{-2})$. As in the discussion following (\ref{hwea}), the real parameter $p_{tr}$ is bounded from above by the linear theory underlying (\ref{Q}). The conserved charges $Q_\xi$ corresponding to boundary conditions in the range from (\ref{hs}) to (\ref{hw}) are non-zero if at least one of the bounds in (\ref{hw}), $h_{tr}$ excluded, is saturated. In all such cases, the charges generate a {\em centreless} Virasoro algebra since a simple $r$-power counting asymptotically gives $\Lc_\xi\gb_{\mu\nu}<h_{\mu\nu}$ for all $\mu,\nu$. With reference to the comments following (\ref{comm}), we note that $Q_{\xi_\eps-\kappa_\eps}=0$ for all boundary conditions in the range from (\ref{hs}) to (\ref{hw}). 
This implies the announced triviality of the difference between the naive symmetry generators $\kappa_\eps$ and the proper Virasoro generators $\xi_\eps$ for every smooth function $\eps(t)$. Many alternatives exist to boundary conditions of the simple type (\ref{hO}). Imposing the separate condition $h_{t\theta}=krh_{\phi\theta}$, for example, renders (\ref{2h}) independent of $h_{t\theta}$ and $h_{\phi\theta}$ thus weakening the conditions on the real parameter $p_{\phi\theta}$ in $h_{\phi\theta}=\Oc(r^{p_{\phi\theta}})$, and subsequently in $h_{t\theta}=\Oc(r^{p_{t\theta}+1})$. Imposing conditions resembling the relations (\ref{Lgid}) is another interesting possibility. We emphasize that there are infinitely many sets of consistent boundary conditions simultaneously admitting two copies of Virasoro-generating asymptotic Killing vectors enhancing the $SL(2,\mathbb{R})$ and $U(1)$ isometries separately. Within the realm of boundary conditions considered here, however, at least one of these copies gives rise to vanishing or ill-defined charges, and we are left with {\em at most} one quantum charge Virasoro algebra. \subsection{Spurious asymptotic symmetries} There may be many more asymptotic Killing vectors and asymptotic symmetry generators than conserved charges since the surface integrals (\ref{Q}) producing the charges from the generators may vanish. The corresponding contravariant vector fields thus generate {\em spurious} asymptotic symmetries. Let us illustrate this by considering the diffeomorphisms whose action on the background metric generate fluctuations compatible with \be h_{\mu\nu}=\left(\!\!\begin{array}{cccc} \Oc(r^{-2\ell_0})&0&\Oc(r^{-1-2\ell_0})&0 \\ 0&\Oc(r^{-4-2\ell_0})&0&0 \\ \Oc(r^{-1-2\ell_0})&0&0&0 \\ 0&0&0&0 \end{array} \!\!\right) \label{hrr} \ee for $\ell_0$ a non-negative integer. Since these conditions are stronger than (\ref{hs}), the diffeomorphisms to be discussed are compatible with all boundary conditions in the range from (\ref{hs}) to (\ref{hw}). For every integer $\ell\geq\ell_0+2$, we find the contravariant vector field \be \eta_{(2\ell)}=\frac{\rho_{(2\ell)}'(t)}{2\ell r^{2\ell}}\pa_t -\frac{\rho_{(2\ell)}(t)}{r^{2\ell-3}}\pa_r -\frac{k\rho_{(2\ell)}'(t)}{(2\ell-1)r^{2\ell-1}}\pa_\phi \ee The Lie derivative along $\eta_{(2\ell)}$ of the background metric is found to have the following non-trivial components \bea \Lc_{\eta_{(2\ell)}}\gb_{tt}&=&\frac{2\ell(2\ell-1)\big(\G-k^2\g\big)r^2\rho_{(2\ell)}(t) -\big((2\ell-1)\G+k^2\g\big)\rho_{(2\ell)}''(t)}{\ell(2\ell-1)r^{2\ell-2}}\nn \Lc_{\eta_{(2\ell)}}\gb_{t\phi}=\Lc_{\eta_{(2\ell)}}\gb_{\phi t}&=& -\frac{k\g\Big(2\ell(2\ell-1)r^2\rho_{(2\ell)}(t) +\rho_{(2\ell)}''(t)\Big)}{2\ell(2\ell-1)r^{2\ell-1}}\nn \Lc_{\eta_{(2\ell)}}\gb_{rr}&=&\frac{(4\ell-4)\G\rho_{(2\ell)}(t)}{r^{2\ell}} \eea It follows that the corresponding charges $Q_{\eta_{(2\ell)}}$ all vanish. Our final example concerns the fate of the original $U(1)$ isometry in conjunction with the $SL(2,\mathbb{R})$-enhanced Virasoro algebra. Since $\Lc_{\pa_\phi}\gb_{\mu\nu}=0$, it survives all consistent sets of boundary conditions. We wish to demonstrate that it can be enhanced to a $U(1)$ current, though this current generates a spurious asymptotic symmetry. 
To see this, let us weaken the boundary conditions (\ref{hrr}) and consider \be h_{\mu\nu}=\left(\!\!\begin{array}{cccc} \Oc(r)&0&\Oc(1)&0 \\ 0&\Oc(r^{-3-2\ell_0})&0&0 \\ \Oc(1)&0&0&0 \\ 0&0&0&0 \end{array} \!\!\right) \label{hrr3} \ee preserving the centreless Virasoro algebra and the contravariant vector fields $\eta_{(2\ell)}$ for $\ell\geq\ell_0+2$. For every non-negative integer $\ell_0$, we have the vector field \be \zeta=\psi(t)\pa_\phi \ee and for every $\ell\geq\ell_0$, we find the additional vector field \be \zeta_{(2\ell)}=-\frac{\psi_{(2\ell)}'(t)}{(2\ell+\delta_{\ell,0})(2\ell+3)r^{2\ell+3}}\pa_t +\frac{\psi_{(2\ell)}(t)}{(2\ell+\delta_{\ell,0})r^{2\ell}}\pa_r +\frac{k\psi_{(2\ell)}'(t)}{(2\ell+\delta_{\ell,0})(2\ell+2)r^{2\ell+2}}\pa_\phi \ee We note that the exact $U(1)$ isometry follows from setting $\psi(t)=1$ in $\zeta$. The non-trivial components of the corresponding Lie derivatives are \be \Lc_{\zeta}\gb_{tt}=2k\g r\psi'(t),\qquad\quad \Lc_{\zeta}\gb_{t\phi}=\Lc_{\zeta}\gb_{\phi t}=\g\psi'(t) \label{Liezeta} \ee and \bea \Lc_{\zeta_{(2\ell)}}\gb_{tt}&=&-\frac{(2\ell+2)(2\ell+3)\big(\G-k^2\g\big)r^2\psi_{(2\ell)}(t) -\big((2\ell+2)\G+k^2\g\big)\psi_{(2\ell)}''(t)}{(2\ell+\delta_{\ell,0})(\ell+1)(2\ell+3)r^{2\ell+1}}\nn \Lc_{\zeta_{(2\ell)}}\gb_{t\phi}= \Lc_{\zeta_{(2\ell)}}\gb_{\phi t}&=& \frac{k\g\Big((2\ell+2)(2\ell+3)r^2\psi_{(2\ell)}(t)+\psi_{(2\ell)}''(t)\Big)}{(2\ell+\delta_{\ell,0}) (2\ell+2)(2\ell+3)r^{2\ell+2}}\nn \Lc_{\zeta_{(2\ell)}}\gb_{rr}&=&-\frac{2(2\ell+1)\G\psi_{(2\ell)}(t)}{(2\ell+\delta_{\ell,0})r^{2\ell+3}} \eea Now, weakening the boundary conditions (\ref{hrr3}) to (\ref{hw}), compatible with the Virasoro generators $\xi$ (\ref{xi}), we find \be \sqrt{-\gb}k_\zeta[h;\gb]\big|_{d\phi\wedge d\theta}=-\sqrt{\frac{\al}{4\g\G}}k\g r\psi(t)\pa_\phi h_{r\phi}+\Oc(r^{-1}) \ee where $h_{r\phi}=\Oc(r^{-1})$. However, we can ignore total $\phi$-derivatives, implying that $Q_{\zeta}=0$. One could, perhaps, still wonder if the formalism would allow a central extension when combining $\xi$ and $\zeta$. This is not the case, though, since we have \be \sqrt{-\gb}k_\xi[\Lc_{\zeta}\gb;\gb]\big|_{d\phi\wedge d\theta}= \sqrt{\frac{\al\g^3}{\G}}\frac{\pa_r(r\xi^\phi)\psi'(t)}{4r} =\Oc(r^{-4}) \ee while $\!\int\!\sqrt{-\gb}k_\zeta[\Lc_{\xi}\gb;\gb]=0$ because $Q_\zeta=0$ and $\xi$ is compatible with all the boundary conditions considered here. We also find that $Q_{\zeta_{(2\ell)}}=0$. \vskip.5cm \subsection*{Acknowledgments} \vskip.1cm \noindent This work is supported by the Australian Research Council. The author thanks Niels Obers (The Niels Bohr Institute) for discussions sparking the undertaking of this work, and The Niels Bohr Institute for its generous hospitality during his visit there in May 2009. The author also thanks Dumitru Astefanesei for comments.
\section{Introduction and Motivation} \label{sec:introduction} The exact nature of the deep interior of Earth has been a long-standing puzzle. The internal regions of Earth are inaccessible due to extreme temperatures and pressures. Direct measurements are not feasible beyond a depth of a few km. Most of the information available about the internal structure of Earth has been obtained using indirect probes such as gravitational measurements~\cite{Ries:1992,Luzum:2011,Rosi:2014kva,astro_almanac,Williams:1994,Chen:2014} and seismic studies~\cite{Robertson:1966,Volgyesi:1982,Loper:1995,Alfe:2007,Stacey_Davis:2008,McDonough:2022,Thorne:2022,Hirose:2022}. For example, gravitational measurements provide information on the mass~\cite{Ries:1992,Williams:1994,Luzum:2011,Rosi:2014kva,astro_almanac} and the moment of inertia of Earth~\cite{Williams:1994,Chen:2014}. These tell us that the average density of Earth deep below must be much larger than the typical rock density at the surface. Seismic waves are generated when the tectonic plates in the outermost layers of Earth slide against each other. The point of origin of an earthquake, whose projection on the surface is called the epicenter, is typically located at depths of up to about 700 km~\cite{Stacey_Davis:2008,Volgyesi:1982}. These seismic waves travel in all directions through the bulk of Earth and can reach the surface all over the globe. They are measured by seismometers, which provide information about the time, location, and intensity of the earthquake. While passing through Earth, these waves can get reflected or refracted on encountering matter with different densities and composition~\cite{Volgyesi:1982}. The features observed in the seismic waves are used to infer the internal structure of Earth. From seismic studies~\cite{Robertson:1966,Volgyesi:1982,Loper:1995,Alfe:2007,Stacey_Davis:2008,McDonough:2022,Thorne:2022,Hirose:2022}, we know that Earth consists of a layered structure in the form of concentric shells. As we go from the outside to deep inside Earth, the layers are arranged in the order crust, outer mantle, inner mantle, outer core, and inner core. Even though seismic studies and gravitational measurements have provided us with a huge amount of data and revealed crucial information about the internal structure of Earth, there are still many open issues~\cite{McDonough:2022}. For example, the radius, mass, and chemical composition of the core are not very well known. The density jump between the outer core and the inner core is not known precisely. There are many unexplained structures and heterogeneities observed in the lowermost mantle, beneath Africa and the Pacific, that show lower-than-average seismic wave speeds; these are known as large low-shear-velocity provinces (LLSVPs) and ultra-low-velocity zones (ULVZs)~\cite{Garnero:2016,McNamara:2019,Thorne:2022}. Active research is being pursued to determine the composition of the bulk silicate Earth~\cite{McDonough:1995,McDonough:2022}. As for the presence of light elements inside Earth, we do not know how much H\textsubscript{2}O is present in the mantle and how much H is present in the core~\cite{Williams:2001,Hirose:2021,Hirose:2022}. The precise measurement of the radiogenic power produced in the mantle and core is important for understanding the thermal dynamics inside Earth~\cite{Araki:2005qa,Borexino:2015ucj,Bellini:2021sow,KamLAND:2022vbm}. Complementary information using additional probes such as neutrinos can improve our understanding of the structure of Earth.
Neutrinos interact only via weak interactions, which enables most of them to pass through Earth without getting absorbed. The cross section for neutrino-nucleon interactions increases with energy. At energies higher than a few TeV, the interaction cross section is large enough to cause sizable absorption of neutrinos inside Earth~\cite{Gandhi:1995tf,IceCube:2017roe}. The idea of using the attenuation of the neutrino flux to probe the internal structure of Earth was proposed in Refs.~\cite{Placci:1973,Volkova:1974xa}, and detailed studies using neutrinos from various sources, such as man-made neutrinos~\cite{Placci:1973,Volkova:1974xa,Nedyalkov:1981,Nedyalkov:1981yy,Nedialkov:1983,Krastev:1983,DeRujula:1983ya,Wilson:1983an,Askarian:1984xrv,Volkova:1985zc,Tsarev:1985yub,Borisov:1986sm,Tsarev:1986xg,Borisov:1989kh,Winter:2006vg}, extraterrestrial neutrinos~\cite{Wilson:1983an,Kuo:1995,Crawford:1995,Jain:1999kp,Reynoso:2004dt}, and atmospheric neutrinos~\cite{Gonzalez-Garcia:2007wfs,Borriello:2009ad,Takeuchi:2010,Romero:2011zzb}, have been performed. The absorption tomography of Earth has been carried out using one year of data on multi-TeV atmospheric muon neutrinos collected by the IceCube detector~\cite{Gonzalez-Garcia:2007wfs,Donini:2018tsg}, where the densities of various layers and the mass of Earth were estimated using neutrinos. Another way of exploring the interior of Earth could be the use of diffraction patterns produced by the coherent scattering of neutrinos off Earth's matter, but this is not feasible with present technology~\cite{Fortes:2006}. The improvement in the precision measurement of neutrino oscillation parameters~\cite{Capozzi:2021fjo,NuFIT,Esteban:2020cvm,deSalas:2020pgw}, including the reactor mixing angle $\theta_{13}$~\cite{Meregaglia:2017spw,DayaBay:2018yms,RENO:2018dro,RENO:2022}, has provided a new way to explore Earth's interior via matter effects in neutrino oscillations in the multi-GeV range of energies. The matter effects come into the picture through the interactions of upward-going atmospheric neutrinos with the ambient electrons present inside Earth. This charged-current (CC) coherent elastic forward scattering results in an effective matter potential for neutrinos which depends upon the number density of electrons along the neutrino trajectory. The possibility of probing the internal structure of Earth using these density-dependent matter effects is known as ``neutrino oscillation tomography''. These studies have been performed using man-made neutrino beams~\cite{Ermilova:1986ph,Nicolaidis:1987fe,Ermilova:1988pw,Nicolaidis:1990jm,Ohlsson:2001ck,Ohlsson:2001fy,Winter:2005we,Minakata:2006am,Gandhi:2006gu,Tang:2011wn,Arguelles:2012nw}, supernova neutrinos~\cite{Akhmedov:2005yt,Lindner:2002wm}, solar neutrinos~\cite{Ioannisian:2002yj,Ioannisian:2004jk,Akhmedov:2005yt,Ioannisian:2015qwa,Ioannisian:2017chl,Ioannisian:2017dkx,Bakhti:2020tcj}, and atmospheric neutrinos~\cite{Agarwalla:2012uj,Rott:2015kwa,Winter:2015zwx,Bourret:2017tkw,Bourret:2019wme,Bourret:2020zwg,DOlivo:2020ssf,Kumar:2021faw,Kelly:2021jfs,Maderer:2021aeb,Capozzi:2021hkl,Denton:2021rgt,Upadhyay:2021kzf,Maderer:2022toi,DOlivoSaez:2022vdl}. In the sub-GeV and multi-GeV energy ranges, atmospheric neutrinos are the best source of neutrinos to probe the internal structure of Earth because they have access to a wide range of baselines, from about 15 km to 12757 km, which cover all the layers of Earth.
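The quoted range of baselines follows from simple geometry. For a detector near the surface of Earth (radius $R_E \approx 6371$ km) and neutrinos produced at an altitude of roughly 15 km in the atmosphere, the path length depends only on the zenith angle $\theta_\nu$. The short Python sketch below is an illustration of this geometry; the production altitude is an assumed representative value, not a result of this work.
\begin{verbatim}
import numpy as np

R_E = 6371.0   # radius of Earth in km
h = 15.0       # assumed neutrino production altitude in km

def baseline(cos_theta, R=R_E, h=h):
    """Neutrino path length (km) to a surface detector for a given
    zenith angle (cos_theta = +1: downward-going, -1: upward-going)."""
    sin2 = 1.0 - cos_theta ** 2
    return np.sqrt((R + h) ** 2 - R ** 2 * sin2) - R * cos_theta

for c in (1.0, 0.0, -1.0):
    print("cos(theta) = %+.1f : L = %7.0f km" % (c, baseline(c)))
# cos(theta) = +1.0 gives ~15 km; cos(theta) = -1.0 gives ~12757 km
\end{verbatim}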
Sensitivity studies for current and future atmospheric neutrino experiments like IceCube~\cite{Rott:2015kwa}, Oscillation Research with Cosmics in the Abyss (ORCA)~\cite{KM3Net:2016zxf}, the Deep Underground Neutrino Experiment (DUNE)~\cite{DUNE:2021tad}, Hyper-Kamiokande (Hyper-K)~\cite{Hyper-Kamiokande:2018ofw}, and the Iron Calorimeter (ICAL) detector~\cite{Kumar:2021faw} have been performed. These studies have shown how to detect the presence of the core-mantle boundary (CMB) using ICAL~\cite{Kumar:2021faw}, constrain the average densities of the core and the mantle using ORCA~\cite{Winter:2015zwx,Maderer:2021aeb,Capozzi:2021hkl,DOlivoSaez:2022vdl} and DUNE~\cite{Kelly:2021jfs}, determine the position of the CMB using DUNE~\cite{Denton:2021rgt}, and explore the chemical composition of Earth's core using Hyper-K and IceCube~\cite{Rott:2015kwa} as well as ORCA~\cite{Bourret:2017tkw,Bourret:2019wme,Bourret:2020zwg,Maderer:2021aeb,Maderer:2022toi,DOlivoSaez:2022vdl}. Further information about the chemical composition of Earth can also be obtained using geoneutrinos, which are produced in the decays of radioactive elements such as uranium, thorium, and potassium~\cite{Araki:2005qa,Borexino:2015ucj,McDonough:2020,Bellini:2021sow,KamLAND:2022vbm,McDonough:2022a}. Geoneutrinos may shed light on the radiogenic contribution to the heat budget of Earth. Since neutrinos probe the internal structure of Earth via weak interactions, complementary to seismic studies based on electromagnetic interactions and to gravitational measurements based on gravitational interactions, they would pave the way for ``multi-messenger tomography of Earth''. The proposed 50 kt ICAL detector at the upcoming India-based Neutrino Observatory (INO)~\cite{ICAL:2015stm} would be able to detect atmospheric neutrinos and antineutrinos in the multi-GeV range of energies and over a wide range of baselines. Thanks to the presence of a magnetic field of about 1.5 T, ICAL would be able to distinguish between neutrino (by observing $\mu^-$) and antineutrino (by observing $\mu^+$) events. Further, due to its good directional resolution, ICAL would be able to observe and identify the neutrinos passing through the core and the mantle separately. The ICAL detector would be sensitive to the Mikheyev-Smirnov-Wolfenstein (MSW) resonance~\cite{Wolfenstein:1977ue, Mikheev:1986gs, Mikheev:1986wj}, which occurs at energies of about 6 to 10 GeV for mantle-passing neutrinos. It can also observe the neutrino oscillation length resonance (NOLR)~\cite{Petcov:1998su,Chizhov:1998ug,Petcov:1998sg,Chizhov:1999az,Chizhov:1999he} or parametric resonance (PR)~\cite{Akhmedov:1998ui,Akhmedov:1998xq}, which occurs at energies of about 3 to 6 GeV for some of the core-passing neutrinos. The NOLR/ parametric resonance occurs due to the sudden jump in density in going from the mantle to the core. The pattern of the NOLR/ parametric resonance depends upon the size of the density jump and the location of the CMB. In the present work, we explore the impact of modifying the CMB radius on neutrino oscillations and hence on the reconstructed muon events at the ICAL detector. We demonstrate that the location of the core-mantle boundary can be measured using the matter effects in atmospheric neutrino oscillations with the help of the ICAL detector. In section~\ref{sec:earth_model}, we discuss the internal structure of Earth and the propagation of seismic waves through different regions of Earth.
In section~\ref{sec:CMB_variation}, we describe our methodology for exploring various modified-CMB scenarios. The impact of CMB modification on neutrino oscillation probabilities is discussed in sections~\ref{sec:probability} and \ref{sec:oscillograms}. The neutrino event simulation at ICAL is described in section~\ref{sec:events}. The method of statistical analysis used to quantify the impact of CMB modification is explained in section~\ref{sec:statistical analysis}. In section~\ref{sec:results}, we evaluate the sensitivity of ICAL for determining the location of the CMB. Finally, we present the summary and concluding remarks of this study in section~\ref{sec:conclusion}. \section{A Brief Review of the Internal Structure of Earth} \label{sec:earth_model} The surface of Earth consists of soil, sand, rocks, mountains, rivers, oceans, ice, lava, etc., which differ from each other geologically as well as in chemical composition. The layers beneath these structures are mostly solid and consist of various types of rocks. Direct observations of the layers inside Earth have been possible only up to a depth of a few km; the deepest borehole in the world to date is only about 12 km deep, drilled into the rocks of the Kola Peninsula in the Murmansk region of Russia~\cite{Kozlovsky:1984,Kola_Superdeep}. Below the depth of 6 km, a temperature gradient of 20$^\circ$ C per km was observed instead of the expected gradient of 16$^\circ$ C per km. Eventually, the drilling had to stop due to the extreme temperature of about 180$^\circ$ C. This exploration clearly shows that if we want to know more about the structure of Earth at deeper locations, then we need to use indirect methods, for example, gravitational measurements~\cite{Ries:1992,Luzum:2011,Rosi:2014kva,astro_almanac,Williams:1994,Chen:2014}, studies of earthquakes or seismic waves~\cite{Robertson:1966,Volgyesi:1982,Loper:1995,Alfe:2007,Stacey_Davis:2008,McDonough:2022,Thorne:2022,Hirose:2022}, neutrino tomography~\cite{MMTE:2022}, etc. Seismic waves are generated when the tectonic plates inside Earth slide and energy is released in the form of vibrations. They get modified while propagating through Earth. The velocity and timing measurements of seismic waves are used to unravel the internal structure of Earth~\cite{Robertson:1966,Loper:1995,Alfe:2007}. Seismic studies have revealed that Earth consists of layers in the form of concentric shells, which can be broadly divided into the core and the mantle. The radius of the core is about half the radius of Earth ($R_E$), which is about 6371 km. The contributions of the core and the mantle to the mass of Earth are about 32\% and 68\%, respectively. Information about Earth is also obtained using gravitational measurements. The mass of Earth ($M_E$) is gravitationally measured to be about $(5.9722\,\pm\,0.0006) \times 10^{24}$ kg~\cite{Ries:1992,Luzum:2011,Rosi:2014kva,astro_almanac} whereas its moment of inertia is about $(8.01736\,\pm\, 0.00097) \times 10^{37}$ kg m\textsuperscript{2}~\cite{Williams:1994,Chen:2014}. Since the measured moment of inertia is less than the value ($2/5\, M_E R_E^2 \sim 9.7 \times 10^{37}$ kg m\textsuperscript{2}) expected for a uniform density distribution inside Earth, the density must increase as we go deeper inside Earth. In other words, the matter should be concentrated more towards the center of Earth.
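This line of reasoning can be made quantitative directly from the measured numbers quoted above. The following Python sketch, added here purely for illustration, compares the measured moment of inertia with the uniform-sphere value:
\begin{verbatim}
M_E = 5.9722e24          # measured mass of Earth (kg)
R_E = 6.371e6            # mean radius of Earth (m)
I_measured = 8.01736e37  # measured moment of inertia (kg m^2)

# A uniform-density sphere would have I = (2/5) M R^2
I_uniform = 0.4 * M_E * R_E ** 2
print("I (uniform sphere) = %.2e kg m^2" % I_uniform)   # ~9.7e37
print("I (measured)       = %.2e kg m^2" % I_measured)
print("moment-of-inertia factor I/(M R^2) = %.3f"
      % (I_measured / (M_E * R_E ** 2)))                # ~0.33 < 0.4
\end{verbatim}
The moment-of-inertia factor of about 0.33, compared with 0.4 for a uniform sphere, directly encodes the concentration of mass towards the center.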
Using $M_E$ and $R_E$, one can obtain the average density of Earth to be about 5.5 g/cm\textsuperscript{3}, which is larger than the density of rocks ($\sim$2.8 g/cm\textsuperscript{3}) present on the surface. This observation also points towards the presence of regions of higher densities deep inside Earth. Now, let us try to understand how the interior of Earth is probed using seismology. Seismic waves can be classified into two categories -- shear (S) waves and pressure (P) waves. The S-waves result in the vibration of rocks perpendicular to the direction of propagation, whereas the P-waves force the particles to vibrate along the direction of propagation. The S and P-waves get modified differently while passing through the layers of Earth. The P-waves can travel through both solid and liquid layers, but their velocity decreases while passing through liquid layers. The S-waves, on the other hand, cannot travel through liquid layers. \begin{figure} \centering \includegraphics[width=\linewidth]{images/seismic_neutrino} \mycaption{The left panel shows examples of seismic waves originating at the focus of an earthquake (black dot) and traveling inside Earth. The dashed and solid curves represent the S (shear) and P (pressure) waves, respectively. Colored curves represent the trajectories of these seismic waves. The components (S, P, K, I, J, s, p, c) of the seismic waves are described in the text. The right panel shows a three-layered model of Earth consisting of the core, inner mantle, and outer mantle. The red curve depicts the density profile of Earth as a function of radial distance from its center.} \label{fig:seismic_phases} \end{figure} The seismic waves originating from the focus of an earthquake travel through Earth and land at various positions on the surface. During their propagation through Earth, S and P-waves may get reflected and refracted at multiple density discontinuities inside Earth, splitting into different segments as shown in the left panel of Fig.~\ref{fig:seismic_phases}. The nomenclature of these segments is as follows: \begin{itemize} \item S and P indicate the S-wave and P-wave traveling through the mantle. \item K and I stand for the P-waves passing through the outer and inner core, respectively. \item J indicates the segment of the S-wave traveling through the inner core. \item s and p indicate those S and P-waves, respectively, which initially travel upward from the focus of the earthquake and then get reflected downward from Earth's outer surface. \item c represents an upward reflection at the core-mantle boundary. \end{itemize} Seismic studies indicate that the mantle consists of hot silicate rocks~\cite{Robertson:1966}. The rocks in the mantle are not molten. However, they are plastic in nature, which allows them to change their shape over long timescales. Beneath the mantle, Earth has a high-density core, which is composed of metals like iron and nickel. The inability of S-waves to pass through the outer core, and the decrease in the velocity of P-waves therein, indicate that the outer core is expected to be liquid~\cite{Robertson:1966}. In 1936, I. Lehmann discovered the inner core through its higher P-wave velocity~\cite{Lehmann:1936}. The inner core is expected to be solid~\cite{Birch:1940}.
Iron and other heavy metals can remain solid at the high temperatures of the inner core because their melting points increase significantly at the tremendous pressure present there~\cite{Birch:1940}. The data on the velocities of seismic waves are used to develop the Preliminary Reference Earth Model (PREM)~\cite{Dziewonski:1981xy} profile, where the density of any layer is given as a one-dimensional function\footnote{The seismological studies indicate that the departure from spherical symmetry near the CMB is small. For example, the ellipticity at the CMB is about $2.5 \times 10^{-3}$ whereas the outer core surface topography is within 3 km~\cite{McDonough:2017}. Recently, several new three-dimensional Earth models like Shen-Ritzwoller (S-R)~\cite{sr:2016}, FWEA18~\cite{FWEA18:2018}, SAW642AN~\cite{SAW642AN:2000}, CRUST1~\cite{CRUST1:2013}, etc. have been developed. At multi-GeV energies, the oscillation wavelength for atmospheric neutrinos is around a few thousand km. Therefore, atmospheric neutrino oscillations are not expected to be sensitive to such detailed features.} of the radial distance of the layer from the center of Earth. The PREM profile is mainly based on two empirical equations which relate the velocities of S and P-waves to the densities of the layers inside Earth. The first one is called Birch's law~\cite{Birch:1964} and is valid for the outer mantle, whereas the second one, known as the Adams-Williamson equation~\cite{Williamson:1923}, is suitable for the inner mantle and the core. The parameters of these empirical relations depend upon the temperature, pressure, composition, and elastic properties of the layers of Earth, which give rise to uncertainties in the density distribution. The density of the mantle has an uncertainty of about 5\% whereas for the core, it is significantly larger~\cite{Bolt:1991,kennett:1998,Masters:2003}. A sharp change in the densities of layers results in the partial reflection and refraction of seismic waves. Such a sudden rise in density is also observed at the CMB. In the present work, we explore whether the matter effects in atmospheric neutrino oscillations can be used to probe the location of the CMB. We consider a simple three-layered model of Earth which consists of the core, inner mantle, and outer mantle as shown in the right panel of Fig.~\ref{fig:seismic_phases}. The core extends up to a radial distance of 3480 km from the center of Earth, the inner mantle spans from 3480 km to 5701 km, and the outer mantle extends from 5701 km to the radius of Earth. The densities of the core, inner mantle, and outer mantle are taken as 11.37, 5.0, and 3.3 g/cm\textsuperscript{3}, respectively. Note that we have not considered the crust, which has a much smaller thickness as compared to the other layers. \section{Location of CMB and Neutrino Oscillations} \label{sec:CMB_variation_effect} \subsection{Three-layered Models with Modified CMB Locations} \label{sec:CMB_variation} \begin{figure} \centering \includegraphics[width=1.0\linewidth]{images/cmb-profile-all} \mycaption{Densities as functions of radial distance for different ways of incorporating a modified CMB in the three-layered profile of Earth. The black curves show the standard density profiles with $R_\text{CMB} = 3480$ km. The dotted-red (dashed-blue) curves indicate the density profile in the SC (LC) scenario, where $R_\text{CMB}$ is decreased (increased) by 500 km. See text (Sec.~\ref{sec:CMB_variation}) for details of cases I, II, and III.
} \label{fig:CMB_variation_ways} \end{figure} \begin{table} \centering \begin{tabular}{| c | c |c c c |} \hline \multirow{2}{*}{Three-layered profile} & Radius of & \multicolumn{3}{c|}{ Layer densities (g/$\text{cm}^3$) } \\ \cline{3-5} & layer boundaries (km) & core & inner mantle & outer mantle\\ \hline\hline \textbf{Standard} & (3480, 5701, 6371) & 11.37 & 5.0 & 3.3 \\ \hline \textbf{Case-I} & & & & \\ Smaller core (SC) & (2980, 5701, 6371) & 11.37 & 5.0 & 3.3 \\ Larger core (LC) & (3980, 5701, 6371) & 11.37 & 5.0 & 3.3 \\ \hline \textbf{Case-II} & & & & \\ Smaller core (SC) & (2980, 5701, 6371) & 15.15 & 5.0 & 3.3 \\ Larger core (LC) & (3980, 5701, 6371) & 9.26 & 5.0 & 3.3 \\ \hline \textbf{Case-III} & & & & \\ Smaller core (SC) & (2980, 5701, 6371) & 18.11 & 4.51 & 3.3\\ Larger core (LC) & (3980, 5701, 6371) & 7.60 & 5.85 & 3.3 \\ \hline \end{tabular} \mycaption{Boundaries of the three layers and their densities in the three-layered profile of Earth when $R_\text{CMB}$ is modified by $\Delta R_\text{CMB}$. Smaller core (SC) stands for $\Delta R_\text{CMB} = - 500$ km, while larger core (LC) represents $\Delta R_\text{CMB} = + 500$ km. The corresponding modifications of the density profiles are given by cases I, II, and III (see Sec.~\ref{sec:CMB_variation}).} \label{tab:CMB_variation} \end{table} In the present work, we use atmospheric neutrinos to probe the location of the CMB, exploring how the matter effects in neutrino oscillations change in different modified-CMB scenarios. The standard CMB radius is taken to be $R_\text{CMB}$(standard)$=3480$ km. For illustration, we show the modification of the CMB radius by $\Delta R_\text{CMB} = -500$ km (smaller core or SC) and by $\Delta R_\text{CMB} = +500$ km (larger core or LC) in Fig.~\ref{fig:CMB_variation_ways}. In the standard, SC, and LC scenarios, the core spans the zenith angle range $\cos\theta_\nu \le -0.84$, $\cos\theta_\nu \le -0.88$, and $\cos\theta_\nu \le -0.78$, respectively. We further consider three different ways of modifying the density profile of Earth. We term these as three ``cases'': \begin{itemize} \item \textbf{Case-I:} The densities of all layers of Earth are kept fixed. Note that the mass of Earth is not constrained in this case. \item \textbf{Case-II:} The densities of the inner and outer mantle are kept fixed, while the density of the core is modified to keep the mass of Earth invariant. \item \textbf{Case-III:} The density of the outer mantle is kept fixed, and the densities of the core and inner mantle are modified while keeping their individual masses invariant. This automatically ensures that the mass of Earth is unchanged. \end{itemize} Note that Case-II is the most realistic one, since the density of the mantle is quite well measured by the combination of gravitational and seismological studies. However, we also analyze two dummy cases, Case-I and Case-III. In Case-I, we remove the constraint from the mass of Earth while imposing the constraint on the density of the core, compared to Case-II. In Case-III, we add the constraint on the mass of the mantle but remove the constraint on its density, compared to Case-II. This allows us to obtain insights into the role of these constraints. In this study, we have not taken into account the constraint from the moment of inertia of Earth. Table~\ref{tab:CMB_variation} presents the modified boundaries between the various layers and their densities in the SC and LC scenarios. The densities of the core and mantle change depending upon the case, as explained above.
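The zenith-angle ranges quoted above follow from the geometry of a core-grazing chord: a trajectory just touches the core when $R_E \sin\theta_\nu = R_\text{CMB}$. The short Python check below (an illustrative sketch, not part of the analysis chain) reproduces the quoted numbers:
\begin{verbatim}
import numpy as np

R_E = 6371.0  # radius of Earth in km
for label, R_cmb in [("standard", 3480.0),
                     ("SC      ", 2980.0),
                     ("LC      ", 3980.0)]:
    # A trajectory crosses the core when R_E*sin(theta) < R_CMB, i.e.
    # cos(theta_nu) <= -sqrt(1 - (R_CMB/R_E)^2) for upward-going neutrinos.
    cos_graze = -np.sqrt(1.0 - (R_cmb / R_E) ** 2)
    print(label, ": core-passing for cos(theta_nu) <=", round(cos_graze, 2))
# standard: -0.84,  SC: -0.88,  LC: -0.78
\end{verbatim}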
We discuss the effect of the $R_\text{CMB}$ modification on neutrino oscillations in the next section. \subsection{Effect of Modified CMB Radius on Oscillation Probabilities} \label{sec:probability} \begin{table} \centering \begin{tabular}{|c|c|c|c|c|c|c|} \hline $\sin^2 2\theta_{12}$ & $\sin^2\theta_{23}$ & $\sin^2 2\theta_{13}$ & $\Delta m^2_\text{eff}$ (eV$^2$) & $\Delta m^2_{21}$ (eV$^2$) & $\delta_{\rm CP}$ & Mass Ordering\\ \hline 0.855 & 0.5 & 0.0875 & $2.49\times 10^{-3}$ & $7.4\times10^{-5}$ & 0 & Normal (NO)\\ \hline \end{tabular} \mycaption{The benchmark values of oscillation parameters considered in this analysis. These values are in good agreement with the present neutrino global fits~\cite{Capozzi:2021fjo,NuFIT,Esteban:2020cvm,deSalas:2020pgw}.} \label{tab:osc-param-value} \end{table} Atmospheric neutrinos and antineutrinos can be observed separately at ICAL in the multi-GeV range of energies over a wide range of baselines from about 15 km to 12757 km. The upward-going neutrinos pass through Earth and undergo charged-current interactions with the ambient electrons. This coherent forward scattering results in a matter potential $V_\text{CC}$ given by \begin{equation} V_\text{CC} = \pm\, \sqrt{2} G_F N_e \approx \pm \, 7.6 \times Y_e \times 10^{-14} \left[\frac{\rho}{\text{g/cm}^3}\right]~\text{eV}\,, \end{equation} where $G_F$ is the Fermi coupling constant and $N_e$ is the ambient electron number density. Further, $Y_e = N_e/(N_p + N_n)$ is the relative electron number density inside matter of density $\rho$, where $N_p$ and $N_n$ denote the number densities of protons and neutrons. In this analysis, we assume that Earth is neutral and isoscalar, which implies $N_n \approx N_p = N_e$ and $Y_e = 0.5$. The positive and negative signs correspond to neutrinos and antineutrinos, respectively. Due to these opposite signs, the matter effects modify the oscillation patterns of neutrinos and antineutrinos differently. The charge identification capability of ICAL plays a crucial role in observing these different matter effects in neutrinos and antineutrinos separately, which enhances the sensitivity of ICAL towards locating the CMB, as we demonstrate in our results. In the present work, we use the benchmark values of oscillation parameters given in Table~\ref{tab:osc-param-value}. The normal mass ordering (NO) represents $m_1 < m_2 < m_3$ whereas for the inverted mass ordering (IO), $ m_3 < m_1 < m_2$. We implement NO vs. IO by taking opposite signs of the effective atmospheric mass-squared difference\footnote{The effective atmospheric mass-squared difference can be given in terms of $\Delta m^2_{31}$ and $\Delta m^2_{21}$ as follows~\cite{deGouvea:2005hk,Nunokawa:2005nx} \begin{equation} \Delta m^2_\text{eff} = \Delta m^2_{31} - \Delta m^2_{21} (\cos^2\theta_{12} - \cos \delta_\text{CP} \sin\theta_{13}\sin2\theta_{12}\tan\theta_{23}). \label{eq:m_eff} \end{equation}} $\Delta m^2_{\text{eff}}$. The matter effects result in a resonant enhancement of the effective value of the smallest lepton mixing angle $\theta_{13}$, which can become as large as $45^\circ$. This adiabatic resonance due to $\theta_{13}$-driven matter effects is known as the Mikheyev-Smirnov-Wolfenstein (MSW) resonance~\cite{Wolfenstein:1977ue,Mikheev:1986gs,Mikheev:1986wj}. For NO (IO), the MSW resonance occurs for neutrinos (antineutrinos).
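For orientation, the size of this potential for the layer densities used in this work can be evaluated directly from the expression above. The following Python sketch is illustrative, with $Y_e = 0.5$ as assumed in the text:
\begin{verbatim}
def V_cc_eV(rho_g_cm3, Y_e=0.5):
    """|V_CC| in eV for matter of density rho (in g/cm^3),
    using |V_CC| ~ 7.6e-14 * Y_e * rho eV."""
    return 7.6e-14 * Y_e * rho_g_cm3

for layer, rho in [("outer mantle", 3.3),
                   ("inner mantle", 5.0),
                   ("core        ", 11.37)]:
    print(layer, ": |V_CC| = %.2e eV" % V_cc_eV(rho))
# from ~1.3e-13 eV in the outer mantle up to ~4.3e-13 eV in the core
\end{verbatim}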
For a given matter density $\rho$, the energy at which the MSW resonance occurs can be written as \begin{align} E_\text{res} & = \frac{\Delta m^2_{31} \cos 2\theta_{13}}{2\sqrt{2}G_F N_e} \simeq \frac{\Delta m^2_{31}\left[\text{eV}^2\right] \cos 2\theta_{13}}{7.6 \times 10^{-14} \rho\left[\text{g/cm}^3\right]} ~ \text{eV}\,. \end{align} Now, if we consider the average density of the mantle to be around 4.5 $\text{g/cm}^3$, then the MSW resonance occurs at approximately 7 GeV: \begin{align} E_\text{res} \simeq 7~\text{GeV} \left(\frac{4.5\,\text{g/cm}^3}{\rho}\right)\left(\frac{\Delta m^2_{31}}{2.4\times10^{-3}~\text{eV}^2}\right) \cos 2\theta_{13}\,. \end{align} Therefore, the mantle-passing neutrinos feel the MSW resonance in the energy range of about 5 to 10 GeV. On the other hand, if we consider the density of the core to be about 11.3 $\text{g/cm}^3$, the MSW resonance in the core occurs at approximately 2.8 GeV: \begin{align} E_\text{res} \simeq 2.8~\text{GeV} \left(\frac{11.3\,\text{g/cm}^3}{\rho}\right)\left(\frac{\Delta m^2_{31}}{2.4\times10^{-3}~\text{eV}^2}\right) \cos 2\theta_{13}\,. \end{align} As we go along the trajectory of a core-passing neutrino, the density first increases from the mantle to the core and then decreases from the core to the mantle. The quasi-periodic nature of the densities along the neutrino path, combined with the sharp contrast between the densities of the core and the mantle at the CMB, may result in specific phase relationships between the neutrino oscillation amplitudes in the core and the mantle for particular path lengths. This phenomenon is known as the neutrino oscillation length resonance (NOLR)~\cite{Petcov:1998su,Chizhov:1998ug,Petcov:1998sg,Chizhov:1999az,Chizhov:1999he} or parametric resonance (PR)~\cite{Akhmedov:1998ui,Akhmedov:1998xq}, which may enhance neutrino flavor conversions. \begin{figure} \centering \includegraphics[width=1.0\linewidth]{images/1d-osc-prob-all-case} \mycaption{The three-flavor $\nu_\mu\rightarrow\nu_\mu$ survival probability as a function of neutrino energy $E_\nu$. The solid-black, dotted-red, and dashed-blue curves represent the scenarios of the standard core ($R_\text{CMB} = 3480$ km), SC ($\Delta R_\text{CMB} = -500$ km), and LC ($\Delta R_\text{CMB} = +500$ km), respectively. The top, middle, and bottom panels correspond to Case-I, Case-II, and Case-III, respectively. The left (right) panels consider the neutrino baseline $L_\nu$ of 11000 (11500) km. We use the three-flavor neutrino oscillation parameters given in Table~\ref{tab:osc-param-value}, where we assume NO and $\sin^2\theta_{23}$ = 0.5.} \label{fig:Prob_case_1} \end{figure} The muon neutrino events at ICAL receive contributions from both the $\nu_\mu \rightarrow \nu_\mu$ survival and the $\nu_e \rightarrow \nu_\mu$ appearance channels. However, since the contribution of the survival channel is more than 98\%, here we present the effect of the $R_\text{CMB}$ modification only on the oscillation probabilities for the $\nu_\mu \rightarrow \nu_\mu$ survival channel. Figure~\ref{fig:Prob_case_1} shows the three-flavor $\nu_{\mu}$ survival probability $P({\nu_\mu \rightarrow \nu_\mu})$ as a function of energy for NO. The top, middle, and bottom panels are for Case-I, Case-II, and Case-III, respectively. We consider a baseline of 11000 km (11500 km) in the left (right) panels. For both of these baselines, neutrinos pass through Earth's core.
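The two benchmark resonance energies quoted above follow directly from the expression for $E_\text{res}$. A short numerical cross-check (an illustrative Python sketch using the parameter values of Table~\ref{tab:osc-param-value}, with $Y_e = 0.5$ absorbed into the prefactor as in the text):
\begin{verbatim}
import numpy as np

dm2_31 = 2.4e-3                    # |Delta m^2_31| in eV^2
cos_2th13 = np.sqrt(1.0 - 0.0875)  # cos(2 theta_13) from sin^2(2 theta_13)

def E_res_GeV(rho_g_cm3):
    """MSW resonance energy in GeV for matter density rho in g/cm^3."""
    return dm2_31 * cos_2th13 / (7.6e-14 * rho_g_cm3) * 1e-9

print("mantle (4.5 g/cm^3): E_res = %.1f GeV" % E_res_GeV(4.5))   # ~7 GeV
print("core  (11.3 g/cm^3): E_res = %.1f GeV" % E_res_GeV(11.3))  # ~2.8 GeV
\end{verbatim}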
The $\theta_{13}$-driven matter resonances inside Earth at the atmospheric frequency result in a significant modification of the $\nu_{\mu}$ survival probability, $P({\nu_\mu \rightarrow \nu_\mu})$. The curves corresponding to the SC and LC scenarios show patterns that differ from each other for a given case and baseline. The oscillation patterns are also quite different in the three cases. The deviations in the oscillation patterns mostly occur in the energy range of 3 to 6 GeV, which corresponds to the NOLR/ parametric resonance. These observations imply that the neutrinos are highly sensitive to the modification of the CMB as well as to the way the density profile is modified. Further, a comparison of the left and right panels shows that the modifications in the neutrino oscillation patterns due to the $R_\text{CMB}$ modification also depend upon the baselines traversed by the neutrinos. \subsection{Effect of Modified CMB Radius on Oscillograms} \label{sec:oscillograms} \begin{figure} \centering \includegraphics[width=\linewidth]{images/fixed-core-oscillogram} \mycaption{Three-flavor $\nu_{\mu}\rightarrow\nu_{\mu}$ oscillograms for the three-layered profile of Earth with the standard $R_\text{CMB}$ of 3480 km as shown by the vertical dotted-white lines. The left (right) panel is for neutrinos (antineutrinos). We use the three-flavor neutrino oscillation parameters given in Table~\ref{tab:osc-param-value}, where we assume NO and $\sin^2\theta_{23}$ = 0.5.} \label{fig:fixed_core_Oscillograms} \end{figure} Since the modifications in the neutrino oscillation probabilities depend upon the baselines, let us discuss the effects of the modification of $R_\text{CMB}$ on neutrino oscillation probabilities in the two-dimensional plane of energy and direction. First, we consider the standard scenario of $R_{\text{CMB}}=3480$ km in Fig.~\ref{fig:fixed_core_Oscillograms}, where we present the three-flavor $\nu_\mu$ survival probability oscillograms in the plane of ($E_\nu$, $\cos\theta_\nu$) for NO. The standard $R_\text{CMB}$ is denoted by a vertical dotted-white line. The left and right panels show the survival probabilities $P(\nu_\mu \rightarrow \nu_\mu)$ and $P({\bar{\nu}_\mu \rightarrow \bar{\nu}_\mu})$, respectively. In each panel, we consider the neutrino energy range of 1 to 25 GeV and the neutrino zenith angle ($\cos\theta_\nu$) range of $-1$ to 0. Here, $\cos\theta_\nu = -1$ ($\cos\theta_\nu = 1$) represents upward-going (downward-going) neutrinos. The dark-blue diagonal band, which starts from ($E_\nu$ = 1 GeV, $\cos\theta_\nu$ = 0) and ends at ($E_\nu$ = 25 GeV, $\cos\theta_\nu$ = $-1$), corresponds to the first oscillation minimum, also known as the ``oscillation valley''~\cite{Kumar:2020wgz,Kumar:2021lrn}. In the left panel, the red patch around $-0.8 < \cos\theta_\nu < -0.5$ and 6 GeV $< E_\nu <$ 10 GeV corresponds to the MSW resonance, whereas the yellow patches around $\cos\theta_\nu < -0.8$ and 3 GeV $< E_\nu <$ 6 GeV are found to be due to the NOLR/ parametric resonance. With NO, both these resonances are experienced only by neutrinos. As far as antineutrinos are concerned, we do not see the MSW or the NOLR/ parametric resonance in the right panel for normal ordering. On the other hand, for inverted ordering, the trend is opposite, i.e., antineutrinos experience a significant amount of matter effects whereas neutrinos do not. In this section, we focus only on the neutrino survival channel and NO.
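The location of the red MSW patch can be understood with a constant-density two-flavor toy model, in which the effective 1-3 mixing in matter is $\sin^2 2\theta_{13}^m = \sin^2 2\theta_{13}/[\sin^2 2\theta_{13} + (\cos 2\theta_{13} - 2 E V_\text{CC}/\Delta m^2_{31})^2]$, reaching its maximal value at $E = E_\text{res}$. The sketch below is only a toy illustration of this enhancement, not the full three-flavor calculation used to produce the oscillograms:
\begin{verbatim}
import numpy as np

dm2_31 = 2.4e-3                    # eV^2
s2_2th13 = 0.0875                  # sin^2(2 theta_13)
c_2th13 = np.sqrt(1.0 - s2_2th13)  # cos(2 theta_13)

def sin2_2theta13_matter(E_GeV, rho_g_cm3, Y_e=0.5):
    """Effective sin^2(2 theta_13) in matter, two-flavor approximation."""
    V = 7.6e-14 * Y_e * rho_g_cm3          # matter potential in eV
    x = 2.0 * (E_GeV * 1e9) * V / dm2_31   # 2 E V_CC / Delta m^2_31
    return s2_2th13 / (s2_2th13 + (c_2th13 - x) ** 2)

for E in (2, 5, 7, 10, 15):  # mantle-like density of 4.5 g/cm^3
    print("E = %2d GeV: sin^2(2 theta_13^m) = %.2f"
          % (E, sin2_2theta13_matter(E, 4.5)))
# The effective mixing becomes nearly maximal around E ~ 7 GeV.
\end{verbatim}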
\begin{figure} \centering \includegraphics[width=0.95\linewidth]{images/2d-oscillogram-prob-all-case} \mycaption{Three-flavor $\nu_{\mu}\rightarrow\nu_{\mu}$ oscillograms for the three-layered profile of Earth with modified $R_\text{CMB}$. The left (right) panels are for SC (LC) with $\Delta R_\text{CMB} = -500$ km ($+500$ km). The white, red, and cyan vertical dotted lines correspond to the standard, smaller, and larger $R_\text{CMB}$, respectively. The top, middle, and bottom panels correspond to Case-I, Case-II, and Case-III, respectively. We use the three-flavor neutrino oscillation parameters given in Table~\ref{tab:osc-param-value}, where we assume NO and $\sin^2\theta_{23}$ = 0.5.} \label{fig:Oscillograms_case_1} \end{figure} Now, we describe how the neutrino oscillograms get modified when the CMB radius is changed. The top, middle, and bottom rows in Fig.~\ref{fig:Oscillograms_case_1} present the $\nu_{\mu}$ survival probability oscillograms for Case-I, Case-II, and Case-III, respectively. The left (right) panels correspond to the SC (LC) scenario. The observations are as follows: \begin{itemize} \item Comparing the oscillograms in Fig.~\ref{fig:Oscillograms_case_1} with those in Fig.~\ref{fig:fixed_core_Oscillograms}, we observe that, for both Case-I and Case-II, a significant effect on the probabilities occurs in the NOLR/ parametric resonance region. \item In the standard core scenario (left panel of Fig.~\ref{fig:fixed_core_Oscillograms}), there are three yellow patches in the NOLR/ parametric resonance region. In both Case-I and Case-II, the SC scenario shows that one of the yellow patches is missing and the other two are shrunk. In the LC scenario, these patches are stretched towards lower baselines. \item However, for Case-III, we see that both the NOLR/ parametric resonance and the MSW resonance regions are affected in the SC and LC scenarios, the effect being more prominent for LC. \end{itemize} \begin{figure} \centering \includegraphics[width=0.88\linewidth]{images/2d-oscillogram-prob-difference-all-case} \mycaption{Three-flavor $\nu_{\mu}\rightarrow\nu_{\mu}$ survival probability difference oscillograms for the three-layered profile of Earth. Here, $\Delta P_{\text{SC}}$ ($\Delta P_{\text{LC}}$) in the left (right) panels denotes the P($\nu_{\mu}\rightarrow\nu_{\mu}$) survival probability difference between the standard core and the SC (LC) scenario with $\Delta R_\text{CMB} = -500$ km ($+500$ km). The dotted-red and dotted-cyan curves represent the CMB radius for the smaller core and larger core, respectively. Top, middle, and bottom rows correspond to Case-I, Case-II, and Case-III, respectively. We use the three-flavor neutrino oscillation parameters given in Table~\ref{tab:osc-param-value}, where we assume NO and $\sin^2\theta_{23}$ = 0.5. } \label{fig:Diff_Oscillograms_case_1} \end{figure} For a clear picture of the energies and baselines where the probability is affected significantly, we present Fig.~\ref{fig:Diff_Oscillograms_case_1}, which shows the differences between the $\nu_{\mu}$ survival probabilities for the standard $R_\text{CMB}$ and the SC/LC scenarios. The left and right panels show the values of $\Delta P_{\text{SC}}$ and $\Delta P_{\text{LC}}$, respectively, which are defined as \begin{align} \Delta P_{\text{SC}} &= P(\nu_{\mu} \rightarrow \nu_\mu)_{\text{standard}} - P(\nu_{\mu} \rightarrow \nu_\mu)_{\text{SC}}\,, \\ \Delta P_{\text{LC}} &= P(\nu_{\mu} \rightarrow \nu_\mu)_{\text{standard}} - P(\nu_{\mu} \rightarrow \nu_\mu)_{\text{LC}}\,.
\end{align} It may be observed that the probability differences are nonzero only in the core region for Case-I (top panels) and Case-II (middle panels). $\Delta P_{\text{SC/LC}}$ vanishes in the mantle region because, in both these cases, the density of the mantle remains the same (see Table~\ref{tab:CMB_variation}). In contrast, for Case-III (bottom panels), the differences are significant in the core as well as in the mantle. From Figs.~\ref{fig:fixed_core_Oscillograms}, \ref{fig:Oscillograms_case_1}, and \ref{fig:Diff_Oscillograms_case_1}, we can infer that the effect of the $R_\text{CMB}$ modification manifests itself mainly at higher baselines and lower energies. The dependence of this effect on the zenith angle ($\cos\theta_\nu$) implies that we need a detector like ICAL with high directional resolution to probe the position of the CMB. \section{Event Generation at ICAL} \label{sec:events} The 50 kton magnetized Iron Calorimeter (ICAL) detector at the proposed India-based Neutrino Observatory (INO)~\cite{ICAL:2015stm} is going to detect atmospheric neutrinos and antineutrinos separately in the multi-GeV energy range and over a wide range of baselines. ICAL, with a total size of 48 m $\times$ 16 m $\times$ 14.5 m, will have about 151 alternating layers of iron with a thickness of 5.6 cm, with glass Resistive Plate Chambers (RPCs)~\cite{SANTONICO1981377,Bheesette:2009yrp,Bhuyan:2012zzc} sandwiched between the iron layers. The iron plates act as passive detector elements for the neutrino interactions, whereas the RPCs act as active detector elements. In other words, the iron plates provide the target mass for neutrino interactions, and the RPCs detect the secondary particles like muons and hadrons that are produced during the charged-current (CC) interactions of neutrinos with the iron nuclei. The charged particles deposit their energies in the form of hits in the RPCs during their propagation inside the detector. The X and Y coordinates of a hit are given by the pickup strips using the induced electronic signals, while the RPC layer number provides the Z coordinate of the hit. A muon in the multi-GeV energy range is a minimum ionizing particle. Hence, it can pass through many layers, leaving a hit in each layer. These hits produced by muons form a track-like event. The magnetic field of about 1.5 T~\cite{Behera:2014zca} enables ICAL to distinguish between atmospheric neutrinos and antineutrinos by identifying the opposite curvatures of $\mu^-$ and $\mu^+$ tracks. Further, the nanosecond-level time resolution of the RPCs~\cite{Dash:2014ifa,Bhatt:2016rek,Gaur:2017uaf} helps ICAL to separately identify upward-going and downward-going muon events. In the multi-GeV energy range, resonance and deep inelastic scattering (DIS) give rise to the production of hadrons, which can deposit energy in the form of multiple hits in the same RPC layer, resulting in shower-like events. During neutrino interactions, a large fraction of the neutrino energy is carried away by the hadrons, which is quantified as ${E'}_\text{had} = E_\nu - E_\mu$. In this work, we simulate the unoscillated neutrino events using the NUANCE~\cite{Casper:2002sd} Monte Carlo (MC) neutrino event generator with the ICAL detector geometry as a target. The atmospheric neutrino flux at the proposed INO site~\cite{Athar:2012it,Honda:2015fha} in the Theni district of Tamil Nadu, India, is used as an input to NUANCE.
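Oscillations are folded into this unoscillated MC sample on an event-by-event basis via the reweighting procedure referenced below. Schematically, each MC event contributing to the muon sample acquires an oscillation weight; the following is a simplified illustration, with \texttt{osc\_prob} standing in for a full three-flavor matter-oscillation engine (a hypothetical interface, not the actual ICAL analysis code):
\begin{verbatim}
def oscillation_weight(flavor, E_nu, cos_zenith, osc_prob):
    """Oscillation weight of an unoscillated MC event contributing to
    the muon-neutrino sample. osc_prob(a, b, E, cosz) is assumed to
    return the three-flavor probability P(nu_a -> nu_b) in Earth matter."""
    if flavor == "nu_mu":   # survival channel (>98% of the muon events)
        return osc_prob("mu", "mu", E_nu, cos_zenith)
    if flavor == "nu_e":    # appearance channel
        return osc_prob("e", "mu", E_nu, cos_zenith)
    return 0.0

# The expected number of events for a 20-yr exposure is then the sum of
# these weights over the 1000-yr MC sample, scaled by 20/1000.
\end{verbatim}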
The solar modulation effect on the atmospheric neutrino flux is incorporated by considering the flux with high solar activity (solar maximum) for half of the exposure and low solar activity (solar minimum) for the other half. A mountain coverage of about 1 km (3800 m water equivalent) at the INO site acts as a filter to reduce the downward-going cosmic muon background by a factor of $\sim 10^6$~\cite{Dash:2015blu}. In our analysis, the statistical uncertainties are minimized by generating unoscillated MC neutrino events for a large exposure of 1000 years of the ICAL detector. The three-flavor neutrino oscillations in the presence of Earth's matter effects are incorporated using the reweighting algorithm~\cite{Devi:2014yaa,Ghosh:2012px,Thakore:2013xqa}. In the present analysis, we incorporate the detector response for muons and hadrons as described in refs.~\cite{Chatterjee:2014vta,Devi:2013wxa}. These detector responses are obtained by the ICAL collaboration after performing a detailed GEANT4~\cite{Geant4:2003} simulation of the ICAL detector~\cite{Chatterjee:2014vta,Devi:2013wxa}. The procedure for obtaining the detector response for muons has been discussed in detail in ref.~\cite{Chatterjee:2014vta}. The muon detector response is simulated by passing a large number of muons through the ICAL detector. The muon forms a track-like event while passing through the various RPC layers. These tracks are fitted using a Kalman-filter technique to obtain the energy, direction, and charge of the reconstructed muons~\cite{Bhattacharya:2015bsp}. The ICAL reconstruction algorithm requires at least 8 to 10 muon hits to reconstruct a track. Since a muon deposits an energy of about 100 MeV in each layer of iron, the energy threshold of ICAL is about 1 GeV. Reference~\cite{Chatterjee:2014vta} provides the reconstruction efficiency, energy resolution, angular resolution, and charge identification (CID) efficiency of ICAL for muons (figures 13, 11, 6, and 14 therein, respectively). Along with the reconstruction of muons, ICAL can also retrieve information about the hadron energy using the total number of hits in the shower-like events. The details of the hadron energy resolution of the ICAL detector are given in ref.~\cite{Devi:2013wxa}. Since the hadron energy resolution is much poorer than the muon energy resolution, we do not add the two to obtain the neutrino energy. Instead, to exploit the measured muon four-momentum and hadron energy on an event-by-event basis, we employ a binning scheme having the reconstructed muon energy ($E_\mu^\text{rec}$), muon direction ($\cos\theta_\mu^\text{rec}$), and hadron energy (${E^\prime}_{\text{had}}^\text{rec}$) as three independent observables. The detector properties are folded in following the procedure mentioned in~\cite{Ghosh:2012px,Thakore:2013xqa,Devi:2014yaa}. For the analysis, the reconstructed events are scaled from the 1000-yr MC to a 20-yr exposure. For a 1 Mt$\cdot$yr exposure of ICAL, assuming normal mass ordering, we would get about 8850 reconstructed $\mu^-$ and 4032 reconstructed $\mu^+$ events using three-flavor neutrino oscillations through Earth's matter for the three-layered profile of Earth with the standard core. In Table~\ref{tab:events}, we present the numbers of reconstructed $\mu^-$ and $\mu^+$ events for NO for all three cases in the SC and LC scenarios, for an exposure of 1 Mt$\cdot$yr at the ICAL detector. It is important to note that the total event rate in Table~\ref{tab:events} is almost identical for all three cases.
However, after binning these events in the above three observables, the three cases would look quite different. This is possible because ICAL would have good energy and directional resolutions for reconstructed muons. \begin{table} \centering \begin{tabular}{| c | c | c | c | c |} \hline \hline & \multicolumn{2}{c|}{$\mu^-$ events} & \multicolumn{2}{c|}{$\mu^+$ events} \\ \cline{2-5} & SC & LC & SC & LC \\ \hline Case-I & 8842 & 8858 & 4032 & 4032 \\ \hline Case-II & 8844 & 8862 & 4030 & 4034 \\ \hline Case-III & 8850 & 8876 & 4032 & 4032 \\ \hline \hline \end{tabular} \mycaption{The total numbers of reconstructed $\mu^-$ and $\mu^+$ events expected at the 50 kt ICAL detector in 20 years for cases I, II, and III. The SC (LC) scenario corresponds to $\Delta R_\text{CMB} = -500$ km ($+500$ km). The number of reconstructed events for the standard CMB is 8850 (4032) for $\mu^-$ ($\mu^+$). We use the three-flavor neutrino oscillation parameters given in Table~\ref{tab:osc-param-value}, where we assume NO and $\sin^2\theta_{23}$ = 0.5. } \label{tab:events} \end{table} Now we discuss the effect of the $R_\text{CMB}$ modification on the distributions of reconstructed muon events at the ICAL detector. In the left (right) panels of Fig.~\ref{fig:Event_dif}, we present the distributions of the differences between the events for the standard core and those for the smaller (larger) core in the plane of ($E{^\text{rec}_{\mu^-}}$, $\cos\theta{^\text{rec}_{\mu^-}}$) for 20 years (1 Mt$\cdot$yr) of exposure considering NO. For demonstrating the difference of the reconstructed event distributions with the 1 Mt$\cdot$yr exposure, we use a binning scheme\footnote{For $E_\mu^\text{rec}$, we take 5 bins of 1 GeV in $(1 - 5)$ GeV, 1 bin of 2 GeV in $(5 - 7)$ GeV, 1 bin of 3 GeV in $(7 - 10)$ GeV, and 3 bins of 5 GeV in $(10 - 25)$ GeV. On the other hand, for $\cos\theta_\mu^\text{rec}$, we choose uniform bins with a width of 0.1 in the range of $-1$ to $1$. Note that for the analysis, we will use a finer binning scheme as described in Sec.~\ref{sec:binning_scheme}.} where we have 10 bins in $E_\mu^\text{rec}$ and 20 bins in $\cos\theta_\mu^\text{rec}$. Since, for NO, a significant amount of matter effects is present for neutrinos but not for antineutrinos, we choose to present the distributions of the event differences in Fig.~\ref{fig:Event_dif} for $\mu^-$ only. The event differences are quite small for $\mu^+$, and hence, not shown here. For Case-I and Case-II, the event differences mainly occur in the core region, although some differences can also be observed in the mantle region due to the smearing of energy and direction in the detector. On the other hand, for Case-III, we see significant event differences which span both the core and the mantle regions. In the next section, we describe the numerical procedure used to estimate the median sensitivity of ICAL to probe the CMB. \begin{figure} \centering \includegraphics[width=0.95\linewidth]{images/2d-event-difference-all-case} \mycaption{The ($E{^\text{rec}_{\mu^-}}$, $\cos\theta{^\text{rec}_{\mu^-}}$) distributions of the differences in reconstructed $\mu^-$ events with the standard and modified $R_\text{CMB}$. We have taken an exposure of 1 Mt$\cdot$yr at the ICAL detector. In the left (right) panels, $\Delta N_{\text{SC}}$ ($\Delta N_{\text{LC}}$) denotes the $\mu^-$ event differences between the standard core and the SC (LC) scenario with $\Delta R_\text{CMB} = -500$ km ($+500$ km).
The dotted-red and dotted-cyan curves represent the CMB radius for the smaller core and larger core, respectively. Top, middle, and bottom rows correspond to Case-I, Case-II, and Case-III, respectively. We use the three-flavor neutrino oscillation parameters given in Table~\ref{tab:osc-param-value}, where we assume NO and $\sin^2\theta_{23}$ = 0.5.} \label{fig:Event_dif} \end{figure} \section{The Analysis Method} \label{sec:statistical analysis} \subsection{Binning Scheme for Analysis} \label{sec:binning_scheme} In this section, we present the binning scheme used for the numerical analysis. From Figs.~\ref{fig:Diff_Oscillograms_case_1} and \ref{fig:Event_dif}, it is clear that the $R_\text{CMB}$ modification manifests itself mainly in the regions of higher baselines ($-1<\cos\theta<-0.7$) and lower energies ($1<E_\nu<6$ GeV). Therefore, we use finer bins in these regions. The optimized binning scheme is shown in Table~\ref{tab:binning_scheme}. In total, we have 16 bins for $E^{\text{rec}}_\mu$ spanning the range of 1 to 25 GeV, 39 bins for $\cos\theta^{\text{rec}}_\mu$ from $-1$ to 1, and 4 bins for $E^{\prime \text{rec}}_{\text{had}}$ in the range of 0 to 25 GeV. For our analysis, only upward-going neutrinos are relevant because they have experienced the matter effects, but we have also included downward-going neutrinos ($0<\cos\theta <1$) because they help in increasing the overall statistics as well as in minimizing the normalization errors in atmospheric neutrino events. This also includes those upward-going (near-horizon) neutrino events that result in downward-going reconstructed muon events due to angular smearing during the neutrino interaction as well as during reconstruction. We have considered the same binning scheme for $\mu^-$ and $\mu^+$. \begin{table} \centering \begin{tabular}{|c|c|c|c c|} \hline Observable & Range & Bin width & \multicolumn{2}{c|}{Number of bins} \\ \hline \multirow{4}{*}{$E_\mu^{\rm rec}$ (GeV)} & [1, 6] & 0.5 & 10& \rdelim\}{4}{7mm}[16] \cr & [6, 12] & 2 & 3 &\cr & [12, 15] & 3 & 1 & \cr & [15, 25] & 5 & 2 & \cr \hline \multirow{4}{*}{$\cos\theta^{\text{rec}}_\mu$} & [-1.0, -0.85] & 0.0125 & 12 & \rdelim\}{4}{7mm}[39]\cr & [-0.85, -0.4] & 0.025 & 18 &\cr & [-0.4, 0] & 0.1 & 4 & \cr & [0, 1] & 0.2 & 5 & \cr \hline \multirow{3}{*}{$E^{\prime \text{rec}}_{\text{had}}$ (GeV)} & [0, 2] & 1 & 2 & \rdelim\}{3}{7mm}[4] \cr & [2, 4] & 2 & 1 &\cr & [4, 25] & 21 & 1 &\cr \hline \end{tabular} \mycaption{The optimized binning scheme considered for the numerical analysis of the present work for the reconstructed observables $E_\mu^{\rm rec}$, $\cos\theta^{\text{rec}}_\mu$, and $E^{\prime \text{rec}}_{\text{had}}$ for both reconstructed $\mu^-$ and $\mu^+$.} \label{tab:binning_scheme} \end{table} \subsection{Numerical Analysis} To estimate the sensitivity of ICAL, a $\chi^2$ analysis is performed, which is expected to give the median sensitivity in the frequentist approach~\cite{Blennow:2013oma}.
For this analysis, we define the following Poissonian $\chi^2_-$~\cite{Baker:1983tu} for reconstructed $\mu^-$ events, as considered in ref.~\cite{Devi:2014yaa}: \begin{equation}\label{eq:chisq_mu-} \chi^2_- = \mathop{\text{min}}_{\xi_l} \sum_{i=1}^{N_{{E'}_\text{had}^\text{rec}}} \sum_{j=1}^{N_{E_{\mu}^\text{rec}}} \sum_{k=1}^{N_{\cos\theta_\mu^\text{rec}}} \left[2(N_{ijk}^\text{theory} - N_{ijk}^\text{data}) -2 N_{ijk}^\text{data} \ln\left(\frac{N_{ijk}^\text{theory} }{N_{ijk}^\text{data}}\right)\right] + \sum_{l = 1}^5 \xi_l^2\,, \end{equation} with \begin{equation} N_{ijk}^\text{theory} = N_{ijk}^0\left(1 + \sum_{l=1}^5 \pi^l_{ijk}\xi_l\right)\,. \label{eq:chisq_2} \end{equation} Here, $N_{ijk}^\text{theory}$ and $N_{ijk}^\text{data}$ correspond to the expected and observed numbers of reconstructed $\mu^-$ events in a given ($E^{\text{rec}}_\mu$, $\cos\theta^{\text{rec}}_\mu$, $E^{\prime \text{rec}}_{\text{had}}$) bin, respectively. The quantity $N_{ijk}^0$ stands for the number of expected events without considering systematic uncertainties. From the binning scheme mentioned in Table~\ref{tab:binning_scheme}, $N_{E_{\mu}^\text{rec}}$ = 16, $N_{\cos\theta_\mu^\text{rec}}$ = 39, and $N_{{E'}_\text{had}^\text{rec}}$ = 4. For our analysis, we use the well-known method of pulls~\cite{Gonzalez-Garcia:2004pka,Huber:2002mx,Fogli:2002pt} to incorporate the following five systematic uncertainties~\cite{Ghosh:2012px,Thakore:2013xqa}: (i) 20\% uncertainty on flux normalization, (ii) 10\% uncertainty on cross section, (iii) 5\% energy-dependent tilt error in flux, (iv) 5\% zenith-angle-dependent tilt error in flux, and (v) 5\% overall systematics. The pull variables associated with the systematic uncertainties are denoted by $\xi_l$ in Eqs.~(\ref{eq:chisq_mu-}) and (\ref{eq:chisq_2}). The $\chi^2_+$ for reconstructed $\mu^+$ events is obtained by following the same procedure. The separate contributions $\chi^2_-$ and $\chi^2_+$ are added to get the resultant sensitivity of the ICAL detector, which is defined as $\chi^2$: \begin{equation} \chi^2 = \chi^2_- + \chi^2_+\,. \end{equation} To simulate the MC data for our analysis, we use the benchmark values of the oscillation parameters given in Table~\ref{tab:osc-param-value} as the true parameters. In the fit, we minimize $\chi^2$ over the pull variables $\xi_l$ and the relevant oscillation parameters. We vary the atmospheric mixing angle $\sin^2\theta_{23}$ in the range $(0.36 - 0.66)$ and the atmospheric mass-squared difference $|\Delta m^2_{\text{eff}}|$ in the range $(2.1 - 2.6)\times 10^{-3}$ eV$^2$. We also minimize over both choices of neutrino mass ordering, NO and IO. During the fit, we do not vary the solar oscillation parameters $\sin^2 2\theta_{12}$ and $\Delta m^2_{21}$; they are kept fixed at their true values given in Table~\ref{tab:osc-param-value}. For the reactor mixing angle, which is already very well measured~\cite{Capozzi:2021fjo,NuFIT,Esteban:2020cvm,deSalas:2020pgw}, we take a fixed value of $\sin^2 2\theta_{13}$ = 0.0875 in both the MC data and the theory. We keep $\delta_{\rm CP}$ = 0 throughout our analysis in both data and theory. \section{Results} \label{sec:results} In this section, we present the statistical significance with which the ICAL detector can probe the location of the core-mantle boundary. For the numerical analysis, we simulate the MC data assuming the three-layered profile with the standard core as the true profile of Earth.
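The structure of Eqs.~(\ref{eq:chisq_mu-}) and (\ref{eq:chisq_2}) can be made concrete with a compact numerical sketch. The snippet below is an illustrative Python implementation with toy inputs, not the actual ICAL analysis code; it minimizes the Poissonian $\chi^2$ over the five pull variables with a generic optimizer:
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

def chi2_minus(N_data, N0, pi):
    """Poissonian chi^2 with pulls.
    N_data, N0 : observed / expected event counts per bin (flat arrays).
    pi         : array of shape (5, nbins) of systematic couplings pi^l."""
    def chi2(xi):
        N_th = N0 * (1.0 + np.tensordot(xi, pi, axes=1))
        stat = 2.0 * (N_th - N_data) - 2.0 * N_data * np.log(N_th / N_data)
        return stat.sum() + np.dot(xi, xi)   # pull penalty sum_l xi_l^2
    return minimize(chi2, x0=np.zeros(5), method="Nelder-Mead").fun

# Toy example with 100 bins and random (purely illustrative) inputs:
rng = np.random.default_rng(1)
N0 = rng.uniform(10.0, 50.0, size=100)      # expected counts per bin
pi = rng.normal(0.0, 0.05, size=(5, 100))   # toy systematic responses
N_data = rng.poisson(N0).astype(float)      # pseudo-data
print("chi^2_- (toy) = %.1f" % chi2_minus(N_data, N0, pi))
\end{verbatim}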
We quantify the statistical significance of the analysis for measuring the position of the CMB in the following way: \begin{align} \Delta \chi^2_{\text{CMB}} = \chi^2(\text{modified } R_\text{CMB}) - \chi^2 (\text{standard } R_\text{CMB})\,, \label{eq:delta_chisq} \end{align} where $\chi^2$(modified $R_\text{CMB}$) and $\chi^2$(standard $R_\text{CMB}$) are calculated by performing a fit to the MC data with the modified and standard $R_\text{CMB}$, respectively. Here, we calculate the Asimov sensitivity representing the median $\Delta \chi^2_{\text{CMB}}$ in the frequentist approach, where the statistical fluctuations are suppressed such that $\chi^2$(standard $R_\text{CMB}$) $\approx$ 0. \begin{table} \centering \begin{tabular}{|cc|c|c|c|c|} \hline \multirow{2}{*}{} & \multirow{2}{*}{} & \multicolumn{2}{c|}{500 kt$\cdot$yr} & \multicolumn{2}{c|}{1 Mt$\cdot$yr} \\ \cline{3-6} & & w/ CID & w/o CID & w/ CID & w/o CID \\ \hline \multirow{2}{*}{Case-I} & SC & 0.76 & 0.50 & 1.53 & 1.01 \\ & LC & 0.72 & 0.47 & 1.44 & 0.95 \\ \hline \multirow{2}{*}{Case-II} & SC & 1.27 & 0.83 & 2.53 & 1.66\\ & LC & 1.33 & 0.84 & 2.63 & 1.67 \\ \hline \multirow{2}{*}{Case-III} & SC & 1.04 & 0.66 & 2.09 & 1.33 \\ & LC & 4.06 & 2.63 & 8.07 & 5.23\\ \hline \end{tabular} \mycaption{$\Delta \chi^2_{\text{CMB}}$ sensitivities for the modification of $R_\text{CMB}$ by $\pm\,$500 km for all three cases with exposures of 500 kt$\cdot$yr (10 years) and 1 Mt$\cdot$yr (20 years). The $\Delta \chi^2_{\text{CMB}}$ minimization has been performed by varying the test values of $\sin^2\theta_{23}$ and $\Delta m^2_{\text{eff}}$ over their current uncertainties, and taking both mass orderings NO and IO. The results in the third and fifth (fourth and sixth) columns are with (without) charge identification. For simulating the MC data, we use the three-flavor neutrino oscillation parameters given in Table~\ref{tab:osc-param-value}, where we assume NO and $\sin^2\theta_{23}$ = 0.5.} \label{tab:results} \end{table} In Table~\ref{tab:results}, we present the $\Delta \chi^2_{\text{CMB}}$ sensitivity with exposures of 500 kt$\cdot$yr and 1 Mt$\cdot$yr for all three cases in the SC ($R_{\text{CMB}}$ = 2980 km) and LC ($R_{\text{CMB}}$ = 3980 km) scenarios. It is apparent from Table~\ref{tab:results} that the ICAL detector is sensitive to a modification of $R_\text{CMB}$ by $\pm\,$500 km with a statistical significance of more than 1$\sigma$ for all three cases with an exposure of 1 Mt$\cdot$yr, and the CID capability of the ICAL detector plays a crucial role in achieving this sensitivity. On the other hand, the effect of the uncertainties in the oscillation parameters on this sensitivity is very small. \begin{figure} \centering \includegraphics[width=0.85\linewidth]{images/chi-sqr-vs-R-margin} \mycaption{The median $\Delta \chi^2_{\text{CMB}}$ sensitivities as functions of the location of the CMB. The red, green, and blue curves represent Case-I, Case-II, and Case-III, respectively. The $\Delta \chi^2_{\text{CMB}}$ minimization has been performed by varying the test values of $\sin^2\theta_{23}$ and $\Delta m^2_{\text{eff}}$ over their current uncertainties, and taking both mass orderings NO and IO. The results correspond to a 1 Mt$\cdot$yr exposure with the charge identification capability of ICAL. For simulating the MC data, we use the three-flavor neutrino oscillation parameters given in Table~\ref{tab:osc-param-value}, where we assume NO and $\sin^2\theta_{23}$ = 0.5.
} \label{fig:CMB_variation_chisq_margin} \end{figure} In Fig.~\ref{fig:CMB_variation_chisq_margin}, we present the sensitivities of the ICAL detector while modifying $R_\text{CMB}$ in smaller steps of 50 to 100 km up to $\Delta R_\text{CMB} = \pm\,$1000 km with respect to the standard $R_{\text{CMB}}$ of 3480 km. The figure shows that the ICAL detector would be able to measure the location of the CMB at 1$\sigma$ with a precision of about $\pm\,$380 km, $\pm\,$260 km, and $\pm\,$120 km for Case-I, Case-II, and Case-III, respectively. We can observe that the sensitivities in general increase as we go from Case-I to Case-II and Case-III. Note that Case-II is the most realistic one, as mentioned earlier. Needless to say, the CID capability of ICAL plays a crucial role in achieving this precision in locating $R_\text{CMB}$. For example, in the absence of CID, the 1$\sigma$ precision for Case-II would be around $\pm\,330$ km. The most striking feature is the asymmetric and non-monotonic behavior of $\Delta \chi^2_\text{CMB}$ about the standard $R_\text{CMB}$ value in Case-II and Case-III. This turns out to be an effect of the NOLR/ parametric resonance, as explained in appendix~\ref{app:Case-II-variation}. \section{Summary and Concluding Remarks} \label{sec:conclusion} Understanding the detailed internal structure of Earth is an active field of research. This quest has been pursued traditionally using gravitational measurements and seismic studies. In recent times, neutrinos have emerged as important messengers to achieve this goal of multi-messenger tomography of Earth. For example, geoneutrinos can shed light on the composition and energy budget of Earth. The oscillations of atmospheric neutrinos at GeV energies and their absorption at TeV-PeV energies deep inside Earth can provide crucial information on the density profile and composition of Earth. We start by discussing how the information about the interior of Earth is obtained indirectly using gravitational measurements and seismic studies, and focus on how neutrinos can provide complementary information through their weak interactions. While passing through different regions inside Earth, the multi-GeV atmospheric neutrinos experience Earth's matter effects due to interactions with the ambient electrons. These matter effects depend upon the density of electrons and alter the neutrino oscillation probabilities. In particular, neutrinos with energies of 6--10 GeV experience the MSW resonance while passing through the mantle. The core-passing neutrinos with energies of 3--6 GeV may also experience the neutrino oscillation length resonance (NOLR)/ parametric resonance, which depends upon the density jump at the core-mantle boundary (CMB) and its location. These effects make the neutrino signal at atmospheric neutrino detectors sensitive to the density profile of Earth. In the present work, we explore the effect of changing the radius of the CMB with respect to its standard value, $R_\text{CMB} = 3480$ km, on the neutrino oscillation probabilities, and calculate the sensitivity of the ICAL detector at INO for measuring the CMB location. We perform our analysis in the approximation of a three-layered profile of Earth: core, inner mantle, and outer mantle. While exploring the effect of changing the CMB location, we consider the modification of the density profile of Earth in three ways. In Case-I, we modify $R_\text{CMB}$ while keeping the density in each layer constant; the total mass of Earth is not constrained.
In Case-II, we modify $R_\text{CMB}$ and allow the density of the core to change such that the Earth's mass remains the same. In Case-III, we modify $R_\text{CMB}$ and the densities of the core and inner mantle such that the masses of the core and inner mantle, and hence that of Earth, are not affected. For each case, we consider two scenarios: SC (smaller core with $\Delta R_\text{CMB}=-500$ km) and LC (larger core with $\Delta R_\text{CMB}=+500$ km). For these scenarios, we observe that the effect of the modification in $R_\text{CMB}$ manifests itself mainly for core-passing neutrinos in the NOLR/parametric resonance region. Moreover, for Case-III, these effects can also be seen in the MSW resonance region. The good direction and energy resolution of ICAL enable it to preserve these features at the level of reconstructed muons. We estimate the sensitivity of ICAL for measuring the position of the CMB in the three cases described above. To determine the precision on the location of the CMB, we simulate the prospective data with the standard $R_\text{CMB}$ and calculate $\Delta \chi^2$ when the data are fitted with modified $R_\text{CMB}$ values. We observe that the sensitivity is significantly enhanced by the charge identification capability of ICAL, but not affected much by the uncertainties in the oscillation parameters. The ICAL detector would be able to measure the location of the CMB at 1$\sigma$ with a precision of about $\pm\,$380 km, $\pm\,$260 km, and $\pm\,$120 km for Case-I, Case-II, and Case-III, respectively. Note that Case-II is the most realistic one, since it is based on the assumption that the density of the mantle is known from seismic studies and the total mass of Earth is well measured from gravitational probes. Also, we find that the sensitivity is not symmetric about the standard $R_\text{CMB}$ when we go to smaller or larger core radii. At large values of $|\Delta R_\text{CMB}|$ in Case-II and Case-III, the value of $\Delta \chi^2$ is not even monotonic. This is a real physical effect, the origin of which may be traced to interesting interference effects in the NOLR/parametric resonance region. As a result of this, larger core radii would be more constrained than smaller core radii using atmospheric neutrino oscillation data. In this study, we have found that the NOLR/parametric resonance effects play a crucial role in measuring the radius of the CMB. Note that the NOLR/parametric resonance comes into the picture because of the contrast between the densities of the core and mantle, and a specific relation between the oscillation phases gained by neutrinos during their travel through the mantle and core. Such quantum mechanical effects are possible with neutrino oscillations, where the details of the phase change along the neutrino path are important, but not with neutrino absorption, where only the integrated matter profile encountered along the path is relevant. Therefore, neutrino oscillations have a great potential for probing the internal structure of Earth, owing to the current precision of neutrino oscillation parameters. The large amount of data from the next-generation atmospheric neutrino experiments like ORCA, IceCube/DeepCore/Upgrade, Hyper-K, DUNE, and P-ONE will significantly enhance the prospects of neutrino oscillation tomography of Earth. This will augment the gravitational measurements and seismic studies to give us a clearer picture of the internal structure of Earth.
Since neutrino oscillations are sensitive to the electron number densities, as opposed to the gravitational and seismic measurements that are sensitive to the baryonic number densities and material properties of the Earth's interior, the information from the neutrino data will be complementary to these traditional probes. \subsubsection*{Acknowledgements} We thank F. Halzen, C. Rott, W. F. McDonough, V. Leki\'{c}, and P. Huber for useful discussions. A. Kumar would like to thank the organizers of the ``Multi-messenger Tomography of Earth (MMTE 2022)'' workshop at The Cliff Lodge at Snowbird, Salt Lake City, Utah, USA, during 30--31 July 2022, for providing him with an opportunity to present the preliminary results from this work. We acknowledge the support from the Department of Atomic Energy (DAE), Govt. of India, under the Project Identification Numbers RTI4002 and RIO 4001. S.K.A. is supported by the Young Scientist Research Grant [INSA/SP/YSP/144/2017/1578] from the Indian National Science Academy (INSA). S.K.A. acknowledges the financial support from the Swarnajayanti Fellowship (sanction order No. DST/SJF/PSA-05/2019-20) provided by the Department of Science and Technology (DST), Govt. of India, and the Research Grant (sanction order No. SB/SJF/2020-21/21) provided by the Science and Engineering Research Board (SERB), Govt. of India, under the Swarnajayanti Fellowship project. S.K.A. would like to thank the United States-India Educational Foundation for providing the financial support through the Fulbright-Nehru Academic and Professional Excellence Fellowship (Award No. 2710/F-N APE/2021). A.K.U. acknowledges financial support from the DST, Govt. of India (DST/INSPIRE Fellowship/2019/IF190755). The numerical simulations are carried out using the ``SAMKHYA: High-Performance Computing Facility'' at the Institute of Physics, Bhubaneswar, India. \begin{appendix} \section{Asymmetric Effects with Smaller and Larger Core} \label{app:Case-II-variation} \begin{figure} \centering \includegraphics[width=0.49\linewidth]{images/Oscillograms-appendix/Puu_CMB_fixed_core} \includegraphics[width=0.49\linewidth]{images/Oscillograms-appendix/sc/Puu-nu-200-case2} \\ \vspace{2mm} \includegraphics[width=0.49\linewidth]{images/Oscillograms-appendix/sc/Puu-nu-400-case2} \includegraphics[width=0.49\linewidth]{images/Oscillograms-appendix/sc/Puu-nu-600-case2} \\ \vspace{2mm} \includegraphics[width=0.49\linewidth]{images/Oscillograms-appendix/sc/Puu-nu-800-case2} \includegraphics[width=0.49\linewidth]{images/Oscillograms-appendix/sc/Puu-nu-1000-case2} \mycaption{Possible impact of smaller core radii on three-flavor $\nu_{\mu}\rightarrow\nu_{\mu}$ oscillograms for the three-layered profile of Earth. The oscillograms are shown for $R_\text{CMB} = 3480 \text{ km} + \Delta R_\text{CMB}$ where $\Delta R_\text{CMB} = (0, -200, -400, -600, -800, -1000)$ km, with the density profile modified according to Case-II.
We use the three-flavor neutrino oscillation parameters given in Table~\ref{tab:osc-param-value}, where we assume NO and $\sin^2\theta_{23}$ = 0.5.} \label{fig:CMB_varying_case_II_sc} \end{figure} \begin{figure} \centering \includegraphics[width=0.49\linewidth]{images/Oscillograms-appendix/Puu_CMB_fixed_core} \includegraphics[width=0.49\linewidth]{images/Oscillograms-appendix/lc/Puu-nu-200-case2-l} \\ \vspace{2mm} \includegraphics[width=0.49\linewidth]{images/Oscillograms-appendix/lc/Puu-nu-400-case2-l} \includegraphics[width=0.49\linewidth]{images/Oscillograms-appendix/lc/Puu-nu-600-case2-l} \\ \vspace{2mm} \includegraphics[width=0.49\linewidth]{images/Oscillograms-appendix/lc/Puu-nu-800-case2-l} \includegraphics[width=0.49\linewidth]{images/Oscillograms-appendix/lc/Puu-nu-1000-case2-l} \mycaption{Possible impact of larger core radii on three-flavor $\nu_{\mu}\rightarrow\nu_{\mu}$ oscillograms for the three-layered profile of Earth. The oscillograms are shown for $R_\text{CMB} = 3480 \text{ km} + \Delta R_\text{CMB}$ where $\Delta R_\text{CMB} = (0, +200, +400, +600, +800, +1000)$ km, with the density profile modified according to Case-II. We use the three-flavor neutrino oscillation parameters given in Table~\ref{tab:osc-param-value}, where we assume NO and $\sin^2\theta_{23}$ = 0.5.} \label{fig:CMB_varying_case_II_lc} \end{figure} While showing our results in Sec.~\ref{sec:results}, we have observed in Fig.~\ref{fig:CMB_variation_chisq_margin} that $\Delta\chi^2_\text{CMB}$ is asymmetric about the standard $R_\text{CMB}$ value in Case-II and Case-III. It also shows non-monotonic behavior at smaller test values of $R_\text{CMB}$. In this appendix, we demonstrate the possible origin of these effects at the probability level in the context of Case-II using Figs.~\ref{fig:CMB_varying_case_II_sc} and \ref{fig:CMB_varying_case_II_lc}, which show the possible impact of smaller and larger cores on three-flavor $\nu_{\mu}\rightarrow\nu_{\mu}$ oscillograms. In Fig.~\ref{fig:CMB_varying_case_II_sc}, we present $\nu_{\mu}\rightarrow\nu_{\mu}$ oscillograms for smaller core radii with $\Delta R_\text{CMB} = (-200, -400, -600, -800, -1000)$ km. Note that as $R_\text{CMB}$ decreases (the core becomes smaller), the oscillograms show changes in the energy range of $\sim$ 3 to 15 GeV for core-passing neutrinos. We observe that the oscillation patterns for core-passing neutrinos get compressed towards $\cos\theta_\nu = -1$. Moreover, the oscillations in the NOLR/parametric resonance region become more rapid as the core becomes smaller. This oscillatory behavior explains the non-monotonic nature of $\Delta \chi^2_\text{CMB}$ for smaller core radii. In Fig.~\ref{fig:CMB_varying_case_II_lc}, we present $\nu_{\mu}\rightarrow\nu_{\mu}$ oscillograms for larger core radii with $\Delta R_\text{CMB} = (+200, +400, +600, +800, +1000)$ km. Here also, as $R_\text{CMB}$ increases (the core becomes larger), we see changes in the oscillograms in the energy range of $\sim$ 3 to 15 GeV for core-passing neutrinos. However, here, the oscillation patterns for core-passing neutrinos get stretched towards smaller $|\cos\theta_\nu|$ values. For $\Delta R_\text{CMB} \gtrsim 400$ km, the oscillatory behavior in the NOLR/parametric resonance region becomes smooth, which accounts for the monotonic behavior of $\Delta \chi^2_\text{CMB}$ at larger core radii.
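The mechanism behind these oscillograms can be mimicked with a minimal numerical sketch. The following Python snippet (an illustration only, not the analysis code used in this work) evolves a three-flavor neutrino state through a three-layered Earth with constant-density slabs, for a vertically up-going trajectory ($\cos\theta_\nu=-1$); the layer densities, $Y_e=0.5$, and the oscillation parameter values are placeholders in the spirit of Table~\ref{tab:osc-param-value}:

\begin{verbatim}
import numpy as np
from scipy.linalg import expm

# P(nu_mu -> nu_mu) for a vertically up-going neutrino in a
# three-layer Earth (outer mantle, inner mantle, core), as a
# function of R_CMB. Densities are illustrative (Case-I style:
# they stay fixed while R_CMB varies); NO is assumed.
R_E, R_IM = 6371.0, 5701.0                 # Earth, inner-mantle radii (km)
RHO_OUT, RHO_IN, RHO_C = 3.5, 5.0, 11.3    # g/cm^3 (placeholders)

s12, s13, s23 = np.sqrt(0.304), np.sqrt(0.0222), np.sqrt(0.5)
c12, c13, c23 = np.sqrt(1 - s12**2), np.sqrt(1 - s13**2), np.sqrt(1 - s23**2)
dm21, dm31 = 7.4e-5, 2.5e-3                # eV^2
U23 = np.array([[1, 0, 0], [0, c23, s23], [0, -s23, c23]])
U13 = np.array([[c13, 0, s13], [0, 1, 0], [-s13, 0, c13]])
U12 = np.array([[c12, s12, 0], [-s12, c12, 0], [0, 0, 1]])
U = U23 @ U13 @ U12                        # PMNS matrix with delta_CP = 0

KM = 5.0677e9                              # 1 km in 1/eV (from hbar*c)

def P_mumu(E_GeV, R_CMB):
    Hvac = U @ np.diag([0.0, dm21, dm31]) @ U.T / (2e9 * E_GeV)   # eV
    segs = [(R_E - R_IM, RHO_OUT), (R_IM - R_CMB, RHO_IN),
            (2 * R_CMB, RHO_C),
            (R_IM - R_CMB, RHO_IN), (R_E - R_IM, RHO_OUT)]
    psi = np.array([0, 1, 0], dtype=complex)       # born as nu_mu
    for L_km, rho in segs:
        V = 7.63e-14 * 0.5 * rho                   # sqrt(2) G_F N_e in eV
        H = Hvac + np.diag([V, 0.0, 0.0])
        psi = expm(-1j * H * L_km * KM) @ psi      # constant-density slab
    return abs(psi[1])**2

for dR in (-500, 0, +500):
    print(dR, P_mumu(5.0, 3480.0 + dR))
\end{verbatim}

Scanning such survival probabilities over energy and zenith angle for a sequence of $R_\text{CMB}$ values reproduces, at a qualitative level, the compression (stretching) of the core-region patterns for smaller (larger) cores described above.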
As a result of the above-mentioned differences between the oscillation patterns for smaller and larger core radii, the values of $\Delta \chi^2_\text{CMB}$ are not symmetric about $\Delta R_\text{CMB} = 0$ km, i.e., about the standard $R_\text{CMB}$. The same observations described above for Case-II also hold for Case-III. \end{appendix} \bibliographystyle{JHEP}
\section{Introduction} \label{sec:1} Experiments with ultracold atoms \cite{BDZ2008, greiner1, greiner2} give direct access to many-body quantum systems that often cannot be described theoretically due to the enormous computational complexity \cite{Qsim}. Quite challenging in this respect are many-body models either in higher dimensions \cite{2D, kolovskypra, FHW2016} or with more modes per lattice site \cite{Bands, FW2016}, extending e.g. single-band Hubbard models \cite{JZ1998}. The coupling of energy bands in tilted Wannier-Stark systems was demonstrated experimentally \cite{Pisa, AW2011}. The effect of many-body interactions on the interband coupling was studied theoretically in great detail, in the weakly interacting limit \cite{ploetz} as well as in the context of strongly correlated Bose-Hubbard models \cite{tomadin, parraphd, Comp2015}. The latter many-body models, however, were restricted to two bands only and neglected the opening of the system arising from the coupling to higher-lying energy bands. In this paper, we discuss an open two-band Bose-Hubbard model with an additional periodic driving force to control the coupling between the two bands. Similar Floquet engineering of ultracold quantum gases is discussed, e.g., in \cite{Pisa-D,Innsbruck2016}. We include the loss channel to higher bands using a non-Hermitian Hamiltonian approach \cite{nonhermitian}. While this method is valid only for small loss, it still allows us access to the Floquet spectrum of the periodically driven system. In particular, we show that the simultaneous presence of dissipation and quasi-resonant driving between the two bands induces the formation of states with interesting properties. First, these states are extremely robust with respect to dissipation, and, second, they correspond to highly entangled superpositions of Fock states. The paper is organized as follows. Section \ref{sec:2} introduces the system we investigate as well as the origins of its temporal dependence. Subsection \ref{sec:2-2}, in particular, discusses the manifold of states coupled in our system. The dynamics induced by the dissipation and its application in the generation of highly entangled states are reported in section \ref{sec:3}. Section \ref{sec:concl} then concludes the paper. \section{Non-Hermitian approach and Bose-Hubbard Hamiltonian} \label{sec:2} This section introduces our many-body Bose-Hubbard model with two coupled energy bands, external driving, and the dissipation process. First we discuss the general properties of the system, before addressing the specific structure of the states participating in the dynamical evolution. \subsection{Many-Body Wannier-Stark System} \label{sec:2-1} The Wannier-Stark (WS) problem in its simplest form consists of a quantum particle trapped in a one-dimensional periodic potential, $V(x+d_L)=V(x)$, subjected to an additional static force of magnitude $F$. The Hamiltonian of this system, $\hat H_0 = \hat p^2/2m+V(x)+Fx$, can be diagonalized analytically within the tight-binding approach. The eigenenergies, $E_l=l d_L F$, form the so-called WS ladder responsible for the occurrence, for instance, of Bloch oscillations \cite{kolovskyphysrep}; see e.g. \cite{kol68PRE2003, Innsbruck2008} for many-body studies of them and \cite{AW2011, Innsbruck2013, Innsbruck2014} for related resonant tunneling phenomena. Here, $d_L$ is the lattice constant and $l$ is an integer indexing a single well or lattice site of the periodic potential $V(x)$.
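As a quick numerical illustration of the WS ladder (a minimal sketch with placeholder values of $J$, $F$, and $d_L$), one can diagonalize a tilted single-band tight-binding Hamiltonian and verify that the bulk level spacing equals $d_L F$, independently of the hopping:

\begin{verbatim}
import numpy as np

# Tight-binding Wannier-Stark Hamiltonian:
# H = -J sum_l (|l+1><l| + h.c.) + F*d_L sum_l l |l><l|
L_sites, J, F, d_L = 61, 1.0, 0.5, 1.0   # illustrative parameters
H = np.zeros((L_sites, L_sites))
for l in range(L_sites - 1):
    H[l, l + 1] = H[l + 1, l] = -J
H += np.diag(F * d_L * np.arange(L_sites))

E = np.linalg.eigvalsh(H)
# Away from the chain edges the spectrum is the equally spaced
# WS ladder E_l ~ l*d_L*F; the spacings below are all ~0.5.
print(np.round(np.diff(E)[L_sites//4 : 3*L_sites//4], 6))
\end{verbatim}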
The WS eigenstates are embedded resonances within the continuous unbounded quantum spectrum of $\hat H_0$. Therefore the WS eigenstates are metastable states with lifetimes $\tau_l\sim 1/\Gamma_{l}$. These lifetimes are associated with the imaginary part of the corrected single-particle eigenenergies $\varepsilon_l= E_l-i\Gamma_{l}/2$; see, e.g., Ref.~\cite{kolovskyphysrep} for details on the single-body case. Our system of interest is a many-body version of the WS Hamiltonian, ultimately constrained to the two lowest WS ladders; see the sketch in Fig.~\ref{fig:intro}. The respective Hamiltonian in second quantization can be built following the standard procedure \begin{eqnarray}\label{eq:01} \hat H &=& \int dx\, [\hat\phi^{\dagger}(x) \hat H_0\hat\phi(x)+ g\hat\phi^{\dagger}(x)\hat\phi^{\dagger}(x)\hat\phi(x)\hat\phi(x)],\;\;\;\;\;\; \end{eqnarray} with the field operators defined in terms of site-localized Wannier functions of the flat lattice, i.e. for $F=0$: $\hat\phi(x) = \sum_{\alpha,l} \chi_l^{\alpha}(x)\hat\alpha_l$. Here $\alpha$ is the Bloch band index and $l$ is the site index. The Hamiltonian might be seen as an infinite collection of tilted single-band Bose-Hubbard Hamiltonians, plus additional coupling terms between them. The coupling is mainly induced by the external force $F$, but also by the interparticle interaction characterized by the constant $g$ \cite{parraphd}. Clearly, the force breaks the translational invariance of the flat lattice, and therefore the Bloch-band interpretation, strictly speaking, fails. Instead, we have WS ladders (see Fig.~\ref{fig:intro}) and an entire zoo of single- and many-body processes to be identified and investigated in detail, see e.g. \cite{parraphd,ploetz,tomadin, Comp2015}. Generally, the total state space is spanned by the following Fock states $\{|n^{\alpha_1}_1 \ n^{\alpha_1}_2\cdots, n^{\alpha_2}_1 \ n^{\alpha_2}_2 \cdots, \ldots, n^{\alpha_k}_1 \ n^{\alpha_k}_2\cdots\rangle\}$, where $k\in \mathbb N$ is the number of bands taken into account. The integers $n^{\alpha_i}_l$ denote the number of atoms in the potential well $l$ and band $i$. \begin{figure}[t] \includegraphics[width=0.95\columnwidth]{fig1} \caption{\label{fig:intro} ({\em Color online}) Scheme of the single-particle Wannier-Stark ladder. The bold lines correspond to the two lowest (quasi)bound states (marked by $a$ and $b$) considered in this paper. The structure can be understood as the remainders of the Bloch bands after turning on the external field. The wave packet (red) represents a Wannier-Stark state that leaks to the continuum of higher bands. This process is marked by the arrow going to the left in the upper panel.} \end{figure} In what follows we restrict to only the two lowest WS ladders $\alpha_{j=1,2}$. The remaining ladders $\alpha_{j>2}$ act as an effective environment forming a continuum of bands (Fig.~\ref{fig:intro}). The system dynamics is studied by means of the reduced density operator $\hat\rho_S={\rm Tr_{\alpha_{j>2}}}[\hat\rho]$. The evolution of $\hat\rho_S$ is approximately described by a Markovian quantum master equation of the form \begin{eqnarray}\label{eq:02} \partial_t\hat\rho_S=-i(\hat H_{\rm eff}\hat\rho_S-\hat\rho_S \hat H_{\rm eff}^{\dagger}) +\sum_{\alpha,l} \hat C_l^{\alpha}\,\hat\rho_S\, \hat C_l^{\alpha\dagger}.
\end{eqnarray} $\hat H_{\rm eff} = \hat H - i\sum_{\alpha,l} \hat C_l^{\alpha \dagger}\hat C_l^{\alpha}$ is an effective non-Hermitian Hamiltonian defined by the quantum jump operators $\hat C_l^{\alpha}$ and the isolated-system Hamiltonian $\hat H$. The jump operators $\hat C_l^{\alpha}=\sqrt{\gamma_{l}^\alpha} \hat \alpha_l$ describe the particle loss to the higher bands. We use an approximate non-Hermitian Hamiltonian approach to address our problem. This method has the advantage that we have access to the spectrum of the effective Hamiltonian without having to solve the master equation for the reduced density matrix. The method consists in omitting the last term on the right-hand side of Eq.~\eqref{eq:02}. The imaginary parts of the effective Hamiltonian give diagonal contributions in the Fock basis, which are proportional to the occupation numbers in the upper WS ladder. In the end, we obtain a new effective Hamiltonian which is non-Hermitian, but can simply be diagonalized in the Fock space of fixed particle number. This dramatically simplifies the numerical effort. The non-Hermitian part of the effective Hamiltonian nonetheless reduces the norm of the evolving states and induces a dynamical dephasing via the decay rates $\gamma_{l}^\alpha$. The approach is valid when the $\gamma_{l}^{\alpha}$ are much smaller than the relevant energy scales of the system. Hence, here we use rates $\gamma_{l}^{\alpha}\ll |J_{\alpha}|$. Following Ref.~\cite{parraphd}, we can argue that, in a doubly periodic optical lattice, we obtain situations in which the two lowest (mini)bands are sufficiently isolated from the upper bands, such that the assumptions just discussed are indeed fulfilled. Furthermore, we assume that only the upper band is effectively coupled to the higher bands. Then our system Hamiltonian $\hat H$ for two bands can be written as \cite{parraphd, Comp2015} \begin{eqnarray}\label{eq:03} \hat H &=& \sum_{\alpha={\{a,b\}}}\sum_l J_{\alpha}(\hat\alpha^{\dagger}_{l+1}\hat \alpha_l+{\rm h.c.})+ \frac{U_{\alpha}}{2} \hat \alpha^{\dagger 2}_l \hat{\alpha}_l^2+\varepsilon_l^{\alpha}\hat n^{\alpha}_l\notag\\ &+& \sum_{\mu = \{0,\pm 1\}}\sum_l c_{\mu} F (\hat b^{\dagger}_{l+\mu}\hat a_l+{\rm h.c.})\notag\\ &+&\sum_l \frac{U_x}{2} (\hat b^{\dagger 2}_l\hat a_l^2+{\rm h.c.})+2U_x \hat n^{a}_l\hat n^{b}_l \,, \end{eqnarray} with $\varepsilon_l^{\alpha}=d_LFl+\Delta_{\alpha}$. Here $\Delta_{\alpha}=(\delta_{\alpha,b}-\delta_{\alpha,a})\Delta/2$ is the on-site energy splitting between the single-site modes. To simplify the notation, we set $a\equiv\alpha_1$ and $b\equiv\alpha_2$. $\delta_{i,j}$ is the Kronecker delta. The system parameters are: the hopping matrix elements $J_{a,b}$, the atom-atom interaction constants $U_{x,a,b}$, and the dipole coupling matrix elements $c_{\mu=0,\pm1}$ between the bands induced by the force. The lattice constant $d_L$ and $\hbar$ are set to one. The control parameter is the Stark force $F(t)$, which will be explicitly time-dependent. This allows us to study the response of the system to any specific protocol for the driving. Here, we restrict to a periodic driving $F(t)=A \cos(\omega t)$. We work within the reduced Fock basis $|\{n\}\rangle\equiv|n^a_1n^a_2\cdots n^a_L,n^b_1n^b_2\cdots n^b_L\rangle$, with dimension $\mathcal N_f=\frac{(N+2L-1)!}{N!\,(2L-1)!}$ for $N$ bosonic atoms in a system with $L$ lattice sites. In the rest of this work, we consider unit filling with $N=L$.
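For concreteness, the reduced Fock basis can be enumerated by brute force and checked against the dimension formula; a short sketch follows (illustrative helper code, not part of the original numerics; the enumeration is exponential and only meant for small $N$ and $L$):

\begin{verbatim}
from itertools import product
from math import comb

def fock_states(N, L):
    # Two-band Fock states |n^a_1..n^a_L, n^b_1..n^b_L> with N bosons
    # distributed over the 2L modes.
    return [s for s in product(range(N + 1), repeat=2 * L) if sum(s) == N]

N = L = 2
basis = fock_states(N, L)
print(len(basis), comb(N + 2 * L - 1, N))   # both give N_f = 10
\end{verbatim}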
The time evolution is numerically performed using a vector-optimized fourth-order Runge-Kutta routine presented and benchmarked in Ref.~\cite{Comp2015}. The aim of this paper is to study the interplay between dissipation and the periodic driving. It will be shown that the combination of these two processes reveals surprising dynamical features of our system. For simplicity, we set $\gamma_b\equiv\gamma_l^b$, and therefore the system-environment coupling term is $-i\gamma_b \sum_l\hat{b}_l^\dagger\hat{b}_l=-i\gamma_b \sum_l\hat n^b_l$. This choice is not restrictive since our results do not depend strongly on how the single-site decay rates are distributed. Our figure of merit is the expectation value of the inter-WS-ladder population inversion operator \begin{equation}\label{eq:04} W(t) = \langle \psi_t| \sum\nolimits_l(\hat n^b_l-\hat n^a_l)|\psi_t\rangle /\langle\psi_t|\psi_t\rangle\,. \end{equation} We restrict to initial states of the type $|\psi_0^1\rangle =\sum\nolimits_{\{n\}} D_{\{n\}}^0|n^a_1n^a_2\cdots,000\cdots\rangle$, for which all particles are in the lowest WS ladder at $t=0$. An example of this kind of state is the Mott-insulator state $|\psi^1_0\rangle = |111\cdots,000\cdots\rangle$, which is routinely prepared in many experiments such as those of Refs.~\cite{greiner1,BDZ2008}. Other types of initial conditions can be prepared taking the Mott state as a starting point. Examples of such different states are the so-called doublon states \cite{Innsbruck2013, kolovskypra} $|\psi_0^{2,3}\rangle=\{|2020\cdots,000\cdots\rangle,|0202\cdots,000\cdots\rangle\}$, which can be used to simulate quantum magnetism with ultracold atoms \cite{greiner2}. As will be shown later in this work, the precise choice of the initial state is not crucial, since the structure of the dynamically generated states, if they exist, will turn out to be robust with respect to imperfections in the initial conditions. \subsection{Resonant Manifolds} \label{sec:2-2} Following Ref. \cite{kolovskypra}, we transform the Hamiltonian in Eq.~\eqref{eq:03} into the rotating frame with respect to the operator $\hat A=\sum_{\alpha,l}(\Delta_{\alpha}-\frac{1}{2}U_{\alpha}+lF)\hat n^{\alpha}_l$. The single-site annihilation (creation) operators transform as $$\hat\alpha_l\rightarrow \hat \alpha_l\exp[-i(\Delta_{\alpha}-\frac{1}{2}U_{\alpha})t-il\,\theta(t)]\,,$$ with $\theta(t)=\int_0^tF(t')dt'$; after plugging this into the Hamiltonian, we obtain \begin{eqnarray}\label{eq:05} \hat H' &=& \sum_{\alpha={\{a,b\}}}\sum_l J_{\alpha}\hat\alpha^{\dagger}_{l+1}\hat \alpha_le^{-i\theta(t)}\notag\\ &+& \sum_{\mu,l} \tilde{c}_{\mu} [e^{-i((\omega+\Delta) t -\mu\theta(t))}+e^{-i((\Delta-\omega) t -\mu\theta(t))}]\hat b^{\dagger}_{l+\mu}\hat a_l\notag\\ &+&\sum_l \frac{U_x}{2} \hat b^{\dagger 2}_l\hat a_l^2e^{-i 2\Delta t}+E_{\nu}^{\lambda}[\{n\}]+{\rm H.c.} \end{eqnarray} where we have defined $\tilde{c}_{\mu}\equiv c_{\mu}A/2$ and the unperturbed energies as $E_{\nu}^{\lambda}[\{n\}] = U(\nu+\lambda)$, with $\nu=\frac{1}{2}\sum\nolimits_l (n^b_l-n^a_l)^2$, $\lambda=3\sum\nolimits_ln^a_ln^b_l$. \begin{figure}[t] \includegraphics[width=0.9\columnwidth]{fig2} \caption{\label{fig:reson} ({\em Color online}). Energy diagram and allowed couplings in the many-body system for $N/L=2/2$, i.e., for two particles in two lattice sites. The color of the arrow defines the process, i.e.
induced by hopping (blue), by the force (red and yellow), or by the interactions (green).} \end{figure} To simplify the discussion, we assume equal interaction strengths $U=U_a\approx U_{b}\approx U_{x}$. This approach is valid under the condition $J_{\alpha}\ll U_{a,b,x}$, as shown in \cite{parraphd}. The definition of $E_{\nu}^{\lambda}[\{n\}]$ permits us to split the Fock basis into sets of states with the same energy, which can be exploited to enhance the inter-WS-ladder coupling via resonant tunneling. The lowest-energy manifold is that for $\nu=L/2$ and $\lambda=0$, corresponding to Fock states with $n_l^a+n_l^b=1$. In Fig.~\ref{fig:reson}, we show the energy diagram for the case $N/L=2/2$ with the respective inter- and intra-WS-ladder couplings between the different states. Note that the pairs of states $\{|10,01\rangle,|01,10\rangle\}$, $\{|20,00\rangle,|00,02\rangle\}$ and $\{|01,01\rangle,|10,10\rangle\}$ are not directly coupled through the Hamiltonian terms. We will see that the principal effect of the combination of shaking and dissipation is to ``freeze'' the system into a certain linear combination of the first pair of states, which satisfies the condition $\sum_l (\hat n^b_l-\hat n^a_l)=0$. The transformed Hamiltonian $\hat H'$ has three different time scales related to the time-dependent terms in Eq.~\eqref{eq:05}. These scales are defined by the periods $T_{\omega}=2\pi/\omega$, $T_{\Delta}=\pi/\Delta$, and $T^{\pm}_{\mu\neq 0}\approx 2A|\mu|/\omega(\Delta\pm \omega)$. For typical experimental parameters \cite{parraphd}, $\Delta$ is the largest energy scale. Therefore we consider frequencies with $\Delta> \omega$, and then $T_{\omega}> T_{\Delta}\gg T^{\pm}_{\mu\neq 0}$. This implies that the dynamical effects of the fast-oscillating processes with frequencies $\omega_{\Delta}$, $\omega_{\mu\neq 0}^{\pm}$ happen within one period of the driving $F(t)$, and that they do not dominate the long-time evolution. \section{Dissipation-Induced Asymptotic Dynamics and Entanglement Generation} \label{sec:3} \begin{figure}[t] \includegraphics[width=\columnwidth]{fig3} \caption{\label{fig:1} ({\em Color online}). (a) Long-time average of the population inversion $W(t)$ as a function of the amplitude $A$ and frequency $\omega$ of the drive $F(t)$. The initial condition is the Mott state $|11,00\rangle$. Here $\omega_0=\Delta-U$ and $\gamma_b=0.08 J_{\alpha}$, with $J_{\alpha}\equiv |J_{a,b}|$. (b) Decay rates computed from the quasi-energy spectrum of $\hat H'$ for different system sizes $N/L$ vs. the dissipation strengths (see legends). The inset shows the projections of the Fock states onto the Floquet eigenstates for $N/L=2/2$. The most stable state corresponds to a linear combination of only two Fock states belonging to the lowest resonant manifold. The parameters taken from \cite{parraphd} are: $\Delta= 0.1$, $J_a= -0.006$, $J_b = 0.006$, $U_a= U_b= U_x= 0.034$, $c_0 = -0.098$, and $c_{\pm}= 0.035$.} \end{figure} We compute the long-time average $\overline{W(t)}=\lim_{T\rightarrow \infty}T^{-1}\int_0^{T}W(t)dt$ in order to analyze the dependence of the inter-WS-ladder dynamics on the driving and the dissipation. The results are shown in the density plot in Fig.~\ref{fig:1}-(a) for $\gamma_b/J_{\alpha}=0.08$ and $t\gg 2000 T_{\omega}$. This figure reveals the existence of resonance structures, i.e., finite-size areas in the parameter space at which the interband coupling saturates to $\overline{W(t)}\approx 0$, see the white areas in Fig.~\ref{fig:1}-(a).
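The kind of simulation behind Fig.~\ref{fig:1}-(a) can be illustrated by a drastically reduced toy version: a single site with the two modes $a$ and $b$ and $N=2$ atoms, evolved with a Runge-Kutta integrator under the driven, non-Hermitian on-site part of Eq.~\eqref{eq:03} (a sketch only: hopping and the $U_x$ cross terms are dropped; the parameter values echo those quoted in the caption of Fig.~\ref{fig:1}, with $\omega=\omega_0=\Delta-U$):

\begin{verbatim}
import numpy as np

# Basis |n_a, n_b> for N = 2 bosons on one site, two bands.
basis = [(2, 0), (1, 1), (0, 2)]
dim = len(basis)

def op(matelem):
    M = np.zeros((dim, dim), dtype=complex)
    for j, s in enumerate(basis):
        for i, t in enumerate(basis):
            M[i, j] = matelem(t, s)
    return M

# b^dag a |n_a,n_b> = sqrt(n_a*(n_b+1)) |n_a-1,n_b+1>
bdag_a = op(lambda t, s: np.sqrt(s[0] * (s[1] + 1))
            if t == (s[0] - 1, s[1] + 1) else 0.0)
n_a = np.diag([float(s[0]) for s in basis]).astype(complex)
n_b = np.diag([float(s[1]) for s in basis]).astype(complex)
I = np.eye(dim)

Delta, U, c0 = 0.1, 0.034, -0.098
A, w = 0.6, 0.1 - 0.034             # |c0*A/J| ~ 10, omega = omega_0
gamma = 0.08 * 0.006                # gamma_b = 0.08 * |J_a|

def H(t):
    drive = c0 * A * np.cos(w * t) * (bdag_a + bdag_a.conj().T)
    onsite = 0.5 * Delta * (n_b - n_a) \
           + 0.5 * U * (n_a @ (n_a - I) + n_b @ (n_b - I))
    return onsite + drive - 1j * gamma * n_b    # non-Hermitian loss

def rk4(psi, t, dt):
    f = lambda tt, y: -1j * (H(tt) @ y)
    k1 = f(t, psi); k2 = f(t + dt/2, psi + dt/2 * k1)
    k3 = f(t + dt/2, psi + dt/2 * k2); k4 = f(t + dt, psi + dt * k3)
    return psi + dt/6 * (k1 + 2*k2 + 2*k3 + k4)

psi, t, dt = np.array([1, 0, 0], dtype=complex), 0.0, 0.05
for _ in range(200000):
    psi = rk4(psi, t, dt); t += dt

norm = np.vdot(psi, psi).real
print(np.vdot(psi, (n_b - n_a) @ psi).real / norm)  # W(t), Eq. (4)
\end{verbatim}

The norm of $|\psi_t\rangle$ decays under the non-Hermitian term, which is why $W(t)$ in Eq.~\eqref{eq:04} is normalized by $\langle\psi_t|\psi_t\rangle$.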
The regions are around discrete values of the frequency, $\omega_n\propto \omega_0/n$ with $n\in \mathbb N$, and amplitudes $A_n\propto 1/\sqrt{\omega_0/\omega}\propto \sqrt{n}$ in the strong-driving regime, i.e. for $\tilde{c}_{\mu}\gg J_{\alpha}$ (see yellow-dashed lines in Fig.~\ref{fig:1}-(a)). For weak driving, i.e., $\tilde{c}_{\mu}\sim J_{\alpha}$, the equidistant resonance structure no longer appears. Instead, only one area is found in the high-frequency regime around $\omega_0/\omega \approx 1/3$. In the low-frequency regime, there is a large region in the vicinity of $A=1$. We learn from Fig.~\ref{fig:1}-(a) that the probability of hitting a resonant structure is rather large. Moreover, the only difference between these structures is the time scale over which $W(t)\rightarrow 0$ for $t \gg T_{\omega}$. $W(t)= 0$ means that the asymptotic state corresponds to an equal-population state, for which the particle exchange between the two WS ladders is frozen. In order to understand the origin of the asymptotic state, we study the quasi-energy spectrum of $\hat H'$. We diagonalize the Floquet operator, i.e. the one-cycle evolution operator $\hat U(T_{\omega})$, and plot the imaginary parts of the quasi-energies $\varepsilon_i = E_i-i\Gamma_i/2$. Fig.~\ref{fig:1}-(b) presents the decay rates $\Gamma_i$ as a function of the ratio $\gamma_b/J_{\alpha}$. Only if $N$ is even do highly stable states with small decay rates $\Gamma_i$ exist. For $N$ odd, we do not observe equilibration since no stable states are found. For $N/L=2/2$, the stable state is always found and can be expressed as a linear combination of the states $\{|10,01\rangle,|01,10\rangle\}$. Here, the individual weights become equal when increasing the dissipation strength. In the case $N/L=4/4$, the stable state only appears when increasing the dissipation strength up to a critical value, which must be sufficiently strong to influence the long-time dynamics. Given an initial state $|\psi_0^i\rangle$, the asymptotic dynamics is driven by the non-Hermitian part of $\hat H_{\rm eff}$. To characterize the system's steady state, if it exists, we write it in the Fock basis representation as \begin{eqnarray}\label{eq:06} |\psi_{t\ggg T_{\omega}}\rangle=\sum_{\{n\}} D_{\{n\}}^t |n^a_1n^a_2\cdots n^a_L,n^b_1n^b_2\cdots n^b_L\rangle\,. \end{eqnarray} This permits us to study the asymptotic normalized probability distribution $p(\{|D_{\{n\}}^t|^2\})$ for different system sizes. Let us start with the minimal system $N/L=2/2$. The state $|\psi_{t\gg T_{\omega}}\rangle$ turns out to be the expected equally-weighted linear combination of the Fock states $\{|10,01\rangle,|01,10\rangle\}$ predicted in Fig.~\ref{fig:1}-(b) for $\gamma_b/J_{\alpha}\approx 0.08$. Then we end up in the state \begin{equation}\label{eq:07} |\psi_{t\gg T_{\omega}}\rangle\approx \frac{1}{\sqrt{2}}(|01,10\rangle+e^{i\phi}|10,01\rangle)\,, \end{equation} which resembles a Bell state. The connection to the Bell basis is straightforward if we use instead the single-site population inversion number $w_l\equiv n^b_l- n^a_l$, such that we map the configuration $|n^a_1n^a_2\cdots n^a_L,n^b_1n^b_2\cdots n^b_L\rangle$ onto $|w_1w_2\cdots w_L\rangle$. Note that, for $|\psi_{t\gg T_{\omega}}\rangle$ in Eq.~\eqref{eq:07}, $w_l$ can only take the values $\pm 1$ and satisfies $\langle \psi_{t\ggg T_{\omega}}|\sum\nolimits_l\hat w_l|\psi_{t\ggg T_{\omega}}\rangle \rightarrow 0$.
We introduce the pseudo-spin representation $w_l=-1_l \equiv \downarrow_l$ and $w_l=1_l \equiv \uparrow_l$, with which the state becomes the Bell state $\frac{1}{\sqrt{2}}(\ket{\uparrow\downarrow} - \ket{\downarrow\uparrow}) \equiv |\Psi^-\rangle$. The phase factor $\phi \approx \pi$ is extracted from the numerics. \begin{figure}[t] \includegraphics[width=0.83\columnwidth]{fig4} \caption{\label{fig:2} ({\em Color online}) (a) and (b) show $W(t)$ for $N/L=2/2$ and the final-time normalized distribution $p(\{|D^t_{\{n\}}|^2\})$ for the asymptotic state vs. the dissipation strength. The initial condition is the Mott-insulator state defined in the main text. Panels (c) and (d) show $p(\{|D^t_{\{n\}}|^2\})$ for $N/L=4/4$ and $N/L=6/6$. Initial conditions: (black filled circles) $\ket{\psi_0^1}$, (blue squares) $\frac{1}{\sqrt{2}}(\ket{\psi_0^2}+\ket{\psi_0^3})$ and (red filled triangles) $\frac{1}{\sqrt{2}}\ket{\psi_0^1}+\frac{1}{2}(\ket{\psi_0^2}+\ket{\psi_0^3})$ defined in the main text. The green asterisks in panel (c) correspond to an average over 100 initial random states of the type $\sum D_{\{n\}}|n^a_1n^a_2\cdots,00\cdots\rangle$. Note that the states of interest satisfy $\sum\nolimits_l(\hat n^b_l-\hat n^a_l)=0$ and $n^a_l+n^b_l=1$. These conditions imply a band-exchange symmetry of the populations discussed at the end of section \ref{sec:3}. The parameters are the same as in Fig.~\ref{fig:1}, with $|c_{\mu}A/J_a|\simeq 10$ and $\omega=\omega_0$.} \end{figure} Remarkably, the results for $N/L=2/2$ can be extended to larger systems. The probability distributions $p(\{|D_{\{n\}}^t|^2\})$ for $N/L=\{4/4; 6/6\}$ are shown in Fig.~\ref{fig:2}-(c,d). Again, pairs of equally-weighted states have the largest contributions. For $N/L=4/4$ the pseudo-spin representation of $|\psi_{t\gg T_{\omega}}\rangle$ is given by \begin{eqnarray}\label{eq:08} |\psi_{t\gg T_{\omega}}\rangle &\approx & |D_1^t| (|\uparrow\uparrow\downarrow\downarrow\rangle +e^{i\phi_1}|\downarrow\downarrow\uparrow\uparrow\rangle)\notag\\ &+& |D_2^t| e^{i\theta_1^2}(|\uparrow\downarrow\uparrow\downarrow\rangle+e^{i\phi_2}|\downarrow\uparrow\downarrow\uparrow\rangle)\notag\\ &+& |D_3^t| e^{i\theta_1^3}(|\downarrow\uparrow\uparrow\downarrow\rangle+e^{i\phi_3}|\uparrow\downarrow\downarrow\uparrow\rangle)\,. \end{eqnarray} It satisfies $w_l \approx \pm 1$ and $W(t\gg T_{\omega})\approx 0$. From the numerics, the phase differences are found this time to be $\phi_i\approx 0$. A Bell-like state structure is recovered after rewriting $|\psi_{t\gg T_{\omega}}\rangle$ using a set of new labels based on the spatial lattice bipartition with respect to the center of the lattice \begin{equation}\label{eq:09} |\{w\}\rangle = |\underbrace{w_1w_2\cdots w_{L/2}}_\text{Left}\rangle\otimes|\underbrace{w_{L/2+1}\cdots w_{L}}_\text{Right}\rangle\,.
\end{equation} A careful analysis of the structure of the asymptotic state in the pseudo-spin representation suggests the following definitions $$d\equiv \downarrow_1\downarrow_2\,,\,\,\,\bar{d}\equiv \uparrow_3\uparrow_4\,,\,\,\,o\equiv \downarrow_1\uparrow_2\,,\,\text{and}\,\,\,\bar{o}\equiv \uparrow_3\downarrow_4\,,$$ with which the state takes the form \begin{eqnarray}\label{eq:10} |\psi_{t\ggg T_{\omega}}\rangle &\approx & |D_1^t| (|d\bar{d}\rangle+|\bar{d}d\rangle)\notag\\ &+& |D_2^t| e^{i\theta_1^2}(|o\bar{o}\rangle+|\bar{o}o\rangle)\notag\\ &+& |D_3^t| e^{i\theta_1^3}(|oo\rangle+|\bar{o}\bar{o}\rangle)\notag\\ &\approx & |D_1^t| |\Psi^+_d\rangle+ |D_2^t| e^{i\theta_1^2}|\Psi^+_o\rangle + |D_3^t| e^{i\theta_1^3}|\Phi^+_o\rangle\,.\notag\\ \end{eqnarray} Hence, $|\psi_{t\gg T_{\omega}}\rangle$ can be mapped onto a linear combination of the Bell states $|\Psi^+_x\rangle\equiv \frac{1}{\sqrt{2}}(\ket{x \bar{x}} + \ket{\bar{x}x})$ and $|\Phi^+_x\rangle\equiv \frac{1}{\sqrt{2}}(\ket{xx} + \ket{\bar{x}\bar{x}})$, taking into account the new set of labels $x\in\{d,\bar{d},o,\bar{o}\}$. Therefore, spatial entanglement between the two lattice subsystems persists when increasing the system size. As anticipated, the structure of the asymptotic state does not depend much on the initial state, see Fig.~\ref{fig:2}-(c). We observe a dependence of the weights of the relevant states on the initial-state choice, though the symmetry is preserved. This is verified after averaging over randomized linear combinations of lower-band Fock states: the asymptotic distribution $p(\{|D_{\{n\}}^t|^2\})$ preserves the above Bell-state-like structure. The time scale for the formation of the asymptotic state depends on the system-environment coupling strength and the system size \footnote{For the systems $N/L=4/4$ and $N/L=6/6$ the asymptotic states occur for $t\sim 2000 T_{\omega}$ and $t\sim 8000 T_{\omega}$, respectively, given $\gamma_b=(0.08-0.1) J_{\alpha}$.}, see the discussion at the beginning of this section. This time scale depends on the norm weight of the basin of attraction of the asymptotic states of interest here. Therefore, the asymptotic state structure is found to be very robust with respect to imperfections in the initial-state preparation. In Fig.~\ref{fig:2}-(d) we show the probability distribution for $N/L=6/6$ given the initial states $|\psi_0^i\rangle$. Despite the large dimension of the associated Fock space, we immediately notice that the relevant contributions in the asymptotic state come, again, in pairs, as in the previous cases. Following transformations as described above, we have \begin{eqnarray}\label{eq:11} |\psi_{t\gg T_{\omega}}\rangle &=& |D_1^t| (|d\bar{d}\rangle+|\bar{d}d\rangle)\notag\\ &+&|D_2^t|e^{i\theta_1^2}(|sp\rangle+|ps\rangle)\notag\\ &+&|D_3^t|e^{i\theta_1^3}(|\bar{o}p\rangle+|p\bar{o}\rangle)\notag\\ &+&|D_4^t|e^{i\theta_1^4}(|\bar{p}p\rangle+|p\bar{p}\rangle)\notag\\ &\vdots& \notag\\ &+&|D_{10}^t|e^{i\theta_1^{10}}(|s\bar{s}\rangle+|\bar{s}s\rangle)\,. \end{eqnarray} with \begin{eqnarray}\label{eq:12} d\equiv\downarrow_1\downarrow_2\downarrow_3,\,\,\bar{d}\equiv\uparrow_4\uparrow_5\uparrow_6 && o\equiv\downarrow_1\uparrow_2\downarrow_3,\,\,\bar{o}\equiv\uparrow_4\downarrow_5\uparrow_6\notag\\ s\equiv\downarrow_1\downarrow_2\uparrow_3,\,\,\bar{s}\equiv\uparrow_4\uparrow_5\downarrow_6 && p\equiv\uparrow_1\downarrow_2\downarrow_3,\,\,\bar{p}\equiv\downarrow_4\uparrow_5\uparrow_6\notag \,.
\end{eqnarray} We see that again a sort of superposition of Bell states is recovered. In order to confirm that the states for larger systems are indeed spatially entangled, we compute the entanglement negativity $E_{\mathcal N}(\hat\rho_t)=(\|\hat\rho^{\Gamma_{\mathcal R}}\|_1-1)/2$ \cite{Luba2011,negativity}. It is defined via the partial transpose of the density operator, $\hat\rho^{\Gamma_{\mathcal R}}$, taken over the right-side subsystem partition $\mathcal R$, whose matrix elements in the $w_l$ representation are \begin{eqnarray}\label{eq:13} \langle\{s\}|\hat\rho_t^{\Gamma_{\mathcal R}}|\{v\}\rangle = \sum_{\{w\},\{w'\}} D_{\{w\}}^tD_{\{w'\}}^{t\,*} \prod_{l=1}^{L/2}\delta_{s_l,w_l}\delta_{w_l',v_l}\notag\\ \times\prod_{l=L/2+1}^{L}\delta_{s_l,w_l'}\delta_{w_l,v_l}\,. \end{eqnarray} \begin{figure}[t] \includegraphics[width=\columnwidth]{fig5} \caption{\label{fig:3} ({\em Color online}) Entanglement negativity for $N/L=2/2$ and $N/L=4/4$ for increasing dissipation $\gamma_b/J_{\alpha}$. The initial state is a non-entangled Mott-insulator state. The parameters are the same as in Figs.~\ref{fig:1} and \ref{fig:2}.} \end{figure} It is known that $E_{\mathcal N}(\hat\rho_t)$ is an entanglement witness, i.e. it only detects entanglement rather than quantifying it. The matrix is numerically diagonalized using LAPACK subroutines to obtain the eigenvalues. We evaluate the trace norm as $\|\hat\rho^{\Gamma_{\mathcal R}}\|_1=\sum_i|\lambda_i|$. In Fig.~\ref{fig:3}, we show the onset of entanglement between the subsystem partitions ${\mathcal L}$eft and ${\mathcal R}$ight of the lattice, see also \cite{Luba2011} where a similar partition was chosen. The figure confirms that, for larger systems, the evolving state gets entangled in time. Moreover, it also shows a saturation exactly at the interband equilibrium, i.e., when the system's steady-state regime is reached. For the case $N/L=2/2$, this is seen in comparison with Fig.~\ref{fig:2}(a). The previous analysis of the asymptotic dynamics reveals that the system is driven into a quasi-loss-free subspace defined by the following conditions: $(i)$ $n_l^{a}+n_l^{b}=1$ (or equivalently $w_l=\{-1,1\}$), and $(ii)$ $\sum\nolimits_l w_l = 0$. These conditions are satisfied only for a small subset of states from the low-energy manifold $E^{L/2}_0$ which are not directly coupled via the Hamiltonian terms in Eq.~(\ref{eq:05}) (see Fig.~\ref{fig:reson}). The structure of the asymptotic state implies the existence of a subspace of states $\mathcal S$ formed by pairs of Fock states, the linear combination of which is invariant under the exchange of the on-site populations between the two bands, for example, $\sim|\uparrow\downarrow\uparrow\downarrow\cdots\rangle \pm |\downarrow\uparrow\downarrow\uparrow\cdots\rangle$. Whether the state is symmetric or anti-symmetric, condition $(ii)$ is always satisfied (see inset in Fig.~\ref{fig:2}-(b)). In addition, the asymptotic state never has more than one atom per lattice site. Therefore, it is ``free'' of interaction effects, and we might operate on it via the dipole $c_{0}$ term without destroying its symmetry $(i,ii)$, with an appropriate choice of the drive $F(t)$. The symmetry exposed above is broken if $N$ is odd, since the internal subspace $\mathcal S$ can then be split into two subspaces with $\sum w_l=\pm 1$.
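As an independent cross-check of this procedure, the partial transpose and its trace norm are easy to reproduce for the minimal case; the sketch below (illustrative Python, using NumPy's LAPACK-backed eigensolver in place of direct LAPACK calls) evaluates $E_{\mathcal N}$ for the Bell-like state of Eq.~\eqref{eq:07} with $\phi=\pi$:

\begin{verbatim}
import numpy as np

# E_N = (||rho^{T_R}||_1 - 1)/2 for a left/right bipartition.
# State: (|01,10> - |10,01>)/sqrt(2), i.e.
# (|down,up> - |up,down>)/sqrt(2) in the pseudo-spin labels w_l.
dL = dR = 2
psi = np.zeros((dL, dR), dtype=complex)
psi[0, 1] = 1 / np.sqrt(2)      # |down>_L |up>_R
psi[1, 0] = -1 / np.sqrt(2)     # -|up>_L |down>_R

rho = np.einsum('ab,cd->abcd', psi, psi.conj())    # rho[(a,b),(c,d)]
rho_TR = rho.transpose(0, 3, 2, 1).reshape(dL*dR, dL*dR)  # transpose R
lam = np.linalg.eigvalsh(rho_TR)   # rho^{T_R} is Hermitian
print((np.abs(lam).sum() - 1) / 2) # 0.5: maximal for one entangled pair
\end{verbatim}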
Yet if the evolving state can be written as a linear combination of pairs of states with the mirror symmetry, we have $\langle \psi_{t\ggg T_{\omega}}|\sum\nolimits_l\hat w_l|\psi_{t\ggg T_{\omega}}\rangle \approx 0,\pm 2$, i.e., there is no privileged value for the inter-WS-ladder exchange and the system will not equilibrate. This is the reason why we observe no stable state in the quasi-energy spectrum analysis in Fig.~\ref{fig:1}-(b). Odd $N$ also affects the definition of the spatial bipartition that generates entanglement and the Bell-like structures, since there is no geometric center that allows for the introduction of the labels $x$. Therefore the spatial entanglement, if present, is far more complicated to study. For instance, if we set periodic boundary conditions, there is no geometric center and we cannot straightforwardly define spatial bipartitions as in Eq.~(\ref{eq:09}). \section{Conclusions} \label{sec:concl} We have investigated a two-band many-body Wannier-Stark model with an interplay between periodic driving and an effective dissipative coupling to higher bands. This is a first study combining previous work on closed two-band Bose-Hubbard models \cite{parraphd, Comp2015} with open single-band many-body problems \cite{EPJ2015}. The driven many-body system shows remarkable features such as the dynamical generation of entanglement and the evolution into dynamically decoupled, asymptotically stable states. These states obey a band-population exchange symmetry that is responsible for the creation of maximally entangled Bell-like states, which appear after a convenient reorganization of their Fock representation. The spatial ``left-right'' entanglement is detected by means of the negativity as an entanglement witness. Our findings turn out to be sensitive only to the precise geometry of the system, i.e., they occur for a lattice with an even number of sites and hard-wall boundary conditions. Our results are, as shown, robust to imperfections in the preparation of the initial state, which gives good perspectives for future applications in experiments with ultracold atoms \cite{Entangle,Greiner2015,Bloch2015} and particularly in quantum information processing \cite{lidar}. \section{Acknowledgments} The authors acknowledge financial support of the University del Valle (project CI 7996). C. A. Parra-Murillo gratefully acknowledges financial support of COLCIENCIAS (grant 656), and S.W. support by the FIL2014 program of Parma University. We warmly thank J. H. Reina and C. Susa for lively discussions. Moreover, we are very grateful to Elmar Bittner for computational and logistic support at the ITP in Heidelberg.
\section{Introduction} We conjecture that holomorphic locally homogeneous geometric structures on complex tori are translation invariant. Our motivation is from Ghys \cite{Ghys:1996}: holomorphic nonsingular foliations of codimension one on any complex torus admit a subtorus of symmetries of codimension at most one. Let us briefly recall Ghys's classification of holomorphic codimension one nonsingular foliations on complex tori. The simplest are those given by the kernel of some holomorphic 1-form \(\omega\). Since \(\omega\) is necessarily translation invariant on the complex torus \(T\), the foliation will also be translation invariant. Assume now that \(T=\C{n}/ \Lambda\), with \(\Lambda\) a lattice in \(\C{n}\), and that there exists a linear form \(\pi \colon \C{n} \to \C{}\) sending \(\Lambda\) to a lattice \(\Lambda'\) in \(\C{}\). Then \(\pi\) descends to a map \(\pi \colon T \to T'\defeq\C{}/ \Lambda'\). Pick a nonconstant meromorphic function \(u\) on the elliptic curve \(T'\) and consider the meromorphic closed 1-form \(\Omega=\pi^{*}(udz) + \omega\) on \(T\). It is easy to see that the foliation given by the kernel of \(\Omega\) extends to all of \(T\) as a nonsingular holomorphic codimension one foliation. This foliation is not invariant under all translations of \(T\), but only under those which lie in the kernel of \(\pi\). They act on \(T\) as a subtorus of symmetries of codimension one. Ghys's theorem asserts that all nonsingular codimension one holomorphic foliations on complex tori are constructed in this way. In particular, they are invariant under a subtorus of complex codimension one. Moreover, for generic complex tori, there are no nonconstant meromorphic functions and, consequently, all holomorphic codimension one foliations on those tori are translation invariant. Our aim is to generalize Ghys's result to other holomorphic geometric structures on complex tori and to find the smallest amount of symmetry those geometric structures can have. In particular, using a result from~\cite{Dumitrescu:2010b,Dumitrescu:2011} we prove here that on complex tori with no nonconstant meromorphic functions (algebraic dimension zero), all holomorphic geometric structures are translation invariant. Notice that there are \emph{holomorphic Cartan geometries} (see section~\ref{section:cartan geometries} for the precise definition) or holomorphic \emph{rigid geometric structures in Gromov's sense} (see~\cite{Dumitrescu:2011}) on complex tori which are not translation invariant: just add any holomorphic affine structure (the standard one, for example) to one of the previous holomorphic foliations of Ghys. Then the subtorus of symmetries is of codimension one: all symmetries of the foliation preserve the affine structure. There are holomorphic rigid geometric structures on complex tori without any symmetries. For example, any projective embedding of an abelian variety into a complex projective space is a holomorphic rigid geometric structure. There are no symmetries: any symmetry preserves the fibers of the map; since the map is an embedding, the only symmetry is the identity. Nevertheless, we conjecture that all holomorphic \emph{locally homogeneous} geometric structures on tori are translation invariant. Locally homogeneous geometric structures naturally arise in the following way. Start with a holomorphic rigid geometric structure in Gromov's sense \(\phi\) on a complex manifold \(M\); think of a holomorphic affine connection or of a holomorphic projective connection.
Assume that the local holomorphic vector fields preserving \(\phi\) act transitively on the manifold: they span the holomorphic tangent bundle \(TM\) at each point \(m \in M\). The rigidity of \(\phi\) implies that those local vector fields form a finite-dimensional Lie algebra (associated to a connected complex Lie group \(G\)). Under these assumptions, \(M\) is then locally modelled on a complex \(G\)-homogeneous space \(X\) and, consequently, we get a holomorphic locally homogeneous geometric structure on $M$ locally modelled on \((X,G)\), also called a holomorphic \((X,G)\)-structure (see the precise definition in section~\ref{section:Notations and main result}). For various types of complex homogeneous spaces \((X,G)\), we develop some general techniques below to demonstrate that all holomorphic \((X,G)\)-structures on complex tori are translation invariant. We will prove this below for all \((X,G)\) in complex dimensions 1 and 2, by running through the classification of complex homogeneous surfaces following \cite{McKay:2012,McKay/Pokrovskiy:2010,McKay:2014}. We also prove it for \(G\) nilpotent. We further conjecture that holomorphic locally homogeneous geometric structures on compact quotients \(P/\pi\), with \(\pi\) a discrete cocompact subgroup of a complex Lie group \(P\), lift to right-invariant geometric structures on \(P\). This will be proved here only for those quotients which are of algebraic dimension zero. For \(P=SL(2,\C{})\) this generalizes a result of Ghys about holomorphic tensors on \(SL(2,\C{})/\pi\)~\cite{Ghys:1995}. Notice also that these results do not hold in the real analytic category. For example, there are real analytic flat affine structures on two-dimensional real tori (constructed and classified by Nagano and Yagi) which are not translation invariant \cite{Benoist:1999}. \section{Notation and main result}\label{section:Notations and main result} A complex homogeneous space \((X,G)\) is a connected complex Lie group \(G\) acting transitively, effectively and holomorphically on a complex manifold \(X\). A \emph{holomorphic \((X,G)\)-structure} (also known as a \emph{holomorphic locally homogeneous structure modelled on \((X,G)\)}) on a complex manifold \(M\) is a maximal collection of local biholomorphisms of open subsets of \(M\) to open subsets of \(X\) (the \emph{charts} of the structure), so that any two charts differ by the action of an element of \(G\), and every point of \(M\) lies in the domain of a chart. Every holomorphic \((X,G)\)-structure on a complex manifold \(M\) has a \emph{developing map} \(\delta \colon \tilde{M} \to X\), a local biholomorphism on the universal covering space of \(M\), so that the composition of \(\delta\) with any local section of \(\tilde{M} \to M\) is a chart of the structure. There is a unique developing map, up to post-composition with elements of \(G\). Any developing map is equivariant for a unique group morphism \(h \colon \fundamentalGroup{M} \to G\), the \emph{holonomy morphism} of the developing map. The reader can consult \cite{Goldman:2010} as a reference on locally homogeneous structures. Throughout this paper, we take an \((X,G)\)-structure on a complex torus \(T=V/\Lambda\), the quotient of \(V=\C{n}\) by a lattice \(\Lambda \subset V\), with holonomy morphism \(h \colon \Lambda \to G\) and developing map \(\delta \colon V \to X\). Let \(x_0 \defeq \delta(0)\). Denote by \(H = G^{x_0}\subset G\) the stabilizer of the point \(x_0 \in X\) and let \(n=\dimC X\).
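For orientation, consider the simplest example: the standard complex affine structure on \(T=V/\Lambda\), with \(X=\C{n}\) and \(G=\GL{n,\C{}} \ltimes \C{n}\), has developing map \(\delta=\operatorname{id}_{\C{n}}\) and holonomy \(h(\lambda) \colon x \mapsto x+\lambda\). Here \(h\) visibly extends to the morphism \(h \colon v \in V \mapsto (x \mapsto x+v) \in G\), for which \(\delta\) is equivariant, so this structure is translation invariant; lemma~\ref{lemma:algebraic.description} below characterizes translation invariance by precisely this extension property.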
Among interesting \((X,G)\)-structures on tori we note: \[ \begin{array}{@{}rll@{}} \toprule X & G & \text{\((X,G)\)-structure} \\ \midrule \C{n} & \GL{n,\C{}} \ltimes \C{n} & \text{complex affine structure} \\ \mathbb{P}^n & PSL(n+1, \C{}) & \text{complex projective structure} \\ \C{n} & O(n, \C{}) \ltimes \C{n} & \text{flat holomorphic Riemannian metric} \\ \pr{\sum_i z_i^2=0} \subset \mathbb{P}^{n+1} & PO(n+2, \mathbb{C}) & \text{flat holomorphic conformal structure} \\ \bottomrule \end{array} \] The main theorem of our article is the following: \begin{theorem} Suppose that \((X,G)\) is a complex homogeneous curve or surface. Then every holomorphic \((X,G)\)-structure on any complex torus is translation invariant. \end{theorem} For tori $T$ of higher dimension, we prove that the translation invariant \((X,G)\)-structures on $T$ form a union of connected components in the deformation space of \((X,G)\)-structures (see Theorem \ref{theorem:translation invariant.deformation}). \section{The symmetry group}\label{section:symmetries} Suppose that \(M\) is a manifold with an \((X,G)\)-structure with developing map \(\delta \colon \tilde{M} \to X\) and holonomy morphism \(h \colon \fundamentalGroup{M} \to G\). Let \(Z_{\tilde{M}}\) be the group of all pairs \((f,g)\) so that \(f \colon \tilde{M} \to \tilde{M}\) is a diffeomorphism equivariant for some automorphism \(a \in \Aut{\fundamentalGroup{M}}\), i.e. \(f \circ \gamma = a(\gamma) \circ f\), descending to a quotient map \(\bar{f} \colon M \to M\), and \(g \in G\) is an element with \(\delta \circ f = g \delta\). Let \(f_* \defeq a\). We can see the fundamental group as a discrete normal subgroup of \(Z_{\tilde{M}}\) through the map \(\gamma \in \fundamentalGroup{M} \mapsto \pr{\gamma,h\of{\gamma}} \in Z_{\tilde{M}}\). The automorphism group of the \((X,G)\)-structure is the quotient \[Z_M \defeq Z_{\tilde{M}}/\fundamentalGroup{M}.\] Thus, a diffeomorphism of \(M\) is an automorphism just when, lifted to \(\tilde{M}\), it reads through the developing map \(\delta\) as an element of \(G\). We extend the holonomy morphism from \(\fundamentalGroup{M}\) to \(Z_{\tilde{M}}\) by defining \(h \colon Z_{\tilde{M}} \to G\), \(h(f,g) \defeq g\). Let \(Z_X \subset G\) be the image of this extended \(h\). The universal covering map \(\tilde{M} \to M\) is equivariant for \(Z_{\tilde{M}} \to Z_M\), while the developing map \(\delta \colon \tilde{M} \to X\) is equivariant for \(Z_{\tilde{M}} \to Z_X\). The notation \(Z_X\) is explained by the following lemma. \begin{lemma} \label{lemma:centralizer} The identity component \(Z_X^0 \subset Z_X\) is the identity component of the centralizer \(\centralizer{h\of{\pi}}{G}\) in \(G\) of the image of \(\pi \defeq \fundamentalGroup{M}\). \end{lemma} \begin{proof} Let \(Z \defeq \centralizer{h\of{\pi}}{G}\). For any \((f,g)\) in the identity component \(Z_{\tilde{M}}^0\) of \(Z_{\tilde{M}}\), \(f_*\) is the identity. Since the action of \(Z_{\tilde{M}}^0\) on \(\tilde{M} \) commutes with the action of \( \pi_1(M) \) and \(\delta\) is equivariant, \(Z_X^0=h(Z_{\tilde{M}}^0)\) commutes with \(h(\pi)\). Hence, \(Z_X^0\) lies in the centralizer \(Z\). Conversely, every vector field \(z \) in the Lie algebra of \(Z\) is a complete vector field on \(X\) commuting with \(h\of{\pi}\). Pull \(z\) back by \(\delta\) to obtain a complete vector field on \(\tilde{M}\), invariant under the action of \(\pi\) by deck transformations, i.e. the vector field descends to \(M\).
The corresponding flow is a one-parameter subgroup in \(Z_{\tilde{M}}^0\) whose image under \(h\) is the flow of \(z\). Therefore the identity component of \(Z\) lies in \(Z_X^0=h(Z_{\tilde{M}}^0)\). \end{proof} For a complex homogeneous space \((X,G)\), let \(\bar{G}\) be the closure of \(G\) in the biholomorphism group of \(X\), in the topology of uniform convergence on compact sets. A complex homogeneous space \((X,G)\) is \emph{full} if \(G=\bar{G}\). If \(\bar{G}\) is a complex Lie group acting holomorphically on \(X\), then the complex homogeneous space \(\pr{X,\bar{G}}\) is the \emph{fulfillment} of \((X,G)\). It turns out that every complex homogeneous space of complex dimension 1 or 2 has a fulfillment, i.e. \(\bar{G}\) is a finite-dimensional complex Lie group acting holomorphically on \(X\). It does not appear to be known if there are complex homogeneous spaces which do not have a fulfillment. A locally homogeneous structure is \emph{full} if its model is full. Take complex numbers \(\lambda_1, \lambda_2, \dots, \lambda_n\), algebraically independent, define the diagonal matrix \[ A= \begin{pmatrix} \lambda_1 \\ & \lambda_2 \\ & & \ddots \\ & & & \lambda_n \end{pmatrix} \] and consider the morphism of complex Lie groups \[ \mu \colon z \in \C{} \mapsto e^{z A} \in \C{*n}. \] Take a faithful representation \(\rho \colon \C{*n} \to \GL{N,\C{}}\), and let \(X=\C{N}\) and \(G \defeq \C{N} \rtimes \C{}\) with action \((v,z)x=\rho \circ \mu(z)x+v\). Then \((X,G)\) is a complex homogeneous space with fulfillment \(\pr{X,\bar{G}}\) where \(\bar{G}=\C{N} \rtimes \C{*n}\). \begin{lemma} Every algebraic homogeneous space \((X,G)\) is full. \end{lemma} \begin{proof} Let \(X^{(k)}\) be the set of all \(k\)-jets of local holomorphic coordinate charts on \(X\) (also called the $k$-frame bundle of $X$). The $G$-action on $X$ lifts to a holomorphic $G$-action on \(X^{(k)}\). For sufficiently large \(k\), every orbit of \(G\) on \(X^{(k)}\) is a holomorphic immersion of \(G\) as a complex submanifold of \(X^{(k)}\) \cite{Baouendi/Rothschild/Winkelmann/Zaitsev:2004} p. 4, Thm. 2.4. In other words, this result asserts that the $G$-action is free on \(X^{(k)}\) (for $k$ large enough) as soon as the stabilizer of any point in $X$ has finitely many connected components (which is always true for algebraic actions). Moreover, since the \(G\)-action is assumed algebraic, all $G$-orbits of minimal dimension in \(X^{(k)}\) are closed in the algebraic Zariski topology \cite{Springer:2009} p. 28 Lemma 2.3.3 (ii). Since the $G$-action is free on \(X^{(k)}\), all orbits have the same dimension and they are all closed in the algebraic Zariski topology of \(X^{(k)}\). Therefore the orbits of the \(G\)-action on \(X^{(k)}\) define an algebraic foliation on \(X^{(k)}\) such that each leaf is an embedded algebraic subvariety biholomorphic to \(G\). Recall now that the Lie group \(G\) has its Maurer-Cartan 1-form with values in its Lie algebra (defining a holomorphic parallelization of the holomorphic tangent bundle of \(G\)) and elements of \(G\) are precisely those biholomorphisms of \(G\) preserving the Maurer-Cartan 1-form. Hence the cotangent sheaf of our algebraic foliation on \(X^{(k)}\) admits a global holomorphic section with values in the Lie algebra of \(G\) which coincides on each leaf with the Maurer-Cartan 1-form. Elements in \(G\) are precisely those biholomorphisms of \(X\) which (when lifted to \(X^{(k)}\)) preserve a leaf (and hence all leaves) of this foliation and its Maurer-Cartan 1-form.
Consequently, \(G\) is a closed subgroup in the biholomorphism group of \(X\), for the topology of uniform convergence on compact sets.
\end{proof}
The elements of \(Z_M\) are precisely the diffeomorphisms of \(M\) that lift to elements of \(Z_{\tilde{M}}\). If the model \((X,G)\) is full then the groups \(Z_{\tilde{M}}, Z_X\) and \(Z_M\) are closed subgroups of the appropriate biholomorphism groups, as limits of symmetries are symmetries. Even if the model is not full, \(Z_X^0\) is a closed subgroup of the biholomorphism group by lemma~\vref{lemma:centralizer}. Denote by \(\LieZ\) the Lie algebra of \(Z_X \subset G\), which is also the Lie algebra of \(Z_M\) and of \(Z_{\tilde{M}}\). For a complex torus \(T=V/ \Lambda \), the previous notations become \(Z_M =Z_T\) and \(Z_{\tilde{M}}=Z_V\). Every element \(\gamma=(f,g) \in Z_V\) has \(f \colon V \to V\) descending to an automorphism of the torus, so \(f(v)=b+av\) where \(b \in V\) and \(a=f_* \in \GL{V}\) and \(a \Lambda=\Lambda\). As above, the holonomy morphism \(h\) extends from the fundamental group \(\Lambda\) to a complex Lie group morphism \(h \colon Z_V \to Z_X\) so that \(\delta(\gamma v)=h(\gamma)\delta(v)\), for all \(v \in V\) and \(\gamma \in Z_V\). The following lemma characterizes those \((X,G)\)-structures on complex tori which are translation invariant.
\begin{lemma}\label{lemma:algebraic.description}
For any complex homogeneous space \((X,G)\) and any holomorphic \((X,G)\)-structure on a complex torus \(T\), with notation as above, the identity component \(Z^0_X\) of \(Z_X \) is an abelian group, acting locally freely on the image of the developing map. The Lie algebra \(\LieZ \subset V\) is a complex linear subspace acting on \(V\) by translations. The subgroup \(Z_X \subset G\) is a closed complex subgroup and has a finite index subgroup lying inside \(\centralizer{h\of{\Lambda}}{G}\). If \((X,G)\) is full then the identity component of the Lie group \(Z_T\) is a subtorus \(Z_T^0 \subset T\) covered by \(Z_V^0=\LieZ\). The following are equivalent:
\begin{enumerate}
\item The \((X,G)\)-structure on \(T\) is translation invariant.
\item \(\dimC \LieZ=\dimC T\).
\item The morphism \(h\) extends to a Lie group morphism \(h \colon V \to G\) for which the developing map is equivariant.
\item There is a morphism of complex Lie groups \(h \colon V \to G\) with image transverse to \(H\) so that the local biholomorphism \(\delta \colon v \in V \mapsto h(v)x_0 \in X\) is the developing map.
\end{enumerate}
\end{lemma}
\begin{proof}
Every holomorphic vector field on \(T\) is a translation, and translations commute, so \(Z_T^0\) is a complex abelian subgroup in (the translation group) \(T\). If \((X,G)\) is full, then \(Z_T\) is a closed complex subgroup of the biholomorphism group of \(T\). No vector field on \(T\) can vanish at a point without vanishing everywhere: \(\LieZ \cap \LieH = 0\), with \(\LieH\) the Lie algebra of the stabilizer \(H \) of the point \(x_0\) in the image of the developing map. Since \(\LieZ \subset V\) is a complex linear subspace, \(\dimC{\LieZ} \le \dimC{V}\), with equality exactly when \(\LieZ=V\) and then the \((X,G)\)-structure is translation invariant. The map \((f,g) \in Z_V \mapsto f'(0) \in \GL{V}\) has finite image, because \(f'(0)\Lambda=\Lambda\). So the kernel of this map is a finite index subgroup of \(Z_V\), consisting of those pairs \((f,g) \in Z_V\) for which \(f\) is a translation, and therefore commutes with the translations \(\Lambda\), and so \(g\) commutes with \(h(\Lambda)\).
So this finite index kernel maps to \(\centralizer{h(\Lambda)}{G}\). But it maps to a finite index subgroup of \(Z_X\). So, using Lemma \ref{lemma:centralizer}, \(Z_X\) is a finite extension of an open subgroup of \(\centralizer{h(\Lambda)}{G}\). Since \(\centralizer{h(\Lambda)}{G}\) is a closed complex subgroup, its components are closed in the complex analytic Zariski topology, and disjoint. Therefore the open subgroup is also a closed analytic subvariety, and so \(Z_X\) is also a closed analytic subvariety.
\end{proof}
Hence, to prove translation invariance of an \((X,G)\)-structure on a complex torus \(T=\C{n}/\Lambda\), one needs to prove that the centralizer of \(h(\Lambda)\) in \(G\) is of complex dimension \(n\). We will see in section \ref{section:finite.holonomy} that, at least for \((X,G)\) algebraic, this centralizer is always of positive dimension.
\begin{example}
If we have a translation invariant \((X,G)\)-structure on \(T=\C{n}/\Lambda\), the \emph{same} holonomy morphism \(h\) and developing map \(\delta\) defines a translation invariant \((X,G)\)-structure on \emph{any} complex torus \(T'=\C{n}/\Lambda'\) of the same dimension.
\end{example}
\begin{example}
Take \(X=\C{2}\) and \(G\) the complex special affine group. The generic 1-dimensional subgroup of \(G\) has centralizer also 1-dimensional, so it is a priori possible that \(Z_X\) is one-dimensional. We need something more to decide translation invariance.
\end{example}
\begin{example}\label{example:B.beta.1}
Pick a positive integer \(k\). Let \(X\defeq \C{2}\) and let \(G\) be the set of pairs \((t,f)\) for \(t \in \C{}\) and \(f\) a complex-coefficient polynomial in one variable of degree at most \(k\), with multiplication
\[
\pr{t_0,f_0}\pr{t_1,f_1}=\pr{t_0+t_1,f_0(u)+f_1\of{u-t_0}}
\]
and action on \(X\)
\[
\pr{t,f}\pr{u,v}=\pr{u+t,v+f\of{u+t}}.
\]
As we vary \(k\) we obtain all of the complex algebraic homogeneous surfaces of class \(B\beta{1}\), in Sophus Lie's notation \cite{McKay:2014}. One checks easily that all 1-dimensional subgroups of \(G\) have 2-dimensional centralizer, so our group \(Z_X\) must have dimension at least 2. Therefore for this particular \((X,G)\), every \((X,G)\)-structure on any complex 2-torus is translation invariant.
\end{example}
\section{Finite holonomy}
\label{section:finite.holonomy}
Pick a complex vector space \(V\) and lattice \(\Lambda \subset V\). Let \(X\defeq G \defeq V/\Lambda'\) for some (possibly different) lattice \(\Lambda' \subset V\) and let \(\delta \colon z \in V \mapsto z+\Lambda' \in X\), so that
\[
h\of{\Lambda}=\Lambda/\pr{\Lambda \cap \Lambda'}.
\]
The tori \(V/\Lambda\) and \(V/\Lambda'\) are isogenous just when \(h(\Lambda) \subset V/\Lambda'\) is a finite set, and then the Zariski closure of \(h\of{\Lambda}\) is just \(h\of{\Lambda}\) itself, which is finite. Since \(G\) is abelian (and connected), lemma~\vref{lemma:centralizer} implies \(Z_X=G\), so the \((X,G)\)-structure with developing map \(\delta\) and holonomy morphism \(h\) is translation invariant.
\begin{lemma} \label{lemma:translation invariant}
A holomorphic \((X,G)\)-structure on a complex torus \(V/\Lambda\) has finite holonomy group just when it is constructed as above. In particular, it is translation invariant.
\end{lemma}
\begin{proof}
If \(h(\Lambda)\) is finite, then we can lift to a finite covering of \(T\) to arrange that \(h(\Lambda)\) is trivial, and then, by lemma~\ref{lemma:centralizer}, \(Z_X=G\). In particular, the \((X,G)\)-structure is translation invariant.
Lemma \vref{lemma:algebraic.description} implies that \(G\) is an abelian group acting locally freely on \(X\), meaning that \(X\) is a quotient of \(G\) by a discrete subgroup. The developing map descends to \(T \). Its image is open and closed in \(X\), so the developing map is onto: it is a finite cover of \(X\), by \(T\). Consequently, \(X\) is also a complex torus. Since \(G\) is connected and acts transitively and effectively on the complex torus \(X\), \(G\) is the translation group \(X\).
\end{proof}
\begin{corollary}\label{corollary:positive.dim.symmetries}
Suppose that \((X,G)\) is a complex algebraic homogeneous space. Any holomorphic \((X,G)\)-structure on a complex torus has positive dimensional symmetry group.
\end{corollary}
\begin{proof}
Assume, by contradiction, that \(\dimC{\LieZ}=0\). Then \(\centralizer{h(\Lambda)}{G}\) is finite, being a discrete algebraic subgroup of \(G\). But \(h(\Lambda) \subset \centralizer{h(\Lambda)}{G}\), since \(\Lambda\) is abelian. Therefore \(h(\Lambda)\) is finite. Lemma~\ref{lemma:translation invariant} implies that \(Z_X=G\): a contradiction.
\end{proof}
\begin{corollary}\label{corollary:curves}
If a smooth compact complex curve has genus at most 1, then every holomorphic locally homogeneous structure on the curve is homogeneous. If the curve has genus more than 1, then no holomorphic locally homogeneous structure on the curve is homogeneous.
\end{corollary}
\begin{proof}
Every complex homogeneous curve \((X,G)\) is algebraic \cite{McKay:2011b} p. 14 Theorem 2. By Corollary~\ref{corollary:positive.dim.symmetries}, any holomorphic \((X,G)\)-structure on any elliptic curve has positive dimensional symmetry group, with identity component consisting of translations, so is translation invariant. Any locally homogeneous structure on a simply connected compact manifold is identified with a cover of the model by the developing map, so there is only one locally homogeneous structure on \(\Proj{1}\). Higher genus Riemann surfaces have no nonzero holomorphic vector fields.
\end{proof}
\section{Discrete stabilizer}
\begin{lemma}\label{lemma:X.is.G}
Suppose that \((X,G)\) is a complex homogeneous space and that \(\dimC X = \dimC G\). Then every holomorphic \((X,G)\)-structure on any complex torus is translation invariant.
\end{lemma}
\begin{proof}
Here \(X=G/H \), with \(H \) a discrete subgroup in \(G \). Lift the developing map uniquely to a map to \(G\), so that \(\delta(0)=1\), and then
\[
\delta(x+\lambda)\pr{h(\lambda)\delta(x)}^{-1} \in H,
\]
i.e.
\[
\delta(x+\lambda)\delta(x)^{-1}h(\lambda)^{-1} \in H
\]
is constant in \(x\), because it depends continuously on \(x\) and the stabilizer \(H\) has dimension zero. Plug in \(x=0\) to find \(\delta(x+\lambda) = \delta(\lambda)\delta(x)\), i.e. we can arrange that \(H=\curly{1}\) and \(h=\left.\delta\right|_{\Lambda}\). Consider the universal covering group \(\tilde{G} \to G\). The developing map \(\delta \colon V \to G\) lifts uniquely to a map \(\tilde{\delta} \colon V \to \tilde{G}\) so that \(\tilde{\delta}(0)=1\). By the same argument, \(\tilde{\delta}(x+\lambda)=\tilde{\delta}(\lambda)\tilde{\delta}(x)\) for all \(x \in V\) and \(\lambda \in \Lambda\), i.e. \(\tilde{\delta}\) is a developing map for a \(\pr{\tilde{G},\tilde{G}}\)-structure. So without loss of generality, we can assume that \(X=G\) and that \(G\) is simply connected. Consider the map
\[
\Delta \colon (x,y) \in V \times V \mapsto \delta(x)^{-1}\delta(y)^{-1}\delta(x+y) \in G.
\]
Clearly \(\Delta(x,y+\lambda)=\Delta(x,y)\) if \(\lambda \in \Lambda\).
So \(\Delta \colon V \times T \to G\) is holomorphic. Fixing \(x\), \(y \mapsto \Delta(x,y) \in G\) is a holomorphic map from a complex torus to a simply connected complex Lie group, and therefore is constant \cite[p. 139, theorem 1]{Matsushima/Morimoto:1960}. So \(\Delta(x,y)=\Delta(x,0)=1\) for all \(x,y\), i.e. \(\delta \colon V \to G\) is a holomorphic Lie group morphism; hence, by point (3) of Lemma~\ref{lemma:algebraic.description}, the \((X,G)\)-structure is translation invariant.
\end{proof}
\begin{example}
From the classification of complex homogeneous surfaces \((X,G)\) \cite{McKay:2014}, the surfaces \(D1, D1_1, \dots, D1_5\) and \(D2, D2_1, \dots, D2_{14}\) are the smooth quotients of 2-dimensional complex Lie groups by various discrete groups, i.e. they are precisely the complex homogeneous surfaces \((X,G)\) with \(2=\dimC{X}=\dimC{G}\). By lemma~\vref{lemma:X.is.G}, all \((X,G)\)-structures on complex tori, with \((X,G)\) among the surfaces \(D1, D1_1, \dots, D1_5\) and \(D2, D2_1, \dots, D2_{14}\), are translation invariant.
\end{example}
\section{Enlarging the model and its symmetry group}
A \emph{morphism} \((X,G) \to \pr{X',G'}\) of complex homogeneous spaces is a holomorphic map \(X \to X'\) equivariant for a holomorphic group morphism \(G \to G'\). If moreover \(X \to X'\) is a local biholomorphism, then every \((X,G)\)-structure induces an \(\pr{X',G'}\)-structure by composing the developing map with \(X \to X'\) and the holonomy morphism with \(G \to G'\), and any \(\pr{X',G'}\)-structure is induced by at most one \((X,G)\)-structure. Lemma~\vref{lemma:algebraic.description} together with lemma~\vref{lemma:X.is.G} lead to the following corollary:
\begin{corollary}
For any complex homogeneous space \((X,G)\), an \((X,G)\)-structure on a complex torus is translation invariant just when it is induced by an \(\pr{X_0,G_0}\)-structure, where \(G_0 \subset G\) is a connected complex subgroup acting transitively and locally freely on an open set \(X_0 \subset X\); if this occurs then \(G_0\) is abelian.
\end{corollary}
In the statement above $X_0$ is the image of the developing map of the \((X,G)\)-structure, seen as a homogeneous space of the Lie group $G_0=Z^0_X$.
\begin{proposition}\label{proposition:enlarge}
Suppose that \((X,G) \to \pr{X',G'}\) is a morphism of complex homogeneous spaces for which \(X \to X'\) is a local biholomorphism and \(G \to G'\) has closed image \(\bar{G} \subset G'\). Suppose that there is no positive dimensional compact complex torus in \(G'/\bar{G}\) acted on transitively by a subgroup of \(G'\). For example, there is no such torus when \(G'\) is linear algebraic and \(G \to G'\) is a morphism of algebraic groups. Every translation invariant \(\pr{X',G'}\)-structure on any complex torus with holonomy contained in \(\bar{G}\) is induced by a unique \((X,G)\)-structure, which is also translation invariant.
\end{proposition}
\begin{proof}
Denote the developing map and holonomy morphism of the \(\pr{X',G'}\)-structure by \(\delta'\) and \(h'\). Since the structure is translation invariant, extend \(h'\) to a complex Lie group morphism \(h' \colon V \to G'\) so that \(\delta' \colon V \to X'\) is just \(\delta'(v)=h'(v)x_0'\). Denote the morphism \(G \to G'\) as \(\rho \colon G \to G'\). The holonomy morphism \(h'\) descends to a complex Lie group morphism \( T \to G'\!/\bar{G} \). By hypothesis, this is constant: \(h'\) has image in \(\bar{G}=\rho(G)\). The developing map is \(\delta'(v)=h'(v)x_0'\) so has image in the image of \(X \to X'\).
On that image, \(X \to X'\) is a covering map, by \(G \to G'\) equivariance, so \(\delta' \colon \pr{V,0} \to \pr{X',x'_0}\) lifts to a unique local biholomorphism \(\delta \colon \pr{V,0} \to \pr{X,x_0}\). Similarly, the morphism \(h' \colon V \to \rho(G)\) lifts uniquely to a morphism \(h \colon V \to G\). By analytic continuation \(\delta(v)=h(v)x_0\) for all \(v \in V\), so that the \((X,G)\)-structure is translation invariant. Suppose that \(G'\) is linear algebraic and \(\rho \colon G \to G'\) is a morphism of algebraic groups. The quotient of a linear algebraic group by a Zariski closed normal subgroup is linear algebraic \cite{Borel:1991} p.93 theorem 5.6, so \(Z_{X'}/\rho\of{Z_X}\) is a linear algebraic group and therefore contains no complex torus subgroup.
\end{proof}
\begin{example}
If \(G\) is the universal covering space of the group of complex affine transformations of \(\C{}\), and \(X=G\) acted on by left translation, then the center of \(G\) consists of the deck transformations over the complex affine group. The surface \((X,G)\) is not algebraic, but the quotient \(\pr{X',G'}\) by the center is algebraic; any \((X,G)\)-structure induces, and arises uniquely from, an \(\pr{X',G'}\)-structure.
\end{example}
\begin{example}
The classification of the complex homogeneous surfaces \cite{McKay:2014} yields unique morphisms \(A2\to A1\), \(A3\to A1\), \(A3\to A2\), \(B\beta{1} \to B\beta{2}\), \(B\beta{1}B0 \to B\beta{2}'\), \(B\gamma{1} \to B\delta{4}\), \(B\gamma{2} \to B\delta{4}\), \(B\gamma{3} \to B\delta{4}\), \(B\gamma{4} \to B\delta{4}\), \(B\delta{1} \to B\delta{2}\), \(B\delta{1}' \to B\delta{2}'\), \(B\delta{3} \to B\delta{4}\), \(C2 \to C7\), \(C2' \to C5'\), \(C3 \to C7\), \(C5 \to C7\), \(C6 \to C7\), \(C8 \to A1\), \(D1 \to A1\), \(D1_1 \to C7\), \(D1_2 \to C7\), \(D1_3 \to C5'\), \(D1_4 \to C5'\), \(D2 \to A1\), \(D3 \to A1\). For each of these morphisms \((X,G) \to \pr{X',G'}\), \(G'/G\) contains no homogeneous complex torus. Below we will prove that all \(\pr{X',G'}\)-structures on complex tori are translation invariant. It follows that all \((X,G)\)-structures on complex tori are translation invariant, for each of these morphisms \((X,G) \to \pr{X',G'}\). This reduces the proof of translation invariance of \((X,G)\)-structures on tori for most of the transcendental surfaces \((X,G)\) to the same problem for algebraic surfaces \(\pr{X',G'}\).
\end{example}
\section{Normalizer chain of the holonomy}
Continue with our notation as in section~\ref{section:Notations and main result}: \((X,G)\) is a complex homogeneous space, \(x_0 \in X\) some point, \(H \subset G\) the stabilizer of \(x_0\), \(T=V/\Lambda\) is a complex torus, \(\delta \colon \pr{V,0} \to \pr{X,x_0}\) is the developing map and \(h \colon \Lambda \to G\) the holonomy morphism for an \((X,G)\)-structure on \(T\). Extend \(h\) as above to a morphism of Lie groups \(h \colon Z_V \to Z_X\). Let \(S_{-1}\defeq \curly{1}\), \(S_0\defeq Z^0_X\) and let \(S_{i+1}\defeq \pr{\normalizer{S_i}{G}}^0\) with Lie algebras \(\LieS_i\). Recall that \(\pr{\normalizer{S_i}{G}}^0\) is the identity component of the normalizer of \(S_{i}\) in \(G\). Call the sequence \(S_{-1} \trianglelefteq S_0 \trianglelefteq \dots\) the \emph{normalizer chain} of the structure. Since \(S_0\) is \(\Ad{h(\Lambda)}\)-invariant, so are all of the \(S_i\).
\begin{lemma}\label{lemma:normalizing.sequence}
Consider an \((X,G)\)-structure on a complex torus \(T\).
The groups \(S_0 \trianglelefteq S_1 \trianglelefteq \dots \) in the normalizer chain of that structure are solvable connected complex Lie groups with abelian quotients \(S_{i+1}/S_i\). Each of these groups acts locally freely on the image of the developing map of the \((X,G)\)-structure.
\end{lemma}
\begin{proof}
Lemma~\vref{lemma:algebraic.description} proves that \(S_0=Z^0_X\) is abelian and acts locally freely at every point in \(\delta(V)\). Each element of \(\LieS_1 \subset \LieG\) is a vector field on \(X\), whose flow preserves the Lie subalgebra \(\LieS_0\). Such a vector field pulls back via the local biholomorphism \(\delta\) to a vector field on \(V\), whose flow preserves the translations \(\LieS_0 = \LieZ \subset V\). The \(\LieS_1\) vector fields on \(V\) locally descend to \(T\), but globally they only do so modulo transformations of \(\Lambda\), which add elements of \(\LieS_0\). The Lie brackets of the \(\LieS_1\) vector fields are only defined on \(T\) modulo the \(\LieS_0\) vector fields. The part of the bracket lying in the quotient \(\LieS_1/\LieS_0\) is a holomorphic map \(T \to \Lm{2}{\LieS_1/\LieS_0}^* \otimes \pr{\LieS_1/\LieS_0}\), so constant. This constant gives the structure constants of the Lie algebra \(\LieS_1/\LieS_0\). The normal bundle of the foliation induced on \(T\) by the \(S_0\)-action admits an \(S_0\)-invariant integrable subbundle with fiber isomorphic to \(\LieS_1/\LieS_0\): it is a partial transverse structure to the foliation modelled on \(S_1/S_0 \). Split the tangent bundle of \(T\) by some linear splitting \(V=\LieS_0 \oplus \LieS_0^{\perp}\). Since \(S_0\) acts by translations, this splitting is preserved. The normal bundle to the foliation sits inside the tangent bundle of \(T\), and every vector field from \(\LieS_1/\LieS_0\) is represented as a vector field on the torus, hence a translation field. The brackets of these vector fields on the torus agree, modulo the constant translations in \(\LieS_0\), with those of \(\LieS_1/\LieS_0\). But Lie brackets of holomorphic vector fields on the torus are trivial, so \(\LieS_1/\LieS_0\) is abelian. Each of the vector fields arising from this splitting is translation invariant, so has vanishing normal component at a point in \(T\) just when its normal component vanishes everywhere on \(T\), i.e. just when the associated element in \(\LieS_1\) belongs to \(\LieS_0\). An element of \(\LieS_1\) pulls back by the developing map to \(V\) to agree with an element of \(\LieS_0\) at some point just when they agree at every point of \(V\), and so they agree at every point of \(\delta(V)\). In other words, \(\LieS_1\) acts locally freely on \(\delta(V)\). The same argument holds by induction for the successive subalgebras \(\LieS_i \subset \LieS_{i+1}\).
\end{proof}
\begin{proposition}
Suppose that \((X,G)\) is a complex homogeneous space and that \(T\) is a complex torus with a holomorphic \((X,G)\)-structure. As above, let \(S_{-1} \subset S_0 \subset S_1 \subset \dots \subset G\) be the normalizer chain of the holonomy morphism and let \(S=\bigcup_i S_i\) be the terminal subgroup of this chain of connected complex subgroups. Then \(\dimC{S} \le \dimC{X}\) with equality if and only if the \((X,G)\)-structure is translation invariant.
\end{proposition}
\begin{proof}
By lemma~\vref{lemma:normalizing.sequence}, \(S\) acts locally freely on \(\delta(V)\). Consequently, \(\dimC{S} \le \dimC{X}\).
If \(\dimC{S} < \dimC{X}\), then \(\dimC{S_0}=\dimC{Z_X^0} < \dimC{X}\) and thus the \((X,G)\)-structure is not translation invariant. Assume now that \(\dimC{S} = \dimC{X}\). Replace \(G\) by \(S\) (modulo any elements of \(S\) acting trivially on \(\delta(V)\)) and \(X\) by \(\delta(V)\) to arrange that \(G\) acts on \(X\) locally freely, so \(\dim G=\dim X\) and apply lemma~\vref{lemma:X.is.G}.
\end{proof}
\begin{theorem}\label{theorem:nilpotent}
If \((X,G)\) is a complex homogeneous space and \(G\) is nilpotent then every holomorphic \((X,G)\)-structure on any complex torus is translation invariant.
\end{theorem}
\begin{proof}
The normalizer chain always increases in dimension until it reaches the dimension of \(G\) \cite{Borel:1991} p. 160.
\end{proof}
\begin{example}
For the surfaces \((X,G)\) in example~\vref{example:B.beta.1}, and even for transcendental \(B\beta{1}\)-surfaces and \(B\beta{2}\)-surfaces \cite{McKay:2014}, \(G\) is nilpotent. The nilpotent complex homogeneous surfaces \((X,G)\) are \(B\beta{1}\), \(B\beta{2}\), \(D1\), \(D1_1, \dots, D1_5\), \(D2\), \(D2_1, \dots, D2_{14}\) \cite{McKay:2014}. Therefore for all of these surfaces \((X,G)\), all \((X,G)\)-structures on complex tori are translation invariant.
\end{example}
\section{Algebraic dimension}
If \((X,G)\) is a complex algebraic homogeneous space then any holomorphic \((X,G)\)-structure is a holomorphic rigid geometric structure in Gromov's sense \cite{Dumitrescu:2001} and also a (flat) Cartan geometry (see the definition in section \ref{section:cartan geometries}). Recall that the \emph{algebraic dimension} of a complex manifold \(M\) is the transcendence degree of the field of meromorphic functions of \(M\) over the field of complex numbers. A generic torus has algebraic dimension zero, meaning that all its meromorphic functions are constant~\cite{Ueno:1975}.
\begin{lemma}\label{lemma:symmetries.torus}
The identity component of the symmetry group of any holomorphic geometric structure on a complex torus \(T\) acts as a subtorus \(T_0\) of dimension at least the algebraic codimension of \(T\) (i.e. \(n-\kappa\), where \(n=\dimC{T}\) and \(\kappa\) is the algebraic dimension of \(T\)). The quotient of \(T\) by the subtorus \(T_0\) is an abelian variety (which coincides with the algebraic reduction of \(T\) if and only if \(T_0\) is of complex dimension \(n-\kappa\)).
\end{lemma}
\begin{proof}
The pair of the holomorphic geometric structure and the translation structure (the holomorphic parallelization) of \(T\) is a holomorphic rigid geometric structure on \(T\). The symmetry pseudogroup of any such structure acts transitively on sets of codimension \(\kappa\) \cite{Dumitrescu:2010b,Dumitrescu:2011}. Therefore near each point there are locally defined holomorphic vector fields preserving both the holomorphic geometric structure and the translation structure (the holomorphic parallelization), acting with orbits of dimension \(\ge n-\kappa\). Each of these vector fields preserves the translation structure, so is a translation. Translations on \(T\) extend globally, and give global symmetries. The family of symmetries is Zariski closed in the complex analytic Zariski topology, so forms a subtorus.
\end{proof}
\begin{corollary} \label{corollary:symmetries.torus}
Suppose that \((X,G)\) is a complex algebraic homogeneous space and \(T\) is a complex torus of algebraic dimension zero. Every holomorphic \((X, G) \)-structure on \(T\) is translation invariant.
\end{corollary}
\begin{proof}
Here the holomorphic geometric structure in the previous proof is the \((X,G)\)-structure. If \(T \) is of algebraic dimension zero, then the subtorus of common symmetries of the \((X,G)\)-structure and of the translation structure of \(T \) acts transitively. Consequently, the \((X,G)\)-structure is translation invariant.
\end{proof}
The results from \cite{Dumitrescu:2010b,Dumitrescu:2011} that we used in the proof of lemma~\ref{lemma:symmetries.torus} hold not only for tori, but for all complex manifolds: any holomorphic rigid geometric structure (or holomorphic Cartan geometry modelled on an algebraic homogeneous space) on a complex manifold of algebraic dimension zero is locally homogeneous on a dense open set (away from a nowhere dense analytic subset of positive codimension). With the same method we can then prove the following:
\begin{theorem} \label{theorem:algebraic.dimension.zero}
Let \(M \defeq P / \pi\) be a compact quotient of a complex Lie group \(P\) by a lattice \(\pi\). If \(M\) is of algebraic dimension zero, then any holomorphic geometric structure \(\phi\) on \(M\) pulls back to a translation invariant geometric structure on \(P\). Consequently, for a complex algebraic homogeneous space \((X,G)\), any \((X,G)\)-structure on \(M\) pulls back to \(P\) to a right invariant \((X,G)\)-structure.
\end{theorem}
\begin{proof}
We add together the geometric structure \(\phi\) and the holomorphic parallelization of \(TM\) to give a holomorphic rigid geometric structure \(\phi'\). Then Corollary 2.2 in \cite{Dumitrescu:2011} shows that \(\phi'\) is locally homogeneous on an open dense set in \(M\), in the sense that the local holomorphic vector fields preserving both the holomorphic parallelization and \(\phi\) are transitive on an open dense set in \(M\). But the local holomorphic vector fields preserving the holomorphic parallelization (which is given by global right invariant vector fields) are those vector fields which are left invariant (they are locally defined on \(M\) and their pull back is globally defined on \(P\)). Hence, all left invariant vector fields on \(P\) must preserve the pull back of \(\phi\); otherwise \(\phi'\) would not be locally homogeneous on any open dense set. Left invariant vector fields generate right translations: consequently, the pull back of \(\phi\) to \(P\) is invariant under right translations. If \(\phi\) is defined by an \((X,G)\)-structure with \((X,G)\) a complex algebraic homogeneous space, then the pull back of the \((X,G)\)-structure to \(P\) is right invariant.
\end{proof}
Theorem~\ref{theorem:algebraic.dimension.zero} generalizes a result of Ghys dealing with holomorphic tensors on \(SL(2,\C{})/\pi\)~\cite{Ghys:1995}.
\begin{theorem}
Consider a compact complex manifold \(M\) of complex dimension \(n\), algebraic dimension zero and Albanese dimension \(n\). Then \(M\) admits a holomorphic rigid geometric structure (or a holomorphic Cartan geometry modelled on an algebraic homogeneous space) if and only if \(M\) is a complex torus and the holomorphic rigid geometric structure (the Cartan geometry) is translation invariant.
\end{theorem}
\begin{proof}
Since \(M\) is of algebraic dimension zero, it is known that the Albanese map \(M \to \alb{M}\) is surjective, with connected fibers, and the Albanese torus \(\alb{M}\) contains no closed complex hypersurface (i.e. divisor) \cite{Ueno:1975} lemmas 13.1, 13.3, 13.6.
Here, \(\alb{M}\) is of the same dimension as \(M\), so the Albanese map \(M \to \alb{M}\) is a modification (see lemma 13.7 in \cite{Ueno:1975}). Let \(H\) be the locus in \(M \) on which the Albanese map drops rank. The image \(\alpha(H)\) of \(H\) under the Albanese map \(\alpha\) is a nowhere dense analytic subset of \(\alb{M}\) of complex codimension at least two. The Albanese map is a biholomorphism between the open sets \(M \setminus \alpha^{-1}(\alpha(H))\) in \(M\) and \(\alb{M} \setminus \alpha(H)\) in \(\alb{M}\). This implies that the holomorphic rigid geometric structure of \(M\) drops down to a holomorphic geometric structure \(\phi\) on \(\alb{M} \setminus \alpha(H)\). Now we put \(\phi\) and the translation structure (holomorphic parallelization) of \(\alb{M}\) together to form a holomorphic rigid geometric structure \(\phi'\) on \(\alb{M} \setminus \alpha(H)\). The complex manifold \(\alb{M} \setminus \alpha(H)\) being of algebraic dimension zero, the geometric structure \(\phi'\) is locally homogeneous on an open dense set in \(\alb{M} \setminus \alpha(H)\) \cite{Dumitrescu:2010b,Dumitrescu:2011}. The local infinitesimal symmetries are translations, because they preserve the translation structure. They extend to global translations on \(\alb{M}\) preserving \(\phi'\). Consequently, \(\phi'\) is the restriction to \(\alb{M} \setminus \alpha(H)\) of a translation invariant geometric structure defined on all of \(\alb{M}\). Consider a family of linearly independent translations on \(\alb{M}\). They pull back to commuting holomorphic vector fields \(\kappa_1, \dots, \kappa_n\) on \(M \setminus \alpha^{-1}(\alpha(H))\) which preserve the initial geometric structure and parallelize the holomorphic tangent bundle \(TM\) over \(M \setminus \alpha^{-1}(\alpha(H))\). Since they are symmetries of an analytic rigid geometric structure, they extend to all of \(M\) \cite{Amores:1979,Nomizu:1960}. Pull back a holomorphic volume form by the Albanese map: a holomorphic section \(\operatorname{vol}\) of the canonical bundle of \(M\) which vanishes on the branch locus \(H\). Plug \(\kappa_1, \dots, \kappa_n\) into the volume form and get the holomorphic function
\[
\operatorname{vol}\of{\kappa_1, \dots, \kappa_n}
\]
which is constant and nonzero on \(M \setminus \alpha^{-1}(\alpha(H))\), since it corresponds to a constant nonzero function on the Albanese torus; hence it is constant and nonzero on all of \(M\). This implies that the holomorphic vector fields \(\kappa_1, \dots, \kappa_n\) holomorphically parallelize \(TM\) on all of \(M\). Since they commute, \(M\) is a complex torus and \(\kappa_1, \dots, \kappa_n\) are translation vector fields. The initial geometric structure on \(M\) is translation invariant.
\end{proof}
\section{
\texorpdfstring{Deformation space of \((X,G)\)-structures}{Deformation space of (X,G)-structures}
}
Consider an \((X,G)\)-structure on a manifold \(M\) and the corresponding holonomy morphism \(h : \pi_1(M) \to G\). The deformation space of \((X,G)\)-structures on \(M\) is the quotient of the space of \((X,G)\)-structures on \(M\) by the group of diffeomorphisms of \(M\) isotopic to the identity. By the Ehresmann-Thurston principle (see, for instance, \cite{Goldman:2010} p. 7), the deformation space of \((X,G)\)-structures on \(M \) is locally homeomorphic, through the holonomy map, to an open neighborhood of \(h\) in the space of group homomorphisms from \( \pi_1(M) \) into \(G \) (modulo the action of \(G \) on the target \(G\) by inner automorphisms).
In other words, any group homomorphism from \( \pi_1(M) \) into \(G \) close to \(h\) is itself the holonomy morphism of an \((X,G)\)-structure on \(M \) close to the initial one. Also, two close \((X,G)\)-structures with the same holonomy morphism are conjugate to each other by an isotopy of \(M\). For any finitely generated group \(\pi\) and any algebraic group \(G\), \(\Hom{\pi}{G}\) is an algebraic variety (a subvariety of \(G^k\), if \(\pi\) can be generated by \(k\) elements). If $G$ is a complex Lie group, the space \(\Hom{\pi}{G}\) is a complex analytic space. A family of \((X,G)\)-structures on $M$ parametrized by a complex reduced space $S$ is called holomorphic if the family of the corresponding holonomy morphisms lifts as a complex analytic map from $S$ to \(\Hom{\pi}{G}\). Notice that, in our case, the \(G \)-action preserves a complex structure on \(X \) and hence any \((X,G)\)-structure on a manifold \(M \) induces an underlying complex structure on \(M \) (for which the \((X,G) \)-structure is holomorphic). In particular, when deforming the \((X,G) \)-structure on \(M \), one also deforms the complex structure on \(M\). Let us make precise how the complex structure varies under the deformation of a holomorphic \((X,G)\)-structure on a complex torus.
\begin{lemma}\label{lemma:perturb}
Consider a complex homogeneous space \((X,G)\). Suppose that we have a holomorphic \((X,G)\)-structure on a complex \(n\)-torus \(T=V/\Lambda\), with holonomy morphism \(h \colon \Lambda \to G\). If \(h_s \in \Hom{\Lambda}{G}\) is a holomorphic family of group morphisms for \(s\) in some reduced complex space \(S\), with \(h_{s_0}=h\) for some \(s_0 \in S\), then there is a holomorphic family of \((X,G)\)-structures on a holomorphic family of complex tori \(T_s\) with holonomy morphism \(h_s\), for \(s\) in an open neighborhood of \(s_0\).
\end{lemma}
\begin{proof}
By the Ehresmann--Thurston principle, there is a unique nearby \((X,G)\)-structure on the same underlying real manifold with holonomy morphism \(h_s\). Since \(G\) preserves a complex structure on \(X\), this \((X,G)\)-structure is holomorphic for a unique complex structure on \(T_s\). Being a small deformation of a complex torus, \(T_s\) is a complex torus by Theorem 2.1 in \cite{Catanese}.
\end{proof}
The following result deals with the deformation space of translation invariant \((X,G)\)-structures on complex tori. Notice that the condition that the symmetry group \(Z_X\) be of dimension \(n\) is closed under deformation of \((X,G)\)-structures: a limit of \((X,G)\)-structures could have smaller holonomy group (i.e. some generators of \(\Lambda\) landing in some special position), but that would only decrease the collection of conditions that determine the centralizer \(Z_X^0\), and so can only make \(Z_X^0\) larger. Hence limits of translation invariant \((X,G)\)-structures are translation invariant, as the centralizer of the holonomy can only increase in dimension.
\begin{theorem} \label{theorem:translation invariant.deformation}
Let \((X,G)\) be a complex algebraic homogeneous space. If a holomorphic \((X,G)\)-structure on a complex torus \( T= V/\Lambda\) is translation invariant, then so is any deformation of that structure. Consequently, translation invariant \((X,G)\)-structures form a union of connected components in the deformation space of \((X,G)\)-structures on the (real) manifold \(T \).
\end{theorem}
\begin{proof}
Start with a translation invariant \((X,G)\)-structure on \(T \).
The holonomy morphism extends from \( \Lambda \) to a complex Lie group morphism \(h \colon V \to G\). For any other complex torus \(T'=V/\Lambda' \), restrict \(h\) to the period lattice \(\Lambda'\) of that torus and take the same developing map to construct a (translation invariant) \((X,G)\)-structure on \(T'\). In particular, we can deform the starting translation invariant \((X,G)\)-structure to another (translation invariant) \((X,G)\)-structure on a complex torus of algebraic dimension zero (by choosing a generic lattice \( \Lambda' \)). Moreover, by corollary \vref{corollary:symmetries.torus}, all nearby \((X,G)\)-structures are translation invariant, since the underlying complex structure of the torus remains of algebraic dimension zero under small perturbations of \(T'\). Hence, the translation invariant \((X, G) \)-structures on \(T\) form an open dense set in our connected component of the deformation space of \((X,G)\)-structures. In particular, we proved that the natural map associating to an \((X,G)\)-structure the underlying complex structure on \(T\) is surjective onto the Kuranishi space of \(V/\Lambda\). All of these deformations of the \((X,G)\)-structure merely perturb the holonomy morphism \(h\) through a family of complex Lie group morphisms \(h \colon V \to G\) and the developing map is \(\delta(v)=h(v)x_0\), with \(x_0 \in X\). But we have seen that the translation invariant \((X,G)\)-structures always form a closed set. Therefore in that connected component of the deformation space of \((X,G)\)-structures, all \((X,G)\)-structures are translation invariant.
\end{proof}
Let us give an easy argument implying, for various complex homogeneous surfaces \((X,G)\), that all \((X,G)\)-structures on complex tori of complex dimension two are translation invariant.
\begin{lemma}\label{lemma:Z.nought}
Suppose that \((X,G)\) is a complex algebraic homogeneous space. If there is a holomorphic \((X,G)\)-structure on a complex torus \(T\), and that structure is not translation invariant, then there is another such structure on another complex torus nearby to a finite covering of \(T\), also not translation invariant, with holonomy morphism having dense image in \(Z_X^0\). Any connected abelian subgroup near enough to \(Z_X^0\) is the identity component of its centralizer and arises as the Zariski closure of the image of the holonomy of a nearby \((X,G)\)-structure on a nearby complex torus.
\end{lemma}
\begin{proof}
Since \(Z_X\) is the centralizer of \(h(\Lambda)\) in \(G\), it is an algebraic subgroup in \(G\). Therefore it consists of a finite number of connected components. After perhaps replacing \(T\) by a finite cover of \(T\), we can assume that \(h(\Lambda) \subset Z_X^0\). By lemma~\vref{lemma:algebraic.description}, \(Z_X^0\) is abelian. Since \(\Lambda\) is free abelian, morphisms \(\Lambda \to Z_X^0\) are precisely arbitrary choices of where to send some generating set of \(\Lambda\). Since \(\Lambda\) has rank \(2n\) and \(\dimC{Z_X^0} < n\), we can slightly deform the holonomy morphism to have Zariski dense image in \(Z_X^0\). If we can perturb \(Z_X^0\) slightly to an abelian subgroup with larger centralizer, we can repeat the process. Since we stay in the same connected component in the deformation space of \((X,G) \)-structures, theorem \vref{theorem:translation invariant.deformation} implies that none of these \((X,G)\)-structures are translation invariant.
\end{proof} \begin{example} Suppose that \(\dimC{X}=2\) and the Levi decomposition of \(G\) has reductive part with rank 2 or more. Suppose we have a holomorphic \((X,G)\)-structure on a complex 2-torus. The generic connected 1-dimensional subgroup of \(G\) is not algebraic, because the characters on a generic element of \(\LieG\) have eigenvalues with irrational ratio. After perhaps a small perturbation of the \((X,G)\)-structure, \(Z_X^0\) has complex dimension 2 or more: the \((X,G) \)-structure becomes translation invariant. Every holomorphic \((X,G)\)-structure of this kind on a complex 2-torus must be translation invariant (because of lemma \ref{lemma:Z.nought}). From the classification of the complex homogeneous surfaces \((X,G)\) \cite{McKay:2014}, this occurs for the complex homogeneous surfaces \(A1\), \(A2\), \(B\beta{2}\), \(B\gamma{4}\), \(B\delta{2}\), \(B\delta{4}\), \(C2\), \(C3\), \(C5\), \(C6\), \(C7\) and \(D1\). \end{example} \section{Reductive and parabolic Cartan geometries} \label{section:cartan geometries} Pick a complex homogeneous space \((X,G)\), with \(H \subset G\) the stabilizer of a point \(x_0 \in X\), and with the groups \(H \subset G\) having Lie algebras \(\LieH \subset \LieG\). The space \((X,G)\) is \emph{reductive} if \(H\) is a reductive linear algebraic group, \emph{rational} if \(X\) is compact and birational to projective space and \(G\) is semisimple in adjoint form. Recall that a \emph{Cartan geometry} (or a \emph{Cartan connection}) is a geometric structure infinitesimally modelled on a homogeneous space. The curvature of a Cartan geometry vanishes if and only if the Cartan geometry is an \((X,G)\)-structure. A \emph{holomorphic Cartan geometry} modelled on \((X,G)\) is a holomorphic \(H\)-bundle \(B \to M\) with a holomorphic connection \(\omega\) on \(B \times^H G\) so that the tangent spaces of \(B\) are transverse to the horizontal spaces of \(\omega\). A Cartan geometry or locally homogeneous geometric structure is \emph{reductive (parabolic)} if its model is reductive (rational). If a compact K\"ahler\xspace manifold has trivial canonical bundle and a holomorphic parabolic geometry then the manifold has a finite unbranched holomorphic covering by a complex torus and the geometry pulls back to be translation invariant \cite{McKay:2011} p. 3 theorem 1 and p. 9 corollary 2. \begin{example} Among complex homogeneous surfaces \((X,G)\) \cite{McKay:2014}, the rational homogeneous varieties are \(A1=\pr{\Proj{2},\PSL{3,\C{}}}\) and \[ C7=\pr{\Proj{1} \times \Proj{1}, \PSL{2,\C{}} \times \PSL{2,\C{}}}. \] Therefore any holomorphic locally homogeneous structure on a complex torus modelled on either of these surfaces is translation invariant. \end{example} \begin{theorem}\label{theorem:reductive} If a compact K\"ahler\xspace manifold has a holomorphic reductive Cartan geometry, then the manifold has a finite unbranched holomorphic covering by a complex torus and the geometry pulls back to be translation invariant. \end{theorem} \begin{proof} Holomorphically split \(\LieG=W \oplus \LieH\) for some \(H\)-module \(W\); this \(H\)-module is effective \cite{McKay2013} p. 9 lemma 6.1. At each point of \(B\), the Cartan connection splits into a 1-form valued in \(W\) and a connection 1-form, say \(\omega=\sigma + \gamma\). At each point of the total space \(B\) of the Cartan geometry, the 1-form \(\sigma\) is semibasic, so defines a 1-form \(\bar\sigma\) on the corresponding point of the base manifold, a coframe. 
Because \(H\) acts effectively on \(W\), the map \(\bar\sigma\) identifies the total space of the Cartan geometry with a subbundle of the frame bundle of the base manifold \cite{McKay2013} corollary 6.2. Hence the Cartan geometry is precisely an \(H\)-reduction of the frame bundle with a holomorphic connection. The tangent bundle admits a holomorphic connection, so has trivial characteristic classes \cite{Inoue/Kobayashi/Ochiai:1980}. Therefore the manifold admits a finite holomorphic covering by a complex torus \cite{Inoue/Kobayashi/Ochiai:1980}; without loss of generality assume that the manifold is a complex torus $T$. The trivialization of the tangent bundle pulls back to the \(H\)-bundle to be a multiple \(g \sigma\) for some \(g \in \GL{W}\), transforming under \(H\)-action, so defining a holomorphic map \(T \to \GL{W}\!/H\). But \(\GL{W}\!/H\) is an affine algebraic variety, so admits no nonconstant holomorphic maps from complex tori, so \(T \to \GL{W}\!/H\) is constant, hence without loss of generality is the identity, i.e. \(g\) is valued in \(H\), so there is a holomorphic global section of the bundle on which \(g=1\), trivializing the bundle. The connection is therefore translation invariant, and so the Cartan connection is translation invariant.
\end{proof}
\begin{example}
Among complex homogeneous surfaces \((X,G)\) \cite{McKay:2014}, the reductive homogeneous surfaces are \(A2\), \(A3\), \(C2\), \(C2'\), \(C3\), \(C9\), \(C9'\), \(D1\), \(D1_1\), \(D1_2\), \(D1_3\), \(D1_4\) and \(D3\). Therefore any holomorphic locally homogeneous structure on a complex torus modelled on any one of these surfaces is translation invariant.
\end{example}
\begin{proposition}
Suppose that \((X,G)\) is a product of a reductive homogeneous space with a rational homogeneous variety. Then every holomorphic Cartan geometry modelled on \((X,G)\) on any complex torus is translation invariant.
\end{proposition}
\begin{proof}
Write \((X,G)=\pr{X_1 \times X_2, G_1 \times G_2}\) as a product of a reductive homogeneous space \(\pr{X_1,G_1}\) and a rational homogeneous variety \(\pr{X_2,G_2}\). The splitting of \(X\) into a product splits the tangent bundle of the torus into a product and the canonical bundle into a tensor product. Since the canonical bundle of the complex torus is trivial, the determinant line bundles of the two factors in our splitting are dual. The reductive geometry gives a holomorphic connection on the first factor of the splitting of the tangent bundle, so that the determinant line bundle of that factor is trivial. Therefore the determinant line bundle of the second factor is trivial. Taking a holomorphic section reduces the structure group of the parabolic part of the geometry to a reductive group \cite{McKay:2011} p. 3, and so the geometry is now reductive; the result then follows from theorem~\vref{theorem:reductive}.
\end{proof}
\begin{example}\label{example:products}
Among complex homogeneous surfaces \((X,G)\) \cite{McKay:2014}, those which are a product of a reductive homogeneous curve and a rational homogeneous curve are \(C5\), \(C5'\) and \(C6\). Therefore any holomorphic locally homogeneous structure on a complex torus modelled on any one of these surfaces is translation invariant.
\end{example} \section{Lifting}\label{section:lifting} If a complex homogeneous space \((X,G)\) has underlying manifold \(X\) not simply connected, take the universal covering space \(\tilde{X} \to X\), lift the vector fields that generate the Lie algebra \(\LieG\) of \(G\) to \(\tilde{X}\) and generate an action of a covering group, call it \(\tilde{G}\), acting on \(\tilde{X}\). Caution: this process neither preserves nor reflects algebraicity. The developing map \(\delta\) of any \((X,G)\)-structure lifts to a local biholomorphism \(\tilde{\delta}\) to \(\tilde{X}\). If all of the deck transformations of \(\fundamentalGroup{X}\) arise as elements of \(\tilde{G}\), then the \((X,G)\)-structure is induced by a unique \((\tilde{X},\tilde{G})\)-structure. An \((X,G)\)-structure on a complex torus is then translation invariant just when the associated \(\pr{\tilde{X},\tilde{G}}\)-structure is translation invariant. \begin{example}\label{example:lifts} From the classification of complex homogeneous surfaces \cite{McKay:2014}, \((X,G)\) has universal covering space \((\tilde{X},\tilde{G})\) with deck transformations carried out by elements of \(\tilde{G}\) for any \((X,G)\) among \(B\beta{1}A1\), \(B\beta{1}D\), \(B\beta{1}E\), \(B\beta{2}'\), \(B\gamma{2}'\), \(B\delta{2}'\), \(C2'\), \(C5'\), \(D1_1\), \(D1_2\), \(D1_3\), \(D1_4\) and \(D1_5\). Therefore the proof of translation invariance of holomorphic \((X,G)\)-structures on complex 2-tori, for these \((X,G)\), reduces to the proof of translation invariance of holomorphic \((\tilde{X},\tilde{G})\)-structures on complex tori, for their universal covering spaces. \end{example} A slight modification of this procedure, using proposition~\vref{proposition:enlarge}: \begin{lemma} Suppose that we have a morphism \(\pr{\tilde{X},\tilde{G}} \to \pr{X',G'}\) of complex homogeneous spaces from the universal covering space \(\pr{\tilde{X},\tilde{G}} \to \pr{X,G}\) of a complex homogeneous space \((X,G)\). Suppose that \(\tilde{X} \to X'\) is a local biholomorphism and that \(\tilde{G} \to G'\) has closed image \(\bar{G} \subset G'\). Suppose that there is no positive dimensional compact complex torus in \(G'/\bar{G}\) acted on transitively by a subgroup of \(G'\). Suppose that all of the deck transformations of \(\fundamentalGroup{X}\) arise as elements of \(G'\). Then the developing map of any \((X,G)\)-structure lifts uniquely to the developing map of a unique \(\pr{X',G'}\)-structure. An \((X,G)\)-structure on a complex torus is then translation invariant just when the associated \(\pr{X',G'}\)-structure is translation invariant. \end{lemma} \begin{example}\label{example:lift.and.extend} Take any complex homogeneous surface \((X,G)\) with universal covering space \(\pr{\tilde{X},\tilde{G}}=B\beta 1\) in the notation of \cite{McKay:2014} (see example~\vref{example:B.beta.1} for the definition). The inclusion \(B\beta 1 \to B\beta 2\) puts the deck transformations of every quotient \(\pr{\tilde{X},\tilde{G}} \to (X,G)\) into the transformations of a group \(G'\) containing \(\tilde{G}\). Therefore for all complex homogeneous surfaces \((X,G)\) covered by \(B\beta 1\), every holomorphic \((X,G)\)-structure on any complex torus is translation invariant. \end{example} \section{Complex tori of complex dimension 0, 1 or 2} \begin{theorem} Every holomorphic locally homogeneous geometric structure on a complex torus of complex dimension 0, 1 or 2 is translation invariant. 
\end{theorem} \begin{proof} Corollary~\vref{corollary:positive.dim.symmetries} covers any complex torus of dimension 1. The tricks in examples~\ref{example:B.beta.1} to \ref{example:lift.and.extend} prove translation invariance of all \((X,G)\)-structures for all of the complex homogeneous surfaces \((X,G)\), from the classification \cite{McKay:2014}. \end{proof} \section{Conclusion} It seems that our methods are unable to prove the translation invariance of holomorphic solvable \((X,G)\)-structures on complex tori. We conjecture that all holomorphic locally homogeneous geometric structures on complex tori are translation invariant. Moreover, suppose that a complex compact manifold $M$ homeomorphic to a torus admits a holomorphic \((X,G)\)-structure. Then we conjecture that $M$ is biholomorphic to the quotient \(V/\Lambda\) of a complex vector space $V$ by a lattice $\Lambda$. In other words, nonstandard complex structures on real tori do not admit any holomorphic \((X,G)\)-structure and, more generally, do not admit any holomorphic rigid geometric structure. \bibliographystyle{amsplain}
\section{Introduction}
Quantum systems allow physical operations to perform some tasks that seem to be impossible in the classical domain \cite{mes,mes1,mes2}. However, the nature of the operations performed restricts their correctness or exact behavior to some subclass of states rather than the whole state space of the quantum system. Determining the possibility or impossibility of various kinds of such operations acting on some specified system is then one of the basic tasks of quantum information processing. In the case of cloning and deleting, the input states must be orthogonal to each other for the operation to be exact \cite{wootters,wootters1,wootters2,pati11,pati112}. If instead the operation considered is spin-flipping \cite{gisin,unot,unot1} or of Hadamard type, then the allowed input set of states is enlarged to a great circle of the Bloch sphere \cite{patili,ghosh,ghosh1}. This indicates that any angle-preserving operation has some restriction on its allowable input set of states. The unitary nature of all physical evolution \cite{krause} raises the question of whether the non-physical nature of anti-unitary operations is a natural constraint on the system or not. In other words, it is worthwhile to show how an impossible operation such as an anti-unitary one interacts with the physical systems concerned. The first part of this paper concerns a connection between general anti-unitary operations and the evolution of a joint system through local operations together with classical communication, in short LOCC. Some constraints on the system are always imposed by the condition that the system evolves under LOCC. For example, performing any kind of LOCC on a joint system shared between distinct parties cannot increase the amount of entanglement between spatially separated subsystems. If we further assume that the concerned system is pure bipartite, then Nielsen's criterion \cite{nielsen,nielsen1} makes it possible to determine whether one pure bipartite state can be transformed to another with certainty by LOCC or not. Consequently we find that there are pairs of pure bipartite states, known as incomparable states, which are not interconvertible under LOCC with certainty. The existence of such a class of states proves that the amount of entanglement does not always determine the possibility of exact transformation of a joint system by LOCC. We now pose the problem that will be discussed in this paper. Suppose $\rho_{ABCD\ldots}$ is a state shared between distinct parties situated at distant locations. They are allowed to perform local operations on their subsystems and may communicate any amount of classical information among themselves. But they do not know whether their local operations are valid physical operations or not. By a valid physical operation we mean a completely positive map (trace-preserving or not) acting on the physical system. Sometimes an operation is confusing in the sense that it works as a valid physical operation for a certain class of states but not on the whole state space. Therefore the parties want to judge their local operations using the quantum formalism, or using other physical principles, possibly together with the quantum formalism. No-signalling and non-increase of entanglement under LOCC are some good detectors of nonphysical operations \cite{signal,signal1,delsignal,delsignal1,flip,flip1}. In this paper we want to establish another good detector for a large number of nonphysical operations.
The existence of incomparable states enables us to find such a detector. Suppose $L_A\otimes L_B \otimes L_C\otimes L_D\otimes \cdots$ is an operation acting on the physical system represented by $\rho_{ABCD\ldots}$, and $\rho^{\prime}_{ABCD\ldots}$ is the transformed state. If it is known that the states $\rho_{ABCD\ldots}$ and $\rho^{\prime}_{ABCD\ldots}$ are incomparable under any deterministic LOCC, then we can certainly say that at least one of the operations $L_A, L_B, L_C, L_D, \cdots$ is nonphysical. Therefore, if we somehow find two incomparable states such that an operation acting on one party (or on a number of parties) transforms one state into the other, then we can certainly claim that the operation is a nonphysical one. We find several classes of nonphysical operations through this procedure, and this is our main motivation in this work. The paper is organized as follows: in section 2 we describe what we actually mean by a physical operation and its relation with LOCC. In section 3 we describe the notion of incomparability for pure bipartite entangled states. In section 4 we show the nonphysical nature of the most general class of universal exact anti-unitary operators through the impossibility of inter-converting two incomparable states by deterministic LOCC. Lastly, in section 5 we show that a large class of inner-product preserving operations, including the Hadamard operation, are also nonphysical in nature. As a subcase of the above operations we reproduce the nonexistence of an exact universal flipping machine \cite{incomfi}. In all the above cases we have tried to use a minimum number of qubits (only the three spin directions along $x, y, z$) and to keep the quantum system considered as simple as possible. Also, the states considered here to prove the impossibilities are pure entangled states.
\section{Physical Operations and LOCC}
In this section we first describe the notion of a physical operation in the sense of Kraus \cite{krause}. Suppose a physical system is described by a state $\rho$. By a physical operation on $\rho$ we mean a completely positive map $\mathcal{E}$ acting on the system and described by
\begin{equation}
\mathcal{E}(\rho )= \sum_{k} A_{k} \rho A^{\dagger}_{k}
\end{equation}
where each $A_k$ is a linear operator satisfying the relation $ \sum_{k} A^{\dagger}_{k} A_{k} \leq I$. If $ \sum_{k} A^{\dagger}_{k} A_{k} = I$, then the operation is trace-preserving. When the state is shared between a number of parties, say A, B, C, D, \ldots, and each $A_k$ has the form $A_k = L^A_k \otimes L^B_k \otimes L^C_k \otimes L^D_k \otimes \cdots$ with all the $L^A_k, L^B_k, L^C_k, L^D_k, \cdots$ linear operators, the map is then called a separable superoperator. In this context we would like to mention an interesting result concerning LOCC. Every LOCC is a separable superoperator, but it is unknown whether the converse is also true. It is further affirmed that there are separable superoperators which cannot be expressed by finite LOCC \cite{bennet}. Now if a physical system evolves under LOCC (deterministic or stochastic), then quantum mechanics does not allow the system to behave arbitrarily. More precisely, under the action of any LOCC one finds some fundamental constraints on any entangled system: the content of entanglement will not increase under LOCC. This is usually known as the principle of non-increase of entanglement under LOCC.
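Before moving on, we note that the Kraus description above is easy to explore numerically. The following minimal Python sketch (the amplitude-damping channel is our own illustrative choice and plays no role in the argument) checks the completeness relation of Eq.~(1) and the preservation of trace:
\begin{verbatim}
import numpy as np

# Kraus operators of the amplitude-damping channel (an illustrative
# choice); gamma is the decay probability.
gamma = 0.3
A0 = np.array([[1, 0], [0, np.sqrt(1 - gamma)]])
A1 = np.array([[0, np.sqrt(gamma)], [0, 0]])
kraus = [A0, A1]

# Completeness: sum_k A_k^dagger A_k equals the identity exactly
# when the operation is trace-preserving.
assert np.allclose(sum(A.conj().T @ A for A in kraus), np.eye(2))

# Apply E(rho) = sum_k A_k rho A_k^dagger and confirm the trace.
rho = np.array([[0.5, 0.5], [0.5, 0.5]])   # the state |0_x><0_x|
rho_out = sum(A @ rho @ A.conj().T for A in kraus)
print(np.trace(rho_out).real)              # 1.0
\end{verbatim}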
Further, for any closed system, since unitarity is the only possible evolution, the constraint is then that the entanglement content will not change under LOCC. So if we find some violation of these principles under the action of a local operation, then we can certainly claim that the operation is not a physical one. The no-cloning, no-deleting and no-flipping theorems have all been established with these principles, basically with the principle of non-increase of entanglement \cite{flip,flip1}. These kinds of proofs of those important no-go theorems give us more powerful, physically intuitive approaches to quantum information processing, apart from the mathematical proofs that the dynamics should be linear as well as unitary. Linearity and unitarity are the building blocks of every physical operation \cite{krause,gisin1}. But within the quantum formalism we always search for physical settings that are more useful and intuitive for quantum information processing. The existence of incomparable states in pure bipartite entangled systems allows us to use them as a new detector. We have already proved three impossibilities, viz., those of exact universal cloning, deleting and flipping operations, through the existence of incomparable states under LOCC \cite{incomfi,incomfi1}, and we provide some further classes of nonphysical operations in this paper.
\section{Notion of Incomparability}
To present our work we need to define the condition for a pair of states to be incomparable with each other. The notion of incomparability of a pair of bipartite pure states directly follows from the necessary and sufficient condition for conversion of a pure bipartite entangled state to another by deterministic LOCC, i.e., with probability one, prescribed by M. A. Nielsen \cite{nielsen,nielsen1}. Suppose we want to convert the pure bipartite state $|\Psi\rangle$ of a $d\times d$ system to another state $|\Phi\rangle$ shared between two parties, say Alice and Bob, by deterministic LOCC. Consider $|\Psi\rangle$, $|\Phi\rangle$ in their Schmidt bases $\{|i_A\rangle ,|i_B\rangle \}$ with decreasing order of Schmidt coefficients: $|\Psi\rangle= \sum_{i=1}^{d} \sqrt{\alpha_{i}} |i_A i_B\rangle$, $|\Phi\rangle= \sum_{i=1}^{d} \sqrt{\beta_{i}} |i_A i_B\rangle,$ where $\alpha_{i}\geq \alpha_{i+1}\geq 0$ and $\beta_{i}\geq \beta_{i+1}\geq0,$ for $i=1,2,\cdots,d-1,$ and $\sum_{i=1}^{d} \alpha_{i} = 1 = \sum_{i=1}^{d} \beta_{i}$. The Schmidt vectors corresponding to the states $|\Psi\rangle$ and $|\Phi\rangle$ are $\lambda_\Psi\equiv(\alpha_1,\alpha_2,\cdots,\alpha_d),$ $\lambda_\Phi\equiv(\beta_1,\beta_2,\cdots,\beta_d)$. Then Nielsen's criterion says that $|\Psi\rangle\rightarrow| \Phi\rangle$ is possible with certainty under LOCC if and only if $\lambda_\Psi$ is majorized by $\lambda_\Phi,$ denoted by $\lambda_\Psi\prec\lambda_\Phi$ and described as
\begin{equation}
\begin{array}{lcl}\sum_{i=1}^{k}\alpha_{i}\leq \sum_{i=1}^{k}\beta_{i}~ ~\forall~ ~k=1,2,\cdots,d
\end{array}
\end{equation}
It is interesting to note that although the majorization criterion \cite{maj} is an algebraic tool, it shows great applicability in different contexts of quantum information processing \cite{majappl,majappl1,majappl2,majappl3}.
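Nielsen's criterion (2) is also easy to test by machine. The short Python sketch below (the function name and the sample vectors are our own, for illustration only) checks whether one Schmidt vector is majorized by another, i.e. whether the corresponding deterministic LOCC conversion is possible:
\begin{verbatim}
import numpy as np

def majorized(alpha, beta, tol=1e-12):
    """True when the Schmidt vector alpha is majorized by beta,
    i.e. |Psi> -> |Phi> is possible by deterministic LOCC."""
    a = np.sort(np.asarray(alpha))[::-1]   # decreasing order
    b = np.sort(np.asarray(beta))[::-1]
    return bool(np.all(np.cumsum(a) <= np.cumsum(b) + tol))

# A maximally entangled two-qutrit state can be converted to any
# other 3 x 3 pure state, but not conversely.
print(majorized([1/3, 1/3, 1/3], [0.5, 0.3, 0.2]))   # True
print(majorized([0.5, 0.3, 0.2], [1/3, 1/3, 1/3]))   # False
\end{verbatim}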
Now, as a consequence of the non-increase of entanglement by LOCC, if $|\Psi\rangle\rightarrow |\Phi\rangle$ is possible under LOCC with certainty, then $E(|\Psi\rangle)\geq E(|\Phi\rangle)$, where $E(\cdot)$ denotes the von Neumann entropy of the reduced density operator of either subsystem, known as the entropy of entanglement. If the above criterion (2) does not hold, then this is usually denoted by $|\Psi\rangle\not\rightarrow |\Phi\rangle$, though it may still happen that $|\Phi\rangle\rightarrow |\Psi\rangle$ under LOCC. If both $|\Psi\rangle\not\rightarrow |\Phi\rangle$ and $|\Phi\rangle\not\rightarrow |\Psi\rangle$, then we write $|\Psi\rangle\not\leftrightarrow |\Phi\rangle$ and describe $(|\Psi\rangle, |\Phi\rangle)$ as a pair of incomparable states \cite{nielsen,incomp}. One of the peculiar features of such incomparable pairs is that we are unable to say which state has a greater amount of entanglement than the other. Also, for $2\times 2$ systems there is no pair of pure entangled states which are incomparable to each other. For our purpose, we now explicitly mention the criterion of incomparability for a pair of pure entangled states $|\Psi\rangle, |\Phi\rangle$ of an $m\times n$ system where $\min \{ m,n \}=3$. Suppose the Schmidt vectors corresponding to the two states are $(a_1, a_2, a_3)$ and $(b_1, b_2, b_3)$ respectively, where $a_1> a_2> a_3~,~b_1> b_2> b_3~,~a_1+ a_2+ a_3=1=b_1+ b_2+ b_3$. In this case the condition for the pair of states $|\Psi\rangle, |\Phi\rangle$ to be incomparable to each other can be written in the simplified form\\ \begin{equation} \begin{array}{lcl}\verb"either," ~ ~ ~ ~a_1 > b_1 ~ ~\verb"and" ~ ~a_3 > b_3\\ \verb"or," ~ ~ ~ ~ ~ ~ ~ ~ ~ ~a_1 < b_1 ~ ~\verb"and" ~ ~a_3 < b_3 \end{array} \end{equation} where in each alternative the two inequalities must hold simultaneously.
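\par The incomparability condition can be tested directly; for $d=3$ this reproduces criterion (3). The sketch below (ours, assuming \texttt{numpy}) uses the Schmidt vectors that will appear in the next section:
\begin{verbatim}
import numpy as np

def majorized(a, b):
    # as in the previous sketch
    return bool(np.all(np.cumsum(sorted(a, reverse=True))
                       <= np.cumsum(sorted(b, reverse=True)) + 1e-12))

def incomparable(a, b):
    """Neither Schmidt vector majorizes the other: the pair of states
    is incomparable under deterministic LOCC."""
    return not majorized(a, b) and not majorized(b, a)

s = 1/(2*np.sqrt(3))
a = [2/3, 1/6, 1/6]            # initial Schmidt vector of the next section
b = [1/3 + s, 1/3, 1/3 - s]    # final Schmidt vector of the next section
print(incomparable(a, b))      # True: here a1 > b1 and a3 > b3
\end{verbatim}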
\section{Incomparability as a Detector for Anti-Unitary Operators} The general class of anti-unitary operations can be defined in the form $\Gamma= CU$, where $C$ is the conjugation operation and $U$ is the most general unitary operation on a qubit, of the form $$U=\left(% \begin{array}{cc} \cos\theta & e^{i\alpha}\sin\theta \\ -e^{i\beta}\sin\theta & e^{i(\alpha+\beta)}\cos\theta \\ \end{array}% \right)$$ Let us consider the three qubit states with spin directions along $x,y,z$, $$|0_x \rangle=\frac{|0\rangle+|1\rangle}{\sqrt{2}}~,~|0_y \rangle=\frac{|0\rangle+i|1\rangle}{\sqrt{2}}~,~|0_z\rangle=|0\rangle$$ The action of the operator $\Gamma$ on these three states can be described as \begin{quote} \begin{equation} \begin{array}{lcl} \Gamma|0_x\rangle= (\frac{\cos \theta+ e^{-i \alpha}\sin \theta}{\sqrt{2}})|0\rangle +e^{-i \beta}(\frac{e^{-i \alpha} \cos\theta-\sin\theta}{\sqrt{2}})|1\rangle , \\ \Gamma|0_y\rangle= (\frac{\cos \theta-i e^{-i \alpha}\sin \theta}{\sqrt{2}})|0\rangle -e^{-i \beta}(\frac{ie^{-i \alpha} \cos\theta+\sin\theta}{\sqrt{2}})|1\rangle, \\ \Gamma |0_z \rangle= \cos \theta |0\rangle -e^{-i \beta} \sin \theta |1\rangle \end{array} \end{equation} \end{quote} To prove that this operation $\Gamma$ is nonphysical and that its existence leads to an impossibility, we choose a particular pure bipartite state $|\chi^i \rangle_{AB}$ shared between two spatially separated parties Alice and Bob in the form \begin{equation} \begin{array}{lcl}|\chi^i\rangle_{AB} & = &\frac{1}{\sqrt{3}} \{|0\rangle_A|0_z \rangle_B | 0_z \rangle_B + |1 \rangle_A | 0_x \rangle_B | 0_y \rangle_B\\ & &~ ~+ |2 \rangle_A |0_y \rangle_B |0_x \rangle_B\} \end{array} \end{equation} The impossibility we want to exhibit here is that by the local action of $\Gamma$ we are able to convert one state of an incomparable pair into the other deterministically. Now, to show incomparability between a pair of pure bipartite states, the minimum Schmidt rank we require is three. So the joint state considered above is a $3\times 4$ state where Alice has a qutrit and Bob has two qubits. The initial reduced density matrix on Alice's side is then \begin{equation} \begin{array}{lcl} \rho_A^i &=&\frac{1}{3}~ \{P[|0\rangle]+P[|1\rangle]+P[|2\rangle]+ \frac{1}{2}(|0\rangle\langle1|+|1\rangle\langle0|\\ & & ~ ~ +|0\rangle\langle2| +|2\rangle\langle0|+|1\rangle\langle2| +|2\rangle\langle1|) \} \end{array} \end{equation} The Schmidt vector corresponding to the initial state $|\chi^i \rangle_{AB}$ is $(\frac{2}{3}, \frac{1}{6}, \frac{1}{6} )$. Assuming that Bob operates $\Gamma$ on one of his two qubits, say the last one in his subsystem, the joint state shared between Alice and Bob will transform to \begin{equation} \begin{array}{lcl} |\chi^f \rangle_{AB}&=& \frac{1}{\sqrt{3}} \{|0\rangle_A |0_z \rangle_B \Gamma(| 0_z \rangle_B) + |1 \rangle_A | 0_x \rangle_B \Gamma(| 0_y \rangle_B)\\ & & ~ ~+ |2 \rangle_A |0_y \rangle_B \Gamma(|0_x \rangle_B)\} \end{array} \end{equation} Tracing out Bob's subsystem, we again consider the reduced density matrix on Alice's side. The final reduced density matrix is \begin{equation} \begin{array}{lcl} \rho_A^f &=&\frac{1}{3}~ \{P[|0\rangle]+P[|1\rangle]+P[|2\rangle]+ \frac{1}{2}(|0\rangle\langle1|+|1\rangle\langle0|\\ & & ~ ~+|0\rangle\langle2| +|2\rangle\langle0|-i|1\rangle\langle2| +i|2\rangle\langle1|) \} \end{array} \end{equation} The Schmidt vector corresponding to the final state $|\chi^f \rangle_{AB}$ is $(\frac{1}{3}+\frac{1}{2\sqrt{3}}~, \frac{1}{3}~,$ $ \frac{1}{3}-\frac{1}{2\sqrt{3}})$.
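\par The above computation is easy to verify numerically. The following sketch (ours, assuming \texttt{numpy}; the ordering Alice $\otimes$ Bob of the tensor factors is our convention) builds the state (5), applies $\Gamma=CU$ anti-linearly to Bob's last qubit for arbitrary parameters $\theta,\alpha,\beta$, and recovers the final Schmidt vector:
\begin{verbatim}
import numpy as np

theta, al, be = 0.7, 1.1, 0.4   # arbitrary test parameters of U
U = np.array([[np.cos(theta),                np.exp(1j*al)*np.sin(theta)],
              [-np.exp(1j*be)*np.sin(theta), np.exp(1j*(al+be))*np.cos(theta)]])
x = np.array([1, 1])/np.sqrt(2)      # |0_x>
y = np.array([1, 1j])/np.sqrt(2)     # |0_y>
z = np.array([1.0, 0])               # |0_z>

# State (5) with Gamma = C U applied anti-linearly to Bob's last qubit:
psi = np.zeros(12, dtype=complex)
for k, (b1, b2) in enumerate([(z, z), (x, y), (y, x)]):
    e = np.zeros(3); e[k] = 1
    psi += np.kron(e, np.kron(b1, np.conj(U @ b2)))
psi /= np.sqrt(3)

M = psi.reshape(3, 4)                # rows: Alice's qutrit; columns: Bob
rhoA = M @ M.conj().T                # trace out Bob's two qubits
print(np.sort(np.linalg.eigvalsh(rhoA))[::-1])
# [0.6220..., 0.3333..., 0.0446...] for any theta, alpha, beta
\end{verbatim}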
Interestingly, the Schmidt vector of the final state does not contain the arbitrary parameters of the anti-unitary operator $\Gamma$. It is now easy to check that the final and initial Schmidt vectors are incomparable, as $\frac{2}{3}~> \frac{1}{3}+\frac{1}{2\sqrt{3}}~> \frac{1}{3}~> \frac{1}{6}~> \frac{1}{3}-\frac{1}{2\sqrt{3}}$. Thus we have $|\chi^i \rangle \not \leftrightarrow |\chi^f \rangle$, so that the transformation of the pure bipartite state $|\chi^i \rangle$ to $|\chi^f \rangle$ by LOCC with certainty is not possible by Nielsen's criterion. Yet by applying the anti-unitary operator $\Gamma$ on Bob's local system, the transformation $|\chi^i \rangle \rightarrow |\chi^f \rangle$ is performed exactly. This impossibility emerges from the impossible operation $\Gamma$, which we assumed to exist and applied to generate the impossible transformation. Thus we have observed the nonphysical nature of any anti-unitary operator $\Gamma$ through our detection process. As a particular case one may verify the non-existence of an exact universal flipper by our method (choose $\theta = \pi/2, \alpha = 0, \beta = 0$). If instead of operating $\Gamma= CU$ we operate only $U,$ \emph{i.e.}, the general unitary operator, the initial and final density matrices on Alice's side turn out to be identical, so that there is not even a violation of the no-signalling principle. This is as it should be, since we only apply the unitary operator to a single qubit, without imposing any particular restrictions (such as requiring it to act isotropically on all the qubits); thus a local unitary cannot even be used to send a signal here.\\ \section{Inner Product Preserving Operations} In this section we establish the nonphysical nature of some inner-product preserving operations defined only on the minimal set of states $|0_x \rangle, |0_y \rangle, |0_z \rangle$. Here we consider the existence of an operation defined on these three states in the following manner: \begin{quote} \begin{equation} \begin{array}{lcl} |0_z \rangle \longrightarrow (\alpha |0_z\rangle + \beta |1_z\rangle),\\ |0_x\rangle \longrightarrow (\alpha |0_x\rangle + \beta |1_x\rangle), \\ |0_y\rangle \longrightarrow (\alpha |0_y\rangle + \beta |1_y\rangle), \end{array} \end{equation} \end{quote} where $|\alpha|^2 + |\beta|^2 =1.$ This operation transforms each input state exactly into a fixed superposition of the input state and its orthogonal complement. To verify the possibility or impossibility of the existence of this operation, we consider a pure bipartite state shared between Alice and Bob: \begin{equation} \begin{array}{lcl}|\Pi^i\rangle_{AB}& = & \frac{1}{\sqrt{3}} \{|0\rangle_A (|0_z \rangle | 0_z \rangle)_B + |1 \rangle_A (| 0_x \rangle | 0_x \rangle)_B \\ & &~ ~+|2 \rangle_A (|0_y \rangle |0_y \rangle)_B\} \end{array} \end{equation} The reduced density matrix on Alice's side is of the form \begin{equation} \begin{array}{lcl} \rho_A^i &=&\frac{1}{3}~ \{P[|0\rangle]+P[|1\rangle]+P[|2\rangle]+ \frac{1}{2}(|0\rangle\langle1|+|1\rangle\langle0|\\ & & ~ ~+|0\rangle\langle2| +|2\rangle\langle0|-i|1\rangle\langle2| +i|2\rangle\langle1|) \} \end{array} \end{equation} The Schmidt vector corresponding to the initial joint state $|\Pi^i \rangle_{AB}$ is $(\frac{1}{3}+\frac{1}{2\sqrt{3}}~, \frac{1}{3}~, \frac{1}{3}-\frac{1}{2\sqrt{3}})$. Suppose Bob has a machine which operates on the three input states $|0_x \rangle, |0_y \rangle, |0_z \rangle$ as defined in equation (9), and that he operates this machine on his local system (say, on the last qubit).
Then the joint state between Alice and Bob will evolve as \begin{equation} \begin{array}{lcl}|\Pi^f \rangle_{AB}&=& \frac{1}{\sqrt{3}} \{|0\rangle_A |0_z \rangle_B (\alpha |0_z\rangle + \beta |1_z\rangle)_B + |1 \rangle_A | 0_x \rangle_B \\& & (\alpha | 0_x \rangle + \beta |1_x\rangle)_B + |2 \rangle_A |0_y \rangle_B (\alpha|0_y \rangle+ \beta |1_y\rangle)_B\} \end{array} \end{equation} The final reduced density matrix on Alice's side is of the form \begin{equation} \begin{array}{lcl} \rho_A^f &=&\frac{1}{3}~ \{P[|0\rangle]+P[|1\rangle]+P[|2\rangle]+ p(|0\rangle\langle1|+|1\rangle\langle0|)\\ & & ~ ~+q|0\rangle\langle2| +\overline{q}|2\rangle\langle0|+r|1\rangle\langle2| +\overline{r}|2\rangle\langle1| \} \end{array} \end{equation} where $p=\frac{1}{2}~\{|\alpha|^2-|\beta|^2 +\alpha~\overline{\beta}+\beta~\overline{\alpha}\}$, ~$q=\frac{1}{2}~\{|\alpha|^2+~i|\beta|^2 +\alpha~\overline{\beta}-i\beta~\overline{\alpha}\}$ and $r=\frac{1}{2}~\{\alpha~\overline{\beta}+\beta~\overline{\alpha}-i\}$. The eigenvalue equation turns out to be $$x^3-(p\overline{p} + q\overline{q} + r\overline{r})x+pr\overline{q}+\overline{pr}q=0,$$ where we denote $1-3\lambda=x.$ To compare the initial and final states we have to check whether the initial and final eigenvalues satisfy either of the relations of equation (3). We rewrite the above eigenvalue equation as \begin{equation} x^3-3Ax+B=0 \end{equation} with $A=\frac{1}{3}(p\overline{p} + q\overline{q} + r\overline{r})\geq0$ and $B=pr\overline{q}+\overline{pr}q$. The eigenvalues can then be written as $\{\lambda_1\equiv\frac{1}{3}[1-2\sqrt{A}\cos(\frac{2\pi}{3}+\theta)], ~\lambda_2\equiv\frac{1}{3}[1-2\sqrt{A}\cos\theta],~ \lambda_3\equiv\frac{1}{3}[1-2\sqrt{A}\cos(\frac{2\pi}{3}-\theta)]\}$ where $\cos 3\theta=~\frac{-B}{2\sqrt{A^3}}$. We discuss the matter case by case (for details, see Appendix A). \textbf{Case-1} : For $\emph{B}<0$, we see incomparability between the initial and final joint states if $A= \frac{1}{4}$. In case $A< \frac{1}{4}$ we observe that either there is incomparability between the initial and final states or the entanglement content of the final state is larger than that of the initial state. Lastly, if $A> \frac{1}{4}$ we also see a case of incomparability if the condition $2\sqrt{A}\cos(\frac{2\pi}{3}+\theta)~>~-\frac{\sqrt{3}}{2}$ holds. Numerical searches support that for real values of $(\alpha,~\beta)$ incomparability occurs almost everywhere in this region. \textbf{Case-2} : For $\emph{B}=0$, no case of incomparability arises. It is also seen that there is always an increase of entanglement by LOCC if $A< \frac{1}{4}$, which is the only possibility for real values of $\alpha,\beta$. \textbf{Case-3} : For $\emph{B}>0$ we get a result similar to Case-1. Only the condition for incomparability in the case $A> \frac{1}{4}$ is changed, to the form $2\sqrt{A}\cos\varphi < \frac{\sqrt{3}}{2}$ where $\varphi=\min\{\theta,~(\frac{2\pi}{3}-\theta)\}\in(\frac{\pi}{6},~\frac{\pi}{3}).$ It must be noted that for real values of $\alpha,\beta$ this subcase does not arise at all. In particular, if we choose the values of $\alpha, \beta$ such that they represent the flipping operation (i.e., $\alpha = 0$) and the Hadamard operation (i.e., $\alpha = \beta = \frac{1}{\sqrt{2}}$) respectively, we find from the above that in both cases the initial and final states are incomparable.
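\par The case analysis above can also be checked numerically. The sketch below (ours, assuming \texttt{numpy}) computes $p,q,r$ of equation (13) and the eigenvalues of $\rho_A^f$ directly; for the Hadamard choice $\alpha=\beta=1/\sqrt 2$ one finds $a_1<b_1$ and $a_3<b_3$, so the initial and final states are indeed incomparable by criterion (3):
\begin{verbatim}
import numpy as np

al, be = 1/np.sqrt(2), 1/np.sqrt(2)   # Hadamard case; try other values too
p = (abs(al)**2 - abs(be)**2 + al*np.conj(be) + be*np.conj(al))/2
q = (abs(al)**2 + 1j*abs(be)**2 + al*np.conj(be) - 1j*be*np.conj(al))/2
r = (al*np.conj(be) + be*np.conj(al) - 1j)/2

# rho_A^f = (1/3)(I + K) with K as in equation (13); note p is real.
K = np.array([[0, p, q],
              [np.conj(p), 0, r],
              [np.conj(q), np.conj(r), 0]])
final = np.sort(np.linalg.eigvalsh((np.eye(3) + K)/3))[::-1]
s = 1/(2*np.sqrt(3))
initial = np.array([1/3 + s, 1/3, 1/3 - s])
print(initial)   # [0.6220..., 0.3333..., 0.0446...]
print(final)     # [0.7023..., 0.2435..., 0.0541...]: a1 < b1 and a3 < b3
\end{verbatim}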
Thus in almost all cases we obtain some violation of physical laws, implying that this kind of inner-product preserving operation, defined on only three states, is nonphysical in nature; and for a large class of such inner-product preserving operations it is incomparability that detects the impossibility. To conclude, this work establishes a close relation between anti-unitary operators and the existence of incomparable states. Incomparability is also able to detect nonphysical operations such as the (universal) Hadamard operation and some other inner-product preserving operations. This work also shows an interplay between LOCC, nonphysical operations and the entanglement behavior of quantum systems. {\bf Acknowledgements:} We would like to thank the referee for valuable suggestions and useful comments. The authors are grateful to Dr. A. K. Pati and Dr. P. Agarwal for useful discussions regarding this work. I.C. also acknowledges CSIR, India for providing a fellowship during this work.
\section{Introduction} \label{sec:intro} \subsection{Overview} Let $f:X\to S$ be a smooth, proper morphism between complex algebraic varieties. Then, by the work of Griffiths~\cite{periods}, the associated local system $\mathcal H_{\mathbb Q} = R^k f_*\mathbb Q_X$ underlies a variation of pure Hodge structure of weight $k$, which can be described by a \emph{period map} \begin{equation} \varphi:S\to\Gamma\backslash D, \label{eqn:gr-1} \end{equation} where $\Gamma$ is the \emph{monodromy group} of the family. In the case where the morphism $X\to S$ is no longer smooth and proper, the resulting local system underlies a variation of (graded-polarized) mixed Hodge structure over a Zariski open subset of $S$~\cite{sz}. As in the pure case considered by Griffiths, a variation of mixed Hodge structure can be described in terms of a period map which is formally analogous to \eqref{eqn:gr-1}, except that $D$ is now a classifying space of graded-polarized mixed Hodge structures~\cite{higgs,usui}. \par As we shall explain below (see \eqref{eqn:grhmetric}), there is a natural metric on such $D$, the \emph{Hodge metric}. Deligne's second order calculations involving this metric in the pure case \cite{deligne} can be extended to the mixed setting, as we show in this article. For instance, we find criteria as to when the induced Hodge metric on $S$ is K\"ahler. We also compute the curvature tensor of this metric, with special emphasis on cases of interest in the study of algebraic cycles, archimedean heights and iterated integrals. The alternative approach \cite[Chap. 12]{periodbook} in the pure case based on the Maurer-Cartan form does not seem to generalize, as we encounter incompatibilities between the metric and the complex structure, as demonstrated in \S~\ref{sec:reductive}. \subsection{The Pure Case} Returning to the pure case, we recall that $D$ parametrizes Hodge structures on a reference fiber $H_{\mathbb Q}$ of $\mathcal H_{\mathbb Q}$ with given Hodge numbers $\{h^{p,q}\}$ which are polarized by a non-degenerate bilinear form $Q$ of parity $(-1)^k$. The monodromy group $\Gamma$ is contained in the real Lie group $G_{\mathbb{R}}\subset GL(H_{\mathbb{R}})$ of automorphisms of the polarization $Q$. \par In terms of differential geometry, the first key fact is that $G_{\mathbb{R}}$ acts transitively on $D$ with compact isotropy, and hence $D$ carries a $G_{\mathbb{R}}$-invariant metric. There is a canonical such metric induced by the polarizing form $Q$ as follows. Set \begin{equation} \label{eqn:hmetric} h_F(x,y) := Q(C_Fx,\bar y), \quad x,y\in H_{\mathbb{C}}, \end{equation} where $C_F|H^{p,q}= {\rm i}^{p-q}$ is the Weil operator. That this is indeed a metric follows from the two Riemann bilinear relations: the first, $ Q(F^p,F^{k-p+1}) = 0, $ states that the Hodge decomposition is $h_F$-orthogonal, and the second states that $h_F$ is a metric on each Hodge component. Next, by describing the Hodge structures parametrized by $D$ in terms of the corresponding flags $$ F^p H_{{\mathbb{C}}} = \bigoplus_{a\geq p}\, H^{a,k-a} $$ we obtain an open embedding of $D$ into the flag manifold $\check{D}$ consisting of decreasing filtrations $F^* H_{\mathbb{C}}$ such that $\dim F^p = \sum_{a\geq p}\, h^{a,k-a}$ which satisfy only the first Riemann bilinear relation. In particular, via this embedding, the set $D$ inherits the structure of a complex manifold upon which the group $G_{\mathbb{R}}$ acts via biholomorphisms.
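\par As a concrete illustration of \eqref{eqn:hmetric} (ours, not taken from the text), consider the weight one Hodge structure of an elliptic curve with period $\tau$ in the upper half plane. The positivity supplied by the second bilinear relation can be checked numerically (the sketch assumes the \texttt{numpy} library):
\begin{verbatim}
import numpy as np

tau = 0.3 + 1.5j                   # any point of the upper half-plane
Q = np.array([[0, 1], [-1, 0]])    # the polarization, Q(x, y) = x^t Q y
omega = np.array([1, tau])         # spans F^1 = H^{1,0}

# h_F(x, y) = Q(C_F x, conj(y)) with C_F = i on H^{1,0}, so that
# h(omega, omega) = i * Q(omega, conj(omega)) = 2 Im(tau) > 0.
h = 1j * (omega @ Q @ omega.conj())
print(h.real)                      # 3.0 = 2 Im(tau)
\end{verbatim}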
\par As a flag manifold, the tangent space at $F$ to $\check{D}$ can be identified with a subspace of \begin{equation*} \bigoplus_p\, \text{Hom}(F^p,H_{\mathbb{C}}/F^p). \end{equation*} Via this identification, we say that a tangent vector is \emph{(strictly) horizontal} if it is contained in the subspace $$ \bigoplus_p\, \text{Hom}(F^p, F^{p-1}/F^p). $$ One of the basic results of~\cite{periods} is that the period map associated to a smooth proper morphism $X\to S$ as above is holomorphic, horizontal and locally liftable. \par By \cite[Theorem 9.1]{curv}, the holomorphic sectional curvature of $D$ along horizontal tangents is negative and bounded away from zero. In particular, as a consequence of this curvature estimate, if $S\subset\bar S$ is a smooth normal crossing compactification with unipotent monodromy near $p\in\bar S -S$, then by~\cite{schmid} the period map $\varphi$ has at worst logarithmic singularities near $p$. \subsection{Mixed Domains} In the mixed case, period maps of geometric origin are holomorphic and satisfy the analogous horizontality condition (\cite{usui,sz}). However, although there is a \emph{natural Lie group $G$} (see \S~\ref{ssec:homstr}) which acts transitively on the classifying spaces of graded-polarized mixed Hodge structure, the isotropy group is no longer compact, and hence there is no $G$-invariant hermitian structure. In spite of this, A. Kaplan observed in~\cite{kaplan} that one can construct a natural hermitian metric on $D$ in the mixed case which is invariant under a pair of subgroups $G_{\mathbb{R}}$ and $\exp(\Lambda)$ of $G$ which, taken together, act transitively on $D$. The subgroup $\exp(\Lambda)$ (see \S~\ref{ssec:tang}) depends upon a choice of base point in $D$ and intersects the group $G_{\mathbb{R}}$ non-trivially. Nonetheless, as we said before, by emulating the computations of Deligne in~\cite{deligne}, we are able to compute the curvature tensor of $D$ in the mixed case (cf. \S \ref{sec:curv}). \par Let us elaborate on this by defining the natural metric. A mixed Hodge structure $(F,W)$ on $V$ induces a unique functorial bigrading~\cite{tdh}, the \emph{Deligne splitting} \begin{equation} V_{\mathbb{C}} = \bigoplus_{p,q}\, I^{p,q} \label{eqn:DelSplit} \end{equation} such that $F^p = \bigoplus_{a\geq p}\, I^{a,b}$, $W_k = \bigoplus_{a+b\leq k}\, I^{a,b}$ and \[ \bar I^{p,q} = I^{q,p} \mod \bigoplus_{a<q,b<p}\, I^{a,b}. \] Moreover, the summand $I^{p,q}$ maps isomorphically onto the subspace $H^{p,q}$ of $\gr^W_{p+q}$, and hence if $(F,W)$ is graded-polarized we can pull back the graded Hodge inner product \eqref{eqn:hmetric} on $\gr^W$ to a hermitian form on $V_{\mathbb{C}}$: \begin{equation} \label{eqn:grhmetric} h_{(F,W)}(x,y) = h_{F\gr^W}( \gr^W x, \gr^W y). \end{equation} This is the \emph{Hodge metric} alluded to previously. By functoriality it induces Hodge metrics on $\End(V)$ and hence also on the Lie algebra of $G$. It is these metrics that form the principal subject of investigation of this paper. \subsection{Examples} \label{ssec:exmples} To get an idea of the nature of these metrics in the mixed situation we give a few examples. \begin{enumerate} \item Consider the mixed Hodge structure on the cohomology of quasi-projective curves. So, let $X $ be a compact Riemann surface of genus $g$ and $S$ a finite set of points on $X$.
Then, $F^1 H^1(X \setminus S,{\mathbb{C}})$ consists of the holomorphic 1-forms $\Omega$ on $X\setminus S$ with at worst simple poles along $S$, and the mixed Hodge metric is given by \begin{equation} ||\Omega||^2 = 4\pi^2\sum_{p\in S}\, |\text{Res}_p(\Omega)|^2 + \sum_{j=1}^g\, \left|\int_X\, \Omega\wedge\bar\varphi_j\right|^2, \label{eqn:rs-mhm} \end{equation} where $\{\varphi_j\}$ is a unitary frame for $H^{1,0}(X)$ with respect to the standard Hodge metric on $H^1(X,{\mathbb{C}})$. \par To verify this, we recall that in terms of Green's functions the subspace $I^{1,1}$ can be described as follows: if $H$ is the space of real-valued harmonic functions on $X\setminus S$ with at worst logarithmic singularities near the points of $S$, then \begin{equation} I^{1,1}\cap H^1(X\setminus S,{\mathbb{R}}) = \{ \sqrt{-1}\partial f \mid f\in H \}. \label{eqn:i11} \end{equation} Indeed, the elements of $I^{1,1}$ are meromorphic 1-forms with simple poles along $S$. The elements $\sqrt{-1}\partial f$ are also real cohomology classes, since the imaginary part is exact. A direct calculation using \eqref{eqn:i11} and Stokes' theorem shows that $I^{1,1}$ consists of the elements in $F^1$ which pair to zero against $H^{0,1}$. Therefore, the terms $\int_X\,\Omega\wedge\bar\varphi_j$ appearing in \eqref{eqn:rs-mhm} only compute the Hodge inner product for the component of $\Omega$ in $I^{1,0}$. \item Recall that the dilogarithm \cite[\S 1]{hain} is the double integral \[ \ln_2(x)=\int_0^x w_1\cdot w_2,\quad w_1= \frac{1}{2\pi {\rm i}} \cdot \frac {dz}{1-z},\quad w_2=\frac{1}{2\pi {\rm i}}\cdot \frac {dz}{z}. \] For the corresponding variation of mixed Hodge structure arising from the mixed Hodge structure on $\pi_1(\mathbb P^1-\{0,1,\infty\} ,x)$, the pullback metric is given by \begin{equation} \|\nabla_{d/dz}\|^2 = \left[\frac{1}{|z|^2} + \frac{1}{|z-1|^2} \right]. \label{eqn:dilog-ex} \end{equation} For a proof, we refer to \S~\ref{defoinAb}. \item Consider mixed Hodge structures whose Hodge numbers are $h^{0,0} = h^{-1,-1} =1$. The corresponding classifying space is isomorphic to ${\mathbb{C}}$ with the Euclidean metric. In particular, the curvature is identically zero. \item Let $(X,\omega)$ be a compact K\"ahler manifold of dimension $n$, and let $(F,W)$ denote the mixed Hodge structure on $V = \bigoplus_p\, H^p(X,\mathbb C)$ defined by setting $I^{p,q} = H^{n-p,n-q}(X)$. For any $u\in H^{1,1}(X)$ let $N(u)$ denote the linear map on $V$ defined by \begin{equation} N(u)v = u\wedge v. \label{eqn:pmhs} \end{equation} Then, $N(u)$ is of type $(-1,-1)$. If the real part of $u$ is K\"ahler then $(e^{iN(u)}\cdot F,W)$ is a {\it polarized mixed Hodge structure} which encodes information about both the complex structure of $X$ and the choice of $u\in H^{1,1}(X)$. In particular, since this mixed Hodge structure is split over $\mathbb R$, it follows that $e^{iN}\cdot F$ is a pure Hodge structure~\cite{degeneration,schmid}. \end{enumerate} \par The properties the Hodge metric enjoys in the pure case are no longer valid in the mixed situation. This is already clear from Example~3: we cannot expect $D$ to have holomorphic sectional curvature which is negative and bounded away from zero along horizontal directions. Nonetheless, period maps of variations of mixed Hodge structure of geometric origin satisfy a system of admissibility conditions which ensure that they have good asymptotic behavior. At the level of $D$-modules, this is exemplified by Saito's theory of mixed Hodge modules.
At the level of classifying spaces, one has the analogs of Schmid's nilpotent orbit theorem~\cite{dmj,nilp} and the $SL_2$-orbit theorem~\cite{knu,sl2anddeg}. \subsection{Results} \begin{enumerate} \item A mixed period domain $D$ is an open subset of a homogeneous space for a complex Lie group $G_{{\mathbb{C}}}$, and hence we can identify $T_F(D)$ with a choice \eqref{eqn:tang-alg} of complement $\germ q$ to the stabilizer of $F$ in $\Lie{G_{{\mathbb{C}}}}$. In analogy with Th\'eor\`eme $(5.16)$ of \cite{deligne}, the holomorphic sectional curvature in the direction $u\in\germ q\simeq T^{1,0}_F(D)$ is given by (cf.~Theorem \eqref{thm:hol-curv}): $$ \aligned R_\nabla (u,\bar u) = &- [(\ad{{\bar u }_+^*})_\germ q ,(\ad{{\bar u }_+})_\germ q] - \ad{[ u, \bar u]_0} \\ &- \left(\ad{([u,\bar u]_+ + [u,\bar u]_+^*)} \right)_\germ q \endaligned $$ where the subscripts ${}_{\germ q}$, ${}_0$, ${}_+$ denote projections onto various subalgebras of $\Lie{G_{{\mathbb{C}}}}$, and $*$ is the adjoint with respect to the mixed Hodge metric; the adjoint operation is understood to be preceded by the projection operator ${}_+$. \item In the pure case it is well known \cite[Prop. 7.7]{periods2} that the ``top'' Hodge bundle\footnote{In standard notation; it differs from the notation employed in \cite{periods2}.} $\FF^n$ is positive in the differential-geometric sense while the ``dual'' bundle $\FF^0/\FF^1$ is negative. In the mixed setting, the Chern form of the top Hodge bundle is non-negative, and positive wherever the $(-1,1)$-component of the derivative of the period map acts non-trivially on the top Hodge bundle. See Corollary \eqref{corr:top-hodge}. \item By~\cite{Zhiqin}, the pseudo-metric obtained by pulling back the Hodge metric along a variation of \emph{pure} Hodge structure is K\"ahler, and so it is natural to ask in which further instances the pullback of the mixed Hodge metric along a mixed period map is K\"ahler. In \S \ref{sec:kaehler}, we answer this question in terms of a system of partial differential equations; in particular we prove: \begin{*thm}[cf. Theorem~\ref{thm:kahler2}] Let $\VV$ be a variation of mixed Hodge structure with only two non-trivial weight graded-quotients $\gr^W_a$ and $\gr^W_b$ which are adjacent, i.e. $|a-b|=1$. Then, the pullback of the mixed Hodge metric along the period map of $\VV$ is pseudo-K\"ahler. \end{*thm} An example (cf. \S \ref{defoinAb}) of a variation of mixed Hodge structure of the type described at the end of the previous paragraph arises in \emph{homotopy theory} as follows: Let $X$ be a smooth complex projective variety and $J_x$ be the kernel of the natural ring homomorphism ${\mathbb{Z}}\pi_1(X,x)\to{\mathbb{Z}}$. Then, the stalks $J_x/J_x^3$ underlie a variation of mixed Hodge structure with weights 1 and 2 and constant graded Hodge structure~\cite{hain}. We show: \begin{prop*}[cf. Corollaries~\ref{corr:fund-group}, \ref{corr:kahler}] If the differential of the period map of $J_x/J_x^3$ is injective for a smooth complex projective variety $X$, then the pullback metric is K\"ahler and its holomorphic sectional curvature is non-positive. \end{prop*} Concerning the injectivity hypothesis, which is directly related to mixed Torelli theorems, we note that these hold for compact curves \cite{hain} as well as for once-punctured curves \cite{kaenders}. \item The curvature of a \emph{Hodge--Tate domain} is identically zero: \begin{prop*}[cf.~Lemma~\ref{lemma:tate-case} and Corollary~\ref{corr:kahler}] Suppose $h^{p,q}=0$ unless $p=q$.
Then the curvature of the mixed Hodge metric is identically zero, and it pulls back to a K\"ahler pseudo-metric along any period map $\varphi:S\to\Gamma\backslash D$. \end{prop*} Consequently, a necessary condition for a period map $\varphi:S\to\Gamma\backslash D$ of Hodge--Tate type to have injective differential is that $S$ support a K\"ahler metric of holomorphic sectional curvature $\leq 0$. Important examples of such variations arise in the study of mixed Tate motives and polylogarithms~\cite{3pts} and mirror symmetry~\cite{d2m}. Another example is the intersection cohomology of a polytope, which by~\cite{cat} carries a polarized mixed Hodge structure of Hodge--Tate type which is split over $\mathbb R$. \item Let $X\to \Delta^r$ be a holomorphic family of compact K\"ahler manifolds of dimension $n$ and $(F(s),W)$ the corresponding variation of mixed Hodge structure defined by setting $I^{p,q} = H^{n-p,n-q}(X_s)$ as in \eqref{eqn:pmhs}. Suppose that $\lambda_1,\dots,\lambda_k\in H^{1,1}(X_s)$ for all $s$ (e.g. a set of K\"ahler classes common to all members of the family). By taking real and imaginary parts with respect to $H^{1,1}(X_s,\mathbb R)$, we can assume that each $\lambda_j$ is real. Let $L_{\mathbb C}$ be the complex linear span of $\lambda_1,\dots,\lambda_k$ and $u:\Delta^r\to L_{\mathbb C}$ a holomorphic function. Then, \begin{equation} (e^{{\rm i} N(u(s))} \cdot F(s),W) \label{eqn:kahler-def} \end{equation} is a variation of mixed Hodge structure. The curvature of the corresponding classifying space is non-positive along directions tangent to \eqref{eqn:kahler-def}, and strictly negative wherever the period map of $F(s)$ has non-zero derivative; see Example \eqref{exmple:kahler-def}. The resulting metric is also pseudo-K\"ahler, cf. Corollary \eqref{corr:kahler}. \item Turning now to \emph{algebraic cycles}, recall that by~\cite{ANF} a normal function is equivalent to an extension in the category of variations of mixed Hodge structure\footnote{Note: We have performed a Tate twist to make $\mathcal H$ have weight $-1$ here.} \begin{equation} 0 \to\mathcal H \to \VV \to {\mathbb{Z}}(0) \to 0. \label{eqn:nf-2} \end{equation} The classical example comes from the Abel--Jacobi map for degree zero divisors on a compact Riemann surface and its natural extension \begin{equation} \text{AJ}: \text{CH}^k_{\rm hom}(Y) \to J^k(Y) \label{eqn:AJ-map} \end{equation} to homologically trivial algebraic cycles on a smooth complex projective variety $Y$~\cite{periods}. Application of this construction pointwise to a family of algebraic cycles $Z_s\subset Y_s$ yields the prototypical example of a \emph{normal function} \begin{equation} \nu:S\to J(\mathcal H) \label{eqn:nf-1} \end{equation} where $\mathcal H$ is the variation of pure Hodge structure attached to the family $Y_s$. \begin{prop*} 1. The pullback of the mixed Hodge metric along a normal function is pseudo-K\"ahler (cf. Example \ref{kaehlerexmples}).\\ 2. In the case where the underlying variation of pure Hodge structure is constant (e.g. a family of cycles on a fixed smooth projective variety $Y$), the holomorphic sectional curvature is semi-negative (Corollary~\ref{curvneg:2}).
\end{prop*} \par Using the polarization of $\mathcal H$, one can construct a natural \emph{biextension} line bundle $B\to S$ whose fibers parametrize mixed Hodge structures with graded quotients $$ \gr^W_0\cong{\mathbb{Z}}(0),\qquad \gr^W_{-1}\cong\mathcal H_s,\qquad \gr^W_{-2}\cong{\mathbb{Z}}(1) $$ and such that the extension between $\gr^W_0$ and $\gr^W_{-1}$ is determined by $\nu(s)$ and the extension between $\gr^W_{-1}$ and $\gr^W_{-2}$ by the dual of $\nu(s)$. \par As noted by Richard Hain, the biextension line bundle $B$ carries a natural hermitian metric $h$ which is based on measuring how far the mixed Hodge structure defined by $b\in B_s$ is from being split over $\mathbb R$. In~\cite{nilp}, the first author and T. Hayama prove that for $B\to\Delta^{*r}$ arising from an admissible normal function with unipotent monodromy, the resulting biextension metric is of the form \begin{equation}\label{eqn:bm} h = e^{-\varphi} \end{equation} with $\varphi\in L^1_{\rm loc}(\Delta^r)$, i.e. it defines a singular hermitian metric in the sense of~\cite{dem} and hence can be used to compute the Chern current of the extension of $\bar B$ obtained by declaring the admissible variations of mixed Hodge structure to define the extending sections (cf.~\cite{nilp,bp2}). For this situation we show (\S \ref{sec:biexts}): \begin{prop*} Let $S$ be a curve and let $\mathcal B $ be a variation of biextension type over $S$. Then the Chern form of the biextension metric \eqref{eqn:bm} is the $(1,1)$--form \begin{equation*} -\frac{1}{2\pi{\rm i}} \partial\bar\partial \log h (s) = \frac 12 [\gamma^{-1,0},\bar\gamma^{-1,0}] \,ds \wedge \overline{ds}, \end{equation*} where $\gamma^{-1,0}$ is the Hodge component of type $(-1,0)$ of $\varphi_*(d/ds)$ viewed as an element of $\mathfrak g_{\mathbb{C}}$. For self-dual variations this form is semi-negative. \end{prop*} \begin{rmq} This result was also obtained by Richard Hain (\S 13 of \cite{hain-2}) by a different method. \end{rmq} We then deduce (see Cor.~\ref{biextcoroll} for a precise statement): \begin{corr*} Let $\mathcal B$ be a self-dual biextension over $S$ with associated normal function $\nu$. Then, the Chern form of the biextension metric vanishes along every curve in the zero locus of $\nu$. \end{corr*} \par The asymptotic behavior of the biextension metric is related to the Hodge conjecture: Let $L$ be a very ample line bundle on a smooth complex projective variety $X$ of dimension $2n$ and $\bar P$ be the space of hyperplane sections of $X$. Then, over the locus of smooth hyperplane sections $P\subset\bar P$, we have a natural variation of pure Hodge structure $\mathcal H$ of weight $2n-1$. Starting from a primitive integral, non-torsion Hodge class $\zeta$ of type $(n,n)$ on $X$, we can then construct an associated normal function $\nu_{\zeta}$ by taking a lift of $\zeta$ to Deligne cohomology. The Hodge conjecture is then equivalent~\cite{gg,bfnp} to the existence of singularities of the normal function $\nu_{\zeta}$ (after passage to sufficiently ample $L$). In~\cite{bp2}, it will be shown that the existence of singularities of $\nu_{\zeta}$ is detected by the failure of the biextension metric to have a smooth extension to $\bar P$. \subsection{Structure} We start properly in \S \ref{sec:class}, where we summarize the basic properties of the classifying spaces of graded-polarized mixed Hodge structures following~\cite{higgs} and compute the dependence of the bigrading \eqref{eqn:DelSplit} on $F\in D$ up to second order.
Using these results, we then compute the curvature tensor and the holomorphic sectional curvature of $D$ in \S \ref{sec:curv}--\ref{sec:holseccurv}. \par In \S\ref{sec:hb} and \S\ref{sec:biexts} we compute the curvature of the Hodge bundles and of the biextension metrics using similar techniques. Likewise, in \S \ref{sec:kaehler} we use the computations of \S \ref{sec:holseccurv} to determine when the pullback of the mixed Hodge metric along a period map is K\"ahler. In \S\ref{defoinAb} we show how these calculations apply to particular situations of geometric interest. \par In \S \ref{sec:reductive}, we construct a classifying space $D$ which is a reductive domain whose natural complex structure is not compatible with the usual complex structure that makes the Hodge metric a hermitian equivariant metric. So the Chern connection for the Hodge metric is not the same as the one coming from the Maurer-Cartan form on $G_{\mathbb{C}}$. This makes the calculations in the mixed setting intrinsically more involved than in the pure case, even in the case of a split mixed domain. \par \medskip \noindent\textbf{Acknowledgements.} Clearly, we should first and foremost thank A. Kaplan for his ideas concerning mixed domains and their metrics. Next, we want to thank Ph. Eyssidieux, P. Griffiths, S. Grushevsky, R. Hain, C. Hertling, J.M. Landsberg and C. Robles for their interest and pertinent remarks. The cooperation resulting in this paper started during a visit of the first author to the University of Grenoble; he expresses his thanks for its hospitality. \section{Classifying Spaces} \label{sec:class} \subsection{Homogeneous Structure}\label{ssec:homstr} We begin this section by reviewing some material on classifying spaces of graded-polarized mixed Hodge structures \cite{usui} which appears in \cite{higgs,dmj,sl2anddeg}. Namely, in analogy with the pure case, given a graded-polarized mixed Hodge structure $(F,W)$ with underlying real vector space $V_{\mathbb{R}}$, the associated classifying space $D$ consists of all decreasing filtrations of $V_{{\mathbb{C}}}$ which pair with $W$ to define a graded-polarized mixed Hodge structure with the same graded Hodge numbers as $(F,W)$. The data for $D$ is therefore $$ (V_{{\mathbb{R}}},W_{\bullet},\{Q_{\bullet}\},h^{\bullet,\bullet}) $$ where $W_{\bullet}$ is the weight filtration, $\{Q_{\bullet}\}$ are the graded polarizations and $h^{\bullet,\bullet}$ are the graded Hodge numbers. \par To continue, we recall that given a point $F\in D$ the associated bigrading \eqref{eqn:DelSplit} gives a functorial isomorphism $ V_{{\mathbb{C}}}\cong \gr^W $ which sends $I^{p,q}$ to $H^{p,q}\subseteq \gr^W_{p+q}$ via the quotient map. The pullback of the standard Hodge metrics on $\gr^W$ via this isomorphism then defines a mixed Hodge metric on $V_{{\mathbb{C}}}$ which makes the bigrading \eqref{eqn:DelSplit} orthogonal and satisfies $$ h_F(u,v) = {\rm i}^{p-q}Q_{p+q}([u],[\bar v]) $$ if $u$, $v\in I^{p,q}$. By functoriality, the point $F\in D$ induces a mixed Hodge structure on $\text{End}(V)$ with bigrading \begin{equation} \text{End}(V_{\mathbb C}) = \bigoplus_{r,s}\, \text{End}(V)^{r,s} \label{eqn:induced-bigrading} \end{equation} which is orthogonal with respect to the associated metric \begin{equation} h_F(\alpha,\beta) = \mathop{\rm Tr}\nolimits(\alpha \beta^*) \label{eqn:mhm} \end{equation} where $\beta^*$ is the adjoint of $\beta$ with respect to $h$.
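\par To make the preceding definitions concrete, the following small sketch (ours; an illustration under simplifying assumptions, not a statement from the text) computes the Gram matrix of the mixed Hodge metric for the rank two Hodge--Tate situation of Example 3 of the Introduction, where $D\cong{\mathbb{C}}$: a point $z\in{\mathbb{C}}$ determines $I^{0,0}={\rm span}(e_0+ze_1)$ and $I^{-1,-1}={\rm span}(e_1)$, and we normalize the graded metrics so that the adapted basis is orthonormal.
\begin{verbatim}
import numpy as np

def gram(z):
    """Gram matrix of h_{F_z} in the fixed basis e_0, e_1 of V_C,
    where I^{0,0} = span(e_0 + z e_1) and I^{-1,-1} = span(e_1)."""
    P = np.array([[1, 0], [z, 1]], dtype=complex)  # adapted basis as columns
    Pinv = np.linalg.inv(P)
    # h is defined by declaring the adapted basis orthonormal:
    return Pinv.conj().T @ Pinv

print(gram(0))       # identity: F_0 is the split point
print(gram(1 + 2j))  # the metric on V_C genuinely depends on the point F_z
\end{verbatim}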
\par Let $\text{GL}(V_{{\mathbb{C}}})^W\subset\text{GL}(V_{{\mathbb{C}}})$ denote the Lie group of complex linear automorphisms of $V_{{\mathbb{C}}}$ which preserve the weight filtration $W$. For $g\in\text{GL}(V_{{\mathbb{C}}})^W$ we let $Gr(g)$ denote the induced linear map on $\gr^W$. Let $G_{\mathbb C}$ be the subgroup consisting of those elements $g$ for which $Gr(g)$ preserves the graded polarizations of $W$, and let $G_{{\mathbb{R}}} = G_{\mathbb C}\cap GL(V_{\mathbb R})$. \par In the pure case, $G_{{\mathbb{R}}}$ acts transitively on the classifying space and $G_{{\mathbb{C}}}$ acts transitively on the compact dual. The mixed case is slightly more intricate: let $G$ denote the subgroup of elements of $G_{{\mathbb{C}}}$ which act by real transformations on $\gr^W$. Then $$ G_{{\mathbb{R}}}\subset G \subset G_{{\mathbb{C}}} $$ and we have the following result: \begin{thm}[\protect{\cite[\S 3]{higgs}}] The classifying space $D$ is a complex manifold upon which $G$ acts transitively by biholomorphisms. \end{thm} \begin{rmq} Hertling~\cite{pmhs} defines a period domain of polarized mixed Hodge structures on a fixed real vector space $V$ equipped with a polarization $Q$ and weight filtration induced by a nilpotent infinitesimal isometry $N$ of $(V,Q)$. The difference with our approach is that the latter domain is homogeneous under the subgroup of $G$ consisting of elements commuting with $N$; so in a natural way it is a submanifold of our domain. \end{rmq} \subsection{Hodge Metric on the Lie Algebra} \label{ssec:tang} Let $\eg_{{\mathbb{R}}} = \text{Lie}(G_{{\mathbb{R}}})$ and $\eg_{{\mathbb{C}}} = \text{Lie}(G_{{\mathbb{C}}})$. By functoriality, any point $F\in D$ induces a mixed Hodge structure on $\eg_{{\mathbb{C}}}=\eg_{{\mathbb{R}}}\otimes{\mathbb{C}}$ with bigrading inherited from the one on $\End(V_{\mathbb{C}})$, i.e. $\eg ^{r,s} = \eg_{\mathbb{C}}\cap \End(V)^{r,s}$. For future reference, we note that: \begin{itemize} \item $\eg_{\mathbb{C}}\cap \End(V)^{r,s}=0$ if $r+s>0$; \item $W_{-1}\End(V) \subset \eg_{{\mathbb{C}}}$; \item the orthogonal decomposition \begin{equation} \End(V_{{\mathbb{C}}}) = \eg_{{\mathbb{C}}}\oplus\eg_{{\mathbb{C}}}^{\perp} \label{eqn:orth-1} \end{equation} induces a decomposition \begin{equation} \End(V)^{p,-p} = \eg_{{\mathbb{C}}}^{p,-p}\oplus (\eg_{{\mathbb{C}}}^{\perp})^{p,-p}. \label{eqn:orth-2} \end{equation} \end{itemize} Let us go back for a moment to the situation of a polarized Hodge structure $(V,Q)$. Then $u\in \End(V)$ can be written as a sum $u=\frac 12 (u+\Tr u) +\frac 12(u-\Tr u)$ of $Q$-symmetric and $Q$-skew symmetric parts, where the transpose is defined with respect to the polarizing form: \[ Q(u x, y)= Q(x, \Tr u y),\quad \forall x,y\in V. \] The $Q$-skew symmetric endomorphisms belong to $\eg$. With respect to the Hodge metric, the decomposition into symmetric and skew-symmetric endomorphisms is just $\End(V)= \eg\oplus \eg^\perp$, and $[\eg,\eg^\perp]\subset \eg^\perp$. For the \emph{mixed} situation this is the same on the pure weight zero part of $\End^W(V)$: \begin{equation} \gr^W_0\End^W(V_{\mathbb{C}}) \cong \bigoplus_p\, \End(V_{\mathbb{C}} )^{-p,p}. \label{eqn:orth-4} \end{equation} In particular, \begin{equation} [\eg_{{\mathbb{C}}}^{p,-p},(\eg_{{\mathbb{C}}}^{\perp})^{q,-q}] \subset (\eg_{{\mathbb{C}}}^{\perp})^{p+q,-p-q}.
\label{eqn:orth-3} \end{equation} \begin{rmk} In general, for a mixed Hodge structure which is not split over $\mathbb R$, the operations of taking the adjoint with respect to the mixed Hodge metric and taking the complex conjugate do not commute. \end{rmk} \par By the defining properties of the bigrading \eqref{eqn:DelSplit}, it follows that \begin{equation} \eg_{{\mathbb{C}}}^F = \bigoplus_{r\geq 0}\, \eg^{r,s} \label{eqn:isotopy-alg} \end{equation} is the Lie algebra of the stabilizer of $F\in D$ with respect to the action of $G_{{\mathbb{C}}}$ on $\check D$. So: \begin{lemma}\label{TangentIdent} The map $$ u\in \eg_{{\mathbb{C}}} \mapsto \gamma_*(d/dt)_0,\qquad \gamma(t) = e^{tu}\cdot F $$ determines an isomorphism between the vector space complement \begin{equation} \germ q_F = \bigoplus_{r<0}\, \eg^{r,s} \label{eqn:tang-alg} \end{equation} and $T_F(D)$. \end{lemma} \par For $F\in D$ let $\pi_{\germ q}$ denote the composition of the orthogonal projection $\End(V_{{\mathbb{C}}})\to\eg_{{\mathbb{C}}}$ with the projection $\eg_{{\mathbb{C}}}\to\germ q_F$ relative to the decomposition \begin{equation} \eg_{{\mathbb{C}}} = \eg^F\oplus\germ q_F . \label{eqn:gc-decomp} \end{equation} For use in the proof of the next result, observe that \begin{equation} *:\End(V)^{p,q}\to \End(V)^{-p,-q} ; \label{eqn:adjoint-pq} \end{equation} we also record that, by Lemma \eqref{lemma:adjoint} below, $\alpha\in\eg^{p,-p}\implies\alpha^*\in\eg^{-p,p}$. \begin{lemma} \label{compatibilities} Let $f\in \eg_{\mathbb{C}}^F$. We have the following two equalities: \begin{eqnarray} \text{as operators on $\eg_{\mathbb{C}}$ we have } \pi_{\germ q}\raise1pt\hbox{{$\scriptscriptstyle\circ$}} (\ad{f})^n & = & \left(\pi_{\germ q}\raise1pt\hbox{{$\scriptscriptstyle\circ$}} \ad f\right)^n ; \label{eqn:condense-1}\\ \text{as operators on $\germ q$ we have } \pi_{\germ q}\raise1pt\hbox{{$\scriptscriptstyle\circ$}} (\ad{f^*})^n &= &\left(\pi_{\germ q}\raise1pt\hbox{{$\scriptscriptstyle\circ$}} \ad {f^*}\right)^n. \label{eqn:condense-2} \end{eqnarray} \end{lemma} \proof In both cases, we induct on $n$, the base case $n=1$ being a tautology. We begin with \eqref{eqn:condense-1} and observe that \begin{equation} (\ad{f})^{n} u = v + w \label{eqn:induct-1} \end{equation} with $v\in\germ q_F$ and $w\in \eg^F_{{\mathbb{C}}}$. Therefore, $ (\ad{f})^{n+1} u = [f,v] + [f,w] $ and hence \begin{equation} \pi_{\germ q}((\ad{f})^{n+1} u) = \pi_{\germ q}[f,v]. \label{eqn:induct-2} \end{equation} By equation \eqref{eqn:induct-1}, $v=\pi_{\germ q}((\ad{f})^{n} u)$, which is equal to $(\pi_{\germ q}\raise1pt\hbox{{$\scriptscriptstyle\circ$}} \ad{f})^n u$ by induction. Substituting this identity into \eqref{eqn:induct-2} gives $$ \pi_{\germ q}((\ad{f})^{n+1} u) = (\pi_{\germ q}\raise1pt\hbox{{$\scriptscriptstyle\circ$}} \ad{f})^{n+1} u. $$ \par To verify \eqref{eqn:condense-2}, we observe that by equations \eqref{eqn:isotopy-alg} and \eqref{eqn:adjoint-pq} it follows that if $\alpha\in\eg_{{\mathbb{C}}}^F$ then \begin{equation} \ad\alpha^*:\bigoplus_{p<0}\, \End(V)^{p,q} \to \bigoplus_{p<0}\, \End(V)^{p,q} . \label{eqn:bracket-q} \end{equation} Suppose now that $u\in\germ q_F$ and $f\in\eg_{{\mathbb{C}}}^F$ and write \begin{equation} (\ad{f^*})^{n} u = v + w + w' \label{eqn:induct-3} \end{equation} with $v\in\germ q_F$, $w\in \eg^F_{{\mathbb{C}}}$, and $w'\in\eg_{{\mathbb{C}}}^{\perp}$. By \eqref{eqn:bracket-q} it follows that $w=0$, since $$ \germ q_F\subseteq\bigoplus_{p<0}\, \End(V)^{p,q}.
$$ Therefore, $(\ad{f^*})^{n+1} u = [f^*,v] + [f^*,w']$ and hence \begin{equation} \pi_{\germ q}((\ad{f^*})^{n+1} u) = \pi_{\germ q}[f^*,v] + \pi_{\germ q}[f^*,w']. \label{eqn:induct-4} \end{equation} By \eqref{eqn:induct-3}, $v = \pi_{\germ q}((\ad{f^*})^{n} u)=(\pi_{\germ q}\raise1pt\hbox{{$\scriptscriptstyle\circ$}} \ad{f^*})^n u$ by the induction hypothesis. Substitution of this identity into \eqref{eqn:induct-4} gives $$ \pi_{\germ q}((\ad{f^*})^{n+1} u) = (\pi_{\germ q}\raise1pt\hbox{{$\scriptscriptstyle\circ$}} \ad{f^*})^{n+1} u + \pi_{\germ q}[f^*,w']. $$ Thus, we need to prove that $\pi_{\germ q}[f^*,w']=0$. For this we observe that $f^* = f^*_0 + f^*_+$ and $w' = w'_0 + w'_+$, where $f^*_0$, $w'_0$ lie in $\oplus_p\,\End(V)^{p,-p}$ and $f^*_+$, $w'_+$ belong to $\oplus_{p+q>0}\, \End(V)^{p,q}$. Since elements of $\germ q_F$ preserve $W$, it follows that $$ \pi_{\germ q}[f^*,w'] = \pi_{\germ q}[f^*_0,w'_0], $$ which vanishes by \eqref{eqn:orth-3} since $f^*_0\in\eg_{{\mathbb{C}}}$ and $w'_0\in\eg_{{\mathbb{C}}}^{\perp}$. \qed\endproof \begin{rmk} \label{mtremark} The preceding proof simplifies considerably in the pure case since then $f^*= \overline{w (f)}$ with $w|\eg^{-p,p}= \text{ multiplication by }(-1)^{p+1}$ (see e.g. Lemma~\ref{lemma:adjoint} below). Using this remark, it follows that the statement of Lemma~\ref{compatibilities} remains valid for a pure Mumford-Tate domain, i.e. when $G$ is replaced by the Mumford-Tate group of a pure Hodge structure. See also Remark~\ref{mtremark}.(2). \end{rmk} \vskip 10pt Define \begin{equation}\label{eqn:lambda} \Lambda = \bigoplus_{r,s<0}\, \eg^{r,s} \end{equation} and note that since the conjugation condition appearing in \eqref{eqn:DelSplit} can be recast as \begin{equation} \label{eqn:UnderConju} \bar \eg^{p,q}\subset \eg^{q,p}+[\Lambda, \eg^{q,p}], \end{equation} it follows that $\Lambda$ has a real form \begin{equation}\label{eqn:LambdaRealForm} \Lambda_{{\mathbb{R}}} = \Lambda\cap\eg_{{\mathbb{R}}}. \end{equation} \begin{lemma}[\protect{\cite[Lemma 4.11]{higgs}}] \label{higgs} If $g\in G_{\mathbb{R}}\cup\exp(\Lambda)$ then $$ g (I_F^{p,q})= I_{g\cdot F}^{p,q}. $$ \end{lemma} Recall that a mixed Hodge structure $(F,W)$ is said to be split over ${\mathbb{R}}$ if $$ \overline{I^{p,q}} = I^{q,p}. $$ These mixed Hodge structures make up a real analytic subvariety $D_{\mathbb{R}}\subset D$. To any given mixed Hodge structure $(F,W)$ one associates a special split real mixed Hodge structure $\hat F=e_F\cdot F$ as follows. \begin{prop}[\protect{\cite[Prop. 2.20]{degeneration}}] \label{SplitMHS} Given a mixed Hodge structure, there is a unique $\delta \in \Lambda_{\mathbb{R}} $ such that $\hat I^{p,q} =\exp(-{\rm i} \delta) (I^{p,q})$ is the Deligne splitting of a split real mixed Hodge structure $\hat F= e_F\cdot F$, where $e_F=\exp(-{\rm i} \delta)$. \end{prop} \par A \emph{splitting operation} is a particular type of fibration $D\to D_{\mathbb{R}}$ of $D$ over the locus of split mixed Hodge structures (cf. Theorem $(2.15)$ of \cite{sl2anddeg}). Our calculations below use the following result due to Kaplan: \begin{thm}[\protect{\cite{kaplan}}]\label{group-decomp} Given a choice of splitting operation and a choice of base point $F\in D$, for each element $g\in G$ there exists a distinguished decomposition \[ g = g_{{\mathbb{R}}}\exp{(\lambda)}\cdot f ,\qquad \lambda\in \Lambda ,\quad g_{{\mathbb{R}}}\in G_{{\mathbb{R}}}, \quad f\in \exp( W_{-1} \eg\germ l(V_{{\mathbb{C}}}))\cap G^F.
\] Moreover, if the splitting operation is an analytic or $C^\infty$ map, the map $(F,g)\mapsto (g_{\mathbb{R}},e^\lambda, f)$ is analytic, respectively $C^\infty$. \end{thm} \par Let $\text{Flag}(D)$ denote the flag variety containing $D$, i.e. the set of all complex flags of $V_{{\mathbb{C}}}$ with the same rank sequence as the flags parametrized by $D$. Then, since $G\subset G_{{\mathbb{C}}}$ acts transitively on $D$, it follows that the orbit of any point $F\in D$ under $G_{{\mathbb{C}}}$ gives a well defined \lq\lq compact dual\rq\rq{} $\check D\subset\text{Flag}(D)$ upon which $G_{{\mathbb{C}}}$ acts transitively by biholomorphisms. As in the pure case, $D$ is an open subset of $\check D$ with respect to the analytic topology. \par Using this identification, the mixed Hodge metric \eqref{eqn:mhm} induces a hermitian structure on $D$. In analogy with Lemma \eqref{higgs}, and since $G$ acts by isometries on $\gr^W$, it follows that: \begin{lemma}[\cite{kaplan,sl2anddeg}] \label{KaplanChange} For any $g=g_{\mathbb{R}} e^\lambda$, $g_{\mathbb{R}}\in G_{\mathbb{R}}$, $\lambda\in \Lambda$, the mixed Hodge metric on $\eg_{\mathbb{C}}$ changes equivariantly: \[ h_{g\cdot F }( \Ad{(g)}\alpha ,\Ad{(g)}\beta )= h_F(\alpha ,\beta ) , \,\forall \alpha ,\beta \in \eg, \] and hence $g:T_F(D)\to T_{g\cdot F}(D)$ is an isometry. \end{lemma} \begin{rmk} (1) In~\cite{knu,knu2}, the authors consider a different metric on $D$ which is obtained by replacing the bigrading \eqref{eqn:DelSplit} attached to $(F,W)$ by the bigrading attached to the {\it canonical} or {\it $sl_2$-splitting} of $(F,W)$. They then twist this metric by a distance-to-the-boundary function in order to control its extension to the boundary (\S 4 of \cite{knu2}). In particular, although the resulting metric on $D$ is invariant under $G_{{\mathbb{R}}}$, it is no longer true that $g\in\exp(\Lambda)$ induces an isometry from $T_F(D)$ to $T_{g\cdot F}(D)$. The metric of~\cite{knu,knu2} is not quasi-isometric to the metric considered in this paper except when $D$ is pure. See \cite{nilp} for details on the geometry of this metric.\\ (2) The previous Lemma implies that understanding how the decomposition appearing in Theorem~\ref{group-decomp} depends on $F\in D$ up to second order is sufficient to compute the curvature of $D$ (cf.~\cite{deligne}). \end{rmk} \par For future use, we introduce the subalgebras \begin{equation} \germ n_+ := \bigoplus_{ a\ge 0,b<0 } \eg^{a,b},\qquad \germ n_- := \bigoplus_{ a< 0,b\ge 0 } \eg^{a,b}. \label{eqn:SplitEnd} \end{equation} Then, recalling the definition \eqref{eqn:lambda} of $\Lambda$, we have a splitting $$ \eg_{{\mathbb{C}}} = \germ n_+\oplus\eg^{0,0}\oplus\germ n_-\oplus\Lambda $$ and we let \begin{equation} \label{eqn:projections} \aligned \End(V_{\mathbb{C}}) & \to & \germ n_+,\, \eg^{0,0}, \germ n_- ,\, \Lambda\\ u & \mapsto & \, u_+, \,u_0,\, u_-, \,u_\Lambda \endaligned \end{equation} denote the orthogonal projection from $\End(V_{{\mathbb{C}}})$ to $\eg_{{\mathbb{C}}}$ followed by the projection onto the corresponding factor above. \par We conclude this section with a formula for the adjoint operator $\alpha\mapsto\alpha^*$ with respect to the mixed Hodge metric. \begin{lemma}\label{lemma:adjoint} Let $\ez= \bigoplus_p \eg^{-p,p}$ and denote by \begin{equation}\label{eqn:piz} \pi_\ez: \End(V_{\mathbb{C}}) \to \ez \end{equation} the corresponding orthogonal projection. Then (with $C_F$ the Weil operator of $\gr^WV$) we have $$ \alpha \in\ez \implies\alpha ^*=-\Ad{(C_F)} \pi_\ez(\bar \alpha ).
$$ \end{lemma} \proof In the pure case, the statement is well known. Since both sides belong to $\ez$, we only have to check that we get the right formula on $\gr^W_0(\eg_{\mathbb C})$. \qed\endproof \subsection{Second Order Calculations} \label{ssec:secondorder} In this subsection, we compute the second order behavior of the decomposition of $g = \exp(u)$ given in Theorem~\ref{group-decomp}. The analogous results to first order appear in \cite{higgs}. \par Employing the notation\footnote{We simplify notation by writing $\germ q$ instead of $\germ q_F$.} from \eqref{eqn:tang-alg} and \eqref{eqn:SplitEnd} consider the following splitting \begin{equation}\label{eqn:Double} \eg_{\mathbb{C}}=\underbrace{\eg^{0,0}\oplus \germ n_+}_{\eg_{\mathbb{C}}^F} \oplus \underbrace{\germ n_-\oplus\Lambda}_{\germ q}. \end{equation} Since $\germ q$ is a complement to $\eg_{\mathbb{C}}^F$, the map \begin{equation} u\in\germ q \mapsto e^u\cdot F \label{eqn:coord} \end{equation} restricts to biholomorphism of a neighborhood $U$ of $0$ in $\germ q$ onto a neighborhood of $F$ in $D$. Relative to this choice of coordinates, the identification of $\germ q$ with $T_F(D)$ coincides with the one considered above (cf. \eqref{eqn:tang-alg}). \par We need to compare this with the real structure on $\eg_{\mathbb{C}} = \eg_{{\mathbb{R}}}\otimes{\mathbb{C}}$. As usual, we write \[ \alpha= \re (\alpha) + {\rm i} \cdot \im (\alpha) ,\quad \re (\alpha) =\frac 12 (\alpha+\bar\alpha),\quad {\rm i} \cdot \im (\alpha)= \frac 12( \alpha- \bar \alpha). \] \begin{lemma*} [\protect{\cite[Theorem 4.6]{higgs}}] Set \[ \Im (\eg^{0,0}):= \sett{\varphi\in\eg^{0,0}}{\bar\varphi^{(0,0)}=-\varphi} . \] Then \begin{equation}\label{eqn:decomposition} \eg_{\mathbb{C}}= \eg_{\mathbb{R}}\oplus \Im (\eg^{0,0}) \oplus \germ n_+\oplus {\rm i} \Lambda_{\mathbb{R}}. \end{equation} \end{lemma*} \begin{corr}\cite[Corollary 4.7]{higgs} There exists a neighborhood of $1\in G_{\mathbb{C}}$ such that every element $g$ in this neighborhood can be written uniquely as \[ g=g_{\mathbb{R}} \exp{( \lambda)} \exp(\varphi), \quad g_{\mathbb{R}}\in\eg_{\mathbb{R}},\, \lambda\in {\rm i} \Lambda_{\mathbb{R}}, \quad \varphi\in \eg^{0,0} \oplus \germ n_+ \subset \eg^F_{\mathbb{C}}, \] where $\varphi^{0,0}$ is purely imaginary. \end{corr} This implies that, possibly after shrinking $U$ there are unique functions $\gamma,\lambda, \varphi: U \to \eg_{\mathbb{R}}, {\rm i} \Lambda_{\mathbb{R}}, \eg_{\mathbb{C}}^F$ respectively such that \begin{equation}\label{eqn:expu} \exp(u) = \underbrace{\exp{(\gamma(u) )}}_{\text{\rm in }G_{\mathbb{R}}}\cdot \exp{( \lambda(u))} \cdot \underbrace{\exp{(\varphi(u))}}_{\text{\rm in }G^F_{\mathbb{C}}}. \end{equation} Now we introduce $g(u)=\exp(u) =g_{\mathbb{R}}(u)\cdot \exp(\lambda(u) ) \cdot \exp{(\varphi(u))}$ as functions on $U\cap \germ q$. \medskip \par As a prelude to the next result, we recall that by the Campbell--Baker--Hausdorff formula $$ e^x e^y = e^{x+y+\frac{1}{2}[x,y] + \cdots} $$ Alternatively, making the change of variables $u=-y$, $v=x+y$ this can be written as $$ e^{u+v}e^{-u} = e^{\psi(t_0,t_1,\dots)} $$ where $t_m = (\ad u)^mv$ and $\psi$ is a universal Lie polynomial. In a later computation (see \S~\ref{sec:kaehler}) we need more information, namely on the shape of the part linear in $v$: \begin{equation} \psi_1(u,v) = \sum_m\, \frac{1}{m+1}\,t_m = \frac{e^{\ad u} - 1}{\ad u}v + \text{ higher order terms in } v. 
\begin{prop}\label{secondorder} Let $F\in D$ and $u= u_-+u_\Lambda \in \germ n_-\oplus\Lambda = \germ q$. Then, \[ \varphi(u)= -\bar u_+ + \frac 12[u, \bar u]_0 + [u, \bar u ]_+ + \frac 12[\bar u, { \bar{u}_\Lambda}]_+ + O^3(u,\bar u) \] where the subscripts denote the orthogonal projections onto $\eg^{0,0}$, $\Lambda$, $\germ n_+$ respectively. \end{prop} \proof For the linear approximation note that \[ u= \re[2 (u_-) - \bar{u}_\Lambda ] - {\rm i} \im (\bar{u}_\Lambda) -\bar u_+ \in \eg_{\mathbb{R}}\oplus {\rm i}\Lambda_{\mathbb{R}}\oplus \eg^F_{\mathbb{C}} \] and that equation \eqref{eqn:expu} yields the first degree approximation $u=\gamma_1(u)+\lambda_1(u)+\varphi_1(u)$, so that the result follows by uniqueness. \par The computation proceeds by expanding the left hand side of $$ \exp{(\lambda)}\exp{(\varphi)} \exp{(-u)} =\exp{(-\gamma)} \in G_{\mathbb{R}} $$ using the Campbell--Baker--Hausdorff formula, and then using the fact that the right hand side is real. To first order the decomposition is $$ u = \gamma_1(u) + \lambda_1(u) + \varphi_1(u) $$ where $$ \aligned \gamma_1(u) &= u + \bar u - \frac{1}{2}\pi_{\Lambda}(\bar u) - \frac{1}{2}\overline{\pi_{\Lambda}(\bar u)} \\ \lambda_1(u) &= -\frac 12\pi_{\Lambda}(\bar u) + \frac{1}{2}\overline{\pi_{\Lambda}(\bar u)} \\ \varphi_1(u) &= -\bar u + \pi_{\Lambda}(\bar u) \endaligned $$ where we have used $\pi_{\Lambda}$ to denote projection to $\Lambda$ for clarity regarding the order of complex conjugation, since these two operations do not commute. (One checks directly that $\gamma_1$ is real, that $\bar\lambda_1=-\lambda_1$, and that $\gamma_1+\lambda_1+\varphi_1=u$.) The second degree approximation then yields that \[ \lambda_2 + \varphi_2 + \frac 12\left( [\lambda_1,\varphi_1-u] -[\varphi_1,u] \right) \text { is real.} \] The projection to $\germ n_+$ equals $(\varphi_2)_+ + \frac 12\left( [\lambda_1, \varphi_1 -u] _+ -[ \varphi_1,u ] _+\right)$. Since $\bar\lambda_1=-\lambda_1$, the reality constraint implies that \[ \aligned (\varphi_2)_+ &= - \frac 12 \left\{ [\lambda_1,\varphi_1+\bar\varphi_1-u-\bar u]_+ +[\bar\varphi_1,\bar u]_+-[\varphi_1,u]_+ \right \}\\ &= - \frac 12 \left\{ [\bar\varphi_1,\bar u]_+-[\varphi_1,u]_+ +[\lambda_1,\varphi_1+\bar\varphi_1-u-\bar u]_+ \right\} . \endaligned \] By the conjugation rules $\overline{\germ n_\pm} \subset \germ n_\mp +\Lambda$, the fact that $\Lambda$, $\germ n_+,\germ n_-$ are subalgebras, and using $[\germ n_{\pm},\Lambda] \subset \germ n_{\pm}+\Lambda$, this simplifies to \[ (\varphi_2)_+ = -\frac 12 \left\{ [\lambda_1,\varphi_1-\bar u]_+ + [\bar\varphi_1,\bar u]_+- [\varphi_1, u]_+ \right\} . \] Now set $\varphi_1= -\bar u + \pi_\Lambda(\bar u)$, so that $\varphi_1-\bar u= -2 \bar u \mod \Lambda$ and the first term reads $\frac 12 [2\lambda_1, \bar u]_+$; since $ \overline{\varphi_1} = -\overline{\pi_+\bar u }$, the second term becomes $\frac 12[\overline{\bar u_+}, \bar u]_+$ while the last simplifies to $-\frac 12[\bar u, u]_+$; in total we get \[ (\varphi_2)_+ = \frac 12 [2\lambda_1+ \overline{\pi_+\bar u } , \bar u]_+ -\frac 12[\bar u, u]_+ . \] Putting $ 2\lambda_1 = \overline{ \pi_\Lambda \bar u} - \pi_\Lambda(\bar u)$ so that $2\lambda_1+\overline{\pi_+\bar u}= u- \pi_\Lambda(\bar u)$ shows \[ (\varphi_2)_+= \frac 12\left\{ [u,\bar u]_+ -[\bar u_\Lambda,\bar u]_+ - [\bar u,u]_+\right\}, \] which is indeed equal to the stated expression for $(\varphi_2)_+$. Similarly we find for the $\eg^{0,0}$-component \[ (\varphi_2)_0= \frac 12 [u,\bar u]_0.\quad\quad \qed \] \endproof
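Before proceeding, we record the type bookkeeping behind Proposition~\ref{secondorder}, which also serves as a consistency check: by definition of the projections \eqref{eqn:projections}, \[ -\bar u_+\in\germ n_+,\qquad \tfrac 12[u,\bar u]_0\in\eg^{0,0},\qquad [u,\bar u]_+,\ \tfrac 12[\bar u,\bar u_\Lambda]_+\in\germ n_+, \] so each term of the expansion of $\varphi(u)$ does lie in $\eg^{0,0}\oplus\germ n_+=\eg^F_{\mathbb{C}}$, as required by \eqref{eqn:expu}.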
\begin{corr}\label{corr:H-form} Let $F\in D$ and let $$ h_{e^u\cdot F}(L_{e^u *}\alpha ,L_{e^u *}\beta ) = h_F(\exp{H(u)}\alpha ,\beta ),\quad \alpha,\beta\in\germ q $$ denote the local form of the mixed Hodge metric on $T(D)$ relative to the choice of {coordinates} \eqref{eqn:coord}. Then, up to second order in\footnote{We write $x_\germ q$ instead of $\pi_\germ q x$ for clarity when no confusion is likely.} $(u,\bar u)$ \begin{eqnarray*} H(u) &=& \underbrace{- \left(\ad{(\bar u)_+^*}\right)_\germ q}_{(1,0)\text {-\rm term} } +\underbrace{ - (\ad{ (\bar u)_+})_\germ q}_{(0,1)\text {-\rm term}}\\ && + \underbrace{ \frac 12 \left( \ad{ [\bar u , \bar u_\Lambda ]_+ + [\bar u , \bar u_\Lambda ]_+^*} \right)_\germ q}_{(2,0)+(0,2) \text{-\rm term}} \\ && + \underbrace{ \left( \frac 12 [ (\ad{ (\bar u)_+^*})_\germ q ,(\ad{(\bar u)_+})_\germ q ] +(\ad{ [ u, \bar u]_0})_\germ q + \ad{[u,\bar u]_+} +\ad {[u,\bar u]_+^*} \right)_\germ q }_{(1,1) \text{-\rm term}}. \end{eqnarray*} Here, by ``$A(x,y)$ is a $(p,q)$-term'' we mean $A(tx,ty)= t^p\bar t^q A(x,y)$. \end{corr} \proof Let us first check the assertion about types. This follows directly from the facts that $\ad{}$ and $\pi_\germ q$ are ${\mathbb{C}}$-linear, while for any operator $A$, one has $(tA)^*=\bar t A^*$ and $\overline{tA}=\bar t \bar A$. Let us now start the calculations. By \eqref{eqn:expu}, we have \begin{equation}\label{eqn:intermediatestep} \aligned h_{e^u\cdot F}(L_{e^u *}\alpha ,L_{e^u *}\beta ) &= h_F(L_{\exp(\varphi(u))*}\alpha ,L_{\exp(\varphi(u))*}\beta ) \\ &= h_F(\pi_{\germ q}\Ad{\exp(\varphi(u))}\alpha , \pi_{\germ q}\Ad{\exp(\varphi(u))}\beta ) \\ &= h_F(\pi_{\germ q}\Ad{\exp(\varphi(u))}\alpha , \Ad{\exp(\varphi(u))}\beta ) \endaligned \end{equation} since $\eg_{{\mathbb{C}}}^F$ and $\germ q$ are orthogonal with respect to the mixed Hodge metric at $F$. Therefore, $$ \aligned h_{e^u\cdot F}(L_{e^u *}\alpha ,L_{e^u *}\beta ) &= h_F( \Ad{\exp(\varphi(u))^*} \pi_{\germ q}\Ad{\exp(\varphi(u))}\alpha ,\beta ) \\ &= h_F( \exp(\ad{\varphi(u)^*}) \pi_{\germ q}\exp(\ad{\varphi(u)})\alpha ,\beta ) \\ &= h_F( \exp(\ad{\varphi(u)^*}) \exp(\pi_{\germ q}\ad{\varphi(u)})\alpha ,\beta ) \\ \endaligned $$ by equation \eqref{eqn:condense-1}. Likewise, although $$ \exp(\ad{\varphi(u)^*}) \exp(\pi_{\germ q}\ad{\varphi(u)})\alpha $$ is in general only an element of $\End(V_{{\mathbb{C}}})$, since we are pairing it against an element $\beta \in\germ q$, it follows that $$ \aligned h_{e^u\cdot F}(L_{e^u *}\alpha ,L_{e^u *}\beta ) &= h_F(\pi_{\germ q}\exp(\ad{\varphi(u)^*}) \exp(\pi_{\germ q}\ad{\varphi(u)})\alpha ,\beta ) \\ &= h_F(\exp(\pi_{\germ q}\ad{\varphi(u)^*}) \exp(\pi_{\germ q}\ad{\varphi(u)})\alpha ,\beta ), \endaligned $$ where the last equality follows from \eqref{eqn:condense-2}. By the Campbell--Baker--Hausdorff formula, up to third order in $(u,\bar u)$ the product of the exponentials in the previous formula can be replaced by \[ \exp\left(\pi_{\germ q}\ad{\varphi(u)^*} + \pi_{\germ q}\ad{\varphi(u)} + \frac{1}{2}[\pi_{\germ q}\ad{\varphi(u)}^*, \pi_{\germ q}\ad{\varphi(u)}]\right). \] So, we may assume that \[ H(u)= \pi_{\germ q}\ad{\varphi(u)^*} + \pi_{\germ q}\ad{\varphi(u)} + \frac{1}{2}[\pi_{\germ q}\ad{\varphi(u)}^*, \pi_{\germ q}\ad{\varphi(u)}]. \] To obtain the stated formula for $H(u)$, insert the formulas from Proposition~\ref{secondorder} into the above equations and compute up to order 2 in $u$ and $\bar u$. Use is made of the equality $[u,\bar u]_0^*= [u,\bar u]_0$ guaranteed by Lemma~\ref{lemma:adjoint}.
\qed\endproof \section{Curvature of the Chern Connection} \label{sec:curv} \par We begin this section by recalling that given a holomorphic vector bundle $E$ equipped with a hermitian metric $h$, there exists a unique {\it Chern connection} $\nabla$ on $E$ which is compatible with both $h$ and the complex structure $\bar\partial$. With respect to any local holomorphic framing of $E$, the connection form of $\nabla$ is given by \begin{equation} \theta = \text{\bf h}^{-1}\partial\text{\bf h} , \label{eqn:conn-1} \end{equation} where $\text{\bf h}$ is the transpose of the Gram--matrix of $h$ with respect to the given frame. The curvature tensor is then \begin{equation} R_{\nabla} = \bar\partial\,\theta. \label{eqn:conn-2} \end{equation} \begin{thm}\label{thm:connection} The connection 1-form of the mixed Hodge metric with respect to the trivialization of the tangent bundle given in Lemma~\ref{TangentIdent} is $$ \theta(\alpha) = - \left( \ad(\bar \alpha)_+^*\right)_\germ q $$ for $\alpha\in\germ q_F\cong T_F(D)$. \end{thm} \proof By Corollary~\ref{corr:H-form}, this is the first order holomorphic term of $H(u)$.\qed \endproof % \begin{lemma}\label{lemma:curvature} Let $(D,h)$ be a complex hermitian manifold, let $U\subset D$ be a coordinate neighborhood centered at $F\in D$, and let $\alpha,\beta \in T_F (U) \otimes{\mathbb{C}}$ be of type $(1,0)$. In a local holomorphic frame, write the transpose Gram-matrix $\mathbf{h}_U=(h(e^j,e^i) )= \exp H$ for some function $H$ with values in the hermitian matrices and with $H(0)=0$. Then at the origin one has \[ R_\nabla (\alpha ,\bar \beta ) = -\partial_\alpha \partial_{\bar \beta } H + \frac 12 \left[ \partial_{\bar\beta } H, \partial_{\alpha} H \right]. \] \end{lemma} \proof Since the curvature is a tensor, its value on vector fields at a given point only depends on the fields at that point. Choose a complex surface $u:V\hookrightarrow U$, $V\subset{\mathbb{C}}^2$ a neighborhood of 0 (with coordinates $(z,w)$) and $u_*(d/dz)_0 = (\partial_\alpha )_0$, $u_*(d/dw)_0 = (\partial_\beta )_0$. Replace $\text{\bf h}$ by $\text{\bf h}\raise1pt\hbox{{$\scriptscriptstyle\circ$}} u$ and write it as $$ h = \exp(H) = I + H + \frac{1}{2}H^2 + O^3(z,\bar z). $$ Formulas \eqref{eqn:conn-1},\eqref{eqn:conn-2} tell us that the curvature at the origin equals $$-(\bar\partial h \wedge \partial h + \partial \bar\partial h)_0.$$ This $2$-form evaluates on the pair of tangent vectors $(\partial_z,\partial_{\bar w})$ as \begin{equation} \label{eqn:curve3} R_\nabla(\alpha,\bar \beta)= \partial_{\bar w}h \raise1pt\hbox{{$\scriptscriptstyle\circ$}} \partial_z h \ -\partial_{z}\partial_{\bar w}h. \end{equation} Now use the Taylor expansion of $h$ up to order 2 of which we give some relevant terms\footnote{Remember $H$ is a matrix so that $\partial_zH$ and $\partial_{\bar w}H$ do not necessarily commute.}: $$ \aligned h(z,\bar z,w,\bar w)_2 &= I + (\partial_z H)_0 z + (\partial_{\bar w} H)_0 {\bar w} + \dots + \\ &+\text{terms involving } z^2,w^2,\bar z^2,\bar w^2+ \\ &+\left(\partial_z\partial_{\bar w}\, H +\frac{1}{2}(\partial_z\, H)(\partial_{\bar w}\, H) +\frac{1}{2}(\partial_{\bar w}\, H)(\partial_z\, H) \right)_0\, z\bar w \\ & +\text{terms involving } z\bar z,w\bar z, w\bar w. \endaligned $$ Now substitute in \eqref{eqn:curve3}.\qed\endproof \par As a first consequence, we have: \begin{lemma}\label{lemma:tate-case} The submanifold $\exp(\Lambda) \cdot F$ of $D$ is a flat submanifold with respect to the Hodge metric.
In particular, the holomorphic sectional curvature in directions tangent to this submanifold is identically zero. \end{lemma} \proof If $\mathbf{f}$ is a unitary Hodge-frame for the mixed Hodge structure on $V$ corresponding to $F$, then for all $g\in\exp(\Lambda)$, $(L_g)_* \mathbf f$ is a unitary Hodge frame at $g\cdot F$ and this gives a holomorphic unitary frame on the entire orbit. Hence the Chern connection is identically zero. This also follows immediately from the formula for the connection form given above. \qed\endproof \medskip \begin{thm}\label{thm:hol-curv} Let $D$ be a period domain for mixed Hodge graded-polarized structures. Let $\nabla$ be the Chern connection for the Hodge metric on the holomorphic tangent bundle $T(D)$ at $F$. Then for all tangent vectors $u \in T^{1,0}_F(D)\simeq \germ q$ we have \begin{equation*} \label{eqn:EndCalc} R_\nabla (u,\bar u) = - [(\ad{{\bar u }_+^*})_\germ q ,(\ad{{\bar u }_+})_\germ q] { - \ad{[ u, \bar u]_0} - \left(\ad{([u,\bar u]_+ + [u,\bar u]_+^*)} \right)_\germ q}. \end{equation*} We use the following convention: for all $u \in\eg$ we write $u ^*_0,u _+^*,u _-^*$ to mean: first project onto $\eg^{0,0}$, respectively $\germ n_+$, $\germ n_-$ and \emph{then} take the adjoint. \end{thm} \proof Apply the formula of Lemma~\ref{lemma:curvature}. Proceeding as in the proof of that Lemma, choose a complex curve $u(z)$ tangent to $u\in T_FD$ and write $H(u(z))=H(z,\bar z)$. We view the curve $u(z)$ as an element of $\germ q$, i.e., in the preceding expression we replace $u$ by $zu $ and $\bar u$ by $\bar z \bar u $. Then from Corollary~\ref{corr:H-form} we have $\partial_z H(0)=- (\ad{ (\bar u )_+^*} )_\germ q$, $\partial_{\bar z} H(0)=-( \ad{(\bar u )_+})_\germ q$ and \[ \begin{array}{lcl} \partial_z\partial_{ \bar z} H(0)&=& \frac 12[(\ad{(\bar u )_+^*})_\germ q ,(\ad{(\bar u )_+})_\germ q] + \left(\ad{[u,\bar u]_0}\right)_\germ q\\ && \quad\quad + \left(\ad{[u,\bar u]_+ + [u,\bar u]_+^*} \right)_\germ q. \end{array} \] Since at the point $F\in D$ we have $R_\nabla (u,\bar u) = -\partial_u \partial_{\bar u } H +\frac 12 [ \partial_{ \bar u} H , \partial_{u} H] $, the result follows. \qed\endproof \begin{rmk} \label{mtremark}(1) Note that in the pure case this gives back $R_\nabla(u,\bar u)= - \ad{[u,\bar u]_0}$ as it should.\\ (2) The formula for the curvature of a Mumford-Tate domain for pure Hodge structures is the same as the one for the curvature of a pure period domain. \\ (3) Exactly the same proof shows that the full curvature tensor, evaluated on pairs of tangent vectors $\set{u,v}\in T^{1,0}_FD$ is given by % \[ \begin{array}{lll} R_\nabla (u,\bar v) &=& - ( [(\ad{\bar u^*_+})_\germ q , (\ad{\bar v_+})_\germ q ] )\\ & \hphantom{=} & -\frac 12( \ad{[u,\bar v]_0}+ \ad{[v,\bar u]_0}) \\ & \hphantom{=} & \qquad\quad + \left( \ad{( [\bar v, u]_+ +[ \bar u ,v]_+^* )}\right)_\germ q. \end{array} \] Alternatively, one may use $(5.14.3)$ of \cite{deligne}. In that formula $R(u,v)$ stands for the curvature in any pair $(u,v)$ of complex directions. So $R(u,v)= R(u^{1,0},v^{0,1})- R(v^{1,0},u^{0,1} )=R_\nabla(u^{1,0},v^{0,1})-R_\nabla(v^{1,0},u^{0,1})$. \end{rmk} \section{Holomorphic Sectional Curvature in Horizontal Directions}\label{sec:holseccurv} Recall that the holomorphic sectional curvature is given by \begin{equation} R(u):= h(R_{\nabla}(u,\bar u)u,u)/h(u,u)^2 \label{eqn:hol-curv}. \end{equation} Our aim is to prove: \begin{thm} Let $u\in T_F(D)$ be a \emph{horizontal vector} of unit length.
Then $R(u)=A_1+A_2+A_3+A_4$ where \begin{eqnarray*} A_1 &= & - \|[\bar u_+,u]_{\germ q}\|^2,\\ A_2 & = & \|[\bar u_+^*,u]_{\germ q}\|^2,\\ A_3 &= &-h([[u,\bar u]_0,u],u)\\ A_4 &=& -h([[u,\bar u]_+,u]_{\germ q},u) -h(u,[[u,\bar u]_+,u]_{\germ q}). \end{eqnarray*} Each of these terms is real. \end{thm} \begin{proof} We start by stating the following self-evident basic principles which can be used to simplify \eqref{eqn:hol-curv}: \begin{itemize} \item Orthogonality: The decomposition $\eg=\bigoplus \eg^{p,q}$ is orthogonal for the Hodge metric; \item Jacobi identity: For all $X,Y,Z\in \End(V)$ we have $$ [X,[Y,Z]] = [[X,Y],Z] + [Y,[X,Z]]; $$ \item Metric conversion: The relation \begin{equation} \label{eqn:MetConv} h([X,Y],Z) = h(Y,[X^*,Z]) \end{equation} implies \begin{equation} \label{eqn:MetTrick} -h(\ad{[X ,X ^*]}Y ,Y )= \| [X ,Y ]\|^2-\|[X ^*,Y ]\|^2. \end{equation} \end{itemize} Theorem~\ref{thm:hol-curv} and the previous rules imply: $$ \aligned h(R_{\nabla}(u,\bar u)u,u) &= - h([( \ad{ (\bar u_+)^*})_\germ q ,(\ad{(\bar u)_+})_\germ q]u,u) -h((\ad{[u,\bar u]_0})u,u) \\ &\qquad -h((\ad{[u,\bar u]_+})u,u) -h((\ad{[u,\bar u]_+^*})_{\germ q} u,u) \\ &=- \|[\bar u_+,u]_{\germ q}\|^2 + \|[\bar u_+^*,u]_{\germ q}\|^2 -h([[u,\bar u]_0,u],u) \\ &\qquad -h([[u,\bar u]_+,u]_{\germ q},u) -h(u,[[u,\bar u]_+,u]_{\germ q}). \endaligned $$ This shows that $h(R_{\nabla}(u,\bar u)u,u) = A_1+A_2+A_3 + A_4$ where the terms $A_j$ are as stated. In particular, the terms $A_1,A_2,A_4$ are real. Metric conversion allows us to show that $A_3$ is real: writing $\alpha=u^{-1,1}$ (as in the next theorem), we have $[u,\bar u]_0 = [\alpha, \pi_\ez \bar\alpha]=[\alpha, \alpha^*]$ and we find that \begin{equation}\label{eqn:a3} \aligned A_3 &= -h([[\alpha,\alpha^*],u],u) \\ &= \|[\alpha,u] \|^2 - \| [\alpha^*,u]\|^2 \in {\mathbb{R}}. \endaligned \end{equation} \end{proof} \par The next result gives the refinement of the curvature calculations with respect to the decomposition of a horizontal vector into its Hodge components: \begin{thm} Let $u= \sum_{j \le 1}u^{-1,j}$ and set\footnote{Recall the notation \eqref{eqn:piz}.} \[ \alpha = u^{-1,1},\quad \beta = u^{-1,0} ,\quad \lambda = \sum\nolimits_{j\ge 1} u^{-1,-j}, \] \[ \bar\alpha_+ = \alpha^* +\epsilon,\quad \alpha^* = \pi_\ez\bar \alpha _+ = \bar\alpha_+^{1,-1}, \quad \epsilon = \sum\nolimits_{j\ge 2}\bar\alpha_+^{0,-j}. \] Then, \begin{eqnarray*} A_1 &= & - \left(\|[\bar\beta_++\epsilon, \alpha ]\|^2+ \|[\bar\beta_++\epsilon,\beta]\|^2+\|[\bar\beta_++\epsilon,\lambda]_\germ q\|^2\right),\\ A_2 & = & \left(\|[\alpha,\beta]\|^2+\|[\alpha,\lambda]\|^2+\|[\bar\beta^*_+,\beta]_\germ q \|^2+\|[\bar\beta^*_++\epsilon^*,\lambda]\|^2\right),\\ A_3 &= & \|[\alpha,\beta] \|^2 +\|[\alpha,\lambda] \|^2 - \| [\alpha^*,\alpha]\|^2- \| [\alpha^*,\beta]\|^2 - \| [\alpha^*,\lambda]\|^2, \\ A_4 &=& -2\|[\alpha^*,\lambda]\|^2 -2\|[\alpha^*,\beta]\|^2 + R(\alpha,\beta,\lambda), \end{eqnarray*} where \[ R(\alpha,\beta,\lambda)= -2\text{\rm Re} \left (h([[\lambda,\alpha^*],\lambda],\lambda)+ h([[\alpha^*,\beta],\lambda],\lambda)+h([[\alpha^*,\lambda],\beta],\lambda)\right) . \] This last term vanishes if $\lambda$ has pure type. \\ Moreover, in the ${\mathbb{R}}$--split situation we have $\bar\alpha_+=\alpha^*$ so that $\epsilon=0$.
\end{thm} \begin{proof} \textbf{The term $A_3$.} Inserting $u=\alpha+\beta+\lambda$ in \eqref{eqn:a3} immediately gives the $A_3$-term.\\ \textbf{The terms $A_1, A_2$.} We start by noting that $\bar u_+ = \bar\alpha_++\bar \beta_+=\alpha^*+\epsilon+ \bar \beta_+$ and so (note the precedence of the operators!) $\bar u_+^*= \alpha+\epsilon^*+\bar \beta^*_+$. Accordingly, $$ [\bar u_+,u]_{\germ q} = [\alpha^*+\bar\beta_++\epsilon ,u]_\germ q,\qquad [\bar u_+^*,u]_{\germ q} = [\alpha+\bar\beta^*_++\epsilon^*,u]_{\germ q}. $$ Since $[\alpha^*,u]$ has components of type $(0,b)$ only, it is killed by $\pi_\germ q$, and the first expression gives $$ \aligned A_1 =& - \| [\bar\beta_++\epsilon ,u]\|^2\\ =& - (\|[\bar\beta_+ +\epsilon,\alpha]\|^2 +\|[\bar\beta_++\epsilon,\beta]\|^2 + \|[\bar\beta_++\epsilon,\lambda]\|^2) \endaligned $$ by orthogonality. The second expression expands as: $$ [\bar u_+^*,u]_{\germ q} = [\alpha,u] + [\bar\beta^*_++\epsilon^*,\alpha]_{\germ q} + [\bar\beta^*_++\epsilon^*,\beta+\lambda]_{\germ q}. $$ For weight reasons, $[\bar\beta^*_+,\alpha]_{\germ q} = 0$ and $[\epsilon^*,\alpha]_\germ q=[\epsilon^*,\beta]_\germ q=0$. Therefore, by orthogonality: $$ A_2 = \|[\bar u_+^*,u]_{\germ q}\|^2 = \| [\alpha,\beta]\|^2 + \| [\alpha,\lambda]\|^2 + \|[\bar\beta^*_+,\beta]_{\germ q}\|^2 + \|[\bar\beta^*_+,\lambda]_{\germ q}\|^2+\|[\epsilon^*,\lambda]_\germ q\|^2 . $$ \textbf{The term $A_4$.} To calculate $A_4$, we observe that $$ [u,\bar u]_+ =[\beta,\bar\alpha_+]+ [\lambda,\bar\alpha_+]= [\beta, \alpha^* +\epsilon ]+ [\lambda,\alpha^*+\epsilon]. $$ So $h([[u,\bar u]_+,u],u) = h([[\beta,\alpha^*+\epsilon],u],u)+ h([ [\lambda,\alpha^*+\epsilon],u],u)$ and we consider each term separately. For the first term, note that $[[\beta,\epsilon],u]$ as well as $ [[\lambda,\epsilon],u] $ belong to $\bigoplus_{j\ge 1} \eg^{-2,-j}$ and hence are both orthogonal to $u$, and we can discard these terms. Moreover, $[\beta,\alpha^*]\in \eg^{0,-1}$ and so, by orthogonality, $$ \aligned h([[\beta,\alpha^*],u],u) &= h([[\beta,\alpha^*],\alpha],\beta) + h([[\beta,\alpha^*],\beta],\lambda)+h([[\beta,\alpha^*],\lambda],\lambda). \endaligned $$ Since $ -h([\alpha,[\beta,\alpha^*]],\beta) = -h([\beta,\alpha^*],[\alpha^*,\beta]) = \|[\alpha^*,\beta]\|^2 $ we find for the first term $$ \aligned h([[\beta,\alpha^*],u],u) = \|[\alpha^*,\beta]\|^2+ h([[\beta,\alpha^*],\beta],\lambda)+h([[\beta,\alpha^*],\lambda],\lambda). \endaligned $$ Note that $[\lambda,\alpha^* ]\in \bigoplus_{j\ge 0}\,\eg^{0,-2-j}$ so that by orthogonality, $$ \aligned h([[\lambda, \alpha ^*],u],u) &= h([[\lambda, \alpha^*],\beta],\lambda) + h([[\lambda, \alpha^*],\alpha + \lambda],\lambda) . \endaligned $$ The second term thus simplifies to $$ \aligned h([[\lambda,\alpha^*],\alpha],\lambda) + h([[\lambda,\alpha^*],\lambda],\lambda) &= -h([\alpha,[\lambda,\alpha^*]],\lambda) + h([[\lambda,\alpha^*],\lambda],\lambda) \\ &= -h([\lambda,\alpha^*],[\alpha^*,\lambda]) + h([[\lambda,\alpha^*],\lambda],\lambda) \\ &= \|[\alpha^*,\lambda]\|^2 + h([[\lambda,\alpha^*],\lambda],\lambda) . \endaligned $$ It follows that $$ \aligned A_4 & = -2\|[\alpha^*,\lambda]\|^2-2\|[\alpha^*,\beta]\|^2 \\ & \hspace{4em}-2\,\text{\rm Re} \left (h([[\lambda,\alpha^*],\lambda],\lambda)+ h([[\alpha^*,\beta],\lambda],\lambda)+h([[\alpha^*,\lambda],\beta],\lambda)\right).\qed \endaligned $$ \end{proof} \begin{rmk}\label{rmk:epsilon} We claim that $\epsilon$ and the Deligne splitting $\delta$ of $(F,W)$ are related as follows: \[ \epsilon= [-2{\rm i}\delta,\bar\alpha]_+.
\] To see this, apply the Deligne splitting: $$ \alpha = \Ad(e^{{\rm i} \delta})\alpha^{\ddag} $$ where $\alpha^{\ddag}$ is of type $(-1,1)$ at the split mixed Hodge structure $(\hat F,W)$ defined by $\hat F = e^{-{\rm i}\delta}F$. At that point the complex conjugate and the adjoint of $\alpha^{\ddag}$ coincide. Therefore, \[ \aligned \alpha^* &= \Ad (e^{{\rm i}\delta} ) [ {\alpha^{\ddag}}^*]_{\hat F} = \Ad (e^{{\rm i}\delta} )[ \overline{ \alpha^{\ddag}}]_{\hat F}\\ \bar\alpha &= \Ad(e^{-{\rm i}\delta})[\overline{\alpha^{\ddag}} ]_{\hat F} = \Ad(e^{{\rm i}\delta})[ \Ad(e^{-2{\rm i}\delta})\overline{\alpha^{\ddag}}]_{\hat F} \endaligned \] Consequently, $$ \aligned \epsilon &= (\alpha^* -\bar\alpha)_+\\ &= \Ad(e^{{\rm i}\delta}) ((\Ad(e^{-2{\rm i}\delta})-1)\overline{\alpha^{\ddag}})_{+,\hat F} \\ &= \Ad(e^{{\rm i}\delta}) [-2{\rm i}\delta,\bar\alpha^{\ddag}]_{+,\hat F} \\ &= [-2{\rm i}\delta,\Ad(e^{{\rm i}\delta})\overline{\alpha^{\ddag}}]_+ \\ &= [-2{\rm i}\delta,\Ad(e^{2{\rm i}\delta})\bar\alpha]_+ \\ &= [-2{\rm i}\delta,\bar\alpha]_+. \endaligned $$ \end{rmk} \par We shall now discuss particular cases. \begin{corr} \label{hsecex2} The holomorphic sectional curvature along a horizontal direction $u = \alpha + \lambda$ with $\alpha$ of type $(-1,1)$ and $\lambda\in\Lambda$ equals $$ R(u)= \frac{ 2\|[\alpha,\lambda]\|^2 +f(u,\epsilon) - 3\|[\alpha^*,\lambda]\|^2 - \|[\alpha,\alpha^*]\|^2 - \text{\rm Re} (h([[\lambda,\alpha^*],\lambda],\lambda))}{(\|\alpha\|^2+\|\lambda\|^2)^2}, $$ where $f(u,\epsilon)= -\left(\|[\alpha,\epsilon]\|^2+\| [\lambda,\epsilon] \|^2\right)+\| [\lambda,\epsilon^*]\|^2$. In particular: \begin{itemize} \item $R(u)\le 0$ if $[\alpha,\lambda]=0=[\lambda,\epsilon^*]$ and $\lambda$ is of pure type $(-1,-k)$ for some $k\ge 1$ (since $[[\lambda ,\alpha^*],\lambda]$ and $\lambda$ have different types), and $R(u)<0$ as soon as $\alpha\neq 0$. \item $R(u)>0$ if $[\alpha^*,\lambda]=0=[u,\epsilon]$ provided $2\|[\alpha,\lambda]\|^2+ \| [\lambda,\epsilon^*]\|^2 >\|[ \alpha^*,\alpha]\|^2$. \end{itemize} \end{corr} \begin{exmple}\label{exmple:kahler-def} Let us return to the setting of the variation of mixed Hodge structure \eqref{eqn:kahler-def} arising from a variation of K\"ahler moduli along a family of compact K\"ahler manifolds. The original variation $F(s)$ is a direct sum of pure Hodge structures and can be expressed locally as $$ F(s) = e^{\Gamma(s)} \cdot F(0) $$ where $\Gamma:\Delta^r\to\germ q$ vanishes at $0$ and takes values in $\eg^{-1,1}\oplus\eg^{-2,2}\oplus\cdots$. The requirement that each $\gamma_j$ be of type $(-1,-1)$ for all $F(s)$ implies that $$ \Ad(e^{-\alpha(s)})\gamma_j = e^{-\ad\alpha(s)}\gamma_j $$ is horizontal at $F(0)$ for all $s$. Via differentiation along a holomorphic arc through $s=0$, this fact implies that $[\alpha'(0),\gamma_j] = 0$ since $\alpha'(0)\in\eg^{-1,1}$ and $\alpha(0) = 0$. \par The local normal form of the variation \eqref{eqn:kahler-def} is therefore $$ \tilde F(s) = e^{{\rm i} N(u(s))}e^{\alpha(s)} \cdot F(0) $$ where $u(s)$ takes values in the complex linear span $L_{\mathbb C}$ of $\gamma_1,\dots,\gamma_k$. Accordingly, the derivative of $(\tilde F(s),W)$ at $s=0$ is $$ \xi = \xi^{-1,-1} + \xi^{-1,1},\qquad \xi^{-1,-1} ={\rm i} N(u'(0)),\qquad \xi^{-1,1} = \alpha'(0) $$ where $[\xi^{-1,-1},\xi^{-1,1}]=0$. By Corollary \ref{hsecex2}, this proves that the holomorphic sectional curvature along the direction $\xi$ is zero provided that $\epsilon=0$.
\par To verify that $\epsilon=0$, observe that since the mixed Hodge structures $(F(s),W)$ are all split over $\mathbb R$, the element $\delta$ attached to $(\tilde F(0),W)$ is defined by the equation $$ e^{-{\rm i} N(\overline{u(0)})} \cdot Y_{(F(0),W)} = e^{-2{\rm i} \delta}e^{{\rm i} N(u(0))} \cdot Y_{(F(0),W)}. $$ Since $\delta$ commutes with all $(p,p)$-morphisms of $(\tilde F(0),W)$, it follows from the previous equation that $\delta = N(\re(u(0)))$. Accordingly, $\delta$ is real and belongs to $L_{{\mathbb{C}}}$ and so $$ [\bar\alpha'(0),\delta] = \overline{[\alpha'(0),\delta]} = 0. $$ By Remark~\ref{rmk:epsilon}, it follows that $\epsilon=0$. \end{exmple} \begin{corr}\label{curvapll2} The holomorphic sectional curvature along a horizontal direction $u = \alpha + \beta$ with $\alpha$ of type $(-1,1)$ and $\beta$ of type $(-1,0)$ is $$ \aligned R(u)&= \frac{-n(\alpha,\beta)+p(\alpha,\beta) } {(\|\alpha\|^2+\|\beta\|^2)^2},\\ n(\alpha,\beta) &:= \|[\alpha^*+\epsilon,\alpha]\|^2 + \|[\epsilon,\beta]\|^2 + 3 \|[\alpha^*,\beta]\|^2 +\|[\alpha,\bar\beta_+] \|^2 + \|[\bar\beta_+,\beta]\|^2,\\ p(\alpha,\beta)&:= \|[\alpha,\beta]\|^2 +\| [\bar\beta^*_+,\beta]_{\germ q}\|^2. \endaligned $$ In particular, if $\alpha=0$ and $[\beta,\bar\beta_+]=0=[\epsilon,\beta]$ (which is the case if $W_{-1}\eg_{\mathbb{C}}$ is abelian), we have $ {R}(u)\ge 0$. \end{corr} Next, we look at a unipotent variation of mixed Hodge structure in the sense of Hain and Zucker \cite{unipotvars}. These are the variations where the pure Hodge structures on the graded quotients are constant so that $u^{-1,1}=0$ and hence $\epsilon=0$. This situation occurs in two well-known geometric examples: \begin{itemize} \item The VMHS on $J_x/J_x^3$, $x\in X$ where $X$ is a smooth complex projective variety; \item The VMHS attached to a family of homologically trivial algebraic cycles moving in a fixed variety $X$. \end{itemize} \begin{corr} For the curvature coming from a unipotent variation we have $$ R(u)= \frac{- \|[\bar\beta_+,\beta]\|^2 - \| [\bar\beta_+,\lambda]\|^2 + \| [\bar\beta^*_+,\beta]_{\germ q}\|^2 +\| [\bar\beta^*_+,\lambda]_\germ q\|^2 }{(\|\beta\|^2+\|\lambda\|^2)^2}. $$ \end{corr} \section{Curvature of Hodge Bundles} \label{sec:hb} \subsection{Hodge Bundles over Period Domains} In this subsection, we compute the curvature of the Hodge bundles over the classifying space $D$ using the methods of \S~\ref{ssec:secondorder}. Since the Hodge bundles of a variation of mixed Hodge structure $\VV\to S$ are obtained by pulling back the Hodge bundles of $D$ along local liftings of the period map, this furnishes a computation of the curvature of the Hodge bundles of a variation of mixed Hodge structure. \par Let $F\in D$ and $\germ q$ be the associated nilpotent subalgebra \eqref{eqn:tang-alg} and $U$ be a neighborhood of zero in $\germ q$ such that the map $u\mapsto e^u\cdot F$ is a biholomorphism onto a neighborhood of $F$. Then, we obtain a local holomorphic framing for the bundle $\FF^p$ over $U$ via the sections $\alpha(u) = e^u\alpha$ for fixed $\alpha\in F^p$. Let $\beta(u) = e^u\beta$ be another such section of $\FF^p$ over $U$, and $L_g$ denote the linear action of $g\in GL(V_{{\mathbb{C}}})$ on $V_{{\mathbb{C}}}$. Let $\Pi$ denote orthogonal projection from $V_{{\mathbb{C}}}$ to $F^p$.
Then, as in \S~\ref{ssec:secondorder}, by \eqref{eqn:expu} the metric is $$ \aligned h_{e^u\cdot F}(\alpha(u),\beta(u)) &= h_{F}(L_{\exp(\varphi(u))}\alpha,L_{\exp(\varphi(u))}\beta) \\ &= h_{F}(\Pi\raise1pt\hbox{{$\scriptscriptstyle\circ$}} L_{\exp(\varphi(u))}\alpha,L_{\exp(\varphi(u))}\beta) \\ &= h_{F}(L_{\exp(\varphi(u)^*)}\Pi\raise1pt\hbox{{$\scriptscriptstyle\circ$}} L_{\exp(\varphi(u))}\alpha,\beta) \\ &= h_{F}(\Pi\raise1pt\hbox{{$\scriptscriptstyle\circ$}} L_{\exp(\varphi(u)^*)} \Pi\raise1pt\hbox{{$\scriptscriptstyle\circ$}} L_{\exp(\varphi(u))}\alpha,\beta). \endaligned $$ In analogy with \S~\ref{ssec:tang}, we have the identity $$ \Pi\raise1pt\hbox{{$\scriptscriptstyle\circ$}} L_{\exp(\varphi(u))} = L_{\exp(\Pi\raise1pt\hbox{{$\scriptscriptstyle\circ$}} \varphi(u))}, $$ since $\varphi(u)$ belongs to the subalgebra preserving $F^p$. The identity $$ \Pi\raise1pt\hbox{{$\scriptscriptstyle\circ$}} L_{\exp(\varphi(u)^*)} = L_{\exp(\Pi\raise1pt\hbox{{$\scriptscriptstyle\circ$}} \varphi(u)^*)} $$ is also straightforward because $\varphi(u)$ is a sum of components of Hodge type $(a,b)$ with $a\geq 0$. As such $\varphi(u)^*$ is a sum of components of Hodge type $(-a,-b)$ with $-a\leq 0$, and hence there is no way for the action of $\varphi(u)^*$ to move a vector of Hodge type $(c,d)$ with $c<p$ back into $F^p$. \par Accordingly, by the universal nature of the Campbell--Baker--Hausdorff formula, the only difference between the computation of the curvature of $\FF^p$ and the curvature of $T(D)$ is that for the former we use the linear action of $GL(V_{{\mathbb{C}}})$ and $\eg\germ l(V_{{\mathbb{C}}})$ and project orthogonally onto $F^p$, whereas in the latter we use the adjoint action and project orthogonally onto $\germ q$. \begin{corr} Let $\Pi$ denote orthogonal projection from $V_{{\mathbb{C}}}$ to $F^p$. Then, the curvature of the Hodge bundle $\FF^p$ over $D$ in the directions $u$, $v\in T^{\rm hol}_{F}(D)$ is \[ \begin{array}{lll} R_\nabla (u,\bar v) &=& - ( [\Pi\raise1pt\hbox{{$\scriptscriptstyle\circ$}} (\bar u^*_+), \Pi\raise1pt\hbox{{$\scriptscriptstyle\circ$}} (\bar v_+)] )\\ & \hphantom{=} & -\frac 12 \left( \Pi\raise1pt\hbox{{$\scriptscriptstyle\circ$}} ([u,\bar v]_0) + \Pi\raise1pt\hbox{{$\scriptscriptstyle\circ$}} ([v,\bar u]_0)\right) \\ & \hphantom{=} & \qquad\quad + \Pi\raise1pt\hbox{{$\scriptscriptstyle\circ$}} \left([\bar v, u]_+ +[\bar u ,v]_+^* \right) . \end{array} \] Taking account of the fact that the terms with subscript $+$ (without an adjoint) and subscript $0$ always preserve $F^p$, this simplifies to \[ \begin{array}{lll} R_\nabla (u,\bar v) &=& - ( [\Pi\raise1pt\hbox{{$\scriptscriptstyle\circ$}} (\bar u^*_+), \bar v_+] )\\ & \hphantom{=} & -\frac 12 \left([u,\bar v]_0 + [v,\bar u]_0\right) \\ & \hphantom{=} & \qquad\quad + \left([\bar v, u]_+ + \Pi\raise1pt\hbox{{$\scriptscriptstyle\circ$}} [\bar u ,v]_+^* \right). \end{array} \] \end{corr} \par The computation of the curvature of the quotient bundle $\FF^p/\FF^{p+1}$ proceeds along the same lines as the computation of the curvature of $\FF^p$. However, in this case the corresponding projection operator $\Pi'$ sends $V_{{\mathbb{C}}}$ to $$ \FF^p/\FF^{p+1}\cong U^p := \bigoplus_q\,{\mathcal I}^{p,q}_{(F,W)}.
$$ The identity $$ \Pi'\raise1pt\hbox{{$\scriptscriptstyle\circ$}} L_{\exp(\varphi(u))} = L_{\exp(\Pi'\raise1pt\hbox{{$\scriptscriptstyle\circ$}} \varphi(u))} $$ results from the fact that elements of $\eg_{{\mathbb{C}}}^{F}$ have Hodge components of type $(a,b)$ with $a\geq 0$, and a component of type $(a,b)$ moves $U^p$ to $U^{p+a}$. A similar argument works for $\Pi'\raise1pt\hbox{{$\scriptscriptstyle\circ$}} \varphi(u)^*$. \begin{corr}\label{corr:hodge-bundle} Let $\Pi'$ denote orthogonal projection from $V_{{\mathbb{C}}}$ to $U^p$ at $F$. Then, the curvature of the Hodge bundle $\FF^p/\FF^{p+1}$ over $D$ in the directions $u$, $v\in T^{\rm hol}_{F}(D)$ is \[ \begin{array}{lll} R_\nabla (u,\bar v) &=& - ( [\Pi'\raise1pt\hbox{{$\scriptscriptstyle\circ$}} (\bar u^*_+), \Pi'\raise1pt\hbox{{$\scriptscriptstyle\circ$}} (\bar v_+)] )\\ & \hphantom{=} & -\frac 12 \left( \Pi'\raise1pt\hbox{{$\scriptscriptstyle\circ$}} ([u,\bar v]_0) + \Pi'\raise1pt\hbox{{$\scriptscriptstyle\circ$}} ([v,\bar u]_0)\right) \\ & \hphantom{=} & \qquad\quad + \Pi'\raise1pt\hbox{{$\scriptscriptstyle\circ$}} \left([\bar v, u]_+ +[\bar u ,v]_+^* \right). \end{array} \] Taking account of the fact that the terms with subscript 0 preserve $U^p$, it follows that \[ \begin{array}{lll} R_\nabla (u,\bar v) &=& - ( [\Pi'\raise1pt\hbox{{$\scriptscriptstyle\circ$}} (\bar u^*_+), \Pi'\raise1pt\hbox{{$\scriptscriptstyle\circ$}} (\bar v_+)] )\\ & \hphantom{=} & -\frac 12 \left([u,\bar v]_0 + [v,\bar u]_0\right) \\ & \hphantom{=} & \qquad\quad + \Pi'\raise1pt\hbox{{$\scriptscriptstyle\circ$}} \left([\bar v, u]_+ +[\bar u ,v]_+^* \right). \end{array} \] \end{corr} \subsection{First Chern Forms and Positivity} Let us calculate the first Chern form of the Hodge bundles $\UU^p$ over a disk $f:\Delta \to D$ with local coordinate $s$. Set $f(s)=F_s$ and $u=f_*(d/ds)_{F_s}$. We also let \[ u^{(p)}: \UU^p \to \UU^{p-1},\quad u^{(p)}= \alpha^{(p)}+\beta^{(p)}+\lambda^{(p)} \] be the restriction of $u$ to $\UU^p$ and $\alpha^{(p)},\beta^{(p)}$ and $\lambda^{(p)}$ the decomposition into types $(-1,1), (-1,0)$, respectively $\sum_{k\ge 1} (-1,-k)$. Then we have \begin{lemma} \label{chrnformHodgebundles} The first Chern form $c_1(\UU^{p})$ involves only the components $\alpha^{(p)}$ of $u$ of type $(-1,1)$ and locally can be written \[ c_1(\UU^{p}) = \frac{{\rm i}}{2\pi} \, \left( \| \alpha^{(p)}\|^2_{F_s} - \| \alpha^{(p+1)}\|^2_{F_s} \right) ds\wedge d\bar s. \] \end{lemma} \proof We have to calculate $\mathop{\rm Tr}\nolimits R_\nabla (u,\bar u)$ using Cor.~\ref{corr:hodge-bundle}. Let us write $u= \alpha+\beta+\lambda$ as before. Since $\Pi'\raise1pt\hbox{{$\scriptscriptstyle\circ$}} (\bar u_+) = \bar\beta_+ $, we find \begin{eqnarray} \left[ \Pi'\raise1pt\hbox{{$\scriptscriptstyle\circ$}} (\bar u_+^*), \Pi'\raise1pt\hbox{{$\scriptscriptstyle\circ$}} (\bar u_+) \right] &= & [\bar\beta^*_+,\bar\beta_+] \label{eqn:hc1} \\ \left[ u, \bar u \right]_0&=& [\alpha, \alpha^*] \label{eqn:hc2} \\ \Pi'\raise1pt\hbox{{$\scriptscriptstyle\circ$}} [\bar u, u]_+ &=& [ \alpha^*,\beta+\lambda] \label{eqn:hc3} . \end{eqnarray} The first two terms preserve the bi-degree but this is not the case for \eqref{eqn:hc3}. So, computing traces, we can discard it.
The vanishing of the trace of $[\bar\beta_+^*,\bar\beta_+]$ follows from the standard calculation $$ \mathop{\rm Tr}\nolimits ([A^*,A]) = \mathop{\rm Tr}\nolimits (A^*A) - \mathop{\rm Tr}\nolimits (AA^*) = \mathop{\rm Tr}\nolimits (AA^*) - \mathop{\rm Tr}\nolimits (AA^*) = 0 $$ with $A= \bar\beta_+\in\text{End}(\mathcal U^p)$. On the other hand, since $\alpha$ maps $\mathcal U^p$ to $\mathcal U^{p-1}$ this argument does not apply to \eqref{eqn:hc2}, and so \[ \aligned \mathop{\rm Tr}\nolimits R_\nabla (u,\bar u) &= - \mathop{\rm Tr}\nolimits [\bar\beta^*_+,\bar\beta_+]\, | \UU^{p} - \mathop{\rm Tr}\nolimits [\alpha, \alpha^*] \, |\UU^{p}\\ &= \| \alpha^{(p)}\|^2_{F_s} - \| \alpha^{(p+1)}\|^2_{F_s}. \qed \endaligned \] \endproof \begin{corr}\label{corr:top-hodge} The ``top'' Hodge bundle, say $\UU^n\simeq \FF^n$ (which is a holomorphic subbundle of the total bundle) has a non-negative Chern form: \[ c_1(\UU^{n}) = \frac{{\rm i} }{2\pi} \, \left( \| \alpha^{(n)}\|^2_{F_s} \right) ds\wedge d\bar s\, \ge \,0 . \] \end{corr} As in \cite[Prop. 7.15]{periods2} one deduces from Lemma~\ref{chrnformHodgebundles} also: \begin{corr} Let $\EE^p:= \FF^p/\FF^{p+1}$ and put \[ K(\FF^\bullet):= \bigotimes _p (\det (\EE^p))^{\otimes p}. \] Then the first Chern form of $K(\FF^\bullet)$ is non-negative and is zero precisely in the horizontal directions $(-1,k)$ with $k\le 0$. \end{corr} Let us now consider the curvature form itself. \begin{exmple} Consider the case with two adjacent weights $0\subset W_0\subset W_1=V$. Split the top Hodge bundle as $\FF^n= {\mathcal I}^{n,-n}\oplus {\mathcal I}^{n,-n+1}$ and decompose the curvature matrix accordingly \[ R(u,\bar u) =\begin{pmatrix} \alpha^*\raise1pt\hbox{{$\scriptscriptstyle\circ$}}\alpha + \bar\beta\raise1pt\hbox{{$\scriptscriptstyle\circ$}} \bar\beta^* & \alpha^*\raise1pt\hbox{{$\scriptscriptstyle\circ$}} \beta\\ -\beta^*\raise1pt\hbox{{$\scriptscriptstyle\circ$}} \alpha& \alpha^*\raise1pt\hbox{{$\scriptscriptstyle\circ$}}\alpha - \bar\beta^*\raise1pt\hbox{{$\scriptscriptstyle\circ$}} \bar\beta \end{pmatrix},\, u=\alpha+\beta. \] We see that for $v\in V_{\mathbb{C}}$, writing $R(v)(u,\bar u):= h_F(R(u,\bar u)v,v)$, we have $R(v)(u,\bar u) \ge 0$ if $u=\alpha$, but $R(v)(\beta,\bar \beta) =\|\bar\beta^*(v^{(-n)}) \|^2_F- \| \bar\beta(v^{(-n+1)}) \|^2_F$ which need not be $\ge 0$. \end{exmple} From the preceding example it follows that we can expect positive curvature at most in the $\alpha$-direction. In fact, this is true: \begin{prop} \label{curvtopHodge} The ``top'' Hodge bundle, say $\UU^n\simeq \FF^n$, has non-negative curvature in the $\alpha$-directions and has identically zero curvature in the $\lambda$-directions. \end{prop} \proof We note that the diagonal terms in the curvature form involve $\alpha^{(q)}\raise1pt\hbox{{$\scriptscriptstyle\circ$}} (\alpha^{(q)})^*$ acting on ${\mathcal I}^{n,q}$. Let $r$ be the minimal $q$ with ${\mathcal I}^{n,q}\not=0$ and consider the splitting $\UU^n ={\mathcal I}^{n,r}\oplus{\mathcal I}^{n,r+1}\oplus {\mathcal I}^{n,>r+1}$. Assume $\beta=0$. The matrix of the curvature form splits accordingly: \[ R(u,\bar u) = \begin{pmatrix} \alpha^* \raise1pt\hbox{{$\scriptscriptstyle\circ$}} \alpha &0 & \alpha^*\raise1pt\hbox{{$\scriptscriptstyle\circ$}} \lambda \\ 0 &\alpha^* \raise1pt\hbox{{$\scriptscriptstyle\circ$}} \alpha & 0 \\ -\lambda^*\raise1pt\hbox{{$\scriptscriptstyle\circ$}}\alpha& 0 & \alpha^* \raise1pt\hbox{{$\scriptscriptstyle\circ$}} \alpha \end{pmatrix},\, u=\alpha+\lambda.
\] So with $v\in \UU^{n}$ one finds for $u=\alpha+\lambda$: \[ R(v)(u,\bar u)= \|\alpha(v)\|_F ^2 \ge 0 \] with equality if and only if $\alpha(v)=0$. \qed\endproof Here is an example of a variation where $\beta=0$: \begin{exmple} Consider \emph{higher normal functions associated to motivic cohomology} $H^p_{\MM} (q)$, see \cite{3authors}. Indeed, these give extensions of $R^{p-1}\pi_*{\mathbb{Z}}(q)$ with $p-2q-1<0$ where $\pi: X\to S$ is a smooth projective family. \\ Assume moreover that the cohomology $H^{p-1}(X_t)$ of the fibres $X_t$ is such that the non-zero Hodge numbers are $h^{p-1-q,q},\dots, h^{q, p-1-q}$ (i.e. the Hodge structure has level $=p-1-2q$). With $n=2q+1-p$ the non-zero Hodge numbers of the mixed variation are, besides $h^{0,0}$, precisely $h^{-n,0},\dots ,h^{0,-n}$. Here $\beta=0$ while $\lambda\not=0$. \end{exmple} \subsection{Variations of Mixed Hodge Structure} We want to stress that, although the above calculations are done on the period domain, they apply also to variations of mixed Hodge structure: the Hodge bundles simply pull back and so does the Hodge metric. What remains to be done is to identify the actions of $u,v$ when these are tangent to period maps. To do this, and also as a check on the preceding calculations, we shall now compute the curvature of the Hodge bundles of a variation of mixed Hodge structure starting from Griffiths' computation for a \emph{variation of pure Hodge structure} $\mathcal H$. To this end, we recall that the Gauss--Manin connection $\nabla$ of $\mathcal H$ decomposes as $$ \nabla= \bar\theta_0 + \underbrace{\bar\pd_0 + \pd_0}_{D} + \theta_0, $$ where $\bar\pd_0$ and $\pd_0$ are conjugate differential operators of type $(0,1)$ and $(1,0)$ respectively which preserve the Hodge bundles $\mathcal H^{p,q}$, while $\theta_0$ is an endomorphism-valued 1-form which sends $\mathcal H^{p,q}$ to $\mathcal H^{p-1,q+1}\otimes\EE^{1,0}$ and $\bar\theta_0$ is the complex conjugate of $\theta_0$. The connection $D=\bar\pd_0 + \pd_0$ is hermitian with respect to the Hodge metric $$ dh(u,v) = h((\bar\pd_0 + \pd_0)u,v) + h(u,(\bar\pd_0 + \pd_0)v). $$ In particular, since $\bar\pd_0$ coincides with the induced action of the $(0,1)$-part of the Gauss--Manin connection acting on $$ \mathcal H^{p,q} \cong \FF^p/\FF^{p+1}, $$ it follows that $D$ is the \emph{Chern connection}, i.e., the hermitian holomorphic connection of the system of Hodge bundles attached to $\mathcal H$. Expanding out $$ (\bar\theta_0 + \bar\pd_0 + \pd_0 + \theta_0)^2 = 0 $$ and decomposing with respect to Hodge types shows that $$ R_{D} = -(\theta_0\wedge\bar\theta_0 + \bar\theta_0\wedge\theta_0). $$ If $d/ds$ is a holomorphic vector field on $S$, the value $u$ of $\theta_0(f_*(d/ds))$ at zero belongs to $\eg^{-1,1}$ and $R_D(u,\bar u)= -[u, \bar u]$ which checks with the previous calculation. \par To compute the curvature of the Hodge bundles $\FF^p/\FF^{p+1}$ of a variation of \emph{mixed} Hodge structure $\VV\to S$, we consider the $C^{\infty}$-subbundles $\UU^p$ obtained by pulling back $\UU^p \to D$ along the variation, i.e. $$ {\mathcal I}^{p,q}(s) = I^{p,q}_{(\FF(s),\mathcal W)},\qquad \UU^p = \bigoplus_q\,{\mathcal I}^{p,q}. $$ By \cite{higgs}, the Gauss--Manin connection of $\VV$ decomposes as $$ \nabla = \tau_0 + \bar\pd + \pd + \theta $$ where $\bar\pd$ and $\pd$ are differential operators of type $(0,1)$ and $(1,0)$ which preserve $\UU^p$, whereas $ \theta:\UU^p \to \UU^{p-1}\otimes\EE^{1,0}$ and $ \tau_0:\UU^p \to\UU^{p+1}\otimes\EE^{0,1}$.
One has \[ \aligned {\mathcal I}^{p,q} &\mapright{\tau_0} ({\mathcal I}^{p+1,q-1}\otimes \EE_S^{0,1}), \\ {\mathcal I}^{p,q} &\mapright{\theta=(\theta_0,\theta_-)} ({\mathcal I}^{p-1,q+1}\otimes \EE_S^{1,0}) \oplus( \oplus_{k\ge 2}{\mathcal I} ^{p-1,q+k} \otimes \EE_S^{1,0}). \endaligned \] Similarly $$ \aligned {\mathcal I}^{p,q} & \mapright{\pd} {\mathcal I}^{p,q} \otimes \EE_S^{1,0},\\ {\mathcal I}^{p,q} & \mapright{\bar\pd = (\bar\pd_0 , \tau_-)} ({\mathcal I}^{p,q} \otimes \EE_S^{0,1}) \oplus( \oplus_{k \ge 1} {\mathcal I}^{p,q-k} \otimes \EE_S^{0,1}) . \endaligned $$ To unify notation, we also write $\pd = \pd_0$. Then, we have $$ \nabla = \tau_0 + \tau_- + \bar\pd_0 + \pd_0 + \theta_- + \theta_0 . $$ In particular, relative to the $C^{\infty}$ isomorphism of $\gr^W_k$ with $$ \EE_k := \bigoplus_{p+q=k}\,{\mathcal I}^{p,q} $$ the induced action of $\nabla$ on $\gr^W_k$ coincides with the action of $$ D_0 = \tau_0 + \bar\pd_0 + \pd_0 + \theta_0 $$ on $\EE_k$. Given that the mixed Hodge metric is just the pullback of the Hodge metric on $\gr^W_k$ via the isomorphism with $\EE_k$, it follows that $\bar\pd_0 + \pd_0$ is a hermitian connection on $\UU^p$. In particular, since the induced holomorphic structure on $\UU^p$ is given by $\bar\pd$ and by the adjoint property, it follows that \begin{equation} D = \underbrace{\tau_-+\bar\pd_0}_{\bar\partial} + \pd_0 - \tau_-^* \label{eqn:hh-conn} \end{equation} is the Chern connection of $\UU^p$ relative to the mixed Hodge metric. Thus, $$ R_D = R_{(\bar\pd + \pd_0) - \tau_-^*} = R_{(\bar\pd + \pd_0)} - (\bar\pd + \pd_0)\tau_-^* + \tau_-^*\wedge\tau_-^*. $$ To simplify this, observe that $\tau_-^*$ is a differential form of type $(1,0)$, so we must have $$ -\pd\tau_-^* + \tau_-^*\wedge\tau_-^* = 0 $$ in order to get a differential form of type $(1,1)$. Therefore, $$ R_D = R_{(\bar\pd + \pd_0)} - \bar\pd\tau_-^* . $$ Expanding out $$ \nabla^2 = (\tau_0 + \bar\pd + \pd_0 + \theta)^2 = 0, $$ it follows that \begin{equation} R_{(\bar\pd + \pd_0)} = -(\theta\wedge\tau_0 + \tau_0\wedge\theta) \label{eqn:higgs-conn} \end{equation} and hence $$ R_D = -(\theta\wedge\tau_0 + \tau_0\wedge\theta) - \bar\pd\tau_-^* . $$ \par To continue, we note that $$ \bar\pd\tau_-^* = (\bar\pd_0 + \tau_-)\tau_-^* = \bar\pd_0\tau_-^* + \tau_-\wedge\tau_-^* + \tau_-^*\wedge\tau_- $$ and so \begin{equation} R_D = -(\theta\wedge\tau_0 + \tau_0\wedge\theta) -(\tau_-\wedge\tau_-^* + \tau_-^*\wedge\tau_-) - \bar\pd_0\tau_-^*. \label{eqn:D-curv} \end{equation} To finish the calculation, we differentiate the identity $$ h(\tau_-(\sigma_1),\sigma_2) = h(\sigma_1,\tau_-^*(\sigma_2)) $$ and take the $(1,1)$ part to obtain $$ \aligned h((\pd_0 \tau_-)(\sigma_1) &+ \tau_-(\pd_0 \sigma_1),\sigma_2) + h(\tau_-(\sigma_1),\bar\pd_0\sigma_2) \\ &= h(\pd_0\sigma_1,\tau_-^*(\sigma_2)) + h(\sigma_1,(\bar\pd_0\tau_-^*)(\sigma_2) + \tau_-^*(\bar\pd_0 \sigma_2)). \endaligned $$ Using the properties of the adjoint, this simplifies to \begin{equation*} \bar\pd_0\tau_-^* = (\pd_0\tau_-)^* . \end{equation*} It remains to compute $\pd_0\tau_- = \pd\tau_-$. To do this, first observe that $$ R_{\bar\pd + \pd} = R_{\bar\pd_0 + \pd_0 + \tau_-} = R_{\bar\pd_0 + \pd_0} + (\bar\pd_0 + \pd)\tau_-+ \tau_-\wedge\tau_-.
$$ Now note that equation \eqref{eqn:higgs-conn} implies that $R_{\bar\pd + \pd}$ is of type $(1,1)$, and hence $$ R_{\bar\pd + \pd_0} = R_{\bar\pd_0 + \pd_0} + \pd\tau_-, $$ since $R_{\bar\pd_0 + \pd_0}$ is also of type $(1,1)$, being the curvature of the hermitian holomorphic connection for $h$ and $\bar\pd_0$. Moreover, since $\bar\pd_0 + \pd_0$ preserves the bigrading by $\mathcal I^{p,q}$ whereas $\pd\tau_-$ lowers weights, it follows from \eqref{eqn:higgs-conn} that $$ \pd\tau_- = -(\theta_-\wedge\tau_0 + \tau_0\wedge\theta_-). $$ \begin{corr} The curvature of the Hodge bundles of a variation of mixed Hodge structure $\VV\to S$ is $$ R_D = -(\theta\wedge\tau_0 + \tau_0\wedge\theta) -(\theta_-\wedge\tau_0 + \tau_0\wedge\theta_-)^* -(\tau_-\wedge\tau_-^* + \tau_-^*\wedge\tau_-). $$ \end{corr} Let us compare the above results with the ones obtained on the period domain. \begin{prop} Let $\theta(\xi) = u$. Then the action of $R_D(\xi,\bar\xi)$ on $\UU^p$ agrees with the action of $R_{\nabla}(u,\bar u)$ on $U^p$ from Corollary~\ref{corr:hodge-bundle}. More precisely, the four terms in the expression for $R_{\nabla}(u,\bar u)$ compare as follows \[ \aligned {[}\Pi'\raise1pt\hbox{{$\scriptscriptstyle\circ$}} (\bar u^*_+), \Pi'\raise1pt\hbox{{$\scriptscriptstyle\circ$}} (\bar u_+)] &= (\tau_-\wedge\tau_-^* + \tau_-^*\wedge\tau_-)(\xi, \bar \xi) \\ - [u,\bar u]_0 &= -( \theta_0\wedge\tau_0 + \tau_0\wedge\theta_0)(\xi,\bar\xi)\\ -\Pi'\raise1pt\hbox{{$\scriptscriptstyle\circ$}} [u,\bar u]_+&= -(\theta_-\wedge\tau_0 + \tau_0\wedge\theta_-)(\xi,\bar\xi) ,\\ -\Pi'\raise1pt\hbox{{$\scriptscriptstyle\circ$}} [u,\bar u]_+^*&= -(\theta_-\wedge\tau_0 + \tau_0\wedge\theta_-)^*(\xi,\bar\xi). \endaligned \] \end{prop} \proof Recall that for vector-valued $A$ of type $(1,0)$ and $B$ of type $(0,1)$ we have $$ (A\wedge B + B\wedge A)(\xi,\bar\xi) = [A(\xi),B(\bar\xi)]. $$ A check of Hodge types shows that $\tau_-(\xi) = \Pi'\circ (\bar u)_+$ and hence $$ -(\tau_-\wedge\tau_-^* + \tau_-^*\wedge\tau_-)(\xi,\bar\xi) = -[\Pi'\circ (\bar u)_+^*,\Pi'\circ (\bar u)_+] $$ which is the first term of $R_\nabla(u,\bar u)$. The partial term $$ -(\theta_0\wedge\tau_0 + \tau_0\wedge\theta_0)(\xi,\bar\xi) = -[u,\bar u]_0 $$ is extracted from $-(\theta\wedge\tau_0 + \tau_0\wedge\theta)$. What remains of this term, $$ -(\theta_-\wedge\tau_0 + \tau_0\wedge\theta_-), $$ computes $-\Pi'\circ[u,\bar u]_+$. \qed\endproof \section{The Case of Abelian $W_{-1}\eg_{\mathbb{C}}$} \label{defoinAb} \subsection*{Negative Curvature} \begin{prop} \label{negcurv} Assume that we have a period map \[ F:\Delta \to D,\quad s\mapsto F(s), \] with $\beta=F_*(d/ds)(0) \in \eg^{-1,0}$. Moreover, suppose that $W_{-1}\eg_{\mathbb{C}}$ is an abelian subspace of $\eg_{\mathbb{C}}$. Then the curvature of the pull back metric is $\le 0$. \end{prop} \proof We have seen in Corollary~\ref{curvapll2} that the holomorphic sectional curvature of the Hodge metric on $D$ at $F(0)$ is non-negative. However, when we pull back a metric, the curvature gets an extra term which is $\le 0$. We shall show that due to the fact that $W_{-1}\eg_{{\mathbb{C}}}$ is abelian, the pull back metric gains sufficient negativity to compensate for the positivity. \par By the choice of coordinates \eqref{eqn:coord}, we can write the period map in the local normal form $$ F(s) = e^{\Gamma(s)} \cdot F(0), $$ where $\Gamma(s)$ is a holomorphic function taking values in the intersection of $W_{-1}\eg_{{\mathbb{C}}}$ and $\germ q = \germ q_{F(0)}$, i.e. $\Gamma(s)\in \eg^{-1,0}$.
Kaplan's decomposition (Theorem~\ref{group-decomp}) in this situation simplifies to \begin{equation}\label{eqn:w2AbelianSimplifies} e^{\Gamma(s)} = \underbrace{e^{\Gamma(s) + \bar\Gamma(s)}}_{g_{{\mathbb{R}}}(s) } \cdot \underbrace{e^{-\bar\Gamma(s)}}_{f(s)} \end{equation} thanks to the fact that $W_{-1}\eg_{{\mathbb{C}}}$ is abelian (indeed, since $[\Gamma,\bar\Gamma]=0$, the exponentials commute and $e^{\Gamma+\bar\Gamma}e^{-\bar\Gamma}=e^{\Gamma}$). Relative to the trivialization $T_{F(0)}(D)\cong T_{e^u\cdot F(0)}(D)$ given by left translation by $e^u$, we have by \eqref{eqn:2.11} $$ F_*(\pd/\pd s) = \pd\Gamma/\pd s, $$ since commutativity implies that $\psi_1(\Gamma(s),\pd\Gamma/\pd s)= \pd\Gamma/\pd s$ in our case (see \eqref{eqn:2.10} for the definition of $\psi_1$). Remark that \eqref{eqn:w2AbelianSimplifies} shows that $\varphi(s)=-\bar \Gamma(s)$. By Eqn.~\eqref{eqn:intermediatestep}, the pullback metric is therefore $$ \aligned h(s) &= h_{F(s)}(F_*(\pd/\pd s), F_*(\pd/\pd s))\\ &= \| \pi_\germ q\raise1pt\hbox{{$\scriptscriptstyle\circ$}} \Ad{e^{-\bar\Gamma(s)}}(\pd\Gamma/\pd s) \|^2_{F(0)} \\ &= \| (\pd\Gamma/\pd s)\|^2_{F(0) }, \endaligned $$ again by commutativity. The function $\gamma(s) = \partial \Gamma/\partial s$ is a holomorphic function and so $\partial \gamma(s)/\partial\bar s=0$. Put $\dot\gamma=\partial \gamma(s)/\partial s$ and $h_o = h_{F(0)}$. Then, the curvature of the pullback metric is: $$ \aligned K &= -\frac{1}{h}\frac{\pd^2}{\pd s \pd \bar s}\log h = -\frac{1}{h_o(\gamma,\gamma)}\frac{\pd^2}{\pd s \pd \bar s}\log h_o(\gamma,\gamma) \\ &= -\frac{1}{h_o(\gamma,\gamma)}\frac{\pd}{\pd s}\left( \frac{h_o(\gamma,\dot\gamma)}{h_o(\gamma,\gamma)}\right) \\ &= -\frac{1}{h_o(\gamma,\gamma)} \frac{h_o(\dot\gamma,\dot\gamma)h_o(\gamma,\gamma) - h_o(\dot\gamma,\gamma) h_o(\gamma,\dot\gamma)} {h_o(\gamma,\gamma)^2} \\ &= \frac{| h_o(\dot\gamma,\gamma)|^2 - h_o(\dot\gamma,\dot\gamma)h_o(\gamma,\gamma)} {h_o(\gamma,\gamma)^3} \leq 0, \endaligned $$ where the last step follows from the Cauchy--Schwarz inequality applied to $h_o(\dot\gamma,\gamma)$. \qed\endproof \begin{rmq} The proof shows that the Gaussian curvature of the pullback is negative wherever $\gamma$ and $\dot\gamma$ are linearly independent. \end{rmq} We give a first application to the situation of a normal function with fixed underlying Hodge structure $H$ of weight $-1$, i.e. a family of extensions \[ 0 \to H \to E \to \gr^W_0={\mathbb{Z}}(0) \to 0. \] \begin{corr} \label{curvneg:2} Let $\nu:S\to J(H)$ be a normal function with fixed underlying Hodge structure $H$. Then the holomorphic sectional curvature of the pull back of the Hodge metric is semi-negative. \end{corr} \begin{rmk} Via the isomorphism $\Ext^1_{\text{\rm MHS}}(A,B)\cong\Ext^1({\mathbb{Z}}(0),B\otimes A^{\vee})$, this result applies to families of cycles on a fixed variety $X$ and the VMHS on $J_x/J_x^3$ of a smooth projective variety. \end{rmk} \subsection*{Another Application: Mixed Hodge Structures and Fundamental Groups} We treat this in some detail with an eye towards a reader less acquainted with this material. \par Let $X$ be a smooth complex algebraic variety, and ${\mathbb{Z}}\pi_1(X,x)$ be the group ring consisting of all finite, formal ${\mathbb{Z}}$-linear combinations of elements of $\pi_1(X,x)$. The augmentation ideal $J_x$ is defined to be the kernel of the ring homomorphism $$ \epsilon:{\mathbb{Z}}\pi_1(X,x)\to{\mathbb{Z}} $$ which maps each element $g\in\pi_1(X,x)$ to $1\in{\mathbb{Z}}$. By the work of Morgan \cite{M}, the quotients $J_x/J_x^k$ carry functorial mixed Hodge structures constructed from the minimal model of the de Rham algebra of $X$.
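For orientation, we recall the first case of this construction, which is classical: the assignment $g-1\mapsto [g]$ induces an isomorphism of abelian groups \[ J_x/J_x^2 \;\xrightarrow{\ \sim\ }\; H_1(X;{\mathbb{Z}}), \] so the mixed Hodge structure on $J_x/J_x^2$ is the usual one on homology; it is only the quotients $J_x/J_x^k$ with $k\ge 3$ that capture non-abelian information about $\pi_1(X,x)$.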
We follow Hain's alternative approach \cite{hain}; the mixed Hodge structure on $J_x/J_x^k$ can be described using so-called iterated integrals as follows: The iterated integral on $\theta_1,\dots,\theta_r\in\EE^1(X)$, $$ \int\, \theta_1\cdots\theta_r $$ assigns to each smooth path $\gamma:[0,1]\to X$ the integral of $\theta_1\cdots\theta_r$ over the standard simplex in ${\mathbb{R}}^r$, i.e. $$ \int_{\gamma}\, \theta_1\cdots\theta_r = \int_{0\leq t_1\leq\cdots\leq t_r\leq 1}\, \theta_1(\gamma_*(d/d t_1))\cdots \theta_r(\gamma_*(d/d t_r)) dt_1\cdots d t_r. $$ Such an iterated integral is said to have length $r$. The spaces $\Hom_{{\mathbb{Z}}}(J_x/J^{s+1}_x,{\mathbb{C}})$ can be described as spaces of certain linear combinations of iterated integrals of lengths $\le s$. We only need their description for $s=2$: \begin{thm}[\protect{ \cite[Prop. 3.1.]{hain}}]\label{thm:it-int} The iterated integral \begin{equation} \int\,\theta + \sum_{j,k}\, a_{jk}\int\,\theta_j\theta_k \label{eqn:it-int} \end{equation} is a homotopy functional if and only if the $\theta_j$ are closed and \begin{equation} d\theta + \sum_{jk}\, a_{jk}\theta_j \wedge \theta_k = 0. \label{eqn:it-int-eq} \end{equation} \end{thm} The mixed Hodge structure $(F,W)$ on $\Hom_{{\mathbb{Z}}}(J_x/J^{s+1}_x,{\mathbb{C}})$ is described on the level of iterated integrals as follows. Such a sum belongs to $F^p$ if and only if each integrand $\theta_1\cdots\theta_k$ contains at least $p$ terms $\theta_j\in\Omega^1(X)$. As for the weight filtration, $\alpha$ belongs to $W_k$ if and only if $\alpha$ is representable by a sum of iterated integrals for which the length plus the number of logarithmic terms $dz_j/z_j$ in the integrand is at most $k$. Suppose next that $H^1(X)$ has pure weight $\ell=1$ or $\ell=2$. The first happens for $X$ projective, the second for instance when the compactification of $X$ is ${\mathbb{P}}^1$. In these situations, following \cite[\S 6]{hain}, the dual of $J_x/J_x^3$ is an extension of pure Hodge structures. To explain the result, note that the cup-product pairing $H^1(X) \otimes H^1(X)\to H^2(X)$ is a morphism of pure Hodge structures. It follows that \[ K:=\ker \left[ H^1(X) \otimes H^1(X)\to H^2(X)\right] \] carries a pure Hodge structure of weight $2\ell$. Theorem~\ref{thm:it-int} now implies: \begin{thm*} The mixed Hodge structure on $\Hom_{\mathbb{Z}}(J/J^3,{\mathbb{C}})$ is the extension of pure Hodge structures of weights $\ell$ and $2\ell$ given by \[ 0\to H^1(X) \to \Hom_{\mathbb{Z}}(J/J^3,{\mathbb{C}})\mapright{p} K \to 0. \] Explicitly, the iterated integral $ \int\,\theta + \sum_{j,k}\, a_{jk}\int\,\theta_j \theta_k$ is mapped by $p$ to $\sum a_{jk} [\theta_j]\otimes [\theta_k]$ which, by construction, belongs to $K$. The kernel of $p$ can be identified with the length one iterated integrals $ \int\,\theta$, i.e. those with $d\theta =0$. Hence $\ker p\simeq H^1(X)$. It follows that the graded pieces have a natural polarization coming from the one on $H^1(X)$ via these identifications. \end{thm*}
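As a simple illustration (a dimension count; we assume here that $X$ is a compact Riemann surface of genus $g\ge 1$, so that the cup product maps $H^1(X)\otimes H^1(X)$ onto $H^2(X)\cong{\mathbb{C}}$): in this case $\ell=1$, $\dim H^1(X)=2g$ and $\dim K=(2g)^2-1$, so the extension above gives \[ \dim_{\mathbb{C}} \Hom_{\mathbb{Z}}(J/J^3,{\mathbb{C}}) = 2g + 4g^2 - 1. \]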
In particular, the above implies that if $X$ is smooth projective, the graded polarized mixed Hodge structure on $\Hom_{\mathbb{Z}}(J/J^3,{\mathbb{C}})$ has two adjacent weights and so, if we now leave $X$ fixed but vary the base point, we get a family of mixed Hodge structures over $X$ for which $W_{-1}\eg_{\mathbb{C}}$ is abelian, and by Proposition~\ref{negcurv} we conclude: \begin{corr}\label{corr:fund-group} Let $X$ be a smooth complex projective variety, and suppose that the differential of the period map of $J_x/J_x^3$ is injective. Then the holomorphic sectional curvature of $X$ is $\le 0$. \end{corr} \subsubsection*{Complements: Flat Structure and Hodge Metric} 1. The \textit{flat structure} given by the local system attached to $J/J^3$ may be described as follows: Fix a point $x_o\in X$ and let $U$ be a simply connected open subset containing $x_o$. Given a point $x\in U$ let $\gamma:[0,1]\to U$ be a smooth path connecting $x_o$ to $x$. Then, conjugation \begin{equation} \alpha\mapsto\gamma \alpha \gamma^{-1} \label{eqn:conj-pi1} \end{equation} defines an isomorphism $\pi_1(X,x)\to\pi_1(X,x_0)$ which is independent of $\gamma$ since $U$ is simply connected. Trivializing $(J/J^3)^*$ using \eqref{eqn:conj-pi1}, we then obtain the period map via the change of base point formula (see \cite[Remark 6.6]{hain}): \begin{equation} \int_{\gamma\alpha\gamma^{-1}}\,\theta_1\theta_2 = \int_{\alpha}\,\theta_1\theta_2 + \left(\int_{\gamma}\,\theta_1\right)\left(\int_{\alpha}\,\theta_2\right) - \left(\int_{\gamma}\,\theta_2\right)\left(\int_{\alpha}\,\theta_1\right). \label{eqn:change-base-pt} \end{equation} Differentiating \eqref{eqn:change-base-pt}, one obtains the following result: \begin{lemma}\label{lemma:gauss-manin} The flat connection $\nabla$ of $(J/J^3)^*$ operates on iterated integrals via the following rules: $$ \nabla_{\xi}\left(\int\,\theta_1\theta_2\right) =\theta_1(\xi)\left(\int\,\theta_2\right) - \theta_2(\xi)\left(\int\,\theta_1\right) $$ and $\nabla_{\xi}(\int\,\theta) = 0$. \end{lemma} As a check of the formula for $\nabla$ given in Lemma~\ref{lemma:gauss-manin}, note that by Theorem \ref{thm:it-int} the iterated integral \eqref{eqn:it-int} appears in $(J/J^3)^*$ only if $\theta_j$ and $\theta_k$ are closed for all $j$, $k$ and equation \eqref{eqn:it-int-eq} holds. Therefore, $$ \nabla^2 \left(\int\,\theta + \sum_{j,k}\, a_{jk}\int\,\theta_j \theta_k\right) = \sum a_{jk}\left(d\theta_j\int\theta_k - d\theta_k \int\theta_j\right) = 0 $$ because $d\theta_j=0$. Likewise, direct calculation using Lemma~\ref{lemma:gauss-manin} shows that the Hodge filtration $\FF$ of $(J/J^3)^*$ is holomorphic and horizontal with respect to $\nabla$, and the weight filtration $W$ is flat. \noindent 2. By way of illustration we shall prove the correctness of the expression~\eqref{eqn:dilog-ex} for \textit{the mixed Hodge metric} as announced in the introduction. First of all (for $X={\mathbb{P}}^1\setminus \set{0,1,\infty}$) $$ \nabla\int \frac{dz}{z} \cdot \frac{dz}{1-z} = \frac{dz}{z}\int\frac{dz}{z-1} -\frac{dz}{z-1}\int\frac{dz}{z}, $$ and, secondly, from the above discussion it follows that \[ \left\| \int\frac{dz}{z-1}\right \|^2= h([\frac{dz}{z-1}] ,[\frac{dz}{z-1}])= (4\pi)^2, \] where $h$ is the Hodge metric on $H^1(X)$ (and similarly for $\|\int \frac{dz}{z} \|^2$). \noindent 3. As a further illustration, let us calculate \textit{the mixed Hodge metric} when we specialize the preceding to a compact Riemann surface $X$ of genus $g>1$.
Let $\theta_1,\dots,\theta_g$ be a unitary basis of $H^{1,0}(X)$ with respect to the Hodge metric. Then, up to a scalar, the metric on $X$ obtained by pulling back the mixed Hodge metric via the period map of $(J/J^3)^*$ is given by $$ \left\|d/dz\right\|^2 = \sum_{j=1}^g\, \|\theta_j(d/dz)\|^2. $$ This follows directly from Lemma~\ref{lemma:gauss-manin} and the discussion on the mixed Hodge structure on $(J/J^3)^*$ we just gave. \begin{rmq} The above description of the mixed Hodge metric holds more generally for any smooth complex projective variety. \end{rmq} \section{K\"ahler Condition} \label{sec:kaehler} Let $h$ be a hermitian metric on a complex manifold $M$. Then, given any system of local holomorphic coordinates $(z_1,\dots,z_m)$ on $M$, the associated fundamental 2-form $\Omega$ is given by the formula $$ \Omega = -\frac{\sqrt{-1}}{2}\sum_{j,k} h_{jk} dz_j\wedge d\bar z_k, \qquad h_{jk} = h\left(\frac{\partial}{\partial{z_j}}, \frac{\partial}{\partial{z_k}}\right). $$ Moreover, $\bar\Omega = \Omega$ since $h_{jk} = \bar h_{kj}$. Accordingly, as $\Omega$ is both real and of type $(1,1)$, the K\"ahler condition $d\Omega = 0$ is equivalent to the assertion that: \def\deriv#1#2{\frac{\pd #1}{\pd #2}} \begin{equation} \deriv{h_{jk}}{z_{\ell}} = \deriv{h_{\ell k}}{z_j},\qquad j<\ell. \label{eqn:1.2} \end{equation} \par Let $F:\Delta^r\to D$ be a holomorphic, horizontal map defined on a polydisk with local coordinates $(s_1,\dots,s_r)$ which vanish at $0\in\Delta^r$. Let $\germ q$ be the subalgebra \eqref{eqn:tang-alg} attached to $F(0)$. Recalling the local biholomorphism \eqref{eqn:coord} mapping a neighborhood of $0\in\germ q$ to a neighborhood of $F(0)$ in $D$, locally we can write, as in~\cite{higgs}, $$ F(s) = e^{\Gamma(s)} \cdot F(0) $$ for a unique $\germ q$-valued \emph{holomorphic} function $\Gamma$ which vanishes at $0$. \begin{thm}\label{thm:kahler} Let $h = F^*(h_D)$ denote the pullback of the mixed Hodge metric $h_D$ to $\Delta^r$. Then \begin{equation} \begin{aligned} \deriv{h_{jk}}{s_{\ell}} &= h_{F(0)}\left(\frac{\pd^2\Gamma}{\pd s_j\pd s_{\ell}}(0) , \deriv{\Gamma}{ s_k}(0)\right) \\ &- h_{F(0)}\left(\deriv{\Gamma}{s_j}(0), \pi_{\eq}\left[\pi_+\left(\overline{\deriv{\Gamma}{s_{\ell}}(0)}\right), \deriv{\Gamma}{s_k}(0)\right]\right). \end{aligned}\label{eqn:1.3} \end{equation} Hence, setting $\xi_j = \frac{\partial\Gamma}{\partial s_j}(0)$, $h$ is K\"ahler if and only if \begin{equation} h(\xi_j,\pi_\germ q [\pi_+(\bar\xi_{\ell}),\xi_k]) - h(\xi_{\ell},\pi_\germ q[\pi_+(\bar\xi_j),\xi_k]) = 0. \label{eqn:1.3bis} \end{equation} In particular, $h$ is K\"ahler if \begin{equation} \pi_{\eq}\left[\pi_+\left(\overline{\deriv{\Gamma}{s_{\ell}}(0)}\right), \deriv{\Gamma}{s_k}(0)\right] = 0. \label{eqn:1.4} \end{equation} \end{thm} \proof As a general remark, note that the symmetry condition \eqref{eqn:1.2} is satisfied for the first term of the right hand side of \eqref{eqn:1.3}, and so $h$ is K\"ahler if and only if this relation is valid for the second term. This is \eqref{eqn:1.3bis}. The condition \eqref{eqn:1.4} thus implies K\"ahlerness.\\ \textit{Step 1}: Calculation of the push forward of a holomorphic vector field. Let $ \frac{d}{ds}$ be a holomorphic vector field on the polydisk. We need not only the first order behavior near $F(0)$ but also near $F(p)$.
Recall that by Lemma~\ref{TangentIdent} we have $F(s)= e^{\Gamma(s)}\cdot F(0)$ with $\Gamma(s)\in \germ q$ and thus, given a test function $\zeta$ at $p$, we have \begin{equation*} \aligned F_*\left(\frac{d}{ds}\right)_p\zeta &= \left(\frac{d}{ds}\right)_p\zeta(e^{\Gamma(s)} \cdot F(0)) \\ &= \left(\frac{d}{ds}\right)_p \zeta(e^{\Gamma(s)}e^{-\Gamma(p)} \cdot F(p)) \\ &= \left(\frac{d}{ds}\right)_p \zeta(e^{\Gamma(p) + (\Gamma(s)-\Gamma(p))} e^{-\Gamma(p)} \cdot F(p)) \\ & = \left(\frac{d}{ds}\right)_p \zeta(e^{\psi_1(\Gamma(p),\Gamma(s)-\Gamma(p))} \cdot F(p)), \endaligned \end{equation*} where we have used the Campbell-Baker-Hausdorff formalism \eqref{eqn:2.10}. Now use the Taylor expansion about $p$ up to order $1$ to deduce \begin{equation} \label{eqn:2.8} F_*\left(\frac{d}{ds}\right)_p \zeta = \frac{d}{ds} \zeta \left( e^{\psi_1(\Gamma(p), \frac{d\Gamma}{d s}(p) ) } \cdot F(p)\right). \end{equation} Let $\pi_\germ q^{F(s)}$ denote projection onto $\germ q_{F(s)}$ via the decomposition $$ \eg_{{\mathbb{C}}} = \eg_{{\mathbb{C}}}^{F(s)}\oplus \germ q_{F(s)}. $$ Recall now the identification from Lemma~\ref{TangentIdent} between the tangent space at $F(s)$ and $\germ q_{F(s)}$. Then \eqref{eqn:2.8} gives the result we are after: \begin{equation} F_*\left(\frac{d}{ds}\right)_p = \pi_\germ q^{F(p)} \psi_1\left(\Gamma(p),\frac{d\Gamma}{d s}(p)\right). \label{eqn:2.11} \end{equation} \textit{Step 2}: Compute the translate of the tangent vector to $F(0)$. For this we are changing gears: now $p$ is viewed as close to $0$, we write $F(s)$ instead of $F(p)$, and we are going to compute up to second order in $s$. We need to compare $\pi_\germ q=\pi_\germ q^{F(0)}$ and $\pi_\germ q^{F(s)}$. We do this by rewriting \eqref{eqn:expu} with $u$ replaced by $\Gamma(s)$: \begin{equation} \exp(\Gamma(s)) = \underbrace{g_{\mathbb{R}} (s)}_{\text{\rm in }G_{\mathbb{R}}}\cdot \exp{( \lambda(s))} \cdot \underbrace{\exp{(\varphi(s))}}_{\text{\rm in }G^{F(0)}}. \end{equation} Now note that $\Ad{g_{\mathbb{R}}(s)\cdot e^{\lambda(s)}}$ maps $ \End (V)^{i,j}_{F(0)}$ to $\End (V)^{i,j}_{F(s)}$ and since $\pi_\germ q^{F(s)}$ is defined in terms of projections onto such components, \begin{equation*} \aligned \pi_\germ q ^{F(s)}& = \Ad{g_{\mathbb{R}}(s)} \cdot \Ad{e^{\lambda(s)} }\raise1pt\hbox{{$\scriptscriptstyle\circ$}} \pi_\germ q \raise1pt\hbox{{$\scriptscriptstyle\circ$}} \Ad{e^{-\lambda(s)} } \cdot \Ad{g_{\mathbb{R}}^{-1}(s)} \\ &= \Ad{g_{\mathbb{R}}(s)} \cdot \Ad{e^{\lambda(s)} }\raise1pt\hbox{{$\scriptscriptstyle\circ$}} \pi_\germ q \raise1pt\hbox{{$\scriptscriptstyle\circ$}} \Ad{e^{\varphi(s)} } \cdot \Ad{e^{-\Gamma(s)}}. \endaligned \end{equation*} Hence, at the point $F(s)$, \begin{equation} F_*\left(\frac{d}{ds}\right) = \Ad{g_{\mathbb{R}}(s)} \Ad{e^{\lambda(s)} } \raise1pt\hbox{{$\scriptscriptstyle\circ$}} \pi_\germ q \left( e^{\ad {\varphi(s) }} e^{-\ad{\Gamma(s)} } \psi_1( \Gamma(s),\frac{d \Gamma}{ d s}(s)) \right). \end{equation} We conclude: \begin{equation}\label{eqn:CompareTangents} F_*\left(\frac{d}{ds}\right) = L_{e^{\Gamma(s)} *} \left( e^{-\ad{\Gamma(s)} } \psi_1( \Gamma(s),\frac{d \Gamma}{ d s}(s))\right). \end{equation} A particular property of period maps is their \emph{horizontality}, i.e. all tangents to the image $F(s)$ of a period map belong to $U^{-1}_{F(s)}=\bigoplus_q I^{-1,q}_{F(s)}$.
Working this out means \[ e^{-\ad{\Gamma(s)}} \frac{\partial}{\partial s_j} e^{\ad{\Gamma(s)}} \in U^{-1}_{F(0)} \] and, as in the proof of \cite[Theorem 6.9]{higgs}, this is equivalent to the commutativity relation \begin{equation} \label{eqn:Commut} \left[\frac{\partial \Gamma}{\partial s_j}(0),\frac{\partial \Gamma}{\partial s_k}(0)\right] = 0, \end{equation} and one deduces that $$ \psi_1(\Gamma(s), \partial\Gamma/\partial s_j)= \frac{\partial \Gamma}{\partial s_j}(s) \text{ up to \emph{second} order in } s, $$ and hence, applying the Taylor formula about $0$, the fact that $\Gamma(0)=0$, and the commutativity relation \eqref{eqn:Commut}, one finds, also up to second order in $(s,\bar s)$: \[ e^{-\ad{\Gamma(s)}}\psi_1(\Gamma(s), \partial\Gamma/\partial s_j)=\frac{\partial \Gamma}{\partial s_j}(s) = \frac{\partial \Gamma}{\partial s_j}(0) + \sum_r\frac{\partial ^2\Gamma}{\partial s_j \partial s_r}(0) s_r. \] It follows that \begin{equation} F_*\left(\frac{\partial}{\partial s_j}\right) = \pi_\germ q^{F(s)}\left(\frac{\partial \Gamma}{\partial s_j} (s) \mod O^2(s)\right). \label{eqn:vect-field} \end{equation} \textit{Step 3}: Calculation of the pullback metric for general holomorphic maps. We once more change gears and replace $d/ds$ by $\partial/ \partial s_j$, where $(s_1,\dots,s_r)$ are the standard coordinates on the polydisk. Compare with Eqn.~\eqref{eqn:intermediatestep} and use \eqref{eqn:CompareTangents} to get the following expression for the pullback metric: \begin{equation}\label{eqn:PullBackMetric} \aligned h_{jk}(s) & = h_{F(s)}(\partial/ \partial s_j, \partial / \partial s_k)\\ &= h_{F(0)}(\pi_\germ q( e^{\ad{\varphi(s)}} e^{-\ad{\Gamma(s)} } \psi_1( \Gamma(s),\frac{\partial \Gamma}{ \partial s_j}(s))), \\ & \hspace{8em} \pi_\germ q( e^{\ad{\varphi(s)}} e^{-\ad{\Gamma(s)} } \psi_1( \Gamma(s),\frac{\partial \Gamma}{ \partial s_k}(s)))). \endaligned \end{equation} \textit{Step 4}: Pullback metric for a period map. According to Eqn.~\eqref{eqn:PullBackMetric} we also need $\varphi(s)=\varphi(\Gamma(s))$. For this invoke Proposition~\ref{secondorder}; we get up to second order in $s$: \[ \varphi(s)= - \sum_r \left( \overline{ \frac{\partial\Gamma}{ \partial s_r}(0)}\right)_+\bar s_r. \] For simplicity of notation, set \[ \aligned \xi_{kj}:& = \left( \frac{\partial ^2\Gamma}{\partial s_j\partial s_k} (0)\right) \\ \theta_k(s,\bar s) : &= \xi_k+ \sum_t \xi_{kt} s_t - \sum_t \pi_\germ q \left[ (\bar \xi_t)_+, \xi_k \right] \bar s_t . \endaligned \] It follows that up to first order in $(s,\bar s)$ we have \[ \aligned h_{jk} (s) &= h_{F(0)} \left(\theta_j(s,\bar s), \theta_k (s,\bar s)\right). \endaligned \] Differentiating gives the result. \qed\endproof \begin{rmk} \label{UpTo3dOrder} More generally, one can prove that $$ F_*\left(\frac{\pd}{\pd s_j}\right) = \pi^{F(s)}_\germ q\left(\frac{\pd\Gamma}{\pd s_j}(s) + \frac 12 A_j(s)\mod O^3(s) \right) $$ where $$ A_j(s) = \sum_{b,c}\,\left( \left[ \xi_b, \xi_{jc} \right] -\frac 12\left[\xi_j, \xi_{bc}\right]\right) s_b s_c. $$ \end{rmk} \begin{corr}\label{corr:kahler} The pullback of the mixed Hodge metric along an immersion is K\"ahler in the following cases: \renewcommand{\labelenumi}{\rm (\alph{enumi})} \begin{enumerate} \item Variations of pure Hodge structure (Lu's result~\cite{Zhiqin}); \item Hodge--Tate variations; \item The variations of mixed Hodge structure attached to $J_x/J_x^3$ for a smooth complex projective variety; \item The variations from \S~\ref{ssec:exmples}, in particular
Example 4, which arises from the commuting deformations of the complex and K\"ahler structure of a compact K\"ahler manifold. \end{enumerate} \end{corr} \begin{proof} In case (a), the derivatives of $\Gamma$ at zero are of type $(-1,1)$ and so \begin{equation} [\pi_+(\overline{d\Gamma/ds_{\ell}(0)}),d\Gamma/ds_k(0)] \label{eqn:brac-1} \end{equation} is of type $(0,0)$, which is annihilated by $\pi_{\eq}$. In case (b), $\pi_+(\overline{d\Gamma}) = 0$. In case (c) the bracket \eqref{eqn:brac-1} is of type $(-1,-1)$, which is zero due to the short length of the weight filtration. In case (d), the bracket \eqref{eqn:brac-1} has terms of type $(0,0)$ and $(0,-2)$, both of which are annihilated by $\pi_{\eq}$. \end{proof} \begin{rmk} In case (d) one can also show that the holomorphic sectional curvature will be $\leq 0$. \end{rmk} \begin{thm}\label{thm:kahler2} Let $\VV$ be a variation of mixed Hodge structure with only two non-trivial weight graded-quotients $\gr^W_a$ and $\gr^W_b$ which are adjacent, i.e. $|a-b|=1$. Then, the pullback of the mixed Hodge metric along the period map of $\VV$ is pseudo-K\"ahler. \end{thm} \proof We shall prove the symmetry relation \eqref{eqn:1.3bis}, which in our situation, due to the short length of the weight filtration, reduces to \begin{equation} h(\xi_j,[\bar\xi_{\ell},\xi_k]) - h(\xi_{\ell},[\bar\xi_j,\xi_k]) = 0. \label{eqn:kahler-1} \end{equation} Without loss of generality, we can assume that $\xi_j$, $\xi_k$, $\xi_{\ell}$ are of pure Hodge type. Inspection of the possibilities shows that the only non-trivial case is when $X=\xi_j$ and $Y=\xi_{\ell}$ are of type $(-1,0)$ and $Z=\xi_k$ is of type $(-1,1)$. Since by Lemma~\ref{lemma:adjoint} we have $Z^*=-\bar Z$ in this case, the formula \eqref{eqn:MetConv} and the fact that $h$ is hermitian give $$ \aligned h(X,[\bar Y, Z])&=h([X,\bar Z], \bar Y)\\ &=h(Y, [\bar X,Z]), \endaligned $$ which is \eqref{eqn:kahler-1}.\qed\endproof \begin{exmple} \label{kaehlerexmples} In particular, Theorem~\ref{thm:kahler2} applies to the tautological variations of Hodge structure over the moduli spaces $\mathcal M_{g,n}$ and, more generally, to families of pairs $(X_s,Y_s)$ of a smooth projective variety $X_s$ and a hypersurface $Y_s\subset X_s$, as well as to a family of normal functions \eqref{eqn:nf-2} over a curve $S$ with $\mathcal H$ fixed and whose period map is an immersion. \end{exmple} \section{Biextension Line Bundle}\label{sec:biexts} Recall from the introduction that in this special case the graded Hodge numbers $h^{p,q}$ are zero unless $p+q=-2,-1,0$ and furthermore, $h^{0,0}=h^{-1,-1}=1$; the mixed Hodge structure is described as a biextension \begin{equation}\label{eqn:biextension} \aligned 0 \to & \gr^W_{-1} \to W_0/W_{-2} \to \gr^W_0={\mathbb{Z}}(0) \to 0\\ 0 \to & \gr^W_{-2} ={\mathbb{Z}}(1) \to W_{-1} \to \gr^W_{-1} \to 0. \endaligned \end{equation} As explained below, a family of such mixed Hodge structures over a parameter space $S$ comes with a biextension metric $h_{\rm biext}(s)$. Its Chern form will be shown to be semi-positive along any curve, provided the biextension is self-dual: see Theorem~\ref{plurisubharmonicity}. \par The point in this section is that the mixed Hodge structure is in general \emph{not} split and that the metric $h_{\rm biext}$ can be found by comparing the given mixed Hodge structure $(F,W)$ on the real vector space $W_0$ to its Deligne splitting $(e^{-{\rm i} \delta_{F,W}} F,W)$ where we recall from \cite[Prop.
2.20]{degeneration} that \begin{equation} \label{eqn:delta} \delta_{F,W}=\frac 12\im Y_{F,W} = \frac{1}{4{\rm i}} (Y_{F,W} - \bar Y_{F,W}) \in \Lambda_{F,W}\cap \eg_{\mathbb{R}}. \end{equation} Here $Y_{F,W}\in \End(V_{\mathbb{C}})$ equals multiplication by $p+q$ on Deligne's $I^{p,q}(V)$. \par Since $\gr_{-2}^W \simeq {\mathbb{R}}$ and similarly for $\gr_0^W $, fixing bases, the map $\delta_{F,W}$ can then be viewed as a real number $\delta$, depending on $(F,W)$. By \cite[\S 5]{nilp}, there exists a further real number $\lambda$ depending only on $W$ such that the positive number $ h(F,W)= e^{-2\pi \delta/\lambda} $ depends only on the equivalence class of the extension. \par Let us apply this in our setting of a family $(\FF,W)$ of biextensions over a complex curve $S$. Then \begin{equation}\label{eqn:biextmetr} h_{\rm biext}(s):= h(F_s,W)= e^{\frac{-2\pi \delta_{F_s,W}}{\lambda}}\ \end{equation} turns out to be a hermitian metric on $S$. \par As before we write \begin{equation} F(s) = e^{\Gamma(s)}\cdot F, \label{eqn:lnf} \end{equation} where $F=F(0)$ and $\Gamma(s)$ is a holomorphic function on a coordinate patch in $S$ with values in $\germ q$. This is the main result we are after: \begin{thm} \label{chernformofhm} Let $S$ be a curve and let $\FF$ be a variation of biextension type over $S$ with local normal form \eqref{eqn:lnf}. Let $\gamma^{-1,0}$ be the Hodge component of type $(-1,0)$ of $\Gamma'(0)$. The Chern form of the biextension metric \eqref{eqn:biextmetr} is the $(1,1)$--form \begin{equation}\label{eqn:biextrel} \aligned -\frac{1}{2\pi{\rm i}} \partial\bar\partial \log h_{\rm biext}(s)= &{\rm i} \frac{\partial ^2 \,\,\delta(s)}{\partial s \partial \bar s \phantom{X} } ds\wedge \overline{ds}\\ = &\frac 12 [\gamma^{-1,0},\bar\gamma^{-1,0}] \,ds \wedge \overline{ds}. \endaligned \end{equation} \end{thm} \proof Let \begin{equation} e^{\Gamma(s)} = g_{{\mathbb{R}}}(s) e^{\lambda(s)} f(s) \label{eqn:group-decomp} \end{equation} as usual. Then, by Lemma~\ref{higgs} we have $Y(s)= g_{\mathbb{R}}(s)e^{\lambda(s)} Y$, where $Y = Y_{(F,W)}$. If we set $f(s) = e^{\varphi(s)}$, using \eqref{eqn:delta}, we get \begin{equation} \frac{\partial^2 }{\partial \bar s \partial s} \delta(s)= \frac{1}{2}\im \frac{\partial^2}{\partial s \partial\bar s} \underbrace{e^{\Gamma(s)}e^{-\varphi(s)}}_{d(s)}\cdot Y . \label{eqn:delta-laplacian} \end{equation} Therefore, since $\Gamma(s)$ is holomorphic, $$ \frac{\partial}{\pd \bar s} d(s) \cdot Y =\Ad(e^{\Gamma(s)})\left(\frac{\pd}{\pd\bar s} e^{- \ad{\varphi(s)} }\cdot Y\right) $$ and so \begin{equation} \aligned \frac{\partial^2 }{\partial s \partial \bar s} d(s) \cdot Y =& \left(\frac{\pd}{\pd s} e^{\ad\Gamma(s)}\right) \left(\frac{\pd}{\pd\bar s}e^{-\ad \varphi(s)} Y\right) \\ & \, + \Ad e^{\Gamma(s)} \left(\frac{\partial^2 e^{-\ad \varphi(s)} }{\partial s \partial \bar s} Y \right). \label{eqn:laplace-1} \endaligned \end{equation} We now consider the Taylor expansion $$ \varphi(s) = \sum_{j,k}\, \varphi_{jk}\, s^j \bar s^k,\qquad \varphi_{00} = 0. $$ By Lemma~\ref{secondorder}, we also know \begin{eqnarray} \varphi_{10}&=&0, \label{eqn:expand1}\\ \varphi_{01}& =& - (\overline{\Gamma'(0)})_+, \label{eqn:expand2}\\ \varphi_{11}& = & [\gamma,\bar \gamma] _0+ [\gamma,\bar \gamma] _+ \nonumber\\ &= & [\gamma^{-1,1},\bar \gamma^{-1,1}] _0 + [\gamma^{-1,1},\bar \gamma^{-1,0}].
\label{eqn:expand3} \end{eqnarray} Formula \eqref{eqn:expand1} shows that the coefficient of $s\bar s$ in the Taylor expansion of $e^{-\ad \varphi(s)} Y$ is just $-[\varphi_{11},Y]$, and we deduce $$ \left.\frac{\pd}{\pd\bar s}\frac{\pd}{\pd s}e^{-\ad \varphi(s)} Y\right|_0 = - [\varphi_{11},Y]. $$ Together with equation \eqref{eqn:laplace-1} it follows that \begin{equation}\label{eqn:dbardgamma} \left.\frac{\partial^2}{\partial s \partial\bar s}d(s)\cdot Y \right|_0 =- [\Gamma'(0),[\varphi_{01},Y]] - [\varphi_{11},Y]. \end{equation} Eqn.~\eqref{eqn:expand2} states that $\varphi_{01}= -(\overline{\Gamma'(0)})_+$. Let $\gamma = \Gamma'(0)$. By horizontality and the short length of the weight filtration, $$ \gamma = \gamma^{-1,1} + \gamma^{-1,0} + \gamma^{-1,-1}. $$ Moreover, since $(F,W)$ is a biextension, \begin{equation*} \bar \gamma^{-1,1} \in\mathfrak g^{1,-1},\qquad \bar \gamma^{-1,0} \in\mathfrak g^{0,-1},\qquad \bar \gamma^{-1,-1} \in\mathfrak g^{-1,-1}. \end{equation*} Therefore, \begin{equation*} - \varphi_{01} = (\overline{\Gamma'(0)})_+ = \bar\gamma^{-1,1} + \bar\gamma^{-1,0}. \end{equation*} In particular, since $\ad Y$ acts as multiplication by $a+b$ on $\mathfrak g^{a,b}$, it follows that \begin{equation} \label{eqn:phi-01} \aligned - [\Gamma'(0),[\varphi_{01},Y]] &= [\gamma,[\bar\gamma^{-1,1} + \bar\gamma^{-1,0},Y]] = [\gamma,\bar\gamma^{-1,0}] \\ &= [\gamma^{-1,1},\bar\gamma^{-1,0}] + [\gamma^{-1,0},\bar\gamma^{-1,0}]. \endaligned \end{equation} Finally, using \eqref{eqn:expand3}, \begin{equation*} \aligned \varphi_{11}& = [\gamma,\bar \gamma] _0+ [\gamma,\bar \gamma] _+ \\ &= [\gamma^{-1,1},\bar \gamma^{-1,1}] _0 + [\gamma^{-1,1},\bar \gamma^{-1,0}], \endaligned \end{equation*} so that \begin{equation} \label{eqn:gamma11} [ \varphi_{11},Y]= - [\gamma^{-1,1},\bar \gamma^{-1,0}]. \end{equation} Combining Eqns.~\eqref{eqn:dbardgamma}--\eqref{eqn:gamma11}, we have: \begin{equation} \left.\frac{\partial^2}{\partial s \partial\bar s} d(s) \cdot Y \right|_0 = [\gamma^{-1,1},\bar\gamma^{-1,0}] + [\gamma^{-1,0},\bar\gamma^{-1,0}] + [\bar\gamma^{-1,1},\gamma^{-1,0}] . \end{equation} The result then follows from \eqref{eqn:delta-laplacian}. \qed\endproof \par So far, we have not assumed anything special about the biextension variation $\FF$. Of special interest in connection with the Hodge conjecture is the case where the two \emph{normal functions} appearing in \eqref{eqn:biextension} are self-dual with respect to the polarization $Q$ on $H := \gr ^W_{-1}$. \begin{thm} \label{plurisubharmonicity} Let $h$ be the Hodge metric on $ \gr^W_{-1}$ and let $\FF$ be a self-dual biextension over a curve $S$ with local normal form on a disk $(\Delta,s)$ at $s_0\in S$ given by $F(s)=e^{\Gamma(s)}\cdot F$. Choose a lift $e(0) \in I^{0,0}_F$ of $1\in {\mathbb{Z}}(0)$ and let \[ \gamma= \Gamma'(0) \in \End(W_0)_{\mathbb{C}},\quad t:= \gamma^{-1,0} (e(0))\in I^{-1,0}_F, \] where $\gamma^{-1,0}$ is the Hodge component of type $(-1,0)$ of $\Gamma'(0)$. Let $\nu\in\Ext^1_{\VMHS}({\mathbb{Z}}(0),\gr ^W_{-1} \FF)$ and its dual be the two normal functions associated to the biextension, and let $\delta(s)$ be the Deligne $\delta$-splitting of $\FF_s$. Then \begin{enumerate} \item the value of the infinitesimal invariant $\partial\nu$ for the normal function $\nu$ at $s_0$ can be identified with $t$. \item \begin{equation} \left.\frac{\partial^2}{\partial s \partial \bar s} \delta (s)\right|_0 (e(0)) = h(t,t)\in {\mathbb{R}}_{\ge 0},\quad t=\gamma^{-1,0}(e(0)).
\label{eqn:laplace-4} \end{equation} \item The Chern form of the biextension metric is semi-positive. \end{enumerate} \end{thm} \proof 1. The point here is that $\gamma^{-1,0} \in \Hom(I^{0,0}_F, I^{-1,0}_F) $ is the derivative at $s_0$ of the period map for the normal function $\nu$ which, from the set-up, gets identified with $t$.\\ 2. Recall \eqref{eqn:biextrel}. We have \begin{eqnarray*} \frac{1}{2 {\rm i}} [\gamma^{-1,0},\bar \gamma^{-1,0}]e(0) &=& - \frac{1}{2 {\rm i}}\left( \gamma^{-1,0}(\bar\gamma^{-1,0}(e(0))) - \bar\gamma^{-1,0}(\gamma^{-1,0}(e(0))) \right)\\ &=& - \frac{1}{2 {\rm i}}\left( \gamma^{-1,0}(\bar t) - \overline{\gamma^{-1,0}(\overline{t})}\right)\\ &=& - \im (\gamma^{-1,0}(\bar t) ). \end{eqnarray*} Next, we express self-duality. Observe that the derivative of the period map of the dual extension $\nu^*$ can be expressed as a functional on $W_{-1}$: it is zero on $W_{-2}$ and self-duality means precisely that on $H^*=\Hom(H,{\mathbb{Z}}(1)) $ it restricts to the functional\footnote{For simplicity we have discarded the Tate twist.} \begin{equation*} \beta =Q(w,-) \in H^* \mapsto - Q(w, t)\in {\mathbb{C}}. \end{equation*} This formula implies that, tracing through the identifications, one has $\gamma^{-1,0}(\bar t )= - Q(\bar t, \gamma^{-1,0}e(0)) =- Q(\bar t, t)= Q(t,\bar t)$ and hence: \begin{eqnarray*} \frac{1}{2 {\rm i}} [\gamma^{-1,0},\bar \gamma^{-1,0}]e(0) &=& - \im( Q( t, \bar t )). \end{eqnarray*} Since $h(t,t)= Q(-{\rm i} t,\bar t)= -{\rm i} Q(t,\bar t)$ is real, we get indeed $ \frac{1}{2{\rm i}} [\gamma^{-1,0},\bar \gamma^{-1,0}]e(0)= h(t,t)\in {\mathbb{R}}$.\\ 3. This follows from Theorem~\ref{chernformofhm} and 2. \qed\endproof \begin{corr} \label{biextcoroll} If $\VV$ is a variation of biextension type over a curve $S$ with self-dual extension data, then $\delta(s)$ is a subharmonic function which vanishes exactly at the points $s\in S$ for which the infinitesimal invariants of the associated normal functions vanish. \end{corr} \section{Reductive Domains And Complex Structures}\label{sec:reductive} In this section we consider special classifying domains: the reductive ones. Recall that a homogeneous space $D=G/H$ with $G$ a real Lie group acting from the left on $D$ is \emph{reductive} if the Lie algebra $\eh=\Lie H$ has a vector space complement $\germ n$ which is $\Ad{H}$-invariant: \begin{equation}\label{eqn:Reductive} \eg= \eh\oplus \germ n,\quad [\eh,\germ n]\subset \germ n. \end{equation} Note that this implies that $\germ n$ is the tangent space at the canonical base point of $D=G/H$; moreover, the tangent bundle is the $G$-equivariant bundle associated to the adjoint representation of $H$ on $\germ n$. \subsection{Domains for Pure Hodge Structures} These are reductive: in this situation $\germ n_{\mathbb{C}}:= \germ n_+\oplus \germ n_-$ (see \eqref{eqn:SplitEnd}) is the complexification of $\germ n:= \germ n_{\mathbb{C}}\cap \eg$ and this is the desired complement. Let us recall from \cite[Chap. 12]{periodbook} how the connection form for the metric connection (the one for the Hodge metric) can be obtained. Start with the Maurer-Cartan form $\omega_G$ on $G$. It is an $\eg$-valued $1$-form on $G$. Decompose $\omega_G$ according to the reductive splitting. Then $\omega=\omega^\eh$, the $\eh$--valued part, is a connection form for the principal bundle $p:G \to G/H=D$. Let $\rho: H \to \gl E$ be a (differentiable) representation and let $[E]= G\times _\rho E$ be the associated vector bundle. It has an induced connection which can be described as follows.
Locally over any open $U\subset D$ over which $p$ has a section $s:U \to G$, the bundle $[E]$ gets trivialized and the corresponding connection form then is $s^*(\dot\rho\raise1pt\hbox{{$\scriptscriptstyle\circ$}} \omega)$, where $\dot\rho: \eh \to \End E$ is the derivative of $\rho$. \par In the special case where $E= T_FD$ this leads to a canonical connection $\nabla_D$ on the holomorphic tangent bundle of $D$. If $D$ is a period domain this canonical connection is the Chern connection for the Hodge metric. From this description the curvature can then directly be calculated: \begin{*thm}[\protect{\cite[Cor. 11.3.16]{periodbook} }] Let $D$ be a period domain for pure polarized Hodge structures and let $\alpha,\beta\in \germ n=T_FD$. Then $R_D\in A^{1,1}_D(\End \germ n)$, the curvature form of the canonical connection $\nabla_D$ on the holomorphic tangent bundle of $D$, evaluates at $F$ as: \[ R_D(\alpha,\bar\beta)= - \ad{ [\alpha,\bar \beta ]} ^\eh. \] \end{*thm} \begin{rmk}\label{CompatibleComplexStructure} The above proof for the pure case makes crucial use of the compatibility of the complex structure of $D$ and the reductive structure: First, one needs the complex structure coming from the inclusion $D=G/G^F \subset \check D= G_{\mathbb{C}}/G^F_{\mathbb{C}}$ to see that the Maurer-Cartan form on $G$ is the real part of a holomorphic form, namely the Maurer-Cartan form on $G_{\mathbb{C}}$; hence $\omega$ is the real part of a holomorphic form. Next, one uses that the complex structure $J$ on $\germ n=T_F D$ is such that $\germ n_\pm\subset \germ n_{\mathbb{C}}=T_F \check D$ are the eigenspaces for $J$ with eigenvalues $\pm{\rm i}$. In the mixed case there are situations where the domain is reductive, but the complex structure then does not behave as in the pure case, as we now show. \end{rmk} \subsection{Split Domains} Mixed domains are seldom reductive, and, even if they are, we shall see that the complex structure does not satisfy the compatibility required by Remark~\ref{CompatibleComplexStructure}. \begin{exmples} 1. Suppose $\Lambda=0$. Then equation \eqref{eqn:Double} implies that $\germ n= \germ n_{\mathbb{C}}\cap \eg_{\mathbb{R}}$ is the desired complement. Note that in the pure case this also equals $\germ n_{\mathbb{C}}\cap \eg$. This difference will influence the curvature calculations. Domains with $\Lambda=0$ are called \emph{split domains} because they parametrize split mixed Hodge structures. We investigate these below in more detail. \newline 2. We consider the general mixed situation. Let $D^{\rm split}$ be the subdomain of $D$ parametrizing split mixed Hodge structures\footnote{This has been called $D_{\mathbb{R}}$ in \S~2.}. This domain can be identified with $G_{\mathbb{R}}/ G^F_{\mathbb{R}}$, where $F$ is a fixed split mixed Hodge structure. Note that $\germ n_{\mathbb{C}}\oplus \Lambda$ has a real structure which makes $D^{\rm split}$ a reductive domain for the splitting \[ \eg_{\mathbb{R}}= \underbrace{\eg^{0,0}\cap \eg_{\mathbb{R}}}_{\Lie{G^F_{\mathbb{R}}}} \oplus (\germ n_{\mathbb{C}}\oplus \Lambda)_{\mathbb{R}}. \] In general $D^{\rm split}$ only has the structure of a differentiable manifold. \newline 3. In general the group $G_{\mathbb{R}}$ does not act transitively on $D$. But there is another natural subgroup of $G$ which does act transitively.
To explain this, introduce (for $r<0$): \begin{eqnarray*} G^W_r & := & \sett{g\in G}{ \text{ for all $k$ the restriction } g| (W_k/W_{k+r}) \text{ is real}}. \end{eqnarray*} Note that $G^W_{-2}$ contains $\exp(\Lambda)$ as well as $G_{\mathbb{R}}$ and hence it acts transitively on $D$. Under the minimal condition \[ \Lie{G^W_{-2}}=\eg_{\mathbb{R}}\oplus {\rm i} \Lambda \] we clearly get a reductive splitting \[ \Lie{G^W_{-2}} = \eg^{0,0}\cap \eg_{\mathbb{R}} \oplus \left[ (\germ n_{\mathbb{C}}\oplus \Lambda)_{\mathbb{R}}\oplus {\rm i} \Lambda_{\mathbb{R}} \right]. \] Domains which satisfy this condition are called \emph{close to splitting}. An example is provided by the so-called type II domains from \cite{sl2anddeg}. Note that in general $(\germ n_{\mathbb{C}}\oplus \Lambda)_{\mathbb{R}}$ does not admit a complex structure: $\dim\Lambda$ can be odd! \end{exmples} \subsection{Two Step Filtrations} \label{ssec:TwoStep} This case has been treated in detail in \cite[\S~2]{usui}. The domains in question are examples of split domains, and hence they are reductive. The mixed Hodge structures they parametrize indeed split over ${\mathbb{R}}$ since the associated weight filtration has only two consecutive steps, say $0=W_0\subset W_1\subset W_2=H$. Assume that we are given two polarizations on $W_0$ and $\gr^W_2$, both denoted $Q$. One can choose an adapted (real) basis for $H$ which \begin{itemize} \item restricts to a $Q$--symplectic basis $(a_1,\dots,a_g,b_1,\dots,b_g)$ for $W_1$; \item is such that the remainder of the basis, $(c_1,\dots,c_k,c'_1,\dots,c'_k,d_1,\dots,d_\ell)$, projects to a basis for $\gr^W_2$ diagonalizing $Q$, i.e. $Q=\mathrm{diag}(-\mathbf{1}_{2k},\mathbf{1}_\ell)$. \end{itemize} Then \[ G= \sett{ \begin{pmatrix} A & B \\ 0 & C\end{pmatrix}}{A\in \smpl{g;{\mathbb{R}}}, \, C\in \ogr{2k,\ell}, \, B\in {\mathbb{C}}^{2g\times (2k+\ell)}}, \] reflecting the Levi decomposition. More invariantly, the two matrices $A$ and $C$ on the diagonal give the semi-simple part $G^{\rm ss}$ while the matrices $B$ give the unipotent radical \[ G^{\rm un}\simeq \Hom_{\mathbb{C}}( \gr^W_2,W_1). \] Here, the isomorphism (via the exponential map) in fact identifies $G^{\rm un}$ with its Lie algebra: \begin{equation} \label{eqn:UnPart} \eg^{\rm un}= \Hom_{\mathbb{C}}( \gr^W_2,W_1), \end{equation} the endomorphisms in $\eg$ which lower the weight by one step. The real group $G_{\mathbb{R}}$ consists of similar matrices, except that now the matrices $B$ are taken to be real. Next, fix a Hodge flag $F^2\subset F^1\subset F^0=H_{\mathbb{C}}$ which has the following adapted unitary basis \begin{eqnarray*} (\underbrace{\underbrace{f_1,\dots,f_k}_{F^2}, d_1,\dots,d_\ell, f'_1,\dots,f'_g}_{F^1}, \bar f'_1,\dots, \bar f'_g,\bar f_1,\dots,\bar f_k), \\ f_j:= \frac{1}{\sqrt 2} [c_j- {\rm i} c'_j] , \quad f'_j:= \frac{1}{\sqrt 2} [a_j- {\rm i} b_j]. \hspace{7em} \end{eqnarray*} The group $G^F$ is the subgroup of $G$ with $A=\begin{pmatrix} U & -V\\ V& U\end{pmatrix}$, $U+{\rm i} V\in \ugr g$, $C\in \ogr{2k}\times\ogr{\ell}$, and the matrices $B$ form the complex vector space \begin{equation} \label{eqn:FixedInUnpart1} \mathbf{B}:= \sett{\begin{pmatrix} B' \\ -{\rm i} B' \end{pmatrix}}{B'\in {\mathbb{C}}^{g\times (2k+\ell)}}. \end{equation} In terms of the Hodge decomposition, this last part, the unipotent part, corresponds to \begin{equation}\label{eqn:FixedInUnpart2} \eg^F\cap \eg^{\rm un}_F= \eg^{0,-1}_F\oplus \eg^{1,-2}_F, \end{equation} a space of complex dimension $g(2k+\ell)$ as required.
The Lie algebra structure of $\eg$ makes this an abelian subalgebra. The natural complex structure on $G/G^F$ comes from the open embedding $G/G^F\subset G_{\mathbb{C}}/G_{\mathbb{C}}^F$. Consider now $G^{\rm un}/G^F\cap G^{\rm un}$. This is a complex subvariety, so that by \eqref{eqn:UnPart} and \eqref{eqn:FixedInUnpart1} \[ G^{\rm un}/G^F\cap G^{\rm un}=\eg^{\rm un}/\eg^{\rm un}\cap \eg^F= \eg^{\rm un} /\eg^{0,-1}_F\oplus \eg^{1,-2}_F= \eg^{-1,0}_F\oplus \eg^{-2,1}_F, \] which by \eqref{eqn:FixedInUnpart2} can be identified with the quotient space ${\mathbb{C}}^{2g\times (2k+\ell)}/\mathbf{B}$. Note however that \[ \eg^F_{\mathbb{R}}\cap \eg^{\rm un}= 0 \] and so \[ G^{\rm un}_{\mathbb{R}}/G^F\cap G^{\rm un}_{\mathbb{R}}=\eg^{\rm un}_{\mathbb{R}}=\Hom_{\mathbb{R}}( \gr^W_2,W_1), \] and this gets a complex structure $J_1$ thanks to the weight 1 Hodge structure induced by $F$ on $W_1$: \[ \Hom_{\mathbb{R}}( \gr^W_2,W_1)\simeq \left(\gr^W_2 \right)^*\otimes_{\mathbb{C}} W_1^{0,1}. \] \emph{These two complex structures are not isomorphic}. To explain this, first note that $G^{\rm ss}/ G^F\cap G^{\rm ss}=D_1\times D_2$, the product of the domain $D_1\simeq \mathbf{H}_g$, parametrizing weight $1$ polarized Hodge structures with $h^{1,0}=g$, and the domain $D_2$, parametrizing weight $2$ polarized Hodge structures with $h^{2,0}=k$, $h^{1,1}=\ell$. The natural projection \[ G/G^F \to G^{\rm ss}/ G^F\cap G^{\rm ss}=D_1\times D_2 \] is a holomorphic bundle with fiber associated to the adjoint representation of $G^{\rm ss}\cap G^F$ on $\eg^{\rm un}/\eg^{\rm un}\cap \eg^F$. Explicitly, this action is \[ g \cdot [B] = [ABC^{-1}] ,\quad g= \begin{pmatrix} A & 0 \\ 0 & C\end{pmatrix} . \] The group $G^{\rm ss}$ acts on this fiber bundle by holomorphic transformations from the left: $g\in G^{\rm ss}$ sends the fiber over $F$ biholomorphically to the fiber over $g\cdot F$. Similarly, \begin{equation} \label{eqn:ComplexBundle} G_{\mathbb{R}}/G^F_{\mathbb{R}} \to G^{\rm ss} / G^F_{\mathbb{R}}\cap G^{\rm ss}=D_1\times D_2 \end{equation} is a real-analytic complex vector bundle associated to the $G^{\rm ss}\cap G^F$--representation space $\eg^{\rm un}_{\mathbb{R}}$. This is also a holomorphic fiber bundle: if $U \in \ugr{g}$ and $V\in [\ogr{2k}\times \ogr{\ell}]$, the action on $\varphi\in \Hom_{\mathbb{R}}(\gr^W_2,W_1)$ is given by $\varphi\mapsto U\raise1pt\hbox{{$\scriptscriptstyle\circ$}} \varphi\raise1pt\hbox{{$\scriptscriptstyle\circ$}} V^{-1}$ and hence is $J_1$-complex. However, the action of $G^{\rm ss}$ on this bundle is no longer holomorphic: $g=(U,V) \in \smpl{g}\times \ogr{2k,\ell}$ sends $\varphi$ in the fiber over $F$ to $U\raise1pt\hbox{{$\scriptscriptstyle\circ$}} \varphi\raise1pt\hbox{{$\scriptscriptstyle\circ$}} V^{-1}$ in the fiber over $g\cdot F$, and since $U$ and $J_1$ only commute when $U\in \ugr{g}$ this is \emph{not} a $J_1$-complex-linear isomorphism. We next consider the reductive decomposition in more detail. On the level of Lie algebras we introduce \[ \eh:= \eg^{0,0}\cap \eg ,\quad \germ n= \germ n^{\rm ss}\oplus \eg^{\rm un}_{\mathbb{R}},\quad \germ n^{\rm ss}= [\eg^{-2,2}\oplus \eg^{-1,1}\oplus \eg^{1,-1}\oplus \eg^{2,-2}]\cap \eg, \] leading to the required reductive decomposition $ \eg_{\mathbb{R}} = \eh \oplus \germ n. $ \par Moreover, the natural complex structure on $\germ n^{\rm ss}$ (i.e.
the one which induces the complex structure on the base $D_1\times D_2$ of the fiber bundle \eqref{eqn:ComplexBundle}) together with the complex structure $J_1$ induce a complex structure $J$ on $\germ n$ whose $\pm{\rm i}$--eigenspaces inside $\germ n_{\mathbb{C}}$ are given by \[ \germ n_+ = \eg^{1,-1}_F \oplus \eg^{2,-2}_F \oplus \Hom_{\mathbb{C}}(I^{2,0}_F\oplus I^{1,1}_F\oplus I^{0,2}_F, I^{0,1}_F) \] respectively \[ \germ n_- = \eg^{-1,1}_F \oplus \eg^{-2,2}_F \oplus \Hom_{\mathbb{C}}(I^{2,0}_F\oplus I^{1,1}_F\oplus I^{0,2}_F, I^{1,0}_F). \] Here we use the standard embedding $\Hom_{\mathbb{C}}( I^{i,j}_F, I^{k,\ell}_F)\subset \eg_{\mathbb{C}}$. These complex subspaces of $\eg_{\mathbb{C}}$ are in fact abelian subalgebras. Note that the isomorphism \[ T_F D\simeq \germ n \simeq \germ n_- \] gives $T_FD$ the complex structure which is required in the standard curvature calculations for reductive domains, as explained above. However, as we have seen, this structure is not the one which comes from the embedding $D=G/G^F\hookrightarrow G_{\mathbb{C}}/G_{\mathbb{C}}^F$.
\section{Introduction} Up to the present, a great deal of empirical analysis has pointed out the existence of non-Gaussian activity lying in time-varying phenomena such as the log-return of stock prices, the spike noise of neurons, and so on. To incorporate it into statistical modeling, L\'{e}vy processes, which can be interpreted as a natural continuous-time version of random walks, are regarded as crucial components. Hence the statistical theory of stochastic differential equations (SDE) driven by them (including the L\'{e}vy processes themselves, of course) based on high-frequency samples has been developed so far. Since the genuine likelihood cannot generally be obtained in a closed form, many other feasible approaches have been considered, for instance, the threshold-based estimation for jump diffusion models by \cite{ShiYos06} and \cite{OgiYos11}, the least absolute deviation (LAD)-type estimation for L\'{e}vy driven Ornstein-Uhlenbeck models by \cite{Mas10}, the non-Gaussian stable quasi-likelihood estimation for locally stable driven SDE models by \cite{Mas16}, and the least square estimation for small L\'{e}vy driven SDE models by \cite{LonMaShi16}. These works assume a precise structure for the driving noise. As for methods which do not assume it, we refer to the Gaussian quasi-likelihood (GQL) based estimation schemes proposed in \cite{Mas13-2} and \cite{MasUeh17-2}. However, all of the above studies presuppose that the data-generating model is correctly specified, even though the risk of model misspecification is essentially inevitable. Misspecification should be taken into account in the derivation of the asymptotic properties of estimators and estimating functions, for instance to ensure the reliability of the statistical methods and to compare candidate models. In this paper, we give the theoretical framework of the parametric inference for misspecified L\'{e}vy driven SDE models based on high-frequency samples for the first time. Let $X$ be the one-dimensional stochastic process defined on the complete filtered probability space $(\Omega,\mathcal{F}, (\mathcal{F}_t )_{t\in\mathbb{R}_+},P)$ such that \begin{equation}\label{tSDE} dX_t=A(X_t)dt+C(X_{t-})dZ_t, \end{equation} where: \begin{itemize} \item $Z$ is a one-dimensional c\`{a}dl\`{a}g L\'{e}vy process without Wiener part. It is independent of the initial variable $X_0$ and satisfies $E[Z_1]=0$, $Var[Z_1]=1$, and $E[|Z_1|^q]<\infty$ for all $q>0$; \item The coefficients $A:\mathbb{R}\mapsto\mathbb{R}$ and $C:\mathbb{R}\mapsto\mathbb{R}$ are Lipschitz continuous; \item $\mathcal{F}_t:=\sigma(X_0)\vee\sigma(Z_s; s\leq t)$. \end{itemize} We suppose that the observations $(X_{t_0},\dots,X_{t_n})$ are obtained from the solution path $X$ in the so-called ``rapidly increasing experimental design", that is, $t_j\equiv t_j^n:=jh_n$, $T_n:=nh_n\to\infty$, and $nh_n^2\to0$. We consider the following parametric one-dimensional SDE model: \begin{equation}\label{pSDE} dX_t=a(X_t,\alpha)dt+c(X_{t-},\gamma)dZ_t, \end{equation} where the functional forms of the coefficients $a:\mathbb{R}\times\Theta_\alpha\mapsto\mathbb{R}$ and $c:\mathbb{R}\times\Theta_\gamma\mapsto\mathbb{R}$ are supposed to be known except for the finite-dimensional unknown parameter $\theta:=(\alpha,\gamma)$, an element of the bounded convex domain $\Theta:=\Theta_\alpha\times\Theta_\gamma\subset\mathbb{R}^p$. Here the true coefficients $(A,C)(\cdot)$ are not necessarily in the parametric family $\{(a,c)(\cdot,\theta): \theta\in\Theta\}$; namely, the coefficients are possibly misspecified.
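To fix ideas, a purely illustrative (hypothetical) instance of such misspecification is as follows: the data may be generated by \begin{equation*} dX_t=-\left(X_t+\frac{X_t}{1+X_t^2}\right)dt+\left(1+\frac{1}{1+X_t^2}\right)dZ_t, \end{equation*} while the statistician fits the linear drift $a(x,\alpha)=-\alpha x$ and the constant scale $c(x,\gamma)=\gamma$; then neither $A$ nor $C$ belongs to the parametric family $\{(a,c)(\cdot,\theta): \theta\in\Theta\}$, although both true coefficients are Lipschitz continuous as required above.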
In such a situation, we adopt the staged Gaussian quasi-likelihood estimation procedure used in \cite{MasUeh17-2} to estimate an {\it optimal} value $\theta^\star$ of $\theta$. The {\it optimality} is herein determined by minimizing the Kullback-Leibler (KL) divergence-like quantities \eqref{KL1} and \eqref{KL2} below; they are the probability limits of the staged GQL. Hereinafter, the terminologies ``misspecified" and ``misspecification" will be used for the misspecification with respect to the coefficients unless another meaning is specifically indicated. Concerning misspecified ergodic diffusion models, it is shown in \cite{UchYos11} that although the misspecification with respect to the diffusion term deteriorates the convergence rate of the scale (diffusion) parameter, the Gaussian quasi-maximum likelihood estimator (GQMLE) still has asymptotic normality. In that paper, differential equations associated with the infinitesimal generator (cf. \cite{ParVer01}) are the key to deriving the behavior of the GQMLE. However, since the infinitesimal generator of $X$ contains the integro-operator with respect to the L\'{e}vy measure of $Z$, it is difficult to verify the existence and regularity of a solution of the corresponding equation. To avoid this obstacle, we will substantially invoke the theory of the extended Poisson equation (EPE) for homogeneous Feller Markov processes established in \cite{VerKul11}. Applying the results of \cite{VerKul11} to \eqref{tSDE}, the existence and weighted H\"{o}lder regularity of the solutions of the EPEs will be derived under a strong mixing condition on $X$. Building on this result, we will provide the asymptotic normality of our staged GQMLE and its tail probability estimates under sufficient regularity and moment conditions on the ingredients of \eqref{tSDE} and \eqref{pSDE}. Here we note that the absence of a Wiener part in \eqref{tSDE} is essential in the misspecified case, while it is not in the correctly specified case; for more details, see Remark \ref{conj}. \begin{table}[t] \begin{center} \caption{GQL approach for ergodic diffusion models and ergodic L\'{e}vy driven SDE models } \begin{tabular}{cccc} \hline &&&\\[-3.5mm] Model & \multicolumn{2}{c}{Rates of convergence}& Ref.\\ & drift & scale\\ \hline correctly specified diffusion & $\sqrt{T_n}$& $\sqrt{n}$ &\cite{Kes97}, \cite{UchYos12} \\[1.5mm] misspecified diffusion & $\sqrt{T_n}$& $\sqrt{T_n}$ & \cite{UchYos11} \\ [1.5mm] correctly specified L\'{e}vy driven SDE& $\sqrt{T_n}$& $\sqrt{T_n}$ &\cite{Mas13-1}, \cite{MasUeh17-2} \\ [1.5mm] misspecified L\'{e}vy driven SDE & $\sqrt{T_n}$ &$\sqrt{T_n}$ & this paper\\ \hline \end{tabular} \label{comp} \end{center} \end{table} It will turn out that the convergence rate of the scale parameter is $\sqrt{T_n}$, the same as in the correctly specified case; see \cite{Mas13-2} and \cite{MasUeh17-2}. This is different from the diffusion case (cf. Table \ref{comp}). Such a difference may be caused by applying the GQL to non-Gaussian driving noises; that is, an efficiency loss of the GQMLE may occur even in the correctly specified case. Indeed, the non-Gaussian stable quasi-likelihood is known to estimate the drift and scale parameters faster than the GQMLE in correctly specified locally $\beta$-stable driven SDE models (cf. \cite{Mas16}); their convergence rates are $\sqrt{n}h_n^{1-1/\beta}$ and $\sqrt{n}$, respectively.
Further, for correctly specified locally $\beta$-stable driven Ornstein-Uhlenbeck models, the LAD-type estimators of \cite{Mas10} tend to the true value at the rate $\sqrt{n}h_n^{1-1/\beta}$, which is also faster than that of the GQMLE. However, in exchange for this efficiency, the GQL approach is worth considering for the following reasons: \begin{itemize} \item It does not involve any special functions (e.g. the Bessel function, the Whittaker function, and so on), infinite expansion series, or analytically unsolvable integrals; thus computation based on it is not time-consuming. \item It focuses only on the (conditional) mean and covariance structure; thus it does not require strong restrictions on the driving noise and is robust against noise misspecification. In other words, we can construct reasonable estimators of the drift and scale coefficients in a unified way as long as the driving L\'{e}vy noise has moments of any order. \end{itemize} Our result ensures that even if the coefficients are misspecified, the GQL still works for L\'{e}vy driven SDE models and fully retains the merits described above. The rest of this paper is organized as follows: In Section \ref{ES}, we introduce our estimation procedure and assumptions. Section \ref{Main} provides our main results in the following order: \begin{enumerate} \item the tail probability estimates of the GQMLE; \item the existence and weighted H\"{o}lder regularity of the solutions of EPEs for L\'{e}vy driven SDEs; \item the asymptotic normality of the GQMLE at $\sqrt{T_n}$-rate. \end{enumerate} A simple numerical experiment is presented in Section \ref{NE}. We give all proofs of our results in Section \ref{AP}. \section{Estimation scheme and Assumptions}\label{ES} For notational convenience, we will use the following symbols without any mention: \begin{itemize} \item $\partial_x$ is referred to as a differential operator for any variable $x$. \item $x_n\lesssim y_n$ means that there exists a positive constant $C$ being independent of $n$ satisfying $x_n\leq Cy_n$ for all large enough $n$. \item $\bar{S}$ denotes the closure of any set $S$. \item We write $Y_j=Y_{t_j}$ and $\Delta_j Y=Y_j-Y_{j-1}$ for any stochastic process $Y$. \item For any matrix valued function $f$ on $\mathbb{R}\times\Theta$, we write $f_s(\theta)=f(X_s,\theta)$; especially we write $f_j(\theta)=f(X_j,\theta)$. \item $\eta$ stands for the law of $X_0$. \item $E^j[\cdot]$ denotes the conditional expectation with respect to $\mathcal{F}_{t_j}$. \end{itemize} We define our staged GQMLE $\hat{\theta}_{n}:=(\hat{\alpha}_{n},\hat{\gamma}_{n})$ in the following manner: \begin{enumerate} \item {\it Drift-free estimation of $\gamma$.} \label{step1} Define the maximization-type estimator (so-called $M$-estimator) $\hat{\gamma}_{n}$ by \begin{equation} \hat{\gamma}_{n}\in\mathop{\rm argmax}_{\gamma\in\bar{\Theta}_\gamma}\mathbb{G}_{1,n}(\gamma), \nonumber \end{equation} for the $\mathbb{R}$-valued random function \begin{equation*} \mathbb{G}_{1,n}(\gamma):=-\frac{1}{T_n}\sum_{j=1}^{n} \left\{h_n\log c^2_{j-1}(\gamma)+\frac{(\Delta_j X)^2}{c^2_{j-1}(\gamma)}\right\}.
\end{equation*} \item {\it Weighted least squares estimation of $\alpha$.} Define the least-squares-type estimator $\hat{\alpha}_{n}$ by \begin{equation} \hat{\alpha}_{n}\in\mathop{\rm argmax}_{\alpha\in\bar{\Theta}_\alpha}\mathbb{G}_{2,n}(\alpha), \nonumber \end{equation} for the $\mathbb{R}$-valued random function \begin{equation} \mathbb{G}_{2,n}(\alpha):=-\frac{1}{T_n}\sum_{j=1}^{n} \frac{(\Delta_j X-h_na_{j-1}(\alpha))^2}{h_nc^2_{j-1}(\hat{\gamma}_{n})}. \nonumber \end{equation} \end{enumerate} Let $\mathbb{G}_1: \Theta_\gamma\mapsto \mathbb{R}$ and $\mathbb{G}_{2}:\Theta_\alpha\mapsto\mathbb{R}$ be \begin{align} &\mathbb{G}_1(\gamma):=-\int_\mathbb{R} \left(\log c^2(x,\gamma)+\frac{C^2(x)}{c^2(x,\gamma)}\right)\pi_0(dx),\label{KL1}\\ &\mathbb{G}_2(\alpha):=-\int_\mathbb{R} c(x,\gamma^\star)^{-2}(A(x)-a(x,\alpha))^2\pi_0(dx), \label{KL2} \end{align} where $\pi_0$ denotes the probability measure on $\mathbb{R}$ introduced later. For these functions, we define an {\it optimal} value $\theta^\star:=(\alpha^\star,\gamma^\star)$ of $\theta$ by \begin{equation*} \gamma^\star\in\mathop{\rm argmax}_{\gamma\in\bar{\Theta}_\gamma}\mathbb{G}_1(\gamma),\quad \alpha^\star\in\mathop{\rm argmax}_{\alpha\in\bar{\Theta}_\alpha}\mathbb{G}_2(\alpha). \end{equation*} We assume that $\theta^\star$ is unique, and that it is in the interior of $\Theta$. \begin{Rem}\label{decay} Although our estimation method ignores the drift term in the first stage, its effect asymptotically vanishes. This is because the scale term dominates the small-time behavior of $X$ in the $L_2$-sense. Specifically, we can derive \begin{equation*} E^{j-1}\left[\left(\int_{t_{j-1}}^{t_j} f_{s-}(\theta)dZ_s\right)^2\right]\lesssim h_n f^2_{j-1}, \quad E^{j-1}\left[\left(\int_{t_{j-1}}^{t_j} g_{s}(\theta)ds\right)^2\right]\lesssim h_n^2 g^2_{j-1}, \end{equation*} for suitable functions $f$ and $g$. Indeed, it has already been shown that the asymptotic behavior of the scale estimator constructed in this manner is the same as that of the conventional GQL estimator in the case of correctly specified ergodic diffusion models (cf. \cite{UchYos12}) and ergodic L\'{e}vy driven SDE models (cf. \cite{MasUeh17-2}). Ignoring the drift in the first stage is helpful in reducing the number of parameters optimized simultaneously; thus our estimators are expected to be numerically more stable and their computation less time-consuming. Moreover, by choosing appropriate functional forms, each estimation stage is reduced to a convex optimization problem. For example, if $a$ and $c$ are, respectively, linear and log-linear with respect to their parameters, then the above argument applies. As for other candidates of $(a,c)$ and details, see \cite[Example 3.8]{MasUeh17-2}. \end{Rem} \begin{Rem} From the functional forms of $\mathbb{G}_1$ and $\mathbb{G}_2$, their maximizers coincide with the true parameters in the correctly specified case. Hence an intuitive interpretation of $\theta^\star$ is the parameter value which yields the model closest to the data-generating model, as measured by the KL divergence-like quantities \eqref{KL1} and \eqref{KL2}. The first-stage estimation procedure can also be regarded as estimation based on Stein's loss function (or the Jensen-Bregman LogDet divergence). Thus $\theta^\star$ can also be interpreted as the minimizer of each expected loss (Stein's loss and a weighted $L_2$ loss) with respect to $\pi_0$. \end{Rem} To derive our asymptotic results, we introduce some assumptions below; throughout, $\nu_0$ denotes the L\'{e}vy measure of $Z$. Before stating them, we record a schematic numerical implementation of the two-stage procedure.
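The following minimal Python sketch is our own illustration (not taken from \cite{MasUeh17-2}); the model functions are toy linear/log-linear placeholders in the spirit of Remark~\ref{decay}, so that both stages are smooth convex optimization problems, and \texttt{X} and \texttt{h} stand for the data $(X_{t_0},\dots,X_{t_n})$ and the step size $h_n$:
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

# Toy model functions (placeholders): a is linear in alpha,
# c is log-linear in gamma, as discussed in Remark 2.1.
def a(x, alpha):
    return -alpha[0] * x

def c(x, gamma):
    return np.exp(gamma[0] + gamma[1] * np.cos(x))

def stage1_loss(gamma, X, h):
    # Negative of G_{1,n} (up to the 1/T_n factor):
    # drift-free Gaussian quasi-likelihood for gamma.
    dX = np.diff(X)
    c2 = c(X[:-1], gamma) ** 2
    return np.sum(h * np.log(c2) + dX ** 2 / c2)

def stage2_loss(alpha, X, h, gamma_hat):
    # Negative of G_{2,n}: weighted least squares for alpha,
    # with the weights frozen at the stage-1 estimate.
    dX = np.diff(X)
    res = dX - h * a(X[:-1], alpha)
    return np.sum(res ** 2 / (h * c(X[:-1], gamma_hat) ** 2))

def staged_gqmle(X, h, gamma0, alpha0):
    g = minimize(stage1_loss, gamma0, args=(X, h))
    al = minimize(stage2_loss, alpha0, args=(X, h, g.x))
    return al.x, g.x
\end{verbatim}
Since the drift is ignored in the first stage, the two optimizations are completely decoupled, which reflects the numerical stabilization discussed in Remark~\ref{decay}.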
\begin{Assumption}\label{Moments} \begin{enumerate} \item $E[Z_1]=0$, $Var[Z_1]=1$, and $E[|Z_1|^q]<\infty$ for all $q>0$. \item The Blumenthal-Getoor index (BG-index) of $Z$ is smaller than 2, that is, \begin{equation*} \beta:=\inf\left\{\gamma\geq0: \int_{|z|\leq1}|z|^\gamma\nu_0(dz)<\infty\right\}<2. \end{equation*} \end{enumerate} \end{Assumption} We remark that the condition on the BG-index does not excessively restrict the candidates for $Z$, and neither does the first condition. Indeed, there are a lot of L\'{e}vy processes satisfying Assumption \ref{Moments}, for example, the bilateral gamma process, the normal tempered stable process, the normal inverse Gaussian process, the variance gamma process, and so on. \begin{Assumption}\label{Smoothness} \begin{enumerate} \item The coefficients $A(\cdot)$ and $C(\cdot)$ are Lipschitz continuous and twice differentiable, and their first and second derivatives are of at most polynomial growth. \item The drift coefficient $a(\cdot,\alpha^\star)$ and scale coefficient $c(\cdot,\gamma^\star)$ are Lipschitz continuous, and $c(\cdot,\cdot)$ is invertible for all $(x,\gamma)$. \item For each $i \in \left\{0,1\right\}$ and $k \in \left\{0,\dots,5\right\}$, the following conditions hold: \begin{itemize} \item The coefficients $a$ and $c$ admit extensions in $\mathcal{C}(\mathbb{R}\times\bar{\Theta})$ and have the partial derivatives $(\partial_x^i \partial_\alpha^k a, \partial_x^i \partial_\gamma^k c)$ possessing extensions in $\mathcal{C}(\mathbb{R}\times\bar{\Theta})$. \item There exists a nonnegative constant $C_{(i,k)}$ satisfying \begin{equation}\label{polynomial} \sup_{(x,\alpha,\gamma) \in \mathbb{R} \times \bar{\Theta}_\alpha \times \bar{\Theta}_\gamma}\frac{1}{1+|x|^{C_{(i,k)}}}\left\{|\partial_x^i\partial_\alpha^ka(x,\alpha)|+|\partial_x^i\partial_\gamma^kc(x,\gamma)|+|c^{-1}(x,\gamma)|\right\}<\infty. \end{equation} \end{itemize} \end{enumerate} \end{Assumption} We note that the first part of Assumption \ref{Moments} and Assumption \ref{Smoothness} ensure the existence of a strong solution of the SDE \eqref{tSDE}, that is, there exists a measurable function $g$ such that $X=g(X_0,Z)$. Given a function $\rho:\mathbb{R}\to\mathbb{R}^+$ and a signed measure $m$ on a one-dimensional Borel space, we define \begin{equation}\nonumber ||m||_\rho=\sup\left\{|m(f)|:\mbox{$f$ is $\mathbb{R}$-valued, $m$-measurable and satisfies $|f|\leq\rho$}\right\}. \end{equation} Hereinafter $P_t(x,\cdot)$ denotes the transition probability of $X$. \begin{Assumption}\label{Stability} \begin{enumerate} \item There exists a probability measure $\pi_0$ such that for every $q>0$ we can find constants $a>0$ and $C_q>0$ for which \begin{equation}\label{Ergodicity} \sup_{t\in\mathbb{R}_{+}} \exp(at) ||P_t(x,\cdot)-\pi_0(\cdot)||_{h_q} \leq C_qh_q(x), \quad x\in\mathbb{R}, \end{equation} where $h_q(x):=1+|x|^q$. \item For all $q>0$, we have \begin{equation} \sup_{t\in\mathbb{R}_{+}}E[|X_t|^q]<\infty. \end{equation} \end{enumerate} \end{Assumption} \begin{Assumption}\label{Identifiability} There exist positive constants $\chi_\gamma$ and $\chi_\alpha$ such that for all $(\gamma,\alpha)\in\Theta$, \begin{align*} &\mathbb{Y}_1(\gamma):=\mathbb{G}_1(\gamma)-\mathbb{G}_1(\gamma^\star)\leq-\chi_\gamma|\gamma-\gamma^\star|^2,\\ &\mathbb{Y}_2(\alpha):=\mathbb{G}_2(\alpha)-\mathbb{G}_2(\alpha^\star)\leq-\chi_\alpha|\alpha-\alpha^\star|^2.
\end{align*} \end{Assumption} We introduce a $p\times p$-matrix $\Gamma:=\begin{pmatrix}\Gamma_\gamma&O\\ \Gamma_{\alpha\gamma}&\Gamma_\alpha\end{pmatrix}$ whose components are defined by: \begin{align*} &\Gamma_\gamma:=2\int_\mathbb{R}\frac{\partial_\gamma^{\otimes2}c(x,\gamma^\star)c(x,\gamma^\star)-(\partial_\gamma c(x,\gamma^\star))^{\otimes2}}{c^4(x,\gamma^\star)}(C^2(x)-c^2(x,\gamma^\star))\pi_0(dx)\\ &\qquad -4\int_\mathbb{R} \frac{(\partial_\gamma c(x,\gamma^\star))^{\otimes2}}{c^4(x,\gamma^\star)}C^2(x)\pi_0(dx),\\ &\Gamma_{\alpha\gamma}:=2\int_\mathbb{R} \partial_\alpha a(x,\alpha^\star)\partial_\gamma^\top c^{-2}(x,\gamma^\star)(A(x)-a(x,\alpha^\star)) \pi_0(dx),\\ &\Gamma_\alpha:=2\int_\mathbb{R}\frac{\partial_\alpha^{\otimes2} a(x,\alpha^\star)}{c^2(x,\gamma^\star)}(a(x,\alpha^\star)-A(x))\pi_0(dx)+2\int_\mathbb{R}\frac{(\partial_\alpha a(x,\alpha^\star))^{\otimes2}}{c^2(x,\gamma^\star)}\pi_0(dx), \end{align*} where $x^{\otimes2}:=xx^\top$ for any vector $x$. \begin{Assumption}\label{nd} $\Gamma$ is invertible. \end{Assumption} \section{Main results}\label{Main} In this section, we state our main results only for the fully misspecified case, that is, neither of the true coefficients $A(\cdot)$ and $C(\cdot)$ belongs to the parametric family $\{(a,c)(\cdot,\theta): \theta\in\Theta\}$. Concerning the partly misspecified case (i.e. either $A$ or $C$ is correctly specified), similar results can be derived as corollaries (see Remark \ref{partly}). All of the proofs will be given in the Appendix. The first result provides the tail probability estimates of the normalized $\hat{\theta}_{n}$: \begin{Thm}\label{TPE} Suppose that Assumptions \ref{Moments}-\ref{nd} hold. Then, for any $L>0$ and $r>0$, there exists a positive constant $C_L$ such that \begin{equation}\label{eq: TPE} \sup_{n\in\mathbb{N}} P(|\sqrt{T_n}(\hat{\theta}_{n}-\theta^\star)|>r)\leq \frac{C_L}{r^L}. \end{equation} \end{Thm} Polynomial-type tail probability estimates ensure the moment convergence of the corresponding estimators, which is theoretically essential, for instance, in the derivation of information criteria, in residual analysis, and in the measurement of the $L_q$-prediction error. To establish Theorem \ref{TPE}, we will rely on the theory of the {\it polynomial-type large deviation inequality} (PLDI) introduced by \cite{Yos11}. In order to verify the sufficient conditions for the PLDI, moment bounds and strong identifiability conditions with respect to the corresponding estimating functions are needed. These conditions mostly imply the asymptotic normality of the corresponding $M$-estimators based on Markov-type estimating functions $M_n(\theta)=\sum_{j=1}^{n} m(X_{j-1},X_j;\theta)$. However, such conditions are insufficient for our case. More specifically, we will face the following situation: \begin{equation*} \mbox{((scaled) quasi-score function)} = (\mbox{sum of martingale difference})+(\mbox{intractable term})+(\mbox{negligible term}), \end{equation*} where the intractable term is expressed as \begin{equation*} \sqrt{\frac{h_n}{n}}\sum_{j=1}^{n} f_{j-1}(\theta^\star)=\frac{1}{\sqrt{T_n}}\int_0^{T_n} f_s(\theta^\star)ds+o_p(1), \end{equation*} with a specific measurable function $f$ satisfying $\pi_0(f)=0$.
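The way this term will be handled can be previewed as follows (a formal sketch, made rigorous in the rest of this section): suppose that some function $u$ satisfies $\tilde{\mathcal{A}} u\overset{EPE}=-f$ for the extended generator $\tilde{\mathcal{A}}$ of $X$ introduced below. Then \begin{equation*} \frac{1}{\sqrt{T_n}}\int_0^{T_n} f_s(\theta^\star)ds=\frac{1}{\sqrt{T_n}}\left(u(X_{T_n})-u(X_0)+\int_0^{T_n} f_s(\theta^\star)ds\right)-\frac{u(X_{T_n})-u(X_0)}{\sqrt{T_n}}, \end{equation*} where the first summand is $1/\sqrt{T_n}$ times a local martingale by the very definition of the extended generator, and the second summand is negligible as soon as $u$ is of at most polynomial growth; hence the intractable term can be merged with the leading martingale part of the quasi-score.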
Celebrated CLT-type theorems for such integral functionals of Markov processes have been reported in many works, for example, \cite[Theorem 2.1]{Bha82}, \cite[Theorem V\hspace{-.1em}I\hspace{-.1em}I\hspace{-.1em}I 3.65]{JacShi03}, \cite[Theorem 2.1]{KomWal12}, \cite[Corollary 4.1]{VerKul13}, and the references therein. However, the combination with the original leading term makes it difficult to clarify the asymptotic behavior of the left-hand side by applying them directly. To handle this difficulty, and to make the above preview rigorous, we invoke the concept of the EPE introduced in \cite{VerKul11}: \begin{Def}\cite[Definition 2.1]{VerKul11} We say that a measurable function $f:\mathbb{R}\to\mathbb{R}$ belongs to the domain of the extended generator $\tilde{\mathcal{A}}$ of a c\`{a}dl\`{a}g homogeneous Feller Markov process $Y$ taking values in $\mathbb{R}$ if there exists a measurable function $g:\mathbb{R}\to\mathbb{R}$ such that the process \begin{equation*} f(Y_t)-\int_0^t g(Y_s)ds, \qquad t\in\mathbb{R}^+, \end{equation*} is well defined and is a local martingale with respect to the natural filtration of $Y$ and every measure $P_x(\cdot):=P(\cdot|Y_0=x),\ x\in\mathbb{R}$. For such a pair $(f,g)$, we write $f\in\mbox{Dom}(\tilde{\mathcal{A}})$ and $\tilde{\mathcal{A}} f\overset{EPE}=g$. \end{Def} \begin{Rem} In the previous definition, the terminology ``Feller" means that the corresponding transition semigroup $T_t$ maps $C_b(\mathbb{R})$ into $C(\mathbb{R})$. As for $X$, its homogeneity, Feller property and (strong) Markov property are guaranteed by the arguments in \cite[Theorem 6.4.6]{App09} and \cite[3.1.1 (ii)]{Mas07}. \end{Rem} Hereinafter $y^{(i)}$ denotes the $i$-th component of any vector $y$. We consider the following EPEs: \begin{align} \label{u1} \tilde{\mathcal{A}} u_1^{(j_1)}(x)&\overset{EPE}=-\frac{\partial_{\gamma^{(j_1)}} c(x,\gamma^\star)}{c^3(x,\gamma^\star)}(c^2(x,\gamma^\star)-C^2(x)),\\ \label{u2} \tilde{\mathcal{A}} u_2^{(j_2)}(x)&\overset{EPE}=-\frac{\partial_{\alpha^{(j_2)}} a(x,\alpha^\star)}{c^2(x,\gamma^\star)}(A(x)-a(x,\alpha^\star)), \end{align} for the extended generator $\tilde{\mathcal{A}}$ of $X$, $j_1\in\{1,\dots,p_\gamma\}$ and $j_2\in\{1,\dots,p_\alpha\}$. Henceforth $E^x$ denotes the expectation operator under the initial condition $X_0=x$, that is, \begin{equation*} E^x[g(X_t)]=\int_\mathbb{R} g(y)P_t(x,dy), \end{equation*} for any measurable function $g$. The next proposition ensures the existence of the solutions and describes their behavior: \begin{Prop}\label{EPE} Under Assumptions \ref{Moments}-\ref{Stability}, there exist unique solutions of \eqref{u1} and \eqref{u2}, and the solution vectors $f_1:=(u_1^{(j_1)})_{j_1\in\{1,\dots,p_\gamma\}}$ and $f_2:=(u_2^{(j_2)})_{j_2\in\{1,\dots,p_\alpha\}}$ satisfy \begin{equation*} \sup_{x,y\in\mathbb{R}}\frac{|f_i(x)-f_i(y)|}{|x-y|^{1/{p_i}}(1+|x|^{q_iL_i}+|y|^{q_iL_i})}<\infty, \quad \mbox{for} \ i\in\{1,2\}, \end{equation*} where $p_i\in(1,\infty)$ is arbitrary, $q_i={p_i}/(p_i-1)$, and $L_1$ and $L_2$ are some positive constants.
Furthermore, $f_1(X_t)+\int_0^t \partial_\gamma c(X_s,\gamma^\star)(c^2(X_s,\gamma^\star)-C^2(X_s))/c^3(X_s,\gamma^\star)ds$ and $f_2(X_t)+\int_0^t \partial_\alpha a(X_s,\alpha^\star)(A(X_s)-a(X_s,\alpha^\star))/c^2(X_s,\gamma^\star)ds$ are $L_2$-martingales with respect to $(\mathcal{F}_t, P_x)$ for every $x\in\mathbb{R}$, and their explicit forms are given as follows: \begin{align*} f_1(x)&=\int_0^\infty E^x\left[\frac{\partial_\gamma c(X_t,\gamma^\star)}{c^3(X_t,\gamma^\star)}(c^2(X_t,\gamma^\star)-C^2(X_t))\right]dt,\\ f_2(x)&=\int_0^\infty E^x\left[\frac{\partial_\alpha a(X_t,\gamma^\star)}{c^2(X_t,\gamma^\star)}(A(X_t)-a(X_t,\alpha^\star))\right]dt. \end{align*} \end{Prop} \begin{Rem} Thanks to the previous proposition and the assumptions on the coefficients, $f_1(X_t)+\int_0^t \partial_\gamma c(X_s,\gamma^\star)(c^2(X_s,\gamma^\star)-C^2(X_s))/c^3(X_s,\gamma^\star)ds$ and $f_2(X_t)+\int_0^t \partial_\alpha a(X_s,\alpha^\star)(A(X_s)-a(X_s,\alpha^\star))/c^2(X_s,\gamma^\star)ds$ have finite second-order moments. Thus, slightly refining the argument in \cite[the proof of Proposition V\hspace{-.1em}I\hspace{-.1em}I 1.6]{RevYor99} with the monotone convergence theorem, their $L_2$-martingale property with respect to $(\mathcal{F}_t, P_x)$ in the previous proposition can be replaced by the $L_2$-martingale property with respect to $(\mathcal{F}_t, P)$. \end{Rem} Building on the previous proposition, we can now obtain the asymptotic normality of $\sqrt{T_n}(\hat{\theta}_{n}-\theta^\star)$: \begin{Thm}\label{AN} Under Assumptions \ref{Moments}-\ref{nd}, there exists a nonnegative definite matrix $\Sigma\in\mathbb{R}^p\otimes\mathbb{R}^p$ such that \begin{equation*} \sqrt{T_n}(\hat{\theta}_{n}-\theta^\star)\overset{\mathcal{L}}\longrightarrow N(0,\Gamma^{-1}\Sigma(\Gamma^{-1})^\top), \end{equation*} and the form of $\Sigma:=\begin{pmatrix}\Sigma_\gamma&\Sigma_{\alpha\gamma}\\\Sigma_{\alpha\gamma}^\top&\Sigma_{\alpha}\end{pmatrix}$ is given by: \begin{align*} &\Sigma_\gamma=4\int_\mathbb{R}\int_\mathbb{R}\left(\frac{\partial_\gamma c(x,\gamma^\star)}{c^3(x,\gamma^\star)}C^2(x)z^2+f_1(x+C(x)z)-f_1(x)\right)^{\otimes2}\pi_0(dx)\nu_0(dz),\\ &\Sigma_{\alpha\gamma}=-4\int_\mathbb{R}\int_\mathbb{R}\left(\frac{\partial_\gamma c(x,\gamma^\star)}{c^3(x,\gamma^\star)}C^2(x)z^2+f_1(x+C(x)z)-f_1(x)\right)\\ &\qquad \qquad \quad\left(\frac{\partial_\alpha a(x,\alpha^\star)}{c^2(x,\gamma^\star)}C(x)z+f_2(x+C(x)z)-f_2(x)\right)^\top\pi_0(dx)\nu_0(dz),\\ &\Sigma_\alpha=4\int_\mathbb{R}\int_\mathbb{R}\left(\frac{\partial_\alpha a(x,\alpha^\star)}{c^2(x,\gamma^\star)}C(x)z+f_2(x+C(x)z)-f_2(x)\right)^{\otimes2}\pi_0(dx)\nu_0(dz). \end{align*} \end{Thm} \begin{Rem}\label{partly} If either of the coefficients is correctly specified, the right-hand side of the associated EPE \eqref{u1} or \eqref{u2} is identically 0.
Thus we have \begin{align*} &\Sigma_\gamma=4\int_\mathbb{R}\left(\frac{\partial_\gamma c(x,\gamma_0)}{c(x,\gamma_0)}\right)^{\otimes2}\pi_0(dx)\int_\mathbb{R} z^4\nu_0(dz),\\ &\Sigma_{\alpha\gamma}=-4\int_\mathbb{R}\int_\mathbb{R}\left(\frac{\partial_\gamma c(x,\gamma_0)}{c(x,\gamma_0)}z^2\right)\left(\frac{\partial_\alpha a(x,\alpha^\star)}{c(x,\gamma_0)}z+f_2(x+c(x,\gamma_0)z)-f_2(x)\right)^\top\pi_0(dx)\nu_0(dz),\\ &\Sigma_\alpha=4\int_\mathbb{R}\int_\mathbb{R}\left(\frac{\partial_\alpha a(x,\alpha^\star)}{c(x,\gamma_0)}z+f_2(x+c(x,\gamma_0)z)-f_2(x)\right)^{\otimes2}\pi_0(dx)\nu_0(dz), \end{align*} in the case that the scale coefficient is correctly specified, and \begin{align*} &\Sigma_\gamma=4\int_\mathbb{R}\int_\mathbb{R}\left(\frac{\partial_\gamma c(x,\gamma^\star)}{c^3(x,\gamma^\star)}C^2(x)z^2+f_1(x+C(x)z)-f_1(x)\right)^{\otimes2}\pi_0(dx)\nu_0(dz),\\ &\Sigma_{\alpha\gamma}=-4\int_\mathbb{R}\int_\mathbb{R}\left(\frac{\partial_\gamma c(x,\gamma^\star)}{c^3(x,\gamma^\star)}C^2(x)z^2+f_1(x+C(x)z)-f_1(x)\right)\left(\frac{\partial_\alpha a(x,\alpha_0)}{c^2(x,\gamma^\star)}C(x)z\right)^\top\pi_0(dx)\nu_0(dz),\\ &\Sigma_\alpha=4\int_\mathbb{R}\left(\frac{\partial_\alpha a(x,\alpha_0)}{c^2(x,\gamma^\star)}C(x)\right)^{\otimes2}\pi_0(dx), \end{align*} in the case that the drift coefficient is correctly specified. In the above equations, $\gamma_0$ and $\alpha_0$ denote the true values, namely, they satisfy $c(x,\gamma_0)\equiv C(x)$ and $a(x,\alpha_0)\equiv A(x)$ on the whole state space of $X$. \end{Rem} \begin{Rem}\label{conj} In this remark, we suppose that the data-generating model defined on the probability space $(\Omega, \mathcal{F}, (\mathcal{F}_t)_{t\in\mathbb{R}_+},P)$ is \begin{equation}\label{twMolde} dY_t=A(Y_t)dt+B(Y_t)dW_t+C(Y_{t-})dZ_t, \end{equation} where $W$ is a standard Wiener process independent of $(Y_{0},Z)$, $\mathcal{F}_t:=\sigma(Y_0)\vee\sigma((W_s,Z_s);s\leq t)$ and $B:\mathbb{R}\to\mathbb{R}$ is a measurable function. We look at the following parametric model: \begin{equation*} dY_t = a(Y_t,\alpha)dt+b(Y_t,\gamma)dW_t+c(Y_{t-},\gamma)dZ_t, \end{equation*} where $b:\mathbb{R}\times\Theta_\gamma\to\mathbb{R}$ is a measurable function. The other ingredients are defined similarly to the above, and we use the same notations for the transition probability, invariant measure, and so on. When the true coefficients $(A,B,C)$ are correctly specified, the GQMLE still has asymptotic normality, and the sufficient conditions for it are easy to check (cf. \cite{Mas13-2}). However, we note that it is difficult to give such conditions when they are misspecified. This is because our methodology based on the EPE becomes insufficient due to the presence of the Wiener component in the derivation of the asymptotic variance (see the proof of Theorem \ref{AN}). To formally derive a similar result to Theorem \ref{AN}, we may additionally have to impose the following condition: \medskip \textbf{Condition A}: There exists a unique $C^2$-solution $f$ on $\mathbb{R}$ of \begin{equation}\label{PE} \mathcal{A} f(x)=A(x)\partial_x f(x)-\frac{1}{2}B(x)\partial_x^2 f(x)+\int_\mathbb{R} (f(x+C(x)z)-f(x)-\partial_x f(x)C(x)z)\nu_0(dz)=g(A(x),B(x),C(x)), \end{equation} where $g(A(x),B(x),C(x))$ is a specific function satisfying $\int_\mathbb{R} g(A(x),B(x),C(x))\pi_0(dx)=0$. Furthermore, the first and second derivatives of $f$ are of at most polynomial growth.
\medskip Under \textbf{Condition A}, the limit distribution of the GQMLE can be derived by combining the proof of \cite{UchYos12} and Theorem \ref{AN}. It is known that the theory of viscosity solutions for integro-differential equations ensures the existence of $f$ in limited situations; see, for instance, \cite{BarBucPar97}, \cite{BarImb08}, \cite{Ham16} and \cite{HamMor16}. However, the same cannot be said for the regularity of $f$. As another attempt to confirm \textbf{Condition A}, the associated EPE $\tilde{\mathcal{A}}\tilde{f}\overset{EPE}=g$ may possibly be helpful. This is because the existence and uniqueness of the solution $\tilde{f}$ of the EPE can be verified in an analogous way to Proposition \ref{EPE}, and if $\tilde{f}$ enjoys the $C^2$-property and the growth conditions in \textbf{Condition A}, then $\tilde{f}$ satisfies \eqref{PE}. The latter argument can formally be shown as follows: \medskip It is enough to check $\mathcal{A}\tilde{f}=g$. Since $\tilde{f}(Y_t)-\int_0^t g(A(Y_s),B(Y_s),C(Y_s))ds$ is a martingale with respect to $(\mathcal{F}_t,P_x)$ for all $x\in\mathbb{R}$, we have \begin{equation*} E^x\left[\tilde{f}(Y_t)-\int_0^t g(A(Y_s),B(Y_s),C(Y_s))ds\right]=\tilde{f}(x). \end{equation*} Hence it follows from It\^{o}'s formula that as $t\to0$, \begin{align*} \left|\frac{E^x[\tilde{f}(Y_t)]-\tilde{f}(x)}{t}-g(A(x),B(x),C(x))\right|&=\left|\frac{1}{t}\int_0^t \left(E^x[g(A(Y_s),B(Y_s),C(Y_s))]-g(A(x),B(x),C(x))\right)ds\right|\\ &=\left|\frac{1}{t}\int_0^t \int_0^sE^x[\mathcal{A} g(A(Y_u),B(Y_u),C(Y_u))]duds\right|\lesssim t \to 0. \end{align*} \medskip In this sketch, we implicitly assume suitable regularity and moment conditions on each ingredient, but these reduce to conditions on the true coefficients $(A,B,C)$. Thus, verifying the behavior of \begin{equation*} \tilde{f}(x)=\int_0^\infty E^x[g(A(Y_t),B(Y_t),C(Y_t))] dt=\int_0^\infty \int_\mathbb{R} g(A(y),B(y),C(y))P_t(x,dy)dt \end{equation*} leads to \textbf{Condition A}. For L\'{e}vy driven Ornstein-Uhlenbeck models, we can examine the property of $\tilde{f}(x)=\int_0^\infty E^x[g(A(Y_t),B(Y_t),C(Y_t))] dt$ based on the explicit form of the solution (cf. Example \ref{ex1}). Although, for general L\'{e}vy driven SDEs, gradient estimates of the transition probability based on Malliavin calculus have been investigated lately (cf. \cite{Wan14}, \cite{WanXuZha15}, and the references therein), the property of $\tilde{f}(x)=\int_0^\infty E^x[g(A(Y_t),B(Y_t),C(Y_t))] dt$ is still difficult to check, as far as the author knows. Since these issues are beyond the scope of this paper, we do not pursue them further. \end{Rem} \begin{Example}\label{ex1} Here we consider the following Ornstein-Uhlenbeck model: \begin{equation*} dX_t=-\alpha X_tdt+dZ_t, \end{equation*} for a L\'{e}vy process $Z$ not necessarily of pure-jump type and a positive constant $\alpha$. Applying It\^{o}'s formula to $\exp(\alpha t)X_t$, we have $X_t=X_0\exp(-\alpha t)+\int_0^t \exp(\alpha(s-t))dZ_s$ and $E^x[f(X_t)]=\int_\mathbb{R} f(x\exp(-\alpha t)+y)p_t(dy)$ for a suitable function $f$. Here $p_t$ is the probability distribution of $\int_0^t \exp(\alpha(s-t))dZ_s$ whose characteristic function $\hat{p}_t$ is given by: \begin{equation}\label{cf} \hat{p}_t(u)=\exp\left\{\int_0^t \psi(\exp(\alpha(s-t))u)ds\right\}, \end{equation} for $\psi(u):=\log E[\exp(iuZ_1)]$ (cf. \cite[Theorem 3.1]{SatYam84}).
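The representation \eqref{cf} is easy to check numerically. The following minimal Python sketch (our own illustration; the parameter values and the choice of $Z$ are arbitrary assumptions) compares a Monte Carlo estimate of $E[\exp(iu\int_0^t \exp(\alpha(s-t))dZ_s)]$ with the right-hand side of \eqref{cf} for the bilateral Gamma process $bGamma(t,\sqrt{2},t,\sqrt{2})$ used later in Section \ref{NE}, for which $\psi(u)=\log(2/(2+u^2))$:
\begin{verbatim}
import numpy as np

# Monte Carlo check of the characteristic function formula (illustrative
# choices): Z is a bilateral Gamma process bGamma(t, sqrt(2), t, sqrt(2)),
# whose increments over a step h are differences of Gamma(h, rate sqrt(2)).
rng = np.random.default_rng(0)
alpha, t, u = 0.5, 1.0, 1.3
m, paths = 400, 20000
h = t / m
s = (np.arange(m) + 0.5) * h

dz = (rng.gamma(h, 1 / np.sqrt(2), size=(paths, m))
      - rng.gamma(h, 1 / np.sqrt(2), size=(paths, m)))
stoch_int = dz @ np.exp(alpha * (s - t))   # ~ int_0^t e^{alpha(s-t)} dZ_s
mc = np.exp(1j * u * stoch_int).mean()

psi = np.log(2.0 / (2.0 + (np.exp(alpha * (s - t)) * u) ** 2))
theory = np.exp(psi.sum() * h)             # Riemann sum of the exponent
print(abs(mc - theory))                    # small, up to Monte Carlo error
\end{verbatim}
The two values agree up to Monte Carlo and discretization error, consistently with \eqref{cf}.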
In this case, $X$ fulfills Assumption \ref{Stability} provided that Assumption \ref{Moments}-(1) holds, and that the L\'{e}vy measure $\nu_0$ of $Z$ has a continuously differentiable positive density $g$ on an open neighborhood around the origin (for more details, see \cite[Section 5]{Mas13-2}). Under such conditions, if $f$ is differentiable and both $f$ and its derivative are of at most polynomial growth, we have \begin{align*} \left|\partial_x \left(\int_0^\infty E^x[f(X_t)]dt\right) \right|&=\left|\int_0^\infty \left(\int_\mathbb{R} \partial_x f(x\exp(-\alpha t)+y)p_t(dy)\right)\exp(-\alpha t)dt\right|\\ &\lesssim \int_0^\infty \left\{1+|x|^K+(1+|x|^{2K})\exp(-at)\right\}\exp(-\alpha t)dt\lesssim 1+|x|^{2K}, \end{align*} for a positive constant $K$. We can derive similar estimates with respect to its higher-order derivatives in the same way. \end{Example} Let $J$ be a L\'{e}vy process such that its moments of any order exist and its triplet is $(0,b,\nu^J)$ (cf. \cite{App09}). Here $b$ is allowed to be $0$. Mimicking the previous example, we write $p^J_t$ for the probability distribution of $ \int_0^t \exp(\alpha(s-t)) dJ_s$ with a positive constant $\alpha>0$, and $\psi^J(u)$ stands for $\log E[\exp(iuJ_1)]$ below. Combining the argument in Remark \ref{conj} and Example \ref{ex1}, we obtain the following corollary: \begin{Cor} For a natural number $k\geq 2$, let $f$ be a $C^k$-function of polynomial growth whose derivatives are of at most polynomial growth. Suppose that the integral of $f$ with respect to the Borel probability measure $\pi_0$ whose characteristic function is $\exp\left\{\int_0^\infty \psi^J(\exp(-\alpha s)u)ds\right\}$ is $0$, and that $\nu^J$ has a continuously differentiable positive density on an open neighborhood around the origin. Then, the function $g(x):=\int_0^\infty E^x[f(x\exp(-\alpha t)+\int_0^t \exp(\alpha(s-t))dJ_s)]dt=\int_0^\infty\int_\mathbb{R} f(x\exp(-\alpha t)+y)p^J_t(dy)dt$ on $\mathbb{R}$ is the unique solution of the following (first or second order) integro-differential equation \begin{equation}\label{ide} -\alpha x\partial_x g(x)-\frac{1}{2}b\partial_x^2 g(x)+ \int_{\mathbb{R} } \left(g(x+z)-g(x)-\partial_x g(x)z\right)\nu^J(dz)=f(x), \end{equation} and moreover, $g$ is also a $C^k$-function of polynomial growth. \end{Cor} \begin{Rem} If the L\'{e}vy measure $\nu^J$ is symmetric (i.e., the imaginary part of $\psi^J$ is 0), the equation \eqref{ide} is explicitly solvable for many odd functions $f$. More specifically, for $k\in\mathbb{N}$ and $f(x)=x^{2k+1}$, the solution $g$ is \begin{align*} g(x)&=\int_0^\infty\int_\mathbb{R} (x\exp(-\alpha t)+y)^{2k+1} p^J_t(dy)dt\\ &=\int_0^\infty \int_\mathbb{R} \sum_{i=0}^{2k+1} \binom{2k+1}{i} (x\exp(-\alpha t))^i y^{2k+1-i} p^J_t(dy)dt. \end{align*} By observing the derivatives of the characteristic function, $\int_\mathbb{R} y^{2k+1-i} p^J_t(dy)$ can be expressed in terms of the moments of $J$; hence an explicit expression of $g$ is available. \end{Rem} \begin{Rem} Besides the estimation of $\theta$, what is of special interest is the inference for $\nu_0$, which is often an infinite-dimensional parameter. Even when $(A,C)$ are constant and specified (i.e., $X$ is a L\'{e}vy process with drift), this is of interest in its own right, and numerous papers have addressed this problem so far. We refer to \cite{Mas15} for comprehensive accounts when $Z$ is assumed to have a certain parametric structure.
As for the situation where only partial information on $Z$ is available, one plausible approach is the method of moments proposed in \cite{Fig08}, \cite{Fig09}, and \cite{NicReiSohTra16}, for example. In particular, \cite{NicReiSohTra16} established a Donsker-type functional limit theorem for empirical processes arising from high-frequency observations of L\'{e}vy processes. When the coefficients $A$ and $C$ are nonlinear functions but specified, the residual-based method of moments for $\nu_0$ by \cite{MasUeh17-1} is effective: using the GQMLE $\hat{\theta}_{n}:=(\hat{\alpha}_{n},\hat{\gamma}_{n})$, we have \begin{align*} &\frac{1}{T_n}\sum_{j=1}^{n} \varphi\left(\frac{\Delta_j X-h_na_{j-1}(\hat{\alpha}_{n})}{c_{j-1}(\hat{\gamma}_{n})}\right)\cip \int_\mathbb{R} \varphi(z)\nu_0(dz),\\ &\hat{D}_n\sqrt{T_n}\begin{pmatrix}\hat{\theta}_{n}-\theta_0\\ \frac{1}{T_n}\sum_{j=1}^{n} \varphi\left(\frac{\Delta_j X-h_na_{j-1}(\hat{\alpha}_{n})}{c_{j-1}(\hat{\gamma}_{n})}\right)- \int \varphi(z)\nu_0(dz)\end{pmatrix} \overset{\mathcal{L}}\rightarrow N(0,I_{p+q}), \end{align*} for an appropriate $\mathbb{R}^q$-valued function $\varphi$ and a $(p+q)\times(p+q)$ matrix $\hat{D}_n$ which can be constructed from the observations alone. For instance, we can choose $\varphi(z)=z^r$ and $\varphi(z)=\exp(iuz)-1-iuz$ (to estimate the $r$-th cumulant of $Z$ and the cumulant function of $Z$, respectively); see \cite[Assumption 2.7]{MasUeh17-1} for the precise conditions on $\varphi$. As for the misspecified case, if the misspecification is confined to the drift coefficient, then this scheme is still valid thanks to the faster decay of the mean activity in small time (cf. Remark \ref{decay}). \end{Rem} \section{Numerical experiments}\label{NE} \begin{figure}[t] \caption{The density functions of (i) $NIG(10,0,10,0)$ (black dotted line), (ii) $bGamma(1,\sqrt{2},1,\sqrt{2})$ (green line), (iii) $NIG(25/3,20/3,9/5,-12/5)$ (blue line), and $N(0,1)$ (red line).} \begin{center} \includegraphics[width=80mm]{compdens.eps} \end{center} \label{compdens} \end{figure} \begin{figure}[t] \caption{The boxplot of case (i); the target optimal values are indicated by dotted lines.} \begin{center} \includegraphics[width=150mm]{Casei.eps} \end{center} \end{figure} \begin{figure}[t] \caption{The boxplot of case (ii); the target optimal values are indicated by dotted lines.} \begin{center} \includegraphics[width=150mm]{Caseii.eps} \end{center} \end{figure} \begin{figure}[t] \caption{The boxplot of case (iii); the target optimal values are indicated by dotted lines.} \begin{center} \includegraphics[width=150mm]{Caseiii.eps} \end{center} \end{figure} \begin{table}[t] \begin{center} \caption{The performance of our estimators; the mean is given with the standard deviation in parentheses.
The target optimal values are given in the first line of each item.} \scalebox{0.85}{\begin{tabular}{ccccccccccc} \hline &&&&&&&&&&\\[-3.5mm] $T_n$ & $n$ & $h_n$ & \multicolumn{2}{c}{(i) (0.33,1.41)} & \multicolumn{2}{c}{(ii) (0.37, 1.41)} & \multicolumn{2}{c}{(iii) (0.37, 1.41)} &\multicolumn{2}{c}{diffusion (0.33, 1.41)} \\ &&& $\hat{\alpha}_{n}$ & $\hat{\gamma}_{n}$ & $\hat{\alpha}_{n}$ & $\hat{\gamma}_{n}$ & $\hat{\alpha}_{n}$ & $\hat{\gamma}_{n}$ & $\hat{\alpha}_{n}$ & $\hat{\gamma}_{n}$ \\ \hline 50& 1000 & 0.05&0.38&1.41&0.40&1.39&0.40&1.39&0.38&1.41 \\ &&&(0.12)&(0.11)&(0.16)&(0.29)&(0.15)&(0.19)&(0.13)&(0.10)\\ 100& 5000 & 0.02&0.37&1.41&0.39&1.39&0.38&1.39&0.36&1.41\\ &&&(0.09)&(0.08)&(0.11)&(0.23)&(0.11)&(0.15)&(0.09)&(0.08)\\ 100&10000&0.01&0.36&1.41&0.37&1.39&0.38&1.40&0.36&1.41\\ &&&(0.08)&(0.07)&(0.09)&(0.22)&(0.10)&(0.15)&(0.08)&(0.07)\\ \hline \end{tabular} \label{Result}} \end{center} \end{table} We suppose that the data-generating model is the following L\'{e}vy driven Ornstein-Uhlenbeck model: \begin{equation*} dX_t=-\frac{1}{2}X_tdt+dZ_t,\quad X_0=0, \end{equation*} and that the parametric model is described as: \begin{equation*} dX_t=\alpha (1-X_t)dt+\frac{\gamma}{\sqrt{1+X_t^2}}dZ_t. \end{equation*} The functional form of the coefficients is the same as in \cite[Example 3.1]{UchYos11}. We conduct numerical experiments in three situations: (i) $\mathcal{L}(Z_t)=NIG(10,0,10t,0)$, (ii) $\mathcal{L}(Z_t)=bGamma(t,\sqrt{2},t,\sqrt{2})$, and (iii) $\mathcal{L}(Z_t)=NIG(25/3,20/3,9/5t,-12/5t)$. A $NIG$ (normal inverse Gaussian) random variable is defined as a normal mean-variance mixture with an inverse Gaussian mixing distribution, and a $bGamma$ (bilateral Gamma) random variable is defined as the difference of two independent Gamma random variables. For their technical accounts, we refer to \cite{Bar98} and \cite{KucTap08-1}. To visually observe their non-Gaussianity, the density functions at $t=1$ are plotted together with the density of $N(0,1)$ in Figure \ref{compdens}. By taking the limit as $t\to\infty$ in \eqref{cf}, the characteristic function $\hat{p}(\cdot)$ of the invariant measure $\pi_0$ is given by \begin{equation} \hat{p}(u)=\exp\left\{\int_0^\infty \psi\left(\exp\left(-\frac{s}{2}\right)u\right)ds\right\}, \end{equation} where $\psi(u):=\log E[\exp(iuZ_1)]$. Differentiating $\hat{p}$, we have $\tilde{\kappa}_j=2\kappa_j/j$ for the $j$-th cumulant $\tilde{\kappa}_j$ (resp. $\kappa_j$) of $Y\sim \pi_0$ (resp. $Z_1$). Hence we obtain \begin{align*} &\mathbb{G}_{1}(\gamma)=-2\log\gamma-\frac{2}{\gamma^2}+\int_\mathbb{R} \log(1+x^2)\pi_0(dx),\\ &\mathbb{G}_{2}(\alpha)=-\frac{1}{\gamma^\star}\left\{\frac{1}{4}\int_\mathbb{R} x^3\pi_0(dx)+\alpha\left(1-\int_\mathbb{R} x^3\pi_0(dx)+\int_\mathbb{R} x^4\pi_0(dx)\right)+\alpha^2\left(3-2\int_\mathbb{R} x^3\pi_0(dx)+\int_\mathbb{R} x^4\pi_0(dx)\right)\right\}. \end{align*} By solving the estimating equations, the target optimal values are given by \begin{align*} \gamma^\star=\sqrt{2}, \ \alpha^\star=\frac{1-\int_\mathbb{R} x^3\pi_0(dx)+\int_\mathbb{R} x^4\pi_0(dx)}{2(3-2\int_\mathbb{R} x^3\pi_0(dx)+\int_\mathbb{R} x^4\pi_0(dx))}. \end{align*} In the calculation, we used $\int_\mathbb{R} x\pi_0(dx)=0$ and $\int_\mathbb{R} x^2\pi_0(dx)=1$.
Thus, in each case, the optimal value $\theta^\star:=(\alpha^\star,\gamma^\star)$ is given as follows: (i) $\theta^\star=\left( 803/2406,\sqrt{2}\right)\approx(0.3337, 1.4142)$, (ii) $\theta^\star=\left(11/30,\sqrt{2}\right)\approx(0.3667, 1.4142)$, and (iii) $\theta^\star=\left( 609/1658, \sqrt{2}\right)\approx(0.3673, 1.4142)$ (the approximate values are obtained by rounding $\theta^\star$ to four decimal places). Solving the corresponding estimating equations, the staged GQMLEs are calculated as: \begin{align*} \hat{\alpha}_n=-\frac{\sum_{j=1}^n (X_{j-1}-1)(X_j-X_{j-1})(X_{j-1}^2+1)}{h_n\sum_{j=1}^n(X_{j-1}-1)^2(X_{j-1}^2+1)}, \ \hat{\gamma}_{n}=\sqrt{\frac{1}{nh_n}\sum_{j=1}^n(X_j-X_{j-1})^2(X_{j-1}^2+1)}. \end{align*} We independently generated 10000 paths of each SDE by the Euler-Maruyama scheme and constructed the estimators according to the above expressions. In generating the small time increments of the driving noises, we used the function \texttt{rng} provided in the YUIMA package in R \cite{BroIac13}. Together with the diffusion case $\left(\theta^\star=\left(1/3,\sqrt{2}\right)\approx(0.3333, 1.4142)\right)$, the mean and standard deviation of each estimator are shown in Table \ref{Result}, where $n$ and $h_n=5n^{-2/3}$ denote the sample size and observation interval, respectively. We also present boxplots for better visibility. We can observe the following from the table and boxplots: \begin{itemize} \item Overall, the estimation accuracy of $\hat{\theta}_{n}$ improves as $T_n$ and $n$ increase and $h_n$ decreases, and this tendency reflects our main result. \item The result of case (i) is almost the same as the diffusion case. This is thought to be due to the well-known fact that $NIG(\delta,0,\delta t,0)$ tends to $N(0,t)$ in total variation norm as $\delta\to\infty$ for any $t>0$. Indeed, Figure \ref{compdens} shows that the density functions of $NIG(10,0,10,0)$ and $N(0,1)$ are virtually the same. \item Concerning case (ii), the standard deviation of $\hat{\gamma}_{n}$ is relatively worse than in the other cases. This is natural because the asymptotic variance of $\hat{\gamma}_{n}$ includes the fourth-order moment of $Z$, and $bGamma(1,\sqrt{2},1,\sqrt{2})$ has the highest kurtosis value, as can be seen from Figure \ref{compdens}. \item In case (iii), the performance of $\hat{\alpha}_{n}$ is the worst in this experiment. This may be caused by the fact that only $NIG(25/3,20/3,9/5,-12/5)$ is not symmetric. \end{itemize}
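Although the experiments were carried out in R with the YUIMA package, the scheme itself is easy to reproduce. The following minimal Python sketch (our own illustration, not the code used for Table \ref{Result}; the sample size and step roughly match the last row of the table) simulates one path of case (ii) and evaluates the staged GQMLE above:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

def simulate_case_ii(n, h, x0=0.0):
    # Euler-Maruyama for dX_t = -X_t/2 dt + dZ_t, where Z is a bilateral
    # Gamma process: its increments over h are differences of independent
    # Gamma(h, rate sqrt(2)) variables (case (ii)).
    dz = (rng.gamma(h, 1 / np.sqrt(2), size=n)
          - rng.gamma(h, 1 / np.sqrt(2), size=n))
    x = np.empty(n + 1)
    x[0] = x0
    for j in range(n):
        x[j + 1] = x[j] - 0.5 * x[j] * h + dz[j]
    return x

def staged_gqmle(x, h):
    # Staged GQMLE for the misspecified parametric model
    # dX_t = alpha (1 - X_t) dt + gamma / sqrt(1 + X_t^2) dZ_t.
    xp, dx = x[:-1], np.diff(x)
    w = 1.0 + xp ** 2                      # inverse squared scale at gamma = 1
    gamma_hat = np.sqrt(np.sum(dx ** 2 * w) / (h * len(dx)))
    alpha_hat = (np.sum((1.0 - xp) * dx * w)
                 / (h * np.sum((1.0 - xp) ** 2 * w)))
    return alpha_hat, gamma_hat

n = 10000
h = 5 * n ** (-2 / 3)
print(staged_gqmle(simulate_case_ii(n, h), h))
\end{verbatim}
Averaging the two outputs over many independent paths should reproduce the case-(ii) column of Table \ref{Result} up to Monte Carlo error.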
\section{Appendix}\label{AP} Throughout the proofs, for functions $f$ on $\mathbb{R}\times\Theta$, we will sometimes write $f_s$ and $f_{j-1}$ instead of $f_s(\theta^\star)$ and $f_{j-1}(\theta^\star)$ just for simplicity. \medskip \noindent\textbf{Proof of Theorem \ref{TPE}} \indent In light of our situation, it is sufficient to check the conditions [A1''], [A4'] and [A6] in \cite{Yos11} for $\mathbb{G}_{1,n}$ and $\mathbb{G}_{2,n}$, respectively. For the sake of convenience, we simply write $\mathbb{Y}_{1,n}(\gamma):=\mathbb{G}_{1,n}(\gamma)-\mathbb{G}_{1,n}(\gamma^\star)$ and $\mathbb{Y}_{2,n}(\alpha):=\mathbb{G}_{2,n}(\alpha)-\mathbb{G}_{2,n}(\alpha^\star)$ below. Without loss of generality, we can assume $p_\gamma=p_\alpha=1$. First we treat $\mathbb{G}_{1,n}(\cdot)$. The conditions hold if we show \begin{align} &\sup_{n\in\mathbb{N}} E\left[|\sqrt{T_n}\partial_\gamma \mathbb{G}_{1,n}(\gamma^\star)|^K\right]<\infty, \label{mb1}\\ &\sup_{n\in\mathbb{N}} E\left[|\sqrt{T_n}(\partial_\gamma^2\mathbb{G}_{1,n}(\gamma^\star)+\Gamma_\gamma)|^K\right]<\infty, \label{mb2}\\ &\sup_{n\in\mathbb{N}} E\left[\sup_{\gamma\in\Theta_\gamma}|\partial_\gamma^3 \mathbb{G}_{1,n}(\gamma)|^K\right]<\infty,\label{mb3}\\ &\sup_{n\in\mathbb{N}} E\left[\sup_{\gamma\in\Theta_\gamma}|\sqrt{T_n}\left(\mathbb{Y}_{1,n}(\gamma)-\mathbb{Y}_1(\gamma)\right)|^K\right]<\infty\label{mb4}, \end{align} for any $K>0$. The first two derivatives of $\mathbb{G}_{1,n}$ are given by \begin{align*} &\partial_\gamma \mathbb{G}_{1,n}(\gamma)=-\frac{2}{T_n}\sum_{j=1}^n\left\{\frac{\partial_\gamma c_{j-1}(\gamma)}{c_{j-1}(\gamma)}h_n-\frac{\partial_\gamma c_{j-1}(\gamma)}{c^3_{j-1}(\gamma)}(\Delta_j X)^2\right\},\\ &\partial_\gamma^2 \mathbb{G}_{1,n}(\gamma)=-\frac{2}{T_n}\sum_{j=1}^{n}\left\{\frac{\partial_\gamma^{2}c_{j-1}(\gamma)c_{j-1}(\gamma)-(\partial_\gamma c_{j-1}(\gamma))^{2}}{c^2_{j-1}(\gamma)}h_n-\frac{\partial_\gamma^{2}c_{j-1}(\gamma)c_{j-1}(\gamma)-3(\partial_\gamma c_{j-1}(\gamma))^{2}}{c^4_{j-1}(\gamma)}(\Delta_j X)^2\right\}. \end{align*} We further decompose $\partial_\gamma \mathbb{G}_{1,n}(\gamma^\star)$ as \begin{align*} \partial_\gamma \mathbb{G}_{1,n}(\gamma^\star)=-\frac{2}{n}\sum_{j=1}^n\frac{\partial_\gamma c_{j-1}}{c^3_{j-1}}\left(c^2_{j-1}-C^2_{j-1}\right)+\frac{2}{T_n}\sum_{j=1}^n\frac{\partial_\gamma c_{j-1}}{c^3_{j-1}}\left\{(\Delta_j X)^2-h_nC^2_{j-1}\right\}. \end{align*} Since the optimal value $\theta^\star$ is in the interior of $\Theta$, the interchange of the derivative and the integral implies that $\partial_\gamma c(x,\gamma^\star)(c^2(x,\gamma^\star)-C^2(x))/c^3(x,\gamma^\star)$ is centered in the sense that its integral with respect to $\pi_0$ is 0. Thus \cite[Lemma 4.3]{Mas13-1} and \cite[Lemma 5.3]{MasUeh17-1} lead to \eqref{mb1} and \eqref{mb4}. We also have \begin{align*} \partial_\gamma^2 \mathbb{G}_{1,n}(\gamma)&=-\frac{2}{n}\sum_{j=1}^{n}\left\{\frac{\partial_\gamma^{2}c_{j-1}(\gamma)c_{j-1}(\gamma)-(\partial_\gamma c_{j-1}(\gamma))^{2}}{c^2_{j-1}(\gamma)}-\frac{\partial_\gamma^{2}c_{j-1}(\gamma)c_{j-1}(\gamma)-3(\partial_\gamma c_{j-1}(\gamma))^{2}}{c^4_{j-1}(\gamma)}C^2_{j-1}\right\}\\ &+\frac{2}{T_n}\sum_{j=1}^{n}\frac{\partial_\gamma^{2}c_{j-1}(\gamma)c_{j-1}(\gamma)-3(\partial_\gamma c_{j-1}(\gamma))^{2}}{c^4_{j-1}(\gamma)}\left\{(\Delta_j X)^2-h_nC^2_{j-1}\right\}. \end{align*} Again applying \cite[Lemma 4.3]{Mas13-1} and \cite[Lemma 5.3]{MasUeh17-1}, we obtain \eqref{mb2}. Via simple calculation, the third and fourth-order derivatives of $\mathbb{G}_{1,n}$ can be represented as \begin{equation*} \partial_\gamma^i\mathbb{G}_{1,n}(\gamma)=\frac{1}{n}\sum_{j=1}^{n} g_{j-1}^i(\gamma)+\frac{1}{T_n}\sum_{j=1}^{n} \tilde{g}_{j-1}^i(\gamma) \left\{(\Delta_j X)^2-h_n C^2_{j-1}\right\}, \quad \mbox{for} \ i\in\{3,4\}, \end{equation*} with matrix-valued functions $g^i(\cdot,\cdot)$ and $\tilde{g}^i(\cdot,\cdot)$ defined on $\mathbb{R}\times\Theta_\gamma$, and these are of at most polynomial growth with respect to $x\in\mathbb{R}$ uniformly in $\gamma$. Hence \eqref{mb3} follows from Sobolev's inequality (cf. \cite[Theorem 1.4.2]{Ada73}). Thus \cite[Theorem 3-(c)]{Yos11} leads to the tail probability estimates of $\hat{\gamma}_{n}$.
From a Taylor expansion, we get \begin{align*} &\mathbb{Y}_{2,n}(\alpha)\\ &=\frac{1}{T_n}\sum_{j=1}^{n} \frac{2\Delta_j X(a_{j-1}(\alpha)-a_{j-1}(\alpha^\star))+h_n(a_{j-1}^2(\alpha^\star)-a_{j-1}^2(\alpha))}{c^2_{j-1}(\gamma^\star)}\\ &+\left(\int_0^1\frac{1}{(T_n)^{3/2}}\sum_{j=1}^{n} \left\{2\Delta_j X(a_{j-1}(\alpha)-a_{j-1}(\alpha^\star))+h_n(a_{j-1}^2(\alpha^\star)-a_{j-1}^2(\alpha))\right\}\partial_\gamma c^{-2}_{j-1}(\gamma^\star+u(\hat{\gamma}_{n}-\gamma^\star))du\right)\\ &\qquad(\sqrt{T_n}(\hat{\gamma}_{n}-\gamma^\star))\\ &:= \tilde{\mathbb{Y}}_{2,n}(\alpha)+\bar{\mathbb{Y}}_{2,n}(\alpha)(\sqrt{T_n}(\hat{\gamma}_{n}-\gamma^\star)). \end{align*} Sobolev's inequality leads to \begin{align*} &E\left[\left|\sqrt{T_n}\bar{\mathbb{Y}}_{2,n}(\alpha)\right|^K\right]\\ &\leq E\left[\sup_{\gamma\in\Theta_\gamma}\left|\frac{1}{T_n}\sum_{j=1}^{n} \left\{2\Delta_j X(a_{j-1}(\alpha)-a_{j-1}(\alpha^\star))+h_n(a_{j-1}^2(\alpha^\star)-a_{j-1}^2(\alpha))\right\}\partial_\gamma c^{-2}_{j-1}(\gamma)\right|^K\right]\\ &\lesssim \sup_{\gamma\in\Theta_\gamma}\left\{E\left[\left|\frac{1}{T_n}\sum_{j=1}^{n} \left\{2\Delta_j X(a_{j-1}(\alpha)-a_{j-1}(\alpha^\star))+h_n(a_{j-1}^2(\alpha^\star)-a_{j-1}^2(\alpha))\right\}\partial_\gamma c^{-2}_{j-1}(\gamma)\right|^K\right]\right.\\ &\left.\quad\qquad+E\left[\left|\frac{1}{T_n}\sum_{j=1}^{n} \left\{2\Delta_j X(a_{j-1}(\alpha)-a_{j-1}(\alpha^\star))+h_n(a_{j-1}^2(\alpha^\star)-a_{j-1}^2(\alpha))\right\}\partial_\gamma^2 c^{-2}_{j-1}(\gamma)\right|^K\right]\right\}, \end{align*} for $K>1$. The last two terms on the right-hand side are finite by \cite[Lemma 5.3]{MasUeh17-1}, and the moment bounds of the three functions $\sqrt{T_n}\partial_\alpha^i\bar{\mathbb{Y}}_{2,n}(\alpha)$ ($i\in\{1,2,3\}$) can be obtained analogously. Thus, combined with the tail probability estimates of $\hat{\gamma}_{n}$ and Schwarz's inequality, it suffices to check the conditions for \begin{align*} &\tilde{\mathbb{G}}_{2,n}(\alpha):=-\frac{1}{T_n}\sum_{j=1}^{n} \frac{(\Delta_j X-h_na_{j-1}(\alpha))^2}{h_nc^2_{j-1}(\gamma^\star)},\\ &\tilde{\mathbb{Y}}_{2,n}(\alpha):= \frac{1}{T_n}\sum_{j=1}^{n} \frac{2\Delta_j X(a_{j-1}(\alpha)-a_{j-1}(\alpha^\star))+h_n(a_{j-1}^2(\alpha^\star)-a_{j-1}^2(\alpha))}{c^2_{j-1}(\gamma^\star)}, \end{align*} instead of $\mathbb{G}_{2,n}(\alpha)$ and $\mathbb{Y}_{2,n}(\alpha)$, respectively. Since their estimates can be proved in a similar way to the first half, we omit the details. $\square$ \medskip To derive Proposition \ref{EPE}, we prepare the next lemma. For the $L_1$ metric $d(\cdot,\cdot)$ on $\mathbb{R}$, we define the coupling distance $W(\cdot,\cdot)$ between any two probability measures $P$ and $Q$ by \begin{equation*} W(P,Q):=\inf\left\{\int_{\mathbb{R}^2} d(x,y)d\mu(x,y): \mu\in M(P,Q)\right\}=\inf\left\{\int_{\mathbb{R}^2} |x-y|d\mu(x,y): \mu\in M(P,Q)\right\}, \end{equation*} where $M(P,Q)$ denotes the set of all probability measures on $\mathbb{R}^2$ with marginals $P$ and $Q$. $W(\cdot,\cdot)$ is called the probabilistic Kantorovich-Rubinstein metric (or the first Wasserstein metric). The following assertion gives an exponential estimate of $W(P_t(x,\cdot),\pi_0)$: \begin{Lem} If Assumption \ref{Stability} holds, then for any $q>1$, there exists a positive constant $C_q$ such that for all $x\in\mathbb{R}$, \begin{equation*} W(P_t(x,\cdot),\pi_0)\leq C_q\exp(-at)(1+|x|^q).
\end{equation*} \end{Lem} \begin{proof} We introduce the following Lipschitz semi-norm for a suitable real-valued function $f$ on $\mathbb{R}$: \begin{equation*} ||f||_L:=\sup\{|f(x)-f(y)|/|x-y|: x\neq y \ \mbox{in} \ \mathbb{R} \}. \end{equation*} From the Kantorovich-Rubinstein theorem (cf. \cite[Theorem 11.8.2]{Dud02}) and Assumption \ref{Stability}, it follows that for all $x\in\mathbb{R}$, \begin{align*} W(P_t(x,\cdot),\pi_0)&=\sup\left\{\left|\int_\mathbb{R} f(y) \{P_t(x,dy)-\pi_0(dy)\}\right|: ||f||_L\leq1\right\}\\ &=\sup\left\{\left|\int_\mathbb{R} (f(y)-f(0)) \{P_t(x,dy)-\pi_0(dy)\}\right|: ||f||_L\leq1\right\}\\ &\leq \sup\left\{\left|\int_\mathbb{R} h(y) \{P_t(x,dy)-\pi_0(dy)\}\right|: |h(y)|\leq 1+|y|^q\right\}\\ &\leq C_q\exp(-at)(1+|x|^q). \end{align*} \end{proof} \medskip \noindent\textbf{Proof of Proposition \ref{EPE}} \indent It is enough to check the conditions of \cite[Theorem 3.1.1 and Theorem 3.1.3]{VerKul11}; without loss of generality, we assume $p_\gamma=p_\alpha=1$. As was mentioned in the proof of Theorem \ref{TPE}, $g_1(x):=-\partial_\gamma c(x,\gamma^\star)(c^2(x,\gamma^\star)-C^2(x))/c^3(x,\gamma^\star)$ and $g_2(x):=-\partial_\alpha a(x,\alpha^\star)(A(x)-a(x,\alpha^\star))/c^2(x,\gamma^\star)$ are centered. In the following, we give the proof concerning $g_1$ and omit its index $1$ for simplicity. The regularity conditions on the coefficients imply that there exist positive constants $L$ and $D$ such that $|g(x)-g(y)|\leq D (2+|x|^{L}+|y|^{L}) |x-y|$. Making use of the trivial inequalities $|x-y|^{l}\leq|x|^{l}+|y|^{l}$ and $|x|^l\leq (1\vee |x|^{L+l})$ for any $l\in(0,1)$ and $x,y\in\mathbb{R}$, we have \begin{equation*} \sup_{x\neq y}\frac{|g(x)-g(y)|}{(2+|x|^{L+1-1/p}+|y|^{L+1-1/p}) |x-y|^{1/p}}<\infty, \quad \mbox{for any} \ p>1. \end{equation*} Recall that we put $h_L(x)=1+|x|^L$ in Assumption \ref{Stability}. The inequality \eqref{Ergodicity} gives \begin{equation*} \int_\mathbb{R} h_L(y) P_t(x,dy) \leq ||P_t(x,\cdot)-\pi_0(\cdot)||_{h_L}+\int_\mathbb{R} (1+|y|^L) \pi_0(dy)\leq \left(C_L+\int_\mathbb{R} (1+|y|^L) \pi_0(dy)\right) h_L(x). \end{equation*} We write $L'=L+1-1/p$ for brevity. Building on this estimate and the previous lemma, the conditions of \cite[Theorem 3.1.1 and Theorem 3.1.3]{VerKul11} are satisfied with \begin{align*} &p=p, q=\frac{p}{p-1}, d(x,y)=|x-y|, r(t)=\exp(-at), \phi(x)=1+|x|^{L'},\\ &\psi(x)=2^{q-1}\left(C_{qL'}+\int_\mathbb{R} h_{qL'}(y)\pi_0(dy)\right) h_{qL'}(x),\\ &\chi(x)=2^{q^2-1}\left(C_{qL'}+\int_\mathbb{R} h_{qL'}(y) \pi_0(dy)\right)^q\left(C_{q^2L'}+\int_\mathbb{R} h_{q^2L'}(y)\pi_0(dy)\right)h_{q^2L'}(x), \end{align*} where these symbols correspond to the ones used in \cite{VerKul11}. As for $g_2$, the conditions can be checked in the same way. Hence the desired result follows. $\square$ \medskip To derive the asymptotic normality of $\hat{\theta}_{n}$, the following CLT-type theorem for stochastic integrals with respect to Poisson random measures will play a central role: \begin{Lem}\label{CLTPRM} Let $N(ds,dz)$ be a Poisson random measure associated with a one-dimensional L\'{e}vy process defined on a stochastic basis $(\Omega, \mathcal{F},(\mathcal{F}_t)_{t>0},P)$ whose L\'{e}vy measure is $\nu_0$.
Assume that a continuous vector-valued function $f$ on $\mathbb{R}_+\times\mathbb{R}\times\mathbb{R}$ and an $\mathcal{F}_t$-predictable process $H_t$ satisfy: \begin{enumerate} \item $E\left[\int_0^T\int_\mathbb{R} |f(T,H_s,z)|^k\nu_0(dz)ds\right]<\infty$ for all $T>0$ and $k=2,4$, and there exists a nonnegative definite matrix $C$ such that $E\left[\int_0^T\int_\mathbb{R} f(T,H_s,z)^{\otimes2}\nu_0(dz)ds\right]\to C$ as $T\to\infty$; \item there exists $\delta>0$ such that $E\left[\int_0^T\int_\mathbb{R} |f(T,H_s,z)|^{2+\delta}\nu_0(dz)ds\right]\to0$ as $T\to\infty$. \end{enumerate} Then $\int_0^T\int_\mathbb{R} f(T,H_s,z)\tilde{N}(ds,dz)\cil N(0,C)$ as $T\to\infty$ for the associated compensated Poisson random measure $\tilde{N}(ds,dz)$. \end{Lem} \begin{proof} By the Cram\'{e}r-Wold device, it is sufficient to consider the one-dimensional case. This proof is almost the same as that of \cite[Theorem 14.5.I]{DalVer08}. For notational brevity, we set \begin{equation*} X_1(t):=\int_0^{t}\int_\mathbb{R} f(T,H_s,z)\tilde{N}(ds,dz), \quad X_2(t):=\int_0^{t}\int_\mathbb{R} |f(T,H_s,z)|^2\nu_0(dz)ds. \end{equation*} We introduce the stopping time $S:=\inf\{t>0: X_2(t)\geq C\}$. Note that $X_2(S)=C$ because $X_2(t)$ is continuous. We define a random function $\zeta(u,t)$ by \begin{equation*} \zeta(u,t)=\exp\left\{iuX_1(t\wedge S)+\frac{u^2}{2}X_2(t\wedge S)\right\}. \end{equation*} Applying It\^{o}'s formula, we obtain \begin{align*} \zeta(u,T)&=1+iu\int_0^{T\wedge S}\zeta(u,s-)dX_1(s)+\frac{u^2}{2}\int_0^{T\wedge S}\zeta(u,s-)dX_2(s)\\ &+\sum_{0<s\leq T\wedge S}(\zeta(u,s-)\exp\left\{iu\Delta X_1(s)\right\}-\zeta(u,s-)-iu\zeta(u,s-)\Delta X_1(s))\\ &=1+\int_0^{T\wedge S}\int_\mathbb{R}\zeta(u,s-)\left(\exp\left\{iuf(T,H_s,z)\right\}-1\right)\tilde{N}(ds,dz)\\ &+\int_0^{T\wedge S}\int_\mathbb{R}\zeta(u,s-)\left(\exp\left\{iuf(T,H_s,z)\right\}-1-iuf(T,H_s,z)+\frac{u^2}{2}|f(T,H_s,z)|^2\right)\nu_0(dz)ds. \end{align*} For later use, we here present the following elementary inequality (cf. \cite{Dur10}): for all $u\in\mathbb{R}$ and $n\in\mathbb{N}\cup\{0\}$, \begin{equation}\label{yu:ele} \left|\exp(iu)-\sum_{j=0}^n\frac{(iu)^j}{j!}\right|\leq \frac{|u|^{n+1}}{(n+1)!}\wedge\frac{2|u|^n}{n!}. \end{equation} By the definition of $S$, we have $|\zeta(u,T)|\leq\exp\left\{u^2C/2\right\}$. Since $\int_0^T\int_\mathbb{R}\zeta(u,s-)\left(\exp\left\{iuf(T,H_s,z)\right\}-1\right)\tilde{N}(ds,dz)$ is an $L_2$-martingale (cf. \cite[Section 4]{App09}) by these estimates, the optional sampling theorem implies that \begin{equation*} E\left[\int_0^{T\wedge S}\int_\mathbb{R}\zeta(u,s-)\left(\exp\left\{iuf(T,H_s,z)\right\}-1\right)\tilde{N}(ds,dz)\right]=0. \end{equation*} Next we show that \begin{equation*} E\left[\int_0^{T\wedge S}\int_\mathbb{R}\zeta(u,s-)\left(\exp\left\{iuf(T,H_s,z)\right\}-1-iuf(T,H_s,z)+\frac{u^2}{2}|f(T,H_s,z)|^2\right)\nu_0(dz)ds\right]\to0. \end{equation*} Again using the above estimates, we have \begin{align*} &\left|E\left[\int_0^{T\wedge S}\int_\mathbb{R}\zeta(u,s-)\left(\exp\left\{iuf(T,H_s,z)\right\}-1-iuf(T,H_s,z)+\frac{u^2}{2}|f(T,H_s,z)|^2\right)\nu_0(dz)ds\right]\right|\\ &\leq E\left[\int_0^{T\wedge S}\int_\mathbb{R}\exp\left\{\frac{u^2}{2}C\right\}\left(\frac{|uf(T,H_s,z)|^3}{6}\wedge|uf(T,H_s,z)|^2\right)\nu_0(dz)ds\right]\\ &\leq C_\delta\exp\left\{\frac{u^2}{2}C\right\}E\left[\int_0^T\int_\mathbb{R} |uf(T,H_s,z)|^{2+\delta}\nu_0(dz)ds\right]\to0, \end{align*} where $C_\delta$ is a positive constant such that $|x|^3/6\wedge|x|^2\leq C_\delta |x|^{2+\delta}$ for all $x\in\mathbb{R}$.
Finally, we observe that $X_1(T\wedge S)-X_1(T)\cip0$. In view of Lenglart's inequality and the isometry property of stochastic integrals with respect to Poisson random measures (cf. \cite[Section 4]{App09}), it suffices to show $E[\int_{T\wedge S}^T \int_\mathbb{R} |f(T,H_s,z)|^2\nu_0(dz)ds]\to0$. However, the latter convergence is clear from condition (1). Hence the proof is complete. \end{proof} \medskip Next we show the following lemma, which gives a fundamental small-time moment estimate of $X$: \begin{Lem}\label{FEV} Under Assumptions \ref{Moments} and \ref{Smoothness}, it follows that \begin{equation}\label{ME1} E^{j-1}[|X_s-X_{j-1}|^p]\lesssim h_n (1+|X_{j-1}|^K), \end{equation} for any $p\in(1\vee\beta,2)$ and $s\in(t_{j-1},t_j]$. \end{Lem} \begin{proof} Recall that $\int |z|^p\nu_0(dz)<\infty$ from Assumption \ref{Moments}. By Lipschitz continuity of the coefficients and \cite[Theorem 1.1]{Fig08}, it follows that \begin{align*} &E^{j-1}[|X_s-X_{j-1}|^p]\\ &\lesssim E^{j-1}\left[\left|\int_{t_{j-1}}^s (A_u-A_{j-1})du+\int_{t_{j-1}}^s(C_{u-}-C_{j-1})dZ_u\right|^p+h_n^p |A_{j-1}|^p+h_n|C_{j-1}|^p\int_\mathbb{R} |z|^p\nu_0(dz)\right]+o_p(h_n)\\ &\lesssim h_n(1+|X_{j-1}|^K+o_p(1))+h_n^{p-1}\int_{t_{j-1}}^{t_j} E^{j-1}[|X_s-X_{j-1}|^p] ds+E^{j-1}\left[\left|\int_{t_{j-1}}^{t_j}(C_{s-}-C_{j-1})dZ_s\right|^p\right]. \end{align*} Applying Burkholder-Davis-Gundy's inequality (cf. \cite[Theorem 48]{Pro04-2}), we have \begin{align*} E^{j-1}\left[\left|\int_{t_{j-1}}^{t_j}(C_{s-}-C_{j-1})dZ_s\right|^p\right]&\lesssim E^{j-1}\left[\left(\int_{t_{j-1}}^{t_j}\int_\mathbb{R} (C_{s-}-C_{j-1})^2z^2N(ds,dz)\right)^{p/2}\right]\\ &= E^{j-1}\left[\left(\sum_{t_{j-1}< s\leq t_j} (C_{s-}-C_{j-1})^2(Z_s-Z_{s-})^2\right)^{p/2}\right]\\ &\leq E^{j-1}\left[\sum_{t_{j-1}< s\leq t_j} |C_{s-}-C_{j-1}|^p|Z_s-Z_{s-}|^p\right]\\ &\lesssim \int_{t_{j-1}}^{t_j} E^{j-1}[|X_s-X_{j-1}|^p]ds\int_\mathbb{R} |z|^p\nu_0(dz), \end{align*} for the Poisson random measure $N(ds,dz)$ associated with $Z$, where the last bound follows from the compensation formula and the Lipschitz continuity of $C$. Hence Gronwall's inequality gives \eqref{ME1}. \end{proof} \medskip \noindent\textbf{Proof of Theorem \ref{AN}} \indent According to the Cram\'{e}r-Wold device, it is enough to consider the case $p_\gamma=p_\alpha=1$. From estimates similar to those used in the proof of Theorem \ref{TPE}, we have \begin{align} \sqrt{T_n}\partial_\gamma\mathbb{G}_{1,n}(\gamma^\star)&=-\frac{2}{\sqrt{T_n}}\sum_{j=1}^{n}\left\{\frac{\partial_\gamma c_{j-1}}{c^3_{j-1}}(h_nc^2_{j-1}-(\Delta_j X)^2)\right\}\nonumber\\ &=-\frac{2}{\sqrt{T_n}}\sum_{j=1}^{n}\left\{\frac{\partial_\gamma c_{j-1}}{c^3_{j-1}}(h_nc^2_{j-1}-C^2_{j-1}(\Delta_j Z)^2)\right\}+o_p(1)\nonumber\\ &=-\frac{2}{\sqrt{T_n}}\sum_{j=1}^{n}\left\{\frac{\partial_\gamma c_{j-1}}{c^3_{j-1}}C^2_{j-1}(h_n-(\Delta_j Z)^2)\right\}-\frac{2}{\sqrt{T_n}}\int_0^{T_n}\frac{\partial_\gamma c_s}{c^3_s}(c_s^2-C_s^2)ds\nonumber\\ &-\frac{2}{\sqrt{T_n}}\sum_{j=1}^{n}\int_{t_{j-1}}^{t_j}\left\{\frac{\partial_\gamma c_{j-1}}{c^3_{j-1}}(c_{j-1}^2-C_{j-1}^2)-\frac{\partial_\gamma c_s}{c^3_s}(c_s^2-C_s^2)\right\}ds+o_p(1)\nonumber\\ &=\mathbb{F}_{1,n}+\mathbb{F}_{2,n}+\mathbb{F}_{3,n}+o_p(1) \label{dec}. \end{align} We evaluate each term separately below.
Rewriting $\mathbb{F}_{1,n}$ in a stochastic integral form via It\^{o}'s formula, we have \begin{align*} \mathbb{F}_{1,n}&=-\frac{2}{\sqrt{T_n}}\sum_{j=1}^{n} \int_{t_{j-1}}^{t_j}\int_\mathbb{R} \frac{\partial_\gamma c_{s-}}{c^3_{s-}}C_{s-}^2z^2\tilde{N}(ds,dz)-\frac{2}{\sqrt{T_n}}\sum_{j=1}^{n} \int_{t_{j-1}}^{t_j}\int_\mathbb{R} \left(\frac{\partial_\gamma c_{j-1}}{c_{j-1}^3}C_{j-1}^2-\frac{\partial_\gamma c_{s-}}{c^3_{s-}}C_{s-}^2\right)z^2\tilde{N}(ds,dz)\\ &-\frac{4}{\sqrt{T_n}}\sum_{j=1}^{n}\frac{\partial_\gamma c_{j-1}}{c_{j-1}^3}C_{j-1}^2\int_{t_{j-1}}^{t_j} (Z_{s-}-Z_{j-1})dZ_s, \end{align*} for the compensated Poisson random measure $\tilde{N}(ds,dz)$ associated with $Z$. Using Burkholder's inequality and the isometry property, it follows that \begin{align*} &E\left[\left(\frac{1}{\sqrt{T_n}}\sum_{j=1}^{n} \int_{t_{j-1}}^{t_j}\int_\mathbb{R} \left(\frac{\partial_\gamma c_{j-1}}{c_{j-1}^3}C_{j-1}^2-\frac{\partial_\gamma c_{s-}}{c^3_{s-}}C_{s-}^2\right)z^2\tilde{N}(ds,dz)\right)^2\right]\\ &\lesssim \frac{1}{T_n}\sum_{j=1}^{n} E\left[\left(\int_{t_{j-1}}^{t_j}\int_\mathbb{R} \left(\frac{\partial_\gamma c_{j-1}}{c_{j-1}^3}C_{j-1}^2-\frac{\partial_\gamma c_{s-}}{c^3_{s-}}C_{s-}^2\right)z^2\tilde{N}(ds,dz)\right)^2\right]\\ &\lesssim \frac{1}{T_n}\sum_{j=1}^{n} \int_{t_{j-1}}^{t_j} E\left[\left(\int_0^1\partial_x\left(\frac{\partial_\gamma c}{c^3}C^2\right)(X_{j-1}+u(X_s-X_{j-1}))du\right)^2(X_s-X_{j-1})^2\right]ds\\ &\lesssim \frac{1}{T_n}\sum_{j=1}^{n}\int_{t_{j-1}}^{t_j} \sqrt{\sup_{t>0} E[(1+|X_t|^{K})^2]}\sqrt{E[(X_s-X_{j-1})^4]}ds\\ &\lesssim \sqrt{h_n}, \end{align*} and that \begin{equation*} E\left[\left|\int_{t_{j-1}}^{t_j} (Z_{s-}-Z_{j-1})dZ_s\right|^2\right]\lesssim \int_{t_{j-1}}^{t_j} E[|Z_{s-t_{j-1}}|^2]ds\lesssim h_n^2. \end{equation*} Hence \begin{equation*} \mathbb{F}_{1,n}=-\frac{2}{\sqrt{T_n}}\sum_{j=1}^{n} \int_{t_{j-1}}^{t_j}\int_\mathbb{R} \frac{\partial_\gamma c_{s-}}{c^3_{s-}}C_{s-}^2z^2\tilde{N}(ds,dz)+o_p(1). \end{equation*} Let us turn to $\mathbb{F}_{2,n}$. Let $f_{i,t}:=f_i(X_t)$ for $i=1,2$, and in particular $f_{i,j}:=f_i(X_{t_j})$. From Proposition \ref{EPE}, we obtain \begin{align*} \mathbb{F}_{2,n}&=-\frac{2}{\sqrt{T_n}}\sum_{j=1}^{n} \left(f_{1,j}- f_{1,j-1}+\int_{t_{j-1}}^{t_j} \frac{\partial_\gamma c_s}{c^3_s}(c_s^2-C_s^2)ds\right)-\frac{2}{\sqrt{T_n}}(f_{1,n}-f_{1,0})\\ &=-\frac{2}{\sqrt{T_n}}\sum_{j=1}^{n} \left(f_{1,j}- f_{1,j-1}+\int_{t_{j-1}}^{t_j} \frac{\partial_\gamma c_s}{c^3_s}(c_s^2-C_s^2)ds\right)+o_p(1). \end{align*} For abbreviation, we simply write $\xi_{1,j}(t)=f_{1,t}- f_{1,j-1}+\int_{t_{j-1}}^t \partial_\gamma c_s(c_s^2-C_s^2)/c^3_sds$ below. According to Proposition \ref{EPE}, the weighted H\"{o}lder continuity of $f_1$, and Lemma \ref{FEV}, $\{\xi_{1,j}(t_{j-1}+t), \mathcal{F}_{t_{j-1}+t}:t\in[0,h_n]\}$ turns out to be an $L_2$-martingale. Thus the martingale representation theorem \cite[Theorem I\hspace{-.1em}I\hspace{-.1em}I. 4. 34]{JacShi03} implies that there exists a predictable process $s\mapsto \tilde{\xi}_{1,j}(s,z)$ such that \begin{equation*} \xi_{1,j}(t)=\int_{t_{j-1}}^t\int_\mathbb{R} \tilde{\xi}_{1,j}(s,z)\tilde{N}(ds,dz). \end{equation*} Hence the continuous martingale component of $\xi_{1,j}$ is 0. By the properties of $f_1$, we can define the stochastic integral $\int_{t_{j-1}}^t\int_\mathbb{R} \left(f_1(X_{s-}+C_{s-}z)-f_1(X_{s-})\right)\tilde{N}(ds,dz)$ on $t\in[t_{j-1},t_j]$, and this process is also an $L_2$-martingale with respect to $\{\mathcal{F}_{t_{j-1}+t}:t\in[0,h_n]\}$. Utilizing \cite[Theorem I.4.
52]{JacShi03} and \cite[Corollary I\hspace{-.1em}I. 6. 3]{Pro04-2}, we have \begin{align*} & E\left[\left|\frac{1}{\sqrt{T_n}}\sum_{j=1}^{n} \left\{\xi_{1,j}(t_j)-\int_{t_{j-1}}^{t_j}\int_\mathbb{R} \left(f_1(X_{s-}+C_{s-}z)-f_1(X_{s-})\right)\tilde{N}(ds,dz) \right\}\right|^2\right]\\ & \lesssim \frac{1}{T_n}\sum_{j=1}^{n} E\left[\left|\xi_{1,j}(t_j)-\int_{t_{j-1}}^{t_j}\int_\mathbb{R} \left(f_1(X_{s-}+C_{s-}z)-f_1(X_{s-})\right)\tilde{N}(ds,dz) \right|^2\right]\\ & = \frac{1}{T_n}\sum_{j=1}^{n} E\left[\left[\xi_{1,j}(\cdot)-\int_{t_{j-1}}^\cdot\int_\mathbb{R} \left(f_1(X_{s-}+C_{s-}z)-f_1(X_{s-})\right)\tilde{N}(ds,dz)\right]_{t_j}\right]=0. \end{align*} Here $[Y_\cdot]_t$ denotes the quadratic variation of any semimartingale $Y$ at time $t$, and we used Burkholder's inequality for martingale differences in passing from the first line to the second. By similar estimates, we have $\mathbb{F}_{3,n}=o_p(1)$. Having these arguments in hand, it turns out that \begin{equation*} \sqrt{T_n}\partial_\gamma\mathbb{G}_{1,n}(\gamma^\star)=-\frac{2}{\sqrt{T_n}}\int_0^{T_n} \int_\mathbb{R} \left(\frac{\partial_\gamma c_{s-}}{c^3_{s-}}C_{s-}^2z^2+f_1(X_{s-}+C_{s-}z)-f_1(X_{s-})\right)\tilde{N}(ds,dz)+o_p(1). \end{equation*} We can deduce from Assumption \ref{Smoothness} and Proposition \ref{EPE} that there exist positive constants $K,K',K''$ and $\epsilon_0<1\wedge(2-\beta)$ such that for all $z\in\mathbb{R}$ \begin{align*} &\sup_t \left\{\frac{1}{t}\int_0^t E\left[\left(\frac{\partial_\gamma c_{s}}{c^3_{s}}C_{s}^2z^2+f_1(X_{s}+C_{s}z)-f_1(X_{s})\right)^2\right]ds\right\}\\ &\lesssim \sup_t \left\{\frac{1}{t}\int_0^t (|z|^{2-\epsilon_0}\vee z^4)(1+\sup_t E[|X_t|^K]+(1+\sup_t E[|X_t|^{K'}])|z|^{K''})ds\right\}\\ &\lesssim (|z|^{2-\epsilon_0}\vee z^4)(1+|z|^{K''}), \end{align*} and the last term is $\nu_0$-integrable. Then, there exist positive constants $K$ and $K'$ (possibly taking different values from the previous ones) such that for any $z\in\mathbb{R}$, \begin{align*} &\left|\frac{1}{t}\int_0^t E\left[\left(\frac{\partial_\gamma c_{s}}{c^3_{s}}C_{s}^2z^2+f_1(X_{s}+C_{s}z)-f_1(X_{s})\right)^2\right]ds\right.\\ &\qquad \qquad\qquad\left.-\int_\mathbb{R}\left(\frac{\partial_\gamma c(y,\gamma^\star)}{c^3(y,\gamma^\star)}C^2(y)z^2+f_1(y+C(y)z)-f_1(y)\right)^2\pi_0(dy)\right|\\ &\leq \frac{1}{t}\int_0^t\int_\mathbb{R} \int_\mathbb{R} \left|\left(\frac{\partial_\gamma c(y,\gamma^\star)}{c^3(y,\gamma^\star)}C^2(y)z^2+f_1(y+C(y)z)-f_1(y)\right)^2 \right| (P_s(x,dy)-\pi_0(dy)) \eta(dx)ds\\ &\lesssim (|z|^{2-\epsilon_0}\vee z^4)(1+ |z|^{K'}) \frac{1}{t}\int_0^t\int_\mathbb{R} ||P_s(x,\cdot)-\pi_0(\cdot)||_{h_K}\eta(dx)ds\\ &\to0. \end{align*} Thus the dominated convergence theorem and the isometry property give \begin{align*} &\lim_{n\to\infty} E\left[\left(\frac{1}{\sqrt{T_n}}\int_0^{T_n} \int_\mathbb{R} \left(\frac{\partial_\gamma c_{s-}}{c^3_{s-}}C_{s-}^2z^2+f_1(X_{s-}+C_{s-}z)-f_1(X_{s-})\right)\tilde{N}(ds,dz)\right)^2\right]\\ &=\lim_{n\to\infty}\frac{1}{T_n}\int_0^{T_n} \int_\mathbb{R} E\left[\left(\frac{\partial_\gamma c_{s-}}{c^3_{s-}}C_{s-}^2z^2+f_1(X_{s-}+C_{s-}z)-f_1(X_{s-})\right)^2\right]\nu_0(dz)ds\\ &=\frac{1}{4}\Sigma_\gamma. \end{align*} It follows from Assumption \ref{Stability} and Proposition \ref{EPE} that \begin{equation*} \lim_{n\to\infty}E\left[\int_0^{T_n}\int_\mathbb{R}\left\{\frac{1}{\sqrt{T_n}}\left(\frac{\partial_\gamma c_{s-}}{c^3_{s-}}C_{s-}^2z^2+f_1(X_{s-}+C_{s-}z)-f_1(X_{s-})\right)\right\}^{2+K} \nu_0(dz)ds\right]=0.
\end{equation*} From a Taylor expansion around $\gamma^\star$, $\sqrt{T_n}\partial_\alpha\mathbb{G}_{2,n}(\alpha^\star)$ is decomposed as: \begin{align*} \sqrt{T_n}\partial_\alpha\mathbb{G}_{2,n}(\alpha^\star)&=\frac{2}{\sqrt{T_n}}\sum_{j=1}^{n}\frac{\partial_\alpha a_{j-1}}{c^2_{j-1}}(\Delta_j X-h_na_{j-1})+\frac{2}{T_n}\sum_{j=1}^{n}\partial_\alpha a_{j-1}(\Delta_j X-h_na_{j-1}) \partial_\gamma c^{-2}_{j-1}(\sqrt{T_n}(\hat{\gamma}_{n}-\gamma^\star))\\ &+\left(\int_0^1\frac{2}{(T_n)^{3/2}}\sum_{j=1}^{n}\partial_\alpha a_{j-1}(\Delta_j X-h_na_{j-1}) \partial_\gamma^2 c^{-2}_{j-1}(\gamma^\star+u(\hat{\gamma}_{n}-\gamma^\star))du\right)(\sqrt{T_n}(\hat{\gamma}_{n}-\gamma^\star))^2. \end{align*} Sobolev's inequality and the tail probability estimates of $\hat{\gamma}_{n}$ imply that the third term on the right-hand side is $o_p(1)$. Hence an argument similar to that of the first half leads to \begin{align*} &\sqrt{T_n}\partial_\alpha\mathbb{G}_{2,n}(\alpha^\star)-\frac{2}{T_n}\sum_{j=1}^{n}\partial_\alpha a_{j-1}(\Delta_j X-h_na_{j-1}) \partial_\gamma c^{-2}_{j-1}(\sqrt{T_n}(\hat{\gamma}_{n}-\gamma^\star)) \\ &=\frac{2}{\sqrt{T_n}}\sum_{j=1}^{n}\frac{\partial_\alpha a_{j-1}}{c^2_{j-1}}(\Delta_j X-h_na_{j-1})+o_p(1)\\ &=\frac{2}{\sqrt{T_n}}\sum_{j=1}^{n}\frac{\partial_\alpha a_{j-1}}{c^2_{j-1}}C_{j-1}\Delta_j Z +\frac{2}{\sqrt{T_n}}\int_0^{T_n} \frac{\partial_\alpha a_s}{c^2_s}(A_s-a_s)ds+o_p(1)\\ &=\frac{2}{\sqrt{T_n}}\sum_{j=1}^{n}\int_{t_{j-1}}^{t_j}\int_\mathbb{R}\frac{\partial_\alpha a_{s-}}{c^2_{s-}}C_{s-}z\tilde{N}(ds,dz)+\frac{2}{\sqrt{T_n}}\sum_{j=1}^{n}\left(f_{2,j}-f_{2,j-1}+\int_{t_{j-1}}^{t_j} \frac{\partial_\alpha a_s}{c^2_s}(A_s-a_s)ds\right)+o_p(1)\\ &=\frac{2}{\sqrt{T_n}}\int_0^{T_n}\int_\mathbb{R}\left(\frac{\partial_\alpha a_{s-}}{c^2_{s-}}C_{s-}z+f_2(X_{s-}+C_{s-}z)-f_2(X_{s-})\right)\tilde{N}(ds,dz)+o_p(1), \end{align*} and we have \begin{align*} &\lim_{n\to\infty}E\left[\left(\frac{1}{\sqrt{T_n}}\int_0^{T_n}\int_\mathbb{R}\left(\frac{\partial_\alpha a_{s-}}{c^2_{s-}}C_{s-}z+f_2(X_{s-}+C_{s-}z)-f_2(X_{s-})\right)\tilde{N}(ds,dz)\right)^2\right]=\frac{1}{4}\Sigma_\alpha,\\ &\lim_{n\to\infty}E\left[\int_0^{T_n}\int_\mathbb{R}\left\{\frac{1}{\sqrt{T_n}}\left(\frac{\partial_\alpha a_{s-}}{c^2_{s-}}C_{s-}z+f_2(X_{s-}+C_{s-}z)-f_2(X_{s-})\right)\right\}^{2+K}\nu_0(dz)ds\right]=0. \end{align*} From the isometry property and the trivial identity $xy=\left\{(x+y)^2-(x-y)^2\right\}/4$ for any $x,y\in\mathbb{R}$, it follows that \begin{align*} \lim_{n\to\infty} &E\left[\left(\frac{1}{\sqrt{T_n}}\int_0^{T_n} \int_\mathbb{R} \left(\frac{\partial_\gamma c_{s-}}{c^3_{s-}}C_{s-}^2z^2+f_1(X_{s-}+C_{s-}z)-f_1(X_{s-})\right)\tilde{N}(ds,dz)\right)\right.\\ &\left.\times\left(\frac{1}{\sqrt{T_n}}\int_0^{T_n}\int_\mathbb{R}\left(\frac{\partial_\alpha a_{s-}}{c^2_{s-}}C_{s-}z+f_2(X_{s-}+C_{s-}z)-f_2(X_{s-})\right)\tilde{N}(ds,dz)\right)\right]=-\frac{1}{4}\Sigma_{\alpha\gamma}.
\end{align*} Hence the moment estimates in the proof of Theorem \ref{TPE}, Lemma \ref{CLTPRM}, and Taylor's formula yield that \begin{align*} &\ \quad\sqrt{T_n}\begin{pmatrix}-\partial_\gamma^2 \mathbb{G}_{1,n}(\gamma^\star)&0\\ -\frac{2}{T_n}\sum_{j=1}^{n}\partial_\alpha a_{j-1}(\Delta_j X-h_na_{j-1}) \partial_\gamma c^{-2}_{j-1}& -\partial_\alpha^2 \mathbb{G}_{2,n}(\alpha^\star)\end{pmatrix}\begin{pmatrix}\hat{\gamma}_{n}-\gamma^\star \\ \hat{\alpha}_{n}-\alpha^\star\end{pmatrix}\\ &=\sqrt{T_n}\begin{pmatrix}\partial_\gamma \mathbb{G}_{1,n}(\gamma^\star)\\\partial_\alpha \mathbb{G}_{2,n}(\alpha^\star)\end{pmatrix}+o_p(1)\overset{\mathcal{L}}\longrightarrow N(0,\Sigma). \end{align*} To achieve the desired result, it suffices to show \begin{align*} -\partial_\gamma^2 \mathbb{G}_{1,n}(\gamma^\star)\overset{p}\to \Gamma_\gamma,\ -\partial_\alpha^2 \mathbb{G}_{2,n}(\alpha^\star)\overset{p}\to \Gamma_\alpha,\ \mbox{and}\ -\frac{2}{T_n}\sum_{j=1}^{n}\partial_\alpha a_{j-1}(\Delta_j X-h_na_{j-1}) \partial_\gamma c^{-2}_{j-1}\overset{p}\to\Gamma_{\alpha\gamma}. \end{align*} However, the first two convergences are straightforward from the proof of Theorem \ref{TPE}, and the last one follows from the ergodic theorem. Thus the proof is complete. $\square$ \subsection*{Acknowledgement} The author thanks Professor H. Masuda for his constructive comments. He is also grateful for the valuable advice of Professor A. M. Kulik about extended Poisson equations. This work was supported by JST CREST Grant Number JPMJCR14D7, Japan.
\section{Introduction} \label{intro} Gauge symmetries have been crucial in the development of theoretical models of fundamental interactions~\cite{Wilson-74,Weinberg-book,ZJ-book}. They also play a relevant role in statistical and condensed-matter physics~\cite{Wen-book, Anderson-15, GASVW-18, Sachdev-19, SSST-19, GSF-19, Wetterich-17, FNN-80, INT-80}. For instance, it has been suggested that they may emerge as low-energy effective symmetries of many-body systems. However, because of the presence of microscopic gauge-symmetry violations, it is crucial to understand the role of the gauge-symmetry breaking (GSB) perturbations. One would like to understand whether these perturbations do not change the low-energy dynamics---in this case the gauge-symmetric theory would describe the asymptotic dynamics even in the presence of some (possibly small) violations---or whether even small perturbations can destabilize the emergent gauge model---in which case a gauge-invariant dynamics would be observed only if the model parameters are appropriately tuned. This issue is also crucial in the context of analog quantum simulations, for example, when controllable atomic systems are engineered to effectively reproduce the dynamics of gauge-symmetric theoretical models, with the purpose of obtaining physical information from the experimental study of their quantum dynamics in the laboratory. Several proposals of artificial gauge-symmetry realizations have been reported, see, e.g., the review articles~\cite{ZCR-15,Banuls-etal-20,BC-20} and references therein, in which the gauge symmetry is expected to effectively emerge in the low-energy dynamics. Previous studies of the critical behavior (or continuum limit) of three-dimensional (3D) lattice gauge theories with scalar matter have shown the emergence of two different scenarios. One possibility is that scalar-matter and gauge-field correlations are both critical at the transition point. This behavior is related to the existence of a charged fixed point (FP) in the renormalization-group (RG) flow of the corresponding continuum gauge field theory \cite{ZJ-book}. This is realized in some of the transitions observed in the 3D lattice Abelian-Higgs (AH) model with noncompact gauge fields \cite{BPV-21}. Alternatively, it is possible that only scalar-matter correlations are critical at the transition. The gauge variables do not display long-range critical correlations, although their presence is crucial to identify the gauge-invariant scalar-matter degrees of freedom that develop the critical behavior. This typically occurs in lattice AH models with compact gauge variables. In Ref.~\cite{BPV-21-gsb} we studied GSB perturbations in the 3D lattice AH model with noncompact gauge fields, at transitions controlled by the charged FP of the RG flow of 3D electrodynamics with multicomponent charged scalar fields~\cite{HLM-74,FH-96,IZMHS-19,ZJ-book,MZ-03} (also called AH field theory). In that case a photon-mass GSB term, however small, gives rise to a drastic departure from the gauge-invariant continuum limit of the statistical lattice gauge theory, driving the system towards a different critical behavior (continuum limit). This is due to the fact that a photon-mass term qualitatively changes the phase diagram of the noncompact lattice AH theory~\cite{BPV-21}. The Coulomb phase is no longer present, and therefore the nature of the transition to the Higgs phase, previously controlled by the charged FP, changes.
One ends up with a new critical behavior with noncritical gauge fields. At transitions where gauge fields are not critical, the role of gauge symmetries is more subtle, but still crucial to determine the continuum limit. Even if gauge correlations are not critical, gauge fields prevent non-gauge-invariant correlators from acquiring nonvanishing vacuum expectation values and therefore from developing long-range order. Therefore, gauge symmetries effectively reduce the number of degrees of freedom of the matter-field critical modes. The lattice CP$^{N-1}$ model~\cite{PV-19-CP,TIM-05,TIM-06} or, more generally, the lattice AH model with compact gauge fields~\cite{PV-19-AH3d}, where an $N$-component complex scalar field is gauge-invariantly coupled to a compact U(1) gauge field associated with the links of the lattice, has transitions of this type. In these models the role of the U(1) gauge symmetry is that of hindering some scalar degrees of freedom, i.e., those related to a local phase, from becoming critical. As a consequence, the critical behavior or continuum limit is driven by the condensation of a gauge-invariant tensor scalar operator, and the corresponding continuum field theory is associated with a Landau-Ginzburg-Wilson (LGW) field theory with a tensor field, but without gauge fields. For $N=2$, the LGW approach predicts that the continuum limit of the CP$^1$ model (or of the $N=2$ lattice compact AH model) is equivalently described by the LGW field theory for a three-component real vector field with O(3) global symmetry. In this paper we investigate the effects of perturbations breaking the gauge symmetry when gauge-field correlations are not critical and the gauge symmetry acts only to prevent some of the matter degrees of freedom from becoming critical. For this purpose we consider a lattice AH model with compact gauge fields, focusing on the case of two scalar components. We consider the most natural and simplest GSB term, adding to the Hamiltonian (or action, in the high-energy physics terminology) a term that is linear in the gauge link variable. This GSB term is similar to the photon mass term considered in noncompact lattice AH models, and indeed it is equivalent to it in the limit of small noncompact field $A_{{\bm x},\mu}$, as can be immediately seen by expanding the exponential relation between the compact and noncompact gauge fields, i.e., $\lambda_{{\bm x},\mu}=\exp(iA_{{\bm x},\mu})$. At variance with what was obtained in Ref.~\cite{BPV-21-gsb}, where we considered GSB perturbations at transitions controlled by charged FPs, in the present case the linear GSB perturbation does not change the continuum limit, at least for a sufficiently small finite strength of the perturbation. In Fig.~\ref{phdiagn2} we anticipate a sketch of the phase diagram of the lattice CP$^1$ model in the presence of a GSB term whose strength is controlled by the parameter $w$. It presents three different phases: a high-temperature (small $J$) disordered phase, and two low-temperature (large $J$) phases with different orderings. When the GSB perturbation is small, the low-temperature phase is qualitatively analogous to that of the gauge-invariant model, i.e., it is characterized by the condensation of a bilinear tensor scalar operator. In the RG language, the GSB perturbation turns out to be irrelevant at the FP of the gauge-invariant theory.
On the other hand, when the GSB parameter $w$ is sufficiently large, there is a second ordered phase characterized by the condensation of the vector scalar field. These phases are separated by three different transition lines, meeting at a multicritical point. Along the disordered-tensor (DT) transition line we observe the same critical behavior as in the CP$^1$ model: the tensor degrees of freedom behave as in the O(3) vector model. The GSB term is irrelevant and the continuum limit of the model with a finite GSB perturbation is the same as that of the gauge-invariant model. Along the disordered-vector (DV) line, we observe instead a different continuum limit: vector and tensor degrees of freedom behave as in the O(4) model. Finally, on the tensor-vector (TV) line we observe an O(2) vector critical behavior. \begin{figure}[tbp] \includegraphics[width=0.95\columnwidth, clip]{ph_diagram.eps} \caption{Sketch of the phase diagram of the lattice CP$^1$ model in the presence of the GSB term $H_b=-w\sum_{{\bm x},\mu} {\rm Re}\,\lambda_{{\bm x},\mu}$. This is a particular case of the lattice AH model with two-component scalar matter and compact gauge variables for $\gamma=0$. The phase diagram is characterized by three different phases: a disordered phase (small $J$), a tensor-ordered phase where the tensor operator $Q$ condenses (large $J$ and small $w$), and a vector-ordered phase where the vector field ${\bm z}_{\bm x}$ condenses (large $J$ and large $w$). These phases are separated by the DT, DV, and TV transition lines, where CP$^1$/O(3), O(4) vector, and O(2) vector critical behavior is observed.} \label{phdiagn2} \end{figure} The paper is organized as follows. In Sec.~\ref{model} we define the lattice AH model with compact gauge variables and the linear GSB perturbation. In Sec.~\ref{obs} we define the observables that we consider in the numerical study and review the main properties of the finite-size scaling (FSS) analyses employed. In Sec.~\ref{phdiacrbeh} we discuss some limiting cases where the thermodynamic behavior is known and propose a possible phase diagram for the model. In Sec.~\ref{numres} we discuss our numerical results for $N=2$, which support the conjectured phase diagram. Finally, in Sec.~\ref{conclu} we summarize and draw our conclusions. In the Appendix we report some universal curves in the O($N$) vector model, which allow us to identify the universality class of the different transitions. \section{The model} \label{model} \subsection{The lattice Abelian-Higgs model with compact gauge variables} \label{coAH} A compact lattice formulation of the three-dimensional AH model is obtained by associating complex $N$-component unit vectors ${\bm z}_{\bm x}$ with the sites ${\bm x}$ of a cubic lattice, and U(1) variables $\lambda_{{\bm x},\mu}$ with each link connecting the site ${\bm x}$ with the site ${\bm x}+\hat\mu$ (where $\hat\mu=\hat{1},\hat{2},\ldots$ are unit vectors along the positive lattice directions). The partition function of the system reads \begin{eqnarray} &&Z = \sum_{\{{\bm z},\lambda\}} e^{-H_{\rm AH}({\bm z},\lambda)}\,, \label{partfunc}\\ && H_{\rm AH}({\bm z},\lambda) = H_z({\bm z},\lambda) + H_{\lambda}(\lambda)\,.
\label{Hdef} \end{eqnarray} We define \begin{eqnarray} H_z = - J\, N \sum_{{\bm x}, \mu} 2\,{\rm Re}\, \lambda_{{\bm x},\mu}\, \bar{\bm z}_{\bm x} \cdot {\bm z}_{{\bm x}+\hat\mu} \,, \label{Hzdef} \end{eqnarray} where the sum runs over all lattice links, and \begin{equation} H_\lambda = - \gamma \sum_{{\bm x},\mu>\nu} 2\,{\rm Re}\, \Pi_{{\bm x},{\mu}\nu} \, , \label{Hladef} \end{equation} where \begin{equation} \Pi_{{\bm x},{\mu}\nu} = \lambda_{{\bm x},{\mu}} \,\lambda_{{\bm x}+\hat{\mu},{\nu}} \,\bar{\lambda}_{{\bm x}+\hat{\nu},{\mu}} \,\bar{\lambda}_{{\bm x},{\nu}} \,, \end{equation} and the sum runs over all plaquettes of the cubic lattice. The AH Hamiltonian is invariant under the global SU($N$) transformations \begin{equation} z_{\bm x} \to U z_{\bm x}\,,\qquad U\in\mathrm{SU}(N)\,, \label{sunsym} \end{equation} and the local U(1) gauge transformations \begin{eqnarray} z_{\bm x} \to e^{i\theta_{\bm x}} z_{\bm x}\,,\quad \lambda_{{\bm x},\mu} \to e^{i\theta_{\bm x}} \lambda_{{\bm x},\mu} e^{-i\theta_{{\bm x}+\hat{\mu}}}\,, \label{gaugsym} \end{eqnarray} where $\theta_{\bm x}$ is an arbitrary space-dependent real function. The parameter $\gamma\ge 0$ plays the role of inverse gauge coupling. For $\gamma=0$, the model is a particular lattice formulation of the 3D CP$^{N-1}$ model, which is quadratic in the scalar-field variables and linear in the gauge variables. We can obtain a lattice formulation without explicit gauge fields by integrating out the link variables. We obtain \begin{equation} Z = \sum_{\{{\bm z},\lambda\}} e^{-H_z({\bm z},\lambda)} = \sum_{\{{\bm z}\}} \prod_{{\bm x},\mu} I_0\left(2 J N |\bar{\bm z}_{\bm x} \cdot {\bm z}_{{\bm x}+\hat\mu}|\right)\,, \label{partzint} \end{equation} where $I_0(x)$ is a modified Bessel function. The corresponding effective Hamiltonian is \begin{equation} H_{{\rm eff}} = - \sum_{{\bm x},\mu} \ln I_0\left(2 J N |\bar{\bm z}_{\bm x} \cdot {\bm z}_{{\bm x}+\hat\mu}|\right)\,, \label{hlaeff} \end{equation} which is invariant under the gauge transformations (\ref{gaugsym}) even in the absence of gauge fields. For small $J$, since $I_0(x) = 1 + x^2/4 + O(x^4)$ and thus $\ln I_0(x) = x^2/4 + O(x^4)$, the Hamiltonian $H_{{\rm eff}}$ simplifies to \begin{equation} H_{CP} = - J^2 N^2 \sum_{{\bm x},\mu} |\bar{\bm z}_{\bm x} \cdot {\bm z}_{{\bm x}+\hat\mu}|^2, \end{equation} which represents another equivalent formulation of the CP$^{N-1}$ model. The compact AH model presents two phases when varying $J$ and $\gamma$ \cite{PV-19-AH3d}, separated by a transition line whose nature does not depend on the gauge parameter $\gamma$. The appropriate order parameter is the bilinear gauge-invariant operator \begin{equation} Q_{{\bm x}}^{ab} = \bar{z}_{\bm x}^a z_{\bm x}^b - {1\over N} \delta^{ab}\,, \label{qdef} \end{equation} which is a Hermitian and traceless $N\times N$ matrix. It transforms as $Q_{{\bm x}} \to {U}^\dagger Q_{{\bm x}} \,{U}$ under the global SU($N$) transformations. Its condensation signals the spontaneous breaking of the global SU($N$) symmetry. \subsection{Linear breaking of the U(1) gauge symmetry} \label{u1gb} In the following we investigate the effects of perturbations breaking gauge invariance. In particular, we consider the simplest gauge-breaking perturbation that is linear in the $\lambda$ variables. We consider the extended Hamiltonian \begin{equation} H_e({\bm z},\lambda) = H_{\rm AH}({\bm z},\lambda) + H_{b}(\lambda)\, , \label{Hext} \end{equation} where \begin{equation} H_{b} = - w \sum_{{\bm x},\mu} {\rm Re} \,\lambda_{{\bm x},\mu}\,.
\label{linpert} \end{equation} Note that, if we perform the change of variables \begin{equation} {\bm z}_{\bm x} \to (-1)^{x_1+x_2+x_3}{\bm z}_{\bm x}\ ,\quad \lambda_{{\bm x},\mu}\to -\lambda_{{\bm x},\mu}\,, \label{F-to-AF} \end{equation} we reobtain the action (\ref{Hext}), with $w$ replaced by $-w$. Thus, the phase diagram is independent of the sign of $w$ (but, for $w < 0$, the relevant vector order parameters would be staggered quantities). In the following we therefore only consider the case $w\ge 0$. When $\gamma=0$, one can straightforwardly integrate out the link variables $\lambda$, obtaining \begin{eqnarray} Z &=& \sum_{\{{\bm z},\lambda\}} e^{-H_{z}({\bm z},\lambda) - H_{b}(\lambda)} \nonumber\\ &=& \sum_{\{{\bm z}\}} \prod_{{\bm x},\mu} I_0\left(2 J N |\hat{w}+\bar{{\bm z}}_{\bm x} \cdot {\bm z}_{{\bm x}+\hat\mu}|\right), \label{partzint2} \end{eqnarray} where $\hat{w}=w/(2JN)$. The corresponding effective Hamiltonian reads \begin{equation} H_{{\rm eff}} = - \sum_{{\bm x},\mu} \ln I_0\left(2 J N |\hat{w} + \bar{\bm z}_{\bm x} \cdot {\bm z}_{{\bm x}+\hat\mu}|\right)\,. \label{hlaeff2} \end{equation} The precise form of the Bessel function should be irrelevant for the critical behavior. Since the squared modulus appearing in its argument can be written as \begin{equation} |\hat{w} + \bar{\bm z}_{\bm x} \cdot {\bm z}_{{\bm x}+\hat\mu}|^2 = \hat{w}^2 + 2 \hat{w}\, \hbox{Re}\, (\bar{\bm z}_{\bm x} \cdot {\bm z}_{{\bm x}+\hat\mu}) + |\bar{\bm z}_{\bm x} \cdot {\bm z}_{{\bm x}+\hat\mu}|^2, \end{equation} we expect that, for infinite gauge coupling, the model has the same critical behavior as a model with Hamiltonian \begin{equation} \widetilde{H}_{{\rm eff}} = - J^2 N^2 \sum_{{\bm x},\mu}\Big[ |\bar{\bm z}_{\bm x} \cdot {\bm z}_{{\bm x}+\hat\mu}|^2 + 2 \hat{w} \,{\rm Re} \,\bar{\bm z}_{\bm x} \cdot {\bm z}_{{\bm x}+\hat\mu} \Big]\,. \label{hlaeffexpmix} \end{equation} Such a Hamiltonian is the sum of a U($N$)-invariant CP$^{N-1}$ model and an O($2N$)-invariant vector model, which now represents the gauge-breaking perturbation. Thus, the introduction of the linear gauge-breaking term (\ref{linpert}) is equivalent to the addition of a ferromagnetic vector interaction ${\rm Re}(\bar{\bm z}_{\bm x} \cdot {\bm z}_{{\bm x}+\hat\mu})$ among the vector fields. Although this has been shown for $\gamma = 0$, we expect this equivalence to hold at criticality for any positive finite $\gamma$. We would now like to show that, at transitions where the vector fields show a critical behavior, one observes an enlarged O($2N$) symmetry. For this purpose, note that the CP$^{N-1}$ interaction in Eq.~(\ref{hlaeffexpmix}) can be rewritten as \begin{equation} |\bar{\bm z}_{\bm x} \cdot {\bm z}_{{\bm x}+\hat\mu}|^2 = ({\rm Re} \,\bar{\bm z}_{\bm x} \cdot {\bm z}_{{\bm x}+\hat\mu})^2 + ({\rm Im}\, \bar{\bm z}_{\bm x} \cdot \Delta_\mu {\bm z}_{\bm x})^2, \label{hlaeffexp} \end{equation} where $\Delta_\mu {\bm z}_{\bm x} \equiv {\bm z}_{{\bm x}+\hat{\mu}} - {\bm z}_{\bm x}$, and we used the fact that $\bar{\bm z}_{\bm x} \cdot {\bm z}_{\bm x}=1$. The first term represents an additional O($2N$)-invariant vector ferromagnetic interaction, while the second one is only invariant under U($N$) transformations. We will now argue that the latter term is irrelevant at the O($2N$) fixed point.
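As a quick sanity check of Eq.~(\ref{hlaeffexp}) (our addition, purely illustrative), the identity can be verified numerically for random unit vectors, using ${\rm Im}(\bar{\bm z}_{\bm x} \cdot \Delta_\mu {\bm z}_{\bm x}) = {\rm Im}(\bar{\bm z}_{\bm x} \cdot {\bm z}_{{\bm x}+\hat\mu})$, which holds because $\bar{\bm z}_{\bm x}\cdot{\bm z}_{\bm x}=1$; the following minimal sketch assumes only Python with \texttt{numpy}: \begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def random_unit(N):
    # random complex N-component unit vector
    v = rng.normal(size=N) + 1j * rng.normal(size=N)
    return v / np.linalg.norm(v)

for _ in range(10):
    z, zp = random_unit(2), random_unit(2)
    s = np.vdot(z, zp)          # \bar z . z'
    lhs = abs(s) ** 2
    # Im(\bar z . (z'-z)) = Im(\bar z . z') since \bar z . z = 1
    rhs = s.real ** 2 + np.vdot(z, zp - z).imag ** 2
    assert np.isclose(lhs, rhs)
print("identity verified")
\end{verbatim}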
To make this argument, we consider the usual LGW approach and define a real field $\Psi_{ai}$, $a=1,\ldots,N$ and $i=1,2$, which represents a coarse-grained version of the field ${\bm z}_{\bm x}$, the correspondence being \begin{equation} {\rm Re} \, {\bm z}_{\bm x}^a \to \Psi_{a1}({\bm x}) \,,\qquad {\rm Im} \, {\bm z}_{\bm x}^a \to \Psi_{a2}({\bm x}) \,. \label{psicorr} \end{equation} The coarse-grained Hamiltonian corresponding to model (\ref{hlaeffexpmix}) is given by \begin{eqnarray} &&L_{\rm LGW} = {1\over 2} \sum_{ai,\mu} (\partial_\mu \Psi_{ai})^2 + {r\over 2} \sum_{ai} \Psi_{ai}^2 \label{lgwrfixed}\\ &&\;\; + {u\over 4!} (\sum_{ai} \Psi_{ai}^2)^2 + v \sum_{a,\mu} (\Psi_{a1}\,\partial_\mu \Psi_{a2} - \Psi_{a2}\,\partial_\mu \Psi_{a1})^2\,. \nonumber \end{eqnarray} The first three terms are O($2N$) symmetric and represent the coarse-grained version of the O($2N$)-invariant part of the Hamiltonian. The last term corresponds to the O($2N$)-symmetry breaking term, i.e., to the last term in Eq.~(\ref{hlaeffexp}). Since this quartic term contains two derivatives, its naive dimension close to four dimensions is six and, therefore, it is generally expected to be irrelevant at the three-dimensional O($2N$)-symmetric fixed point. This symmetry enlargement is a general result that holds for generic 3D vector systems with global U($N$) invariance (without gauge symmetries)~\cite{BPV-21-gsb}. It should be stressed that this symmetry enlargement is only present in the large-scale critical behavior. Moreover, it assumes that the vector fields are critical at the transition, since the LGW theory is defined in terms of coarse-grained $z$ fields. Therefore, no O($2N$) behavior is expected at transitions where vector correlations are short-ranged (as we discuss below, this may also occur for finite values of $w$). \subsection{Axial-gauge fixing} In a lattice gauge theory the redundancy arising from the gauge symmetry can be eliminated by adding a gauge fixing, which leaves gauge-invariant observables unchanged. For example, one may consider the {\em axial gauge}, obtained by fixing the link variables along one direction, \begin{equation} {\rm axial}\;{\rm gauge}: \qquad \lambda_{{\bm x},3}=1\,. \label{axialgauge} \end{equation} For periodic boundary conditions, this constraint cannot be satisfied on all sites, and a complete gauge fixing is obtained by setting $\lambda_{{\bm x},\mu}=1$ on a maximal lattice tree~\cite{Creutz-book,ItzDro-book}, which necessarily involves some links that belong to an orthogonal plane. To avoid this problem, one can use $C^*$ boundary conditions~\cite{KW-91,LPRT-16,BPV-21-gsb}. In this case, one can require condition (\ref{axialgauge}) on all sites $\bm x$. In the following we will study the phase diagram of the model (\ref{Hext}) in the presence of the axial gauge fixing. Note that, also in this case, the phase diagram is independent of the sign of $w$. Instead of Eq.~(\ref{F-to-AF}) one should consider the mapping ${\bm z}_{\bm x} \to (-1)^{x_1+x_2}{\bm z}_{\bm x}$, $\lambda_{{\bm x},\mu} \to - \lambda_{{\bm x},\mu}$ for $\mu=1,2$. It is interesting to write down the effective Hamiltonian that is obtained by integrating out the gauge fields.
In the axial gauge, repeating the same arguments presented in Sec.~\ref{u1gb}, we obtain \begin{eqnarray} \widetilde{H}_{{\rm eff}} &= & - J^2 N^2 \sum_{{\bm x},\mu}\Big[ |\bar{{\bm z}}_{\bm x} \cdot {\bm z}_{{\bm x}+\hat\mu}|^2 + 2 \hat{w} \,{\rm Re} \,\bar{\bm z}_{\bm x} \cdot {\bm z}_{{\bm x}+\hat\mu} \Big]\, \nonumber \\ && -2 J N \sum_{{\bm x} } {\rm Re} \,\bar{\bm z}_{\bm x} \cdot {\bm z}_{{\bm x}+\hat{3}}, \label{Haxial-eff} \end{eqnarray} where, in the first sum, $\mu$ takes only the values 1 and 2. We thus obtain a stacked CP$^{N-1}$--O($2N$) ferromagnetic system, in which the different layers are ferromagnetically coupled by a standard vector interaction. \section{Observables and FSS analyses} \label{obs} In our numerical analysis we consider the vector and tensor two-point functions of the scalar field, respectively defined as \begin{equation} G_V({\bm x},{\bm y}) = \hbox{Re}\ \langle \bar{\bm z}_{\bm x} \cdot {\bm z}_{\bm y}\rangle \label{vectorfu} \end{equation} and \begin{equation} G_T({\bm x},{\bm y}) = \langle {\rm Tr}\, Q_{\bm x}Q_{\bm y} \rangle\,, \label{gxyp} \end{equation} where $Q$ is the bilinear operator defined in Eq.~(\ref{qdef}). The corresponding susceptibilities and second-moment correlation lengths are defined by the relations \begin{eqnarray} &&\chi_{V/T} = \sum_{{\bm x}} G_{V/T}({\bm x}) = \widetilde{G}_{V/T}({\bm 0}), \label{chisusc}\\ &&\xi_{V/T}^2 \equiv {1\over 4 \sin^2 (\pi/L)} {\widetilde{G}_{V/T}({\bm 0}) - \widetilde{G}_{V/T}({\bm p}_m)\over \widetilde{G}_{V/T}({\bm p}_m)}, \label{xidefpb} \end{eqnarray} where $\widetilde{G}_{V/T}({\bm p})=\sum_{{\bm x}} e^{i{\bm p}\cdot {\bm x}} G_{V/T}({\bm x})$ is the Fourier transform of $G_{V/T}({\bm x})$, and ${\bm p}_m = (2\pi/L,0,0)$. In our FSS analysis we consider the ratios \begin{equation} R_{V/T} = \xi_{V/T}/L\,, \label{rvt} \end{equation} associated with the vector and tensor correlation lengths, and the vector and tensor Binder parameters \begin{eqnarray} && U_{V/T} = \frac{\langle \mu_{V/T}^2\rangle}{\langle \mu_{V/T} \rangle^2} \,, \label{binderdef}\\ &&\mu_{V} = \sum_{{\bm x},{\bm y}} {\rm Re}\,\bar{\bm z}_{\bm x}\cdot {\bm z}_{\bm y}\,, \qquad \mu_{T} = \sum_{{\bm x},{\bm y}} {\rm Tr}\, Q_{\bm x}Q_{\bm y}\,. \nonumber \end{eqnarray} We recall that, at continuous phase transitions driven by a relevant Hamiltonian parameter $r$, the RG-invariant quantities associated with the critical modes are expected to scale as (we denote a generic RG-invariant quantity by $R$)~\cite{PV-02} \begin{eqnarray} R(L,r) &\approx & f_R(X) \,,\quad X =(r-r_c)\,L^{1/\nu}\,, \label{scalbeh} \end{eqnarray} where $\nu$ is the critical exponent associated with the correlation length. Scaling corrections decaying as $L^{-\omega}$ have been neglected in Eq.~(\ref{scalbeh}), where $\omega$ is the exponent associated with the leading irrelevant operator. The function $f_R(X)$ is universal up to a multiplicative rescaling of its argument. In particular, $U^*\equiv f_U(0)$ and $R_\xi^*\equiv f_{R_\xi}(0)$ are universal, depending only on the boundary conditions and the aspect ratio of the lattice. To verify universality, we will plot $U$ versus $R_\xi$. The data are expected to approach a universal curve, i.e., \begin{equation} U = F_U(R_\xi) + O(L^{-\omega})\,, \label{yfur} \end{equation} where $F_U$ is universal and independent of any normalization. It only depends on the boundary conditions and the aspect ratio of the lattice.
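To make these definitions concrete, we include a minimal Python sketch (ours, not the code used for the simulations; it assumes only \texttt{numpy}, and the lattice size, $N$, and number of configurations are illustrative). It estimates $\chi_V$, $R_V=\xi_V/L$, and $U_V$ from an ensemble of configurations, using the momentum-space estimator $\widetilde{G}_V({\bm p}) = \langle |\sum_{\bm x} e^{i{\bm p}\cdot {\bm x}}\, {\bm z}_{\bm x}|^2\rangle/V$ that follows from translation invariance. For i.i.d. unit vectors, a stand-in for deeply disordered configurations, it returns $R_V\approx 0$ and $U_V\approx 3/2$ for $N=2$, consistent with the disordered-phase values given in Sec.~\ref{sec4.E}. \begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
L, N, nconf = 8, 2, 500
V = L**3
phase = np.exp(2j * np.pi * np.arange(L) / L)  # e^{i p_m x_1}

G0 = Gp = mu1 = mu2 = 0.0
for _ in range(nconf):
    # i.i.d. unit vectors: a proxy for the disordered phase
    z = rng.normal(size=(L, L, L, N)) \
        + 1j * rng.normal(size=(L, L, L, N))
    z /= np.linalg.norm(z, axis=-1, keepdims=True)
    S0 = z.sum(axis=(0, 1, 2))                 # sum_x z_x
    Sp = (phase[:, None, None, None] * z).sum(axis=(0, 1, 2))
    m = np.vdot(S0, S0).real                   # mu_V = |sum_x z_x|^2
    G0 += m / V
    Gp += np.vdot(Sp, Sp).real / V
    mu1 += m
    mu2 += m * m
G0, Gp, mu1, mu2 = (x / nconf for x in (G0, Gp, mu1, mu2))

xi2 = (G0 - Gp) / Gp / (4.0 * np.sin(np.pi / L) ** 2)  # Eq. (xidefpb)
print("chi_V =", G0)                            # Eq. (chisusc)
print("R_V   =", np.sqrt(max(xi2, 0.0)) / L)    # ~ 0 when disordered
print("U_V   =", mu2 / mu1**2)                  # -> (N+1)/N = 1.5
\end{verbatim} The tensor-channel quantities are obtained analogously, replacing ${\bm z}_{\bm x}$ by the components of the matrix $Q_{\bm x}$ of Eq.~(\ref{qdef}).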
\section{Phase diagram and critical behaviors} \label{phdiacrbeh} Before presenting the results of the numerical simulations, we discuss some limiting cases that allow us to determine the phase diagram of the model. Because of the symmetry under the change of the sign of $w$, we only consider the case $w \ge 0$. \subsection{The model for $w=0$ and $w=\infty$} \label{sec4.A} For $w=0$ we recover the lattice AH model with compact U(1) gauge variables. Its phase diagram has been extensively studied in the literature, see Refs.~\cite{PV-19-AH3d,PV-20}. For any $\gamma\ge 0$, there is a disordered phase for small $J$ and an ordered phase for large $J$, where the gauge-invariant operator $Q$ defined in Eq.~(\ref{qdef}) condenses. For $N=2$ the transition occurs at~\cite{PV-19-AH3d} $J_c=0.7102(1)$ for $\gamma=0$, $J_c=0.4145(5)$ for $\gamma=0.5$, and $J_c=0.276(1)$ for $\gamma=1$. For this value of $N$ the transitions are continuous and belong to the O(3) vector universality class~\cite{PV-02}, like the transition in the CP$^1$ model, which corresponds to $\gamma = 0$. Accurate estimates of the O(3) critical exponents can be found in Refs.~\cite{Hasenbusch-20, Chester-etal-20-o3, KP-17, HV-11, PV-02, CHPRV-02, GZ-98}. In the following we use the estimate of the correlation-length exponent $\nu$ of Ref.~\cite{Hasenbusch-20}, $\nu= 0.71164(10)$. For $N > 2$, transitions are instead of first order \cite{PV-19-AH3d,PV-20}. In the limit $\gamma\to\infty$, the plaquette operator $\Pi_{{\bm x},{\mu}\nu}$ converges to 1. In infinite volume, this implies that $\lambda_{{\bm x},\mu} = 1$ modulo gauge transformations. We thus recover the O$(2N)$ vector model. For $N=2$ the relevant model is the O(4) model, which has a transition at~\cite{BFMM-96,CPRV-96,BC-97} $J_c=0.233965(2)$. Estimates of the O(4) critical exponents can be found in Refs.~\cite{HV-11,PV-02}: for example, $\nu=0.750(2)$. The O(4) fixed point at $\gamma=\infty$ is unstable with respect to nonzero gauge couplings. For finite values of $\gamma$ it only gives rise to crossover phenomena~\cite{PV-19-AH3d}. It is important to stress that, in the limit $\gamma\to \infty$, one obtains O($2N$) behavior only for gauge-invariant quantities, as $\lambda_{{\bm x},\mu} = 1$ only modulo gauge transformations. For instance, tensor correlations in the AH model converge to the corresponding O($2N$) correlations in the limit. Instead, quantities that are not gauge invariant are not related to the corresponding quantities of the O($2N$) model. For instance, vector correlations satisfy $G_V({\bm x}) = \delta_{{\bm x},{\bm 0}}$ for any $\gamma$ and are therefore not related to vector correlations in the O($2N$) model. In a finite volume with periodic boundary conditions, the limit $\gamma\to \infty$ is more subtle. Indeed, Polyakov loops, i.e., the products of the gauge fields along nontrivial paths that wrap around the lattice, do not order in the limit. This implies that one cannot set $\lambda_{{\bm x},\mu} = 1$ on all sites. Rather, on some boundary links one should set \begin{eqnarray} && \lambda_{(L,n_2,n_3),1} = \tau_1, \nonumber \\ && \lambda_{(n_1,L,n_3),2} = \tau_2, \nonumber \\ && \lambda_{(n_1,n_2,L),3} = \tau_3, \end{eqnarray} where $\tau_1$, $\tau_2$, and $\tau_3$ are three space-independent boundary phases that should be integrated over. We thus obtain an O($2N$) model with U(1)-fluctuating boundary conditions. This argument generalizes a similar result that holds in systems with real fields and ${\mathbb Z}_2$ gauge invariance \cite{Hasenbusch-96}.
In the latter case, one obtains models with fluctuating periodic-antiperiodic boundary conditions. The behavior for $w\to \infty$ is much simpler. In this limit $\lambda_{{\bm x},\mu}$ converges to 1 trivially. Thus, for any $\gamma$ both gauge-invariant and non-gauge-invariant quantities behave as in the standard O($2N$) vector model. \subsection{The model for $J\to \infty$} \label{sec4.B} For $J\to\infty$, the relevant configurations are those that minimize the Hamiltonian term $H_z$. This implies that \begin{equation} {\bm z}_{\bm x} = \lambda_{{\bm x},\mu} {\bm z}_{{\bm x} + \hat{\mu}}. \label{zJinfinity} \end{equation} By repeated application of this relation, one can verify that the product of the gauge fields along any lattice loop, including nontrivial loops that wrap around the lattice, is always 1. Therefore, the gauge variables can be written (also in a finite volume with periodic boundary conditions) as \begin{equation} \lambda_{{\bm x},\mu} = \bar\psi_{\bm x} \psi_{{\bm x}+\hat\mu}\,, \qquad \psi_{\bm x} \in {\rm U}(1) \,. \label{puregauge} \end{equation} Substituting in Eq.~(\ref{zJinfinity}), we obtain \begin{equation} {\bm z}_{\bm x} \psi_{\bm x} = {\bm z}_{{\bm x} + \hat{\mu}} \psi_{{\bm x} + \hat{\mu}}. \end{equation} We can thus define a constant unit-length vector ${\bm v} = {\bm z}_{\bm x} \psi_{\bm x}$, so that \begin{equation} {\bm z}_{\bm x} = \bar{\psi}_{\bm x} \, {\bm v}. \label{puregaugez} \end{equation} Because of Eqs.~(\ref{puregauge}) and (\ref{puregaugez}), the only nontrivial Hamiltonian term is $H_b$ defined in Eq.~(\ref{linpert}), which reduces to the XY Hamiltonian \begin{equation} H_b = - w \sum_{{\bm x},\mu} {\rm Re}\, \bar{\psi}_{\bm x} \, \psi_{{\bm x}+\hat\mu} \,. \label{hbJinf} \end{equation} The XY model has a continuous transition at~\cite{CHPV-06,DBN-05} $w_c= 0.4541652(11)$. Estimates of the critical exponents can be found in Refs.~\cite{CHPV-06,Hasenbusch-19,CLLPSSV-19}; for example, $\nu=0.6717(1)$. Thus, for $J=\infty$ we expect two phases, which can be equivalently characterized by using correlations of the gauge field or vector correlations. Indeed, relation (\ref{puregaugez}) implies \begin{equation} G_V({\bm x},{\bm y}) = \hbox{\rm Re}\, \langle \bar{\bm z}_{\bm x} \cdot {\bm z}_{\bm y}\rangle = \hbox{\rm Re}\, \langle \bar{\psi}_{\bm x} \, \psi_{\bm y}\rangle \,. \label{gvinfJ} \end{equation} \subsection{The model for $\gamma=\infty$} \label{sec4.C} We have already discussed this limit for $w=0$. In that case, gauge-invariant quantities behave as in the O($2N$) model, although with different boundary conditions. For $w\not=0$, as the model is not gauge invariant, we cannot get rid of the gauge fields. However, we can still rewrite the gauge fields as in Eq.~(\ref{puregauge})---for the moment, we ignore the subtleties of the boundary conditions---and therefore obtain the effective Hamiltonian \begin{eqnarray} H_{\gamma=\infty} &=& - 2 J N \sum_{{\bm x}, \mu} \hbox{\rm Re}\, \bar{\psi}_{\bm x} \psi_{{\bm x}+\hat\mu} \, \bar{\bm z}_{\bm x} \cdot {\bm z}_{{\bm x}+\hat\mu} \nonumber \\ && - w \sum_{{\bm x},\mu} {\rm Re} \, \bar{\psi}_{\bm x} \psi_{{\bm x}+\hat\mu}. \end{eqnarray} If we define new variables ${\bm Z}_{\bm x} = {\bm z}_{\bm x} \psi_{\bm x}$, the Hamiltonian $H_{\gamma=\infty}$ is the sum of two contributions corresponding to two independent models: an O($2N$) model with coupling $J$ and fields ${\bm Z}_{\bm x}$ and an XY model with coupling $w$ and fields $\psi_{\bm x}$.
Thus, four different phases appear, separated by the critical lines $w=w_c$ and $J=J_c$ meeting at a tetracritical point with decoupled O($2N$) and XY multicriticality. Correlation functions of the bilinear fields $Q_{\bm x}$ and $\lambda_{{\bm x},\mu}$ are only sensitive to the O($2N$) and XY critical behavior, respectively. Vector correlations instead are sensitive to both transitions, since \begin{equation} G_V({\bm x}) = \hbox{Re}\, \langle \bar{\bm Z}_{\bm 0} \cdot {\bm Z}_{\bm x}\rangle \langle {\psi}_{\bm 0} \bar{\psi}_{\bm x}\rangle. \end{equation} In particular, the vector fields only order for $w > w_c$ and $J > J_c$, where both ${\bm Z}_{\bm x}$ and $\psi_{\bm x}$ show long-range correlations. The decoupling of the XY and O($2N$) degrees of freedom only occurs in infinite volume. For finite systems with periodic boundary conditions, for $\gamma\to \infty$ one obtains U(1)-fluctuating boundary conditions for both fields, with the same boundary fields $\tau_\mu$. In this case, a complete decoupling is not realized. \subsection{The model for $J=0$} \label{sec4.D} For $J=0$ we obtain a pure U(1) gauge theory in the presence of the gauge-breaking term (\ref{linpert}). For $\gamma = \infty$, as discussed in Sec.~\ref{sec4.C}, an XY transition occurs at $w = w_c$, where the gauge field $\lambda_{{\bm x},\mu}$ becomes critical. As we discuss below, this transition disappears for finite values of $\gamma$. For large values of $\gamma$ only a crossover occurs for $w\approx w_c$. \subsection{The phase diagram} \label{sec4.E} The limiting cases we have discussed above and the numerical results that we present below allow us to conjecture the phase diagram of the model. In Fig.~\ref{phdiagn2} we sketch the $J$-$w$ phase diagram for $\gamma=0$ and $N=2$. It is supported by the numerical results and is consistent with the limiting cases reported above. We expect three different phases. For small values of $J$ there is a disordered phase, while for large $J$ there are two different ordered phases. For small $w$ and large $J$, the tensor operator $Q$ condenses, while the vector correlation $G_V$, defined in Eq.~(\ref{vectorfu}), is short-ranged. In this phase the model behaves as for $w=0$: the gauge-symmetry breaking is irrelevant and gauge invariance is recovered in the critical limit. On the other hand, for large $w$, both vector and tensor correlations are long-ranged. In the latter phase, the ordered behavior of tensor correlations is just a consequence of the ordering of the vector variables ${\bm z}_{\bm x}$. We do not expect $\gamma$ to be relevant and, therefore, we expect the same phase diagram for any finite $\gamma$. Note that, for $\gamma=\infty$, the multicritical point is tetracritical and four lines are present. In particular, there is a line where $\lambda_{{\bm x},\mu}$ orders, while both vector and tensor correlations are short-ranged. We have no evidence of this transition line for finite values of $\gamma$, at least for $N=2$, the only case we consider. In simulations for $J=0$ and for small values of $J$, in the parameter region where vector and tensor correlations are both disordered, we observe crossover effects but no transitions. The three phases mentioned above are separated by three transition lines, whose nature depends on the value of $N$. For $N=2$, transitions are continuous.
Along the DT line, which separates the disordered phase from the tensor-ordered and vector-disordered phase, we expect the transition to belong to the same universality class as the transition for $w=0$. Thus, the transitions should belong to the O(3) vector universality class. Along the DV line, which separates the disordered phase from the vector-ordered and tensor-ordered phase, we expect the transition to belong to the same universality class as the transition for $w=\infty$. It should belong to the O(4) vector universality class, consistent with the LGW argument presented in Sec.~\ref{u1gb}. Finally, along the TV line that starts at $J=\infty$ and separates the two tensor-ordered phases, the critical behavior should be associated with the phases of the scalar variables. Therefore, the most natural hypothesis is that transitions belong to the O(2) or XY universality class, as occurs for $J\to\infty$. The three transition lines are expected to meet at a multicritical point, see Fig.~\ref{phdiagn2}. The three phases can be characterized using the renormalization-group invariant quantities $U_{V/T}$ and $R_{V/T}$. In the disordered phase $R_V=R_T=0$, while $U_V$ and $U_T$ take the values \begin{equation} U_V = {N+1\over N}, \qquad\qquad U_T = {N^2 + 1\over N^2 - 1}. \label{U-disorderedphase} \end{equation} In the vector-ordered phase $R_V=R_T=\infty$ and $U_V=U_T=1$. In the tensor-ordered phase $R_V=0$, $R_T=\infty$, and $U_T=1$. As for $U_V$, note that the ordering of $Q$ implies the ordering of the absolute value of each component $z_{\bm x}^a$ and of the relative phases between $z_{\bm x}^a$ and $z_{\bm x}^b$. Only the global phase of the field does not show long-range correlations. Thus, ${\bm z}_{\bm x}$ can be written as $\bar{\psi}_{\bm x} {\bm v}$, where ${\bm v}$ is a constant vector and ${\psi}_{\bm x}$ an uncorrelated phase. Therefore, we expect $U_V$ to converge to the value appropriate for a disordered O(2) model, i.e., $U_V = 2$. As we discussed in Sec.~\ref{u1gb}, for $\gamma = 0$ the model we consider should be equivalent to a model with Hamiltonian \begin{equation} H = H_z - \tilde{w} \sum_{{\bm x},\mu} \hbox{Re}\, \bar{\bm z}_{\bm x} \cdot {\bm z}_{{\bm x} + \hat{\mu}}, \label{Hvector} \end{equation} i.e., an equivalent breaking of the gauge invariance is obtained by adding a ferromagnetic O($2N$)-invariant vector interaction. On the basis of an analysis analogous to that presented above, we expect a phase diagram, in terms of $J$ and $\tilde{w}$, similar to the one presented in Fig.~\ref{phdiagn2}. The only difference should be the behavior of the DV line, which should connect the multicritical point with the point $J=0$, $\tilde{w} = w_c$, where $w_c$ is the O($2N$) critical point. By using the relation between the original model and the formulation with Hamiltonian (\ref{Hvector}), one can understand why the gauge fields $\lambda_{{\bm x},\mu}$ can only be critical at transitions where the vector field has long-range correlations. For $\gamma = 0$, zero-momentum correlations of $\hbox{Re}\, \lambda_{{\bm x},\mu}$ can be directly related to vector energy correlations in the model with Hamiltonian (\ref{Hvector}). Thus, gauge fields become critical only on the DV and TV lines. \subsection{The phase diagram in the presence of an axial gauge fixing} \label{sec4.F} Let us finally discuss the expected phase diagram in the presence of an axial gauge fixing. The behavior for $w=0$, i.e., in the gauge-invariant theory, is mostly unchanged.
Gauge-invariant quantities are indeed identical in the two cases. Vector correlations vary, but are still short-ranged. Indeed, the gauge fixing introduces a ferromagnetic interaction, which is, however, effectively one-dimensional (this is evident in the formulation with Hamiltonian (\ref{Haxial-eff})) and therefore unable to give rise to long-range correlations. The behavior for $\gamma\to \infty$ and $J\to \infty$ instead changes. In this limit, since no gauge degrees of freedom are present, the gauge fields converge to 1. Therefore, only two phases are present---a disordered phase and a vector-ordered phase---and a single DV transition line. Given these results, it would be possible for the system to have, for finite values of $\gamma$ and $J$, a phase diagram without the tensor-ordered phase. The numerical data we will show, instead, indicate that the qualitative behavior is not changed by the gauge fixing: the phase diagram is still the one presented in Fig.~\ref{phdiagn2}. However, consistency with the limiting cases gives constraints on the large-$J$, large-$\gamma$ behavior of the TV line. If the TV line is given by the equation $w = f_{TV}(J,\gamma)$, one should have $f_{TV}(J,\gamma)\to 0$ for $J\to\infty$ at fixed $\gamma$ and for $\gamma\to\infty$ at fixed $J$. The gauge fixing only shrinks the size of the tensor-ordered phase. \section{Numerical results} \label{numres} In this section we discuss our numerical Monte Carlo results for $N=2$ for the model with Hamiltonian (\ref{Hext}), focusing mostly on the behavior for $\gamma=0$. They provide strong evidence in support of the phase diagram sketched in Fig.~\ref{phdiagn2} and of the theoretical analysis of Sec.~\ref{phdiacrbeh}. An accurate study of the nature of the multicritical point deserves further investigation, which we leave for future work. The Monte Carlo data are generated by combining Metropolis updates of the scalar and gauge fields with microcanonical updates of the scalar field. The latter are obtained by generalizing the usual reflection moves used in O($N$) models. Trial states for the Metropolis updates are generated so that approximately $30\%$ of the proposed updates are accepted. Each data point corresponds to a few million updates, where a single update consists of 1 Metropolis sweep and 5 microcanonical sweeps of the whole lattice. Errors are estimated by using standard jackknife and blocking procedures. We consider cubic lattices of linear size $L$. In the absence of gauge fixing we use periodic boundary conditions along all lattice directions. We consider $C^*$ boundary conditions~\cite{KW-91,LPRT-16,BPV-21-gsb} in simulations in which the axial gauge---$\lambda_{{\bm x},3} = 1$ on all sites---is used. \subsection{The extended model for $N=2$} \begin{figure}[tbp] \includegraphics*[width=0.9\columnwidth]{RTJ_nccp1_kappa0p0_msq0p3.eps} \includegraphics*[width=0.9\columnwidth]{RTJ_scal_nccp1_kappa0p0_msq0p3.eps} \caption{Numerical data for $\gamma=0$ and $w=0.3$. Top: tensor correlation-length ratio $R_T$ versus $J$. Bottom: $R_T$ versus $(J-J_c)L^{1/\nu}$, with $J_c=0.7097$ and the O(3) critical exponent $\nu \approx 0.7117$. } \label{r0p3data_1} \end{figure} \begin{figure}[tbp] \includegraphics*[width=0.9\columnwidth]{UTRT_nccp1_kappa0p0_msq0p3.eps} \caption{Numerical data for $\gamma=0$ and $w=0.3$.
We report $U_T$ versus $R_T$ and the universal scaling curve $U_T(R_T)$, defined in~Eq.~(\ref{yfur}), for the O(3) vector model (Eq.~(\ref{FO3V}) in the Appendix).} \label{r0p3data_2} \end{figure} \begin{figure}[tbp] \includegraphics*[width=0.9\columnwidth]{xiVJ_nccp1_kappa0p0_msq0p3.eps} \includegraphics*[width=0.9\columnwidth]{UVJ_nccp1_kappa0p0_msq0p3.eps} \caption{Numerical data for $\gamma=0$ and $w=0.3$. Results for $\xi_V$ (top) and $U_V$ (bottom) as a function of $J$ across the transition, which show that the vector degrees of freedom are disordered in the whole critical region.} \label{r0p3data_3} \end{figure} \begin{figure}[tbp] \includegraphics*[width=0.9\columnwidth]{RVJ_nccp1_kappa0p0_msq2p25.eps} \includegraphics*[width=0.9\columnwidth]{RVJ_scal_nccp1_kappa0p0_msq2p25.eps} \caption{Numerical data for $\gamma=0$ and $w=2.25$. Top: Vector correlation-length ratio $R_V$ versus $J$. Bottom: $R_V$ versus $(J-J_c) L^{1/\nu}$, using $J_c=0.3270$ and the O(4) critical exponent $\nu = 0.750$. } \label{r2p25data_1} \end{figure} We start the numerical investigation of the phase diagram of the model with Hamiltonian (\ref{Hext}) by studying the critical behavior in the small-$w$ region, where we expect transitions to belong to the DT line. Since for $J=\infty$ the tensor-ordered phase corresponds to $w < w_c \approx 0.454$, we perform simulations keeping $w = 0.3 < w_c$ fixed and varying $J$. We observe criticality in the tensor channel for $J\approx 0.71$, see the upper panel in Fig.~\ref{r0p3data_1}. The data for $R_T$ are fully consistent with an O(3) critical behavior, as is evident from the lower panel, where we show a scaling plot using the O(3) critical exponent $\nu$ and the estimate $J_c = 0.7097(1)$ of the critical point. Stronger evidence for O(3) behavior is provided in Fig.~\ref{r0p3data_2}, where data for $U_T$ are reported as a function of $R_T$ and compared with the universal curve of the O(3) vector model, obtaining excellent agreement. Finally, we check that vector degrees of freedom are not critical at the transition: for all the values of $J$ studied, $\xi_V$ is very small, see Fig.~\ref{r0p3data_3}, and the same is true for the susceptibility $\chi_V$, which takes the value $\chi_V\approx 1.9$ in the transition region (not shown); note that $\chi_V=1$ for every $J$ when $w=0$. The vector Binder parameter $U_V$ is also reported in Fig.~\ref{r0p3data_3}. As expected, it is approximately equal to $3/2$ in the disordered phase, see Eq.~(\ref{U-disorderedphase}), and increases towards the XY value 2 as $J$ increases. We next investigate the behavior for large $w$ values, where we expect the DV line. Vector and tensor correlations should simultaneously order, displaying O(4) vector critical behavior. Again we perform simulations at fixed $w$, choosing $w=2.25$. In Fig.~\ref{r2p25data_1} we show $R_V$ as a function of $J$: a transition is identified for $J=J_c= 0.3270(1)$. The data show very good scaling when plotted against $(J-J_c) L^{1/\nu}$, using the O(4) exponent $\nu = 0.750$. The O(4) nature of the transition is further confirmed by the plots of $U_V$ and $U_T$ versus $R_V$ and $R_T$, respectively, shown in Fig.~\ref{r2p25data_2}. The numerical data fall on top of the scaling curves $U_V(R_V)$ and $U_T(R_T)$ computed in the O(4) vector model. \begin{figure}[tbp] \includegraphics*[width=0.9\columnwidth]{UVRV_nccp1_kappa0p0_msq2p25.eps} \includegraphics*[width=0.9\columnwidth]{UTRT_nccp1_kappa0p0_msq2p25.eps} \caption{Numerical data for $\gamma=0$ and $w=2.25$.
Plots of $U_V$ versus $R_V$ (top) and of $U_T$ versus $R_T$ (bottom). The numerical data are compared with the universal scaling curves $U_V(R_V)$ (top) and $U_T(R_T)$ (bottom), defined in Eq.~(\ref{yfur}), computed in the O(4) vector model (Eqs.~(\ref{FO4V}) and (\ref{FO4T}) in the Appendix). } \label{r2p25data_2} \end{figure} \begin{figure}[tbp] \includegraphics*[width=0.9\columnwidth]{RVmsq_nccp1_kappa0p0_J1p0.eps} \includegraphics*[width=0.9\columnwidth]{RVmsq_scal_nccp1_kappa0p0_J1p0.eps} \caption{ Numerical data for $\gamma=0$ and $J=1$. Top: Vector correlation-length ratio $R_V$ versus $w$. Bottom: $R_V$ versus $(w-w_c) L^{1/\nu}$, using the O(2) value $\nu = 0.6717$. The critical point is located at $w_c = 0.5505(3)$.} \label{J1p0data_1} \end{figure} Finally, we performed a set of simulations to investigate the nature of the TV line, which separates the two phases in which the tensor degrees of freedom are ordered. As discussed in Sec.~\ref{sec4.E}, the TV line can be identified using vector observables, which should display O(2) critical behavior. Given that the DT line ends at $J_c = 0.7102$, $w = 0$, we fixed $J=1$ and increased $w$. As evident from Fig.~\ref{phdiagn2}, this choice should allow us to observe the TV line. In Fig.~\ref{J1p0data_1}, we report $R_V$ versus $w$. A crossing point is detected for $w_c \approx 0.5505$. Close to it, vector data are fully consistent with an O(2) critical behavior. This is further confirmed by the results reported in Fig.~\ref{J1p0data_2}. The data of the vector Binder parameter, when plotted versus $R_V$, are fully consistent with the corresponding universal curve computed in the vector O(2) model. As a further check that the transition belongs to the TV line, in Fig.~\ref{J1p0data_3} we report $U_T$ versus $w$: $U_T$ converges to 1 as the lattice size is increased on both sides of the transition, confirming that tensor modes are fully magnetized. \begin{figure}[tbp] \includegraphics*[width=0.9\columnwidth]{UVRV_nccp1_kappa0p0_J1p0.eps} \caption{Numerical data for $\gamma=0$ and $J=1$. We report $U_V$ versus $R_V$ and the universal scaling curve $U_V(R_V)$, defined in~Eq.~(\ref{yfur}), for the O(2) vector model (Eq.~(\ref{FO2V}) in the Appendix).} \label{J1p0data_2} \end{figure} \begin{figure}[tbp] \includegraphics*[width=0.9\columnwidth]{UTmsq_nccp1_kappa0p0_J1p0.eps} \caption{Numerical data for $\gamma=0$ and $J=1$. Estimates of $U_T$ versus $w$ across the transition: the tensor degrees of freedom are ordered on both sides of the transition [$w_c = 0.5505(3)$].} \label{J1p0data_3} \end{figure} \subsection{Simulations in the presence of an axial gauge fixing} \begin{figure}[tbp] \includegraphics*[width=0.9\columnwidth]{RTJ_TGF_nccp1_kappa0p0_msq0p1.eps} \includegraphics*[width=0.9\columnwidth]{xiVJ_TGF_nccp1_kappa0p0_msq0p1.eps} \caption{ Numerical data for $\gamma=0$ and $w=0.1$, in the axial gauge with $C^*$ boundary conditions. Behavior of $R_T$ (top) and $\xi_V$ (bottom) across the transition [$J_c = 0.706(1)$]. While a crossing point for $R_T$ is clearly seen, vector degrees of freedom are disordered for all values of $J$.} \label{axial_r0p2data_1} \end{figure} \begin{figure}[tbp] \includegraphics*[width=0.9\columnwidth]{UTRT_TGF_nccp1_kappa0p0_msq0p1.eps} \caption{Numerical data for $\gamma=0$ and $w=0.1$, in the axial gauge with $C^*$ boundary conditions. The estimates of $U_T$ versus $R_T$ obtained for $w=0.1$ (results for $L=8,16,24$) are compared with results for the gauge-invariant model with $w=0$ (results for $L=32$).
The continuous line is a spline interpolation of the $w=0$ data. The consistency of the data signals that the two transitions belong to the same O(3) universality class.} \label{axial_r0p2data_2} \end{figure} We now discuss the extended model with Hamiltonian (\ref{Hext}) in the presence of the axial gauge fixing. We use $C^*$ boundary conditions \cite{KW-91,LPRT-16,BPV-21-gsb}, so that we can require $\lambda_{{\bm x},3} = 1$ on all sites. The purpose of the simulations is to understand whether a tensor-ordered phase and a DT transition line are present, so that gauge invariance is recovered in the critical limit for small finite values of $w$. As we shall see, the answer is positive: the gauge fixing does not change the qualitative shape of the phase diagram. As we have discussed in Sec.~\ref{sec4.F}, even if the qualitative phase diagram is unchanged, the tensor-ordered phase should shrink and the TV line should get closer to the $w=0$ axis. For this reason, we decided to perform simulations at fixed $w = 0.1$. The vector and tensor correlation lengths are reported in Fig.~\ref{axial_r0p2data_1} as a function of $J$. The data for $R_T$ have a crossing point at $J_c = 0.706(1)$, which is very close to the critical point for $w = 0$, $J_c = 0.7102(1)$ \cite{PV-19-AH3d}. The tensor degrees of freedom are critical at the transition. The vector ones are instead disordered, and $\xi_V$ is of order 1 across the transition. Thus, the data confirm the existence of a DT line also in the presence of the axial gauge fixing. As an additional check, in Fig.~\ref{axial_r0p2data_2} we plot $U_T$ against $R_T$. The data are compared with the results for the gauge-invariant model ($w=0$) with the same $C^*$ boundary conditions (we consider results with $L=32$, which should provide a good approximation of the asymptotic curve). The agreement is excellent, confirming that the transition indeed belongs to the DT line, where O(3) behavior is expected. As a side remark, note that the O(3) curves reported in Figs.~\ref{axial_r0p2data_2} and~\ref{r0p3data_2} are different, since the scaling curve depends on the boundary conditions. We also performed simulations with $w=0.2$. In this case, we observe two very close transitions, which are naturally identified with transitions on the DT and TV lines. They provide an approximate estimate of the multicritical point, $w_{mc} \approx 0.2$, $0.69\lesssim J_{mc} \lesssim 0.70$. Note that the size of the tensor-ordered phase is, not surprisingly, significantly smaller than in the absence of gauge fixing. Indeed, in the latter case $w_{mc} \gtrsim w_c(J=1) \approx 0.55$. \subsection{Finite $\gamma$ results for the pure gauge model} \begin{figure}[tbp] \includegraphics*[width=0.9\columnwidth]{Clambda_nccp1_kappa2p5_J1p0.eps} \includegraphics*[width=0.9\columnwidth]{M3lambda_nccp1_kappa2p5_J1p0.eps} \caption{Numerical data for $\gamma=2.5$ and $J=0$ (pure gauge model). Top panel: specific heat $C_\lambda$; bottom panel: third moment $M_{3,\lambda}$.} \label{Puregauge} \end{figure} At $\gamma=\infty$, the phase diagram is characterized by four transition lines, see Sec.~\ref{sec4.C}. In addition to the lines reported in Fig.~\ref{phdiagn2}, there is a line where $\lambda_{{\bm x},\mu}$ is critical and which separates two phases with no tensor or vector order. As we have discussed in Sec.~\ref{sec4.E}, such a line is not expected to occur at $\gamma =0$. We wish now to provide evidence that such a line does not exist for any finite $\gamma$.
As this line starts on the $J=0$ axis, we consider the pure gauge model with Hamiltonian \begin{equation} H = H_\lambda(\lambda) + H_b(\lambda). \end{equation} For $\gamma=\infty$, there is an XY transition at $w=w_{c,XY}\approx 0.454$. We wish now to verify whether there is a transition for large, but finite, values of $\gamma$. If it were present, it would imply the presence of a fourth transition line in the phase diagram. For this purpose, we have studied the model for $\gamma = 2.5$. For this value of $\gamma$ the gauge fields are significantly ordered, and indeed $\langle \Pi_{{\bm x},\mu\nu}\rangle \approx 0.93$ in the relevant region $w\approx w_{c,XY}$. To detect the transition, we have considered cumulants of the energy \begin{eqnarray} C_\lambda &=& {1\over V w^2} \langle (H_b - \langle H_b\rangle)^2 \rangle , \nonumber \\ M_{3,\lambda} &=& {1\over V w^3} \langle (H_b - \langle H_b\rangle)^3 \rangle . \end{eqnarray} At an XY transition, the specific heat has a nondiverging maximum, while the third cumulant should have a positive and a negative peak \cite{SSNHS-03}, both diverging as $L^{3/\nu-3} = L^{(1+\alpha)/\nu} \approx L^{1.47}$. In Fig.~\ref{Puregauge}, we show the two quantities as a function of $w$. The specific heat has a maximum at $w \approx 0.476$, which might in principle signal an XY transition. However, $M_{3,\lambda}$ is not diverging. It increases significantly as $L$ changes from 16 to 24, but its maxima decrease as $L$ varies from 24 to 32. We are evidently observing a crossover behavior, due to the transition present at infinite $\gamma$. Simulations do not provide evidence of finite-$\gamma$ transitions and, therefore, of a fourth transition line in the phase diagram. \section{Conclusions} \label{conclu} In this work we discuss the role of GSB perturbations in lattice gauge models. In Ref.~\cite{BPV-21-gsb} we studied the AH model with noncompact gauge fields, analyzing the role of GSB perturbations at transitions associated with a charged FP, i.e., where both scalar-matter and gauge correlations are critical. In that case we showed that there is an extreme sensitivity of the system to GSB perturbations, such as the photon mass in the AH field theory. In this work, instead, we discuss the behavior at transitions where gauge correlations are not critical and the gauge symmetry has only the role of reducing the scalar degrees of freedom that display critical behavior. A paradigmatic model in which this type of behavior is observed is the 3D lattice CP$^1$ model or, more generally, the lattice AH model with two-component complex scalar matter and U(1) compact gauge fields. We find that in this case the critical gauge-invariant modes are robust against GSB perturbations: for small values of the GSB coupling, the critical behavior (continuum limit) is the same as in the gauge-invariant model. Note that, even when gauge symmetry is explicitly broken, the existence of a gauge-symmetric limit is very useful to understand the physics of the model. Indeed, if one directly studied the gauge-broken model, a detailed analysis of its nonperturbative dynamics would be required to identify the nature of the critical degrees of freedom, which instead emerges naturally in the gauge-invariant limit.
Looking for gauge-invariant limits or deformations might thus be a useful general strategy to pursue in order to understand the origin of ``unusual'' orderings and transitions (the DT and TV lines in the present model, or in the models with Hamiltonians (\ref{hlaeffexpmix}) and (\ref{Haxial-eff})). In our study we consider the $N=2$ lattice AH model with Hamiltonian (\ref{Hdef}). The model has two phases \cite{PV-19-AH3d} separated by a continuous transition line driven by the condensation of the gauge-invariant operator $Q$ defined in Eq.~(\ref{qdef}). The nature of the transition is independent of the gauge coupling and is the same as that in the CP$^1$ model, which, in turn, is the same as that in the O(3) vector model. We add to the Hamiltonian (\ref{Hdef}) the GSB term $H_b=- w \sum_{{\bm x},\mu} {\rm Re} \,\lambda_{{\bm x},\mu}$, where the parameter $w$ quantifies the strength of the perturbation. A detailed numerical Monte Carlo (MC) study, supported by the analysis of some limiting cases, allows us to determine the phase diagram of the model, see Fig.~\ref{phdiagn2}. Its main features are summarized as follows. \begin{itemize} \item The parameter $w$ is an irrelevant RG perturbation of the gauge-invariant critical behavior (continuum limit). Therefore, even in the presence of a finite small GSB perturbation, the critical behavior is that of the gauge-invariant CP$^1$ universality class. The gauge-breaking effects of the linear perturbation disappear in the large-distance behavior. \item As sketched in Fig.~\ref{phdiagn2}, the phase diagram of the model with Hamiltonian (\ref{Hext}) presents three phases for $\gamma=0$: a disordered phase for small $J$ and two ordered phases for large $J$. There is a tensor-ordered phase for small values of $w$, in which the operator $Q$ defined in Eq.~(\ref{qdef}) condenses, while the vector correlation $G_V$ defined in Eq.~(\ref{vectorfu}) is short-ranged. For sufficiently large values of $w$, the ordered phase is characterized by the condensation of the vector matter field. The three phases are separated by three different transition lines that presumably meet at a multicritical point. It would be interesting to understand its nature, but we have not pursued this point further. Most of our results are obtained for the case $\gamma=0$, i.e., for the CP$^1$ model, but we expect the same phase diagram also for any finite $\gamma$, with strong crossover effects for large values of $\gamma$. \item Transitions along the DT line, which separates the disordered phase from the tensor-ordered phase, belong to the O(3) vector universality class, as does the transition for $w=0$. \item Transitions along the DV line, which separates the disordered phase from the vector-ordered phase, belong to the O(4) vector universality class \cite{BPV-21-gsb}. Note the effective symmetry enlargement at the transition---from U(2) to O(4)---that was predicted using RG arguments based on the corresponding LGW theory~\cite{BPV-21-gsb}. Of course, the symmetry enlargement is limited to the critical regime. Outside the critical region, one can only observe the global U(2) symmetry of the model. \item Transitions along the TV line, which separates the tensor- and vector-ordered phases, are associated with the condensation of the global phase of the scalar field, which is disordered in the tensor-ordered phase and ordered in the vector-ordered phase. Continuous transitions along the TV line belong to the O(2) or XY universality class.
Note that this mechanism realizes an unusual phenomenon: two ordered phases are separated by a continuous transition line. \item We have also studied the model in the presence of a gauge fixing. We considered the axial gauge fixing defined in Eq.~(\ref{axialgauge}). We find the phase diagram to be qualitatively similar to that found in the absence of a gauge fixing, see Fig.~\ref{phdiagn2}. In particular, for sufficiently small $w$, the critical behavior is the same as in the gauge-invariant theory. \end{itemize} We have not analyzed the behavior for $N>2$. Also in this case, we expect a phase diagram with three different phases, as for $N=2$. However, the nature of the transition lines might be different. Since the transitions in the gauge-invariant lattice CP$^{N-1}$ models are of first order for any $N>2$, the transitions along the DT line should be of first order. Transitions along the TV line should still belong to the XY universality class, if they are continuous, while transitions along the DV line are expected to belong to the O($2N$) vector universality class, with the effective enlargement of the symmetry of the critical modes from U($N$) to O($2N$). There are several interesting extensions of our work. One might consider GSB terms involving higher powers of the link variables, for instance $H_{b,q}=-w\sum_{{\bm x},\mu} {\rm Re}\,\lambda_{{\bm x},\mu}^q$, which leave a residual discrete ${\mathbb Z}_{q}$ gauge symmetry. In this case we may have a more complex phase diagram, where the residual gauge symmetry may play a role. One may also consider the same linear perturbation in the AH model with higher-charge compact matter fields \cite{BPV-20-q2}, which has a more complex phase diagram than the AH lattice model we consider here. Finally, we mention that several nonabelian gauge models with multiflavor scalar matter~\cite{BPV-19-sqcd,BPV-20-son,BFPV-21-adj} have transitions where gauge correlations do not become critical. Gauge symmetry is only relevant for defining the critical modes and, therefore, the symmetry-breaking pattern associated with the transition, as in the lattice AH model we have considered here. For this class of nonabelian models, one might investigate the role of similar GSB terms, for instance of a perturbation $H_b = - w \sum_{{\bm x},\mu} \hbox{Re Tr} \, U_{{\bm x},\mu}$, where $U_{{\bm x},\mu}$ are the gauge variables associated with the lattice links \cite{Wilson-74}, to understand whether a gauge-invariant continuum limit is still obtained for small values of $w$. \acknowledgments Numerical simulations have been performed on the CSN4 cluster of the Scientific Computing Center at INFN-PISA.
\renewcommand{\bibsection}{\subsubsection*{References}} \setcitestyle{authoryear, open={(},close={)}} \usepackage{mathtools} \usepackage{booktabs} \input{math_commands} \usepackage{tabularx,xcolor,graphicx,booktabs} \usepackage[linesnumbered,ruled,noend]{algorithm2e} \usepackage{hyperref} \definecolor{darkblue}{rgb}{0,0.08,0.45} \hypersetup{colorlinks=true,citecolor=darkblue,linkcolor=brown,pagebackref=true,urlcolor=black,bookmarks=true} \usepackage[accepted]{icml2021} \icmltitlerunning{Coordinate-wise Control Variates for Deep Policy Gradients} \begin{document} \twocolumn[ \icmltitle{Coordinate-wise Control Variates for Deep Policy Gradients} \icmlsetsymbol{equal}{*} \begin{icmlauthorlist} \icmlauthor{Yuanyi Zhong}{to} \icmlauthor{Yuan Zhou}{to,go} \icmlauthor{Jian Peng}{to} \end{icmlauthorlist} \icmlaffiliation{to}{Department of Computer Science, UIUC} \icmlaffiliation{go}{Department of Industrial and Enterprise Systems Engineering, UIUC} \icmlcorrespondingauthor{Yuanyi Zhong}{[email protected]} \icmlkeywords{Machine Learning, ICML, Reinforcement Learning, Policy Gradients} \vskip 0.3in ] \printAffiliationsAndNotice{} \thispagestyle{plain} \pagestyle{plain} \begin{abstract} The control variates (CV) method is widely used in policy gradient estimation to reduce the variance of the gradient estimators in practice. A control variate is applied by subtracting a baseline function from the state-action value estimates. The variance-reduced policy gradient is then expected to lead to higher learning efficiency. Recent research on control variates with deep neural net policies mainly focuses on scalar-valued baseline functions. The effect of vector-valued baselines is under-explored. This paper investigates variance reduction with coordinate-wise and layer-wise control variates constructed from vector-valued baselines for neural net policies. We present experimental evidence suggesting that lower variance can be obtained with such baselines than with the conventional scalar-valued baseline. We demonstrate how to equip the popular Proximal Policy Optimization (PPO) algorithm with these new control variates. We show that the resulting algorithm with proper regularization can achieve higher sample efficiency than scalar control variates in continuous control benchmarks. \end{abstract} \input{sec_intro} \input{sec_related} \input{sec_method} \input{sec_exp} \section{Conclusion} We propose the coordinate-wise control variates for variance reduction in deep policy gradient methods. We demonstrate that more variance reduction can be obtained with such CVs compared to the conventional scalar CVs in a case study. We also show that this new family of CVs improves the sample efficiency of PPO in continuous control benchmarks. In the future, we hope to combine the technique with other CVs, choose the hyper-parameters automatically, understand the properties of the coordinate-wise CV better, and benchmark on real-world robotic problems. \section{Experiment} We first conduct a case study to demonstrate the benefit of treating the coordinates differently and to verify the variance reduction with more sophisticated CVs. We then demonstrate that PPO equipped with the proposed CVs has better sample efficiency in simulated control tasks.
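Before the case study, it is worth making the mechanism concrete on a toy problem. The following self-contained Python sketch is ours and purely illustrative: the paper fits state-dependent neural-net baselines via Eq.~\ref{eq:l_baseline}, whereas here we use constant baselines computed with the usual optimal-baseline formula $b^* = {\rm E}[w\,Q]/{\rm E}[w]$ (derived assuming the score has zero mean), with weights $w$ given by the squared score entries. It compares the empirical gradient variance obtained with no baseline, one scalar baseline shared by all coordinates, and one baseline per coordinate, for a linear-Gaussian policy and an arbitrary synthetic return signal. \begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
d, n = 8, 100_000
theta = rng.normal(size=d)
s = rng.normal(size=(n, d))              # states
a = s @ theta + rng.normal(size=n)       # actions ~ N(theta^T s, 1)
q = (a - 1.0) ** 2 + s[:, 0]             # arbitrary "return" signal
score = (a - s @ theta)[:, None] * s     # grad_theta log pi, (n, d)

def total_var(b):                        # b broadcasts against (n, d)
    g = score * (q[:, None] - b)
    return g.var(axis=0).sum()           # summed per-coordinate variance

w = score ** 2                           # weights: squared score entries
b_scalar = (w.sum(axis=1) * q).mean() / w.sum(axis=1).mean()
b_coord = (w * q[:, None]).mean(axis=0) / w.mean(axis=0)
print("no CV :", total_var(0.0))
print("scalar:", total_var(b_scalar))    # one baseline for all coords
print("coord :", total_var(b_coord))     # one baseline per coordinate
\end{verbatim} In this toy setting the coordinate-wise baselines give the lowest total variance, since each coordinate minimizes its own weighted objective; this mirrors the comparisons reported below.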
\subsection{Case study in Walker2d} \begin{figure}[tb] \centering \includegraphics[width=\linewidth]{fig/fig_g.pdf} \caption{The histograms of $\frac1d \sum_{j=1}^d |\frac{\partial \log \pi_\theta(a_i|s_i)}{\partial \theta_j}|^2$ (left) and $\frac1n \sum_{i=1}^n |\frac{\partial \log \pi_\theta(a_i|s_i)}{\partial \theta_j}|^2$ (right). The squared gradient norms exhibit a tailed distribution. The coordinate-wise CV may benefit from using these norms as weights in a weighted regression.} \label{fig:grad_norm} \vspace{-10pt} \end{figure} \begin{table}[tb!] \centering \small \caption{The empirical variance of different policy gradient estimators derived from different control variates. The smaller the better. The gradients are computed with a fixed policy trained for 300 steps in the Walker2d environment. Value CV refers to using the value function learned previously during RL training as the baseline. The other CVs (Value CV (refit), Scalar, Layer-wise and Coord.-wise) are \emph{refitted} with a \emph{new sample} of $10^5$ timesteps with the policy function fixed. Value CV (refit) refers to refitting a new value function as the baseline. Scalar, Layer-wise and Coord.-wise refer to using a single baseline function for all coordinates, one baseline per neural net layer, and one baseline per coordinate, respectively, as described in Sec.~\ref{sec:method}.} \label{tab:var} \begin{tabular}{l|l} \toprule \bfseries{CV Method} & \bfseries{Variance estimates} \\ \midrule Without CV & $ 33355 \quad\; [32844, 33878] $ \\ Value CV & $ 34.759 \quad [34.226, 35.304] $ \\ Value CV (refit)& $ 22.333 \quad [21.990, 22.683] $ \\ Scalar CV & $ 21.856 \quad [21.521, 22.199] $ \\ Layer-wise CV & $ 21.405 \quad [21.077, 21.741] $ \\ Coord.-wise CV & $ \textbf{21.259} \quad [20.933, 21.593] $ \\ \bottomrule \end{tabular} \end{table} We take the Walker2d-v2 environment in the OpenAI Gym \citep{brockman2016openai} benchmark suite as the testbed to study the variances of policy gradient estimators with different control variates. Specifically, we pull out a policy checkpoint saved during the training of the regular PPO algorithm, then fit different baseline functions according to Eq.~\ref{eq:l_baseline} (with $\lambda=0.1, \rho=0$) for the fixed policy with a large sample ($10^5$ timesteps). After that, we report the empirical variance measured on another large roll-out sample ($10^5$ timesteps). We follow the standard approach of estimating the variance by $\widehat{\sigma^2} = \frac1{n-1} \sum_{i=1}^n (g_i - \bar{g})^2 $ and computing the 95\% confidence intervals from the Chi-Square distribution. Tab.~\ref{tab:var} shows that more variance reduction can be achieved with more sophisticated control variates for a fixed policy. In particular, the value function baseline is already able to offer a significant amount of variance reduction compared to the policy gradient without CV (Eq.~\ref{eq:g_0}). The layer-wise and coordinate-wise control variates can further reduce the variance by a decent amount. In the supplementary material, we also compare the Mean Squared Error (MSE) of different gradient estimators, as they could be biased in practice. We find that more advanced CVs can also reduce the MSE of the gradients. These observations give us hope that the proposed control variates are \emph{capable} of reducing variances and might lead to improved sample efficiency, which we study in the next subsection. One main difference between coordinate-wise CV and value CV is that the former leverages a weighted regression objective.
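As a rough illustration of how the per-sample, per-coordinate weights $\big(\frac{\partial}{\partial\theta_j}\log\pi_\theta(a_i|s_i)\big)^2$ can be obtained, the following PyTorch sketch computes them for a small Gaussian MLP policy with a naive per-sample loop. This is our sketch, not the authors' code; the network sizes and the random placeholder data are assumptions (Walker2d-like dimensions from the supplementary tables), and recent PyTorch versions could batch the loop with \texttt{torch.func.vmap} over \texttt{torch.func.grad}.
\begin{verbatim}
import torch

torch.manual_seed(0)
obs_dim, act_dim, n = 17, 6, 256              # Walker2d-like sizes (assumed)
policy = torch.nn.Sequential(                 # stand-in MLP policy mean
    torch.nn.Linear(obs_dim, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, act_dim))
log_std = torch.zeros(act_dim, requires_grad=True)
params = list(policy.parameters()) + [log_std]

states = torch.randn(n, obs_dim)              # placeholder rollout data
actions = torch.randn(n, act_dim)

rows = []
for s, a in zip(states, actions):
    dist = torch.distributions.Normal(policy(s), log_std.exp())
    logp = dist.log_prob(a).sum()             # log pi(a|s) for one sample
    grads = torch.autograd.grad(logp, params)
    flat = torch.cat([g.reshape(-1) for g in grads])
    rows.append(flat.pow(2))                  # squared derivatives, length d
W = torch.stack(rows)                         # (n, d) matrix of weights

per_sample = W.mean(dim=1)                    # averaged over coords (left histogram)
per_coord = W.mean(dim=0)                     # averaged over samples (right histogram)
\end{verbatim}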
Therefore, we plot the histograms of the weights, i.e., the squared derivative norms of $\log \pi_\theta$ in Fig.~\ref{fig:grad_norm}. Intuitively, weighing the baseline fitting errors by these gradient/derivative norms focuses the training effort onto data points and coordinates that have the highest impact on the policy gradient variance. \begin{figure*}[tb] \centering \includegraphics[width=1.0\linewidth]{fig/fig_mujoco.pdf} \caption{The episodic returns during training, averaged over 4 random seeds. The shades show $\pm 1$ standard error. The horizontal axis represents the number of environment timesteps. We ran 1M timesteps in total except for Humanoid, which requires 3M timesteps. In each subplot, other than the control variates, the rest of the PPO algorithm was the same. The curves are resampled to 512 points and smoothed with a window size 15.} \label{fig:mujoco} \vspace{-10pt} \end{figure*} \begin{table*}[tb!] \centering \caption{The mean episodic returns of \emph{all episodes} during training by different methods on the MuJoCo continuous control tasks. We compare (PPO plus) the value function CV (the original PPO), the fitted optimal scalar-valued CV, the proposed fitted layer-wise and coordinate-wise CVs. The results are averaged over 4 random seeds, and $\pm$ denotes the 1 standard error. The last column `Improve.' lists the normalized relative improvement over the first row (PPO+Value CV) averaged across the environments. } \label{tab:mujoco} \small \setlength{\tabcolsep}{2.4pt} \begin{tabular}{l|lllllll|r} \toprule {CV} & {Ant} & {HalfCheetah} & {Hopper} & {Humanoid} & {HumanoidStand} & {Swimmer} & {Walker2d} & {Improve.} \\ \midrule Value & $156.2 \pm 5.0$ & $1418.3 \pm 128.3$ & $1297.9 \pm 115.9$ & $860.4 \pm 22.1$ & $115451.0 \pm 4934.5$ & $84.4 \pm 1.7$ & $1159.1 \pm 80.4$ & $+0\%$ \\ Scalar & $210.3 \pm 38.5$ & $1399.9 \pm 115.5$ & $1160.3 \pm 126.2$ & $1034.9 \pm 48.0$ & $122945.1 \pm 2464.2$ & $82.2 \pm 4.8$ & $1337.6 \pm 165.2$ & $+8.9\%$ \\ \midrule Layer & $189.5 \pm 50.7$ & $1833.4 \pm 473.6$ & $1395.8 \pm 97.3$ & $902.8 \pm 39.9$ & $120230.0 \pm 3109.1$ & $84.3 \pm 5.5$ & $1325.7 \pm 154.3$ & $+11.6\%$ \\ Coord & $323.7 \pm 13.8$ & $1694.5 \pm 250.3$ & $1408.6 \pm 28.1$ & $1056.1 \pm 97.4$ & $128267.3 \pm 4620.5$ & $91.0 \pm 4.5$ & $1455.9 \pm 73.7$ & $+28.9\%$ \\ \bottomrule \end{tabular} \caption{The mean episodic returns of the \emph{last 100 episodes} rather than all episodes.} \label{tab:mujoco100} \setlength{\tabcolsep}{1.7pt} \begin{tabular}{l|lllllll|r} \toprule {CV} & {Ant} & {HalfCheetah} & {Hopper} & {Humanoid} & {HumanoidStand} & {Swimmer} & {Walker2d} & {Improve.} \\ \midrule Value & $755.3 \pm 58.9$ & $2110.1 \pm 341.2$ & $2420.0 \pm 216.0$ & $2569.2 \pm 184.8$ & $140955.1 \pm 6520$ & $119.9 \pm 1.3$ & $3286.5 \pm 532.7$ & $+0\%$ \\ Scalar & $998.7 \pm 144.5$ & $1991.1 \pm 263.0$ & $2378.4 \pm 460.2$ & $3038.0 \pm 306.1$ & $150306.6 \pm 2425$ & $112.6 \pm 5.0$ & $3688.2 \pm 487.6$ & $+8.0\%$ \\ \midrule Layer & $966.8 \pm 152.2$ & $2735.7 \pm 757.7$ & $2849.2 \pm 113.7$ & $2911.0 \pm 172.1$ & $153919.0 \pm 2329$ & $111.1 \pm 5.3$ & $3789.4 \pm 498.0$ & $+15.1\%$ \\ Coord & $1337.3 \pm 169.3$ & $2449.7 \pm 460.6$ & $2956.7 \pm 158.5$ & $3599.9 \pm 451.9$ & $151228.4 \pm 5796$ & $119.5 \pm 4.1$ & $4148.7 \pm 282.1$ & $+27.0\%$ \\ \bottomrule \end{tabular} \end{table*} \subsection{Continuous control benchmarks} As described in Section \ref{sec:method_ppo}, the PPO algorithm leverages an adaptive clipped objective function and usually more sophisticated 
optimizers such as ADAM \citep{kingma2014adam}. Does the variance reduction from the more sophisticated control variates shown in the previous subsection translate into better sample efficiency in practice? We investigate this question on the popular MuJoCo-based \citep{todorov2012mujoco} continuous control benchmarks provided by OpenAI Gym \citep{brockman2016openai}. We cover 7 locomotion environments (Gym `-v2' versions): Ant, HalfCheetah, Hopper, Humanoid, HumanoidStandup, Swimmer, Walker2d. The Humanoid experiments are run for 3 million timesteps due to the task's complexity, while the other experiments are run for 1 million timesteps. Besides the different control variates, other components of PPO remain the same across different methods within each subplot. We tune $\lambda$ and $\rho$ for \emph{each} environment. We refer the readers to the supplementary material for the details, the time and space costs, and the ablation study on the hyperparameters. Fig.~\ref{fig:mujoco} and Tab.~\ref{tab:mujoco}, \ref{tab:mujoco100} demonstrate the advantages of the more complex forms of control variates: They yield faster learning, particularly in hard environments such as Ant and Humanoid, even though the differences in Tab.~\ref{tab:var} are seemingly small. Unsurprisingly, the fitted optimal scalar-valued baseline outperforms the vanilla value function baseline variant. We observe that the most elaborate coordinate-wise control variate leads to the highest sample efficiency, and both the fitted layer-wise and coordinate-wise control variates outperform the value or scalar control variates. The relative improvement over the value function CV is $\ge 27\%$ for the coordinate-wise CV and $\ge 11\%$ for the layer-wise CV, measured in the normalized episodic returns. On the downside, the more complex control variates require more memory and computation to train. They are also prone to over-fitting the current trajectory sample at hand due to their higher number of degrees of freedom. However, we show it is possible to combat over-fitting and achieve good overall performance by utilizing the regularization introduced in Eq.~\ref{eq:l_baseline}. Overall, we think the best choice of control variates depends on the trade-off between sample efficiency, computation budget, and hyper-parameter tuning effort. It can be worthwhile to explore the family of coordinate-wise control variates in policy gradient methods. \section{Introduction} Policy gradient methods have been empirically very successful in many real-life reinforcement learning (RL) problems, including high-dimensional continuous control and video games \citep{schulman2017proximal,mnih2016asynchronous,sutton2000policy,williams1992simple}. Compared to alternative approaches to RL, policy gradient methods have several properties that are especially appealing to practitioners. For example, they directly optimize the performance criterion and hence can continuously produce better policies. They do not require knowledge of the environment dynamics (model-free) and thus have broad applicability. They are typically on-policy, i.e., estimating the gradients with on-policy rollout trajectories, and thus are easy to implement without the need for a replay buffer. However, in practice, policy gradient methods have long suffered from the notorious issue of high gradient variance, since Monte Carlo methods are used in policy gradient estimation \citep{sutton2000policy,sutton2018reinforcement}.
Optimization theory dictates that the convergence speed of stochastic gradient descent crucially depends on the variance of the stochastic gradient \citep{ghadimi2016mini}. In principle, a lower-variance policy gradient estimator should lead to higher sample efficiency. Therefore, researchers have long investigated various ways to reduce the variance of policy gradient estimators. The control variates (CV) method, a well-established variance reduction technique in Statistics \citep{rubinstein1985efficiency,nelson1990control}, is a representative approach \citep{mnih2016asynchronous,schulman2015trust,schulman2017proximal,sutton2018reinforcement}. In order to reduce the variance of an unbiased Monte Carlo estimator, a control variate with known expectation under the randomness is subtracted from (and the expectation added back to) the original estimator. The resulting estimator is still unbiased but has lower variance than the original estimator, provided the control variate correlates with the original estimator. Researchers in reinforcement learning utilize a baseline function to construct the control variate. The term ``baseline function'' refers to the function that is subtracted from the return estimates when computing policy gradients. Consider a state-dependent baseline function $b({\mathbf{s}})$; then the gradient estimator is $\nabla \log \pi({\mathbf{s}},{\mathbf{a}}) \left( \hat{Q}({\mathbf{s}}, {\mathbf{a}}) - b({\mathbf{s}}) \right)$. Here, the baseline $b$ corresponds to the control variate $ \nabla \log \pi({\mathbf{s}},{\mathbf{a}}) b({\mathbf{s}}) $. A frequently used baseline is the approximate state value function $b({\mathbf{s}}) = V({\mathbf{s}})$ due to its convenience and simplicity. The value function baseline is not optimal in terms of variance reduction, though. The optimal state-dependent baseline is known to be the squared-gradient-norm weighted average of the state-action values \citep{weaver2001optimal,greensmith2004variance,zhao2011analysis}. Baseline functions and control variates have contributed to the success of policy gradients in modern deep RL \citep{schulman2017proximal,mnih2016asynchronous,chung2020beyond} and many works have followed up. For example, one line of follow-up work designs state-action-dependent baselines rather than state-only-dependent baselines to achieve further variance reduction \cite{liu2018action,grathwohl2018backpropagation,wu2018variance}. Nevertheless, prior works only consider scalar-valued functions as baselines. Scalar-valued baselines ignore the differences between gradient coordinates. The fact that the policy gradient estimator is a random vector rather than a single random variable has been largely overlooked. Since the gradient is a vector, it is possible to employ a vector-valued function as the baseline to construct coordinate-wise control variates. In other words, it is feasible to apply a separate control variate to each coordinate of the gradient vector in order to achieve even further variance reduction. In this paper, we investigate this new possibility of vector-valued baseline functions through extensive experiments on continuous control benchmarks. These new families of coordinate-wise and group-wise (layer-wise) control variates derived from the vector-valued baselines respect the differences between the parameter coordinates and sample points. We equip the popular Proximal Policy Optimization (PPO) algorithm \citep{schulman2017proximal} with the new control variates.
The experiment results demonstrate that indeed further variance reduction and higher learning efficiency can be obtained with these more sophisticated control variates than with the conventional scalar control variates. \section{Method} \label{sec:method} We review variance reduction with control variates in policy gradient methods, introduce the coordinate-wise control variates, and demonstrate how to integrate them with PPO. \subsection{Background} Reinforcement learning (RL) is often formalized as maximizing the expected cumulative return obtained by an agent in a Markov decision process $({\mathsf{S}}, {\mathsf{A}}, {\mathcal{P}}, r)$. Here the set ${\mathsf{S}}$ is the state space, the set ${\mathsf{A}}$ is the action space, ${\mathcal{P}}({\mathbf{s}}_{t+1} | {\mathbf{s}}_t, {\mathbf{a}}_t)$ is the (probabilistic) transition function (dynamics), and $r({\mathbf{s}}, {\mathbf{a}})$ is the reward function. The infinite-horizon $\gamma$-discounted return setting defines the expected cumulative return as \begin{equation} J = \mathbb{E} \left[ \sum_{t=0}^{\infty} \gamma^t r({\mathbf{s}}_t, {\mathbf{a}}_t) \right] . \end{equation} The policy gradient method is a popular class of model-free RL methods that directly optimizes the return of a parameterized policy $\pi_\theta$, by treating RL as a stochastic optimization problem and applying gradient ascent. The gradient is given by the Policy Gradient Theorem \citep{sutton2000policy} as in Eq.~\ref{eq:pg}. \begin{align} \label{eq:pg} \nabla_\theta J(\theta) = \mathbb{E}_{{\mathbf{s}},{\mathbf{a}} \sim \pi_\theta} \nabla_\theta \log \pi_\theta({\mathbf{a}}|{\mathbf{s}}) Q_{\pi_\theta}({\mathbf{s}}, {\mathbf{a}}) . \end{align} The shorthand notation ${\mathbf{s}},{\mathbf{a}} \sim \pi_\theta$ denotes that the expectation is taken over the state-action distribution induced by $\pi_\theta$. Specifically, ${\mathbf{s}}$ is drawn from the (un-normalized) discounted state distribution $\sum_{t=0}^\infty \gamma^t \Pr[{\mathbf{s}}_t = s]$\footnote{If we instead define the normalized state distribution by multiplying by $1-\gamma$, there needs to be a $\frac{1}{1-\gamma}$ coefficient in front of the policy gradient in Eq.~\ref{eq:pg}.} and ${\mathbf{a}}|{\mathbf{s}}$ is drawn from the conditional action distribution $\pi_\theta(a|{\mathbf{s}})$. $Q_{\pi_\theta}$ refers to the state-action value function associated with the policy $\pi_\theta$. Given the Q value estimates $\hat{Q}_{\pi_\theta}$, one can easily construct the following Monte Carlo estimator of the policy gradient with a single state-action pair (or, with $n$ i.i.d. state-action pairs, the average of such estimators). \begin{align} \label{eq:g_0} {\mathbf{g}}^0 = \nabla_{\theta} \log \pi_\theta({\mathbf{a}}|{\mathbf{s}}) \hat{Q}_{\pi_\theta}({\mathbf{s}}, {\mathbf{a}}) \\ \text{ where } {\mathbf{s}},{\mathbf{a}} \sim \pi_\theta \nonumber \end{align} Optimization theory predicts that the convergence rate of stochastic gradient descent depends crucially on the variance of the gradient estimator. For example, \citet{ghadimi2016mini} show that the iteration complexity of mirror descent methods to an ${\epsilon}$-approximate stationary point in non-convex problems scales as $ O\left( \frac{ \sigma^2 }{{\epsilon}^2} \right) $ where $\sigma^2 = \Tr \mathrm{Var}[\text{gradient}]$. Therefore, for decades, researchers have studied various ways to reduce the variance of policy gradient estimators. The control variates (CV) method is a prominent one.
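Before stating the generic construction, a toy Monte Carlo example (ours, not from the paper; the integrand and the control variate are arbitrary choices) may make the mechanism concrete:
\begin{verbatim}
# Estimate E[f(X)] for X ~ Uniform(0,1) with f(x) = exp(x), using the
# control variate c(x) = x whose expectation E[c] = 1/2 is known exactly.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(size=100_000)
g = np.exp(x)                  # samples of the original unbiased estimator
g_cv = g - x + 0.5             # g' = g - c + E[c], still unbiased

print(g.mean(), g.var())       # ~1.718, variance ~0.24
print(g_cv.mean(), g_cv.var()) # ~1.718, variance ~0.04 (reduced)
\end{verbatim}
Because $c$ correlates strongly with $g$, subtracting it cancels much of the sampling noise while leaving the mean untouched.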
In its generic form, suppose $g$ is the original unbiased Monte Carlo estimator and $c$ is a control variate which correlates with $g$ and has known expectation; then $g' = g - c + \mathbb{E}[c]$ is an unbiased estimator of the same quantity as $g$ but with lower variance. In the policy optimization context, a new estimator is constructed in Eq.~\ref{eq:g_b} by subtracting, for instance, a state-dependent baseline function $b({\mathbf{s}})$ from the Q value estimation term. This effectively constructs the control variate $\nabla_{\theta} \log \pi_\theta({\mathbf{a}}|{\mathbf{s}}) b({\mathbf{s}}) $. \begin{align} \label{eq:g_b} {\mathbf{g}}^b = \nabla_{\theta} \log \pi_\theta({\mathbf{a}}|{\mathbf{s}}) (\hat{Q}_{\pi_\theta}({\mathbf{s}}, {\mathbf{a}}) - b({\mathbf{s}})) \\ \text{ where } {\mathbf{s}},{\mathbf{a}} \sim \pi_\theta \nonumber \end{align} This is valid because, for any real-valued function $b({\mathbf{s}})$, the expectation of the control variate is \begin{align*} &\; \mathbb{E}_{{\mathbf{s}}} \left[ \mathbb{E}_{{\mathbf{a}} | {\mathbf{s}} \sim \pi_\theta} \left[ \nabla_{\theta} \log \pi_\theta(a | {\mathbf{s}}) b({\mathbf{s}}) \right] \right] \\ =&\; \mathbb{E}_{{\mathbf{s}}} \left[ b({\mathbf{s}}) \int_{a} \pi_\theta(a | {\mathbf{s}}) \nabla_{\theta} \log \pi_\theta(a | {\mathbf{s}}) \dif a \right] \\ =&\; \mathbb{E}_{{\mathbf{s}}} \left[ b({\mathbf{s}}) \int_{a} \nabla_{\theta} \pi_\theta(a | {\mathbf{s}}) \dif a \right] \\ =&\; \mathbb{E}_{{\mathbf{s}}} \left[ b({\mathbf{s}}) \nabla_{\theta} \left( \int_{a} \pi_\theta(a | {\mathbf{s}}) \dif a \right) \right] \\ =&\; \mathbb{E}_{{\mathbf{s}}} \left[ b({\mathbf{s}}) \nabla_{\theta} 1 \right] = 0 . \end{align*} A common choice of baseline function in practice \citep{schulman2015trust,mnih2016asynchronous,schulman2017proximal} is the estimated version $\hat{V}_{\pi_\theta}$ of the expected value $ V_{\pi_\theta}({\mathbf{s}}) = \mathbb{E}_{{\mathbf{a}}|{\mathbf{s}} \sim \pi_\theta} Q_{\pi_\theta}({\mathbf{s}}, {\mathbf{a}}) $ through function approximation, due to its simplicity. We denote this estimator with the value function baseline by ${\mathbf{g}}^v$ in Eq.~\ref{eq:g_v}. \begin{align} \label{eq:g_v} {\mathbf{g}}^v = \nabla_{\theta} \log \pi_\theta({\mathbf{a}}|{\mathbf{s}}) (\hat{Q}_{\pi_\theta}({\mathbf{s}}, {\mathbf{a}}) - \hat{V}_{\pi_\theta}({\mathbf{s}})) \\ \text{ where } {\mathbf{s}},{\mathbf{a}} \sim \pi_\theta \nonumber \end{align} \subsection{Minimum variance control variates} The variance of a $d$-dimensional policy gradient estimator ${\mathbf{g}}$ can be defined as the trace of its covariance matrix, i.e., \begin{equation} \label{eq:var_g} {\mathbb{V}}[{\mathbf{g}}] = \Tr \mathrm{Var}[{\mathbf{g}}] = \sum_{j=1}^d \mathrm{Var}[{\mathbf{g}}_j] = \sum_{j=1}^d \left( \mathbb{E}[{\mathbf{g}}_j^2] - \mathbb{E}[{\mathbf{g}}_j]^2 \right) \end{equation} A minimum variance control variate can be found by explicitly minimizing ${\mathbb{V}}[{\mathbf{g}}^b]$ with respect to $b$. Upon inspection, one can notice that only the first term of Eq.~\ref{eq:var_g} depends on $b$ while the second term does not. By minimizing $\sum_{j=1}^d \mathbb{E}[{{\mathbf{g}}^b_j}^2]$ w.r.t. $b$, we can easily arrive at the following optimal scalar-valued state-dependent baseline \citep{weaver2001optimal,greensmith2004variance,zhao2011analysis}.
\begin{equation} \label{eq:g_squared} \begin{aligned} \sum_{j=1}^d \mathbb{E}[{{\mathbf{g}}^b_j}^2] & = \mathbb{E} \sum_{j=1}^d \big( \frac{\partial}{\partial \theta_j} \log \pi_\theta({\mathbf{a}}|{\mathbf{s}}) \big)^2 \big( \hat{Q}_{\pi_\theta}({\mathbf{s}}, {\mathbf{a}}) - b({\mathbf{s}}) \big)^2 \\ & = \mathbb{E}\left[ \| \nabla_{\theta} \log \pi_\theta({\mathbf{a}} | {\mathbf{s}}) \|^2 \big( \hat{Q}_{\pi_\theta}({\mathbf{s}}, {\mathbf{a}}) - b({\mathbf{s}}) \big)^2 \right] . \end{aligned} \end{equation} \begin{equation} \label{eq:best_scalar_b} \begin{aligned} &\text{For any state ${\mathbf{s}}$, } \frac{\partial}{\partial b({\mathbf{s}})} \sum_{j=1}^d \mathbb{E}[{{\mathbf{g}}^b_j}^2] = 0 \implies\\ &b^*({\mathbf{s}}) = \frac{\mathbb{E}_{{\mathbf{a}}|{\mathbf{s}} \sim \pi_\theta} \left[ \| \nabla_{\theta} \log \pi_\theta({\mathbf{a}} | {\mathbf{s}}) \|^2 \hat{Q}_{\pi_\theta}({\mathbf{s}}, {\mathbf{a}}) \right] } {\mathbb{E}_{{\mathbf{a}}|{\mathbf{s}} \sim \pi_\theta} \| \nabla_{\theta} \log \pi_\theta({\mathbf{a}} | {\mathbf{s}}) \|^2 } . \end{aligned} \end{equation} \subsection{Coordinate-wise and layer-wise control variates} Now, is Eq.~\ref{eq:best_scalar_b} truly optimal? The answer is yes if we restrict ourselves to scalar-valued baseline functions $b({\mathbf{s}}): {\mathsf{S}} \mapsto {\mathbb{R}}$, which essentially assign the same control variate to all coordinates of the gradient vector. However, scalar-valued baselines ignore the differences between coordinates. If we relax the baseline function space to vector-valued functions, namely $c({\mathbf{s}}) = \left( c_1({\mathbf{s}}), c_2({\mathbf{s}}), \ldots, c_d({\mathbf{s}}) \right): {\mathsf{S}} \mapsto {\mathbb{R}}^d$ where $d$ is the dimension of the gradient, the gradient variance could potentially be further reduced. Consider the following family of policy gradient estimators with coordinate-wise control variates, \begin{equation} \begin{aligned} \label{eq:g_coord} {\mathbf{g}}^c = ({\mathbf{g}}^c_1, {\mathbf{g}}^c_2, \ldots, {\mathbf{g}}^c_d) , \\ {\mathbf{g}}^c_j = \frac{\partial}{\partial \theta_j} \log \pi_\theta({\mathbf{a}}|{\mathbf{s}}) \left( \hat{Q}_{\pi_\theta}({\mathbf{s}}, {\mathbf{a}}) - c_j({\mathbf{s}}) \right) \\ \text{ where } {\mathbf{s}},{\mathbf{a}} \sim \pi_\theta . \end{aligned} \end{equation} We assign a separate baseline function $c_j({\mathbf{s}}), j=1\ldots d$ to each coordinate of the gradient vector, or equivalently, a vector-valued baseline $c({\mathbf{s}})$ to the whole gradient. Since the space of feasible solutions is enlarged, the minimum variance estimator in this family is able to achieve a lower variance than the scalar-valued counterpart. Without any restrictions on the function space containing $c_j$, the optimal vector-valued baseline admits a form similar to Eq.~\ref{eq:best_scalar_b}, except that now the weights are the squared partial derivatives instead of the squared gradient norm: $$ c_j^*({\mathbf{s}}) = \frac{\mathbb{E}_{{\mathbf{a}}|{\mathbf{s}} \sim \pi_\theta} \left[ \big( \frac{\partial \log \pi_\theta({\mathbf{a}} | {\mathbf{s}})}{\partial\theta_j} \big)^2 \hat{Q}_{\pi_\theta}({\mathbf{s}}, {\mathbf{a}}) \right] } {\mathbb{E}_{{\mathbf{a}}|{\mathbf{s}} \sim \pi_\theta} \big( \frac{\partial \log \pi_\theta({\mathbf{a}} | {\mathbf{s}})}{\partial\theta_j} \big)^2 } .
$$ When function approximation is used, we have the following proposition that formalizes the intuition and suggests that further variance reduction can be achieved with more sophisticated control variates. \begin{proposition}[Variance comparison] \label{prop:var} Let ${\mathsf{F}}: {\mathsf{S}} \mapsto {\mathbb{R}}$ be a real-valued function class for $b$, $c_j$ and $\hat{V}_{\pi_\theta}$. Let the optimal scalar-valued baseline be $$b^* = \argmin_{b\in {\mathsf{F}}} {\mathbb{V}}[{\mathbf{g}}^b]$$ and the optimal vector-valued baseline be $$c^* = \argmin_{c \in {\mathsf{F}}^d} {\mathbb{V}}[{\mathbf{g}}^{c}] . $$ The variances of the policy gradient estimators with the optimal coordinate-wise CV, the optimal scalar CV, and the value function CV satisfy the following relationship: \begin{equation*} {\mathbb{V}}[{\mathbf{g}}^{c^*}] \le {\mathbb{V}}[{\mathbf{g}}^{b^*}] \le {\mathbb{V}}[{\mathbf{g}}^v] . \end{equation*} \end{proposition} \begin{proof} The proposition is a straightforward consequence of the definition of minimizers. Comparing ${\mathbf{g}}^c$ and ${\mathbf{g}}^b$: since $c_j$ and $b$ belong to the same basic function class, $c' = (b^*,b^*, \ldots, b^*)$ is a special case such that $c' \in {\mathsf{F}}^d$. This means the coordinate-wise CV is more expressive than the scalar-valued CV. Furthermore, the minimizer $c^*$ in this more expressive function class achieves an objective value no larger than that of $c'$. Hence, ${\mathbb{V}}[{\mathbf{g}}^{c^*}] \le {\mathbb{V}}[{\mathbf{g}}^{c'}] = {\mathbb{V}}[{\mathbf{g}}^{b^*}]$. Comparing ${\mathbf{g}}^{b^*}$ and ${\mathbf{g}}^v$: this is trivially true because $ {\mathbb{V}}[{\mathbf{g}}^{b^*}] = \min_{b\in {\mathsf{F}}} {\mathbb{V}}[{\mathbf{g}}^b] \le {\mathbb{V}}[{\mathbf{g}}^b], \forall b \in {\mathsf{F}} $ and $\hat{V}_{\pi_\theta} \in {\mathsf{F}}$ by assumption. \end{proof} A middle ground between the scalar-valued and the fully coordinate-wise control variates is the group-wise version. The coordinates of the gradient vector can be partitioned into disjoint groups. Within each group, the same control variate is shared. A natural choice of grouping strategy is to partition by the layers in a neural network (i.e., parameter tensors). We refer to this variant as the \textbf{Layer-wise CV}. In practice, we choose neural nets as the baseline function class in our experiments. We build a fully-connected neural net with the same architecture as the value function except for the last layer. For the coordinate-wise baseline function, the last layer outputs $d$ predictions where $d$ is the number of policy parameter coordinates. For the layer-wise baseline function, the number of outputs equals the number of layer weight tensors. \subsection{Fitting a neural vector-valued baseline} We use the following loss function to train the vector-valued baseline function $c_{\phi} = (\ldots, c_{j,\phi}, \ldots)$ parameterized by $\phi$. The general form of this loss takes two flexible hyper-parameters $\lambda$ and $\rho$, where $\lambda$ interpolates between fitting a value function and minimizing variance, and $\rho$ imposes a proximal regularization term inspired by the original PPO. Intuitively, this loss still fits $c$ to the Q values, but weighted by the squared derivatives. A coordinate $j \in \{1,2,\ldots,d\}$ or data point $i \in \{1,2,\ldots,n\}$ ($n$ is the number of state-action pairs) with large derivative values may get more weight, so as to reduce the overall gradient variance.
\begin{equation} \label{eq:l_baseline} \begin{aligned} L^{\text{baseline}} = & \frac1{nd} \sum_{i=1}^n \sum_{j=1}^d \bigg\{ \big( \hat{Q}_{\pi_\theta}({\mathbf{s}}_i, {\mathbf{a}}_i) - c_{j,\phi}({\mathbf{s}}_i) \big)^2 \\ & \quad \underbrace{ \cdot \Big( (1-\lambda) \overline{ \big( \frac{\partial}{\partial \theta_j} \log \pi_\theta({\mathbf{a}}_i|{\mathbf{s}}_i) \big)^2 } + \lambda \Big) }_{\text{$\lambda$-interpolated Empirical Variance}} \\ & + \underbrace{ \rho \cdot \big( c_{j,\phi}({\mathbf{s}}_i) - c_{j,\phi_{\text{old}}}({\mathbf{s}}_i) \big)^2 }_{\text{Proximal Regularization}} \bigg\} \end{aligned} \end{equation} The over-line in $\overline{ \big( \frac{\partial}{\partial \theta_j} \log \pi_\theta({\mathbf{a}}_i|{\mathbf{s}}_i) \big)^2 }$ indicates that the squared derivative is normalized to have mean 1 in a mini-batch, namely $\overline{ \big( \frac{\partial}{\partial \theta_j} \log \pi_\theta({\mathbf{a}}_i|{\mathbf{s}}_i) \big)^2 } = \frac{\big( \frac{\partial}{\partial \theta_j} \log \pi_\theta({\mathbf{a}}_i|{\mathbf{s}}_i) \big)^2}{\frac1n \sum_i \big( \frac{\partial}{\partial \theta_j} \log \pi_\theta({\mathbf{a}}_i|{\mathbf{s}}_i) \big)^2}$. In this way, the loss function is properly normalized: It is only affected by the relative difference between coordinates, not by the absolute scale of the gradient. As a special case of Eq.~\ref{eq:l_baseline}, when $\lambda=1$ there is no difference between the losses for different $j$, and the objective degenerates into fitting a conventional value function approximation. When $\lambda \to 0$, the loss stays more faithful to the definition of gradient variance. Since we perform mini-batch gradient descent to optimize Eq.~\ref{eq:l_baseline} in practice, using the original definition of variance as the objective might not be desirable when the mini-batch sample is small. The new $c_\phi$ might easily over-fit to the current sample due to the large number of free parameters. The proximal term addresses this concern. It is inspired by PPO, as the original PPO employs a similar kind of proximal method to update the value function approximator \citep{schulman2017proximal}. When $\rho$ is large, the regularization term encourages the new baseline function to stay close to the old one and induces conservative updates to the baseline parameters.
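A minimal PyTorch sketch of this loss, reflecting our reading of Eq.~\ref{eq:l_baseline} rather than the authors' released code, is given below. Here \texttt{W} is the $(n,d)$ matrix of squared per-coordinate derivatives of $\log\pi_\theta$ (as in the earlier sketch), \texttt{Q} holds the $(n,)$ return estimates, and \texttt{c\_net} is a hypothetical network mapping states to $d$ baseline outputs; the default $(\lambda,\rho)$ are example values from the supplementary tables.
\begin{verbatim}
import torch

def baseline_loss(c_net, c_net_old, states, Q, W, lam=0.01, rho=0.1):
    C = c_net(states)                             # (n, d) vector-valued baseline
    with torch.no_grad():
        C_old = c_net_old(states)                 # old baseline, held fixed
        W_bar = W / W.mean(dim=0, keepdim=True)   # normalize to batch mean 1
    weights = (1.0 - lam) * W_bar + lam           # lambda-interpolated weights
    fit = ((Q.unsqueeze(1) - C) ** 2 * weights).mean()
    prox = rho * ((C - C_old) ** 2).mean()        # proximal regularization
    return fit + prox
\end{verbatim}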
\begin{algorithm}[t] \small Initialize policy $\pi_\theta$, baseline $c_\phi$, and value function\; \For{$k = 0,1,2,\ldots$} { Collect rollout trajectory sample $D = \left\{s_i, a_i, r_i\right\}_{i=1}^n$ with policy $\pi_\theta$\; Compute TD($\lambda$) return estimates $\hat{Q}_{\pi_\theta}$ with $D$ and the fitted value function\; $\theta_\text{old} \leftarrow \theta, \phi_\text{old} \leftarrow \phi$\; Update the vector-valued baseline $c_\phi$ by minimizing Eq.~\ref{eq:l_baseline}: $\phi \leftarrow \argmin_{\phi} L^{\text{baseline}}(\phi, \phi_{\text{old}}, D, \hat{Q}_{\pi_\theta}) \text{;}$ \par Compute per-coordinate advantages $\hat{A}_j$ by Eq.~\ref{eq:a_j}\; Update the policy with the surrogate objective Eq.~\ref{eq:l_ppo_j} by $\theta \leftarrow \argmax_{\theta} L^{\text{ppo}}(\theta, \theta_\text{old}, D, \hat{Q}_{\pi_\theta}, \hat{A}_j) \text{;}$ \par Update the value function as in the original PPO\; } \Return $\pi_\theta$\; \caption{PPO with coordinate-wise CV.} \label{alg:ppo} \end{algorithm} \subsection{Integration with Proximal Policy Optimization} \label{sec:method_ppo} We show how to integrate the coordinate-wise control variates with the popular Proximal Policy Optimization (PPO) algorithm \citep{schulman2017proximal}. Since PPO introduces several modifications over the vanilla policy gradient, applying the coordinate-wise control variates requires some non-trivial redesign of the PPO objective functions, as we shall see in Sec.~\ref{sec:method_ppo_sub2}. \subsubsection{The original PPO objective} Given a trajectory sample of $n$ state-action-reward tuples $D = \{(s_i,a_i,r_i)\}_{i=1}^n$, PPO defines a surrogate objective motivated by proximal/trust-region methods for policy improvement: \begin{equation} \label{eq:ppo} \begin{aligned} L^{\text{ppo}} &= \frac1n \sum_{i=1}^n \bigg[ \min \Big\{ \frac{\pi_\theta(a_i|s_i)}{\pi^{\text{old}}(a_i|s_i)} \hat{A}(s_i,a_i), \\ & \hspace{3em} \Big[\frac{\pi_\theta(a_i|s_i)}{\pi^{\text{old}}(a_i|s_i)}\Big]_{1-{\epsilon}}^{1+{\epsilon}} \hat{A}(s_i,a_i) \Big\} \bigg] . \end{aligned} \end{equation} In the objective, $\pi^\text{old}$ is the old policy (fixed), $\pi_\theta$ is the policy to be optimized, and $\hat{A}$ is the estimated advantage function, usually computed with Generalized Advantage Estimation (GAE) \citep{schulman2015high}. PPO further defines the following clipped importance sampling ratio: \begin{align*} \omega_i = \omega(a_i|s_i) = \begin{cases} \left[ \frac{\pi_\theta(a_i|s_i)}{\pi^{\text{old}}(a_i|s_i)} \right]^{1+{\epsilon}} & \text{if $\hat{A}(s_i,a_i) \ge 0$;} \\ \vspace{-6pt} \\ \left[ \frac{\pi_\theta(a_i|s_i)}{\pi^{\text{old}}(a_i|s_i)} \right]_{1-{\epsilon}} & \text{if $\hat{A}(s_i,a_i) < 0$.} \end{cases} \end{align*} Here the notation $[\;\cdot\;]^{1+{\epsilon}} = \min\{\;\cdot\;, 1+{\epsilon}\}$, $[\;\cdot\;]_{1-{\epsilon}} = \max\{\;\cdot\;, 1-{\epsilon}\}$, and $[\;\cdot\;]^{1+{\epsilon}}_{1-{\epsilon}} = \big[[\;\cdot\;]^{1+{\epsilon}}\big]_{1-{\epsilon}}$. Moreover, note that the advantage function implicitly uses the value function as the baseline, i.e., $\hat{A}(s_i,a_i) = \hat{Q}_{\pi_\theta}(s_i,a_i) - \hat{V}_{\pi_\theta}(s_i)$. Computing the GAE $\hat{A}$ is equivalent to computing the TD($\lambda$) estimate of $\hat{Q}$ \citep{schulman2015high,sutton2018reinforcement}.
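In code, this piecewise clipping is a one-liner; the following PyTorch fragment (our illustration, with \texttt{ratio} and \texttt{adv} as stand-in $(n,)$ tensors) computes $\omega$:
\begin{verbatim}
import torch

def clipped_ratio(ratio, adv, eps=0.2):
    # [.]^{1+eps} = min(., 1+eps) when A >= 0;
    # [.]_{1-eps} = max(., 1-eps) when A < 0
    return torch.where(adv >= 0,
                       torch.clamp(ratio, max=1.0 + eps),
                       torch.clamp(ratio, min=1.0 - eps))
\end{verbatim}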
Then the PPO objective can be rewritten as the weighted sum \begin{align*} L^{\text{ppo}} &= \frac1n \sum_{i=1}^n \left[\omega(a_i|s_i) \hat{A}(s_i,a_i) \right] \\ &= \frac1n \sum_{i=1}^n \left[\omega(a_i|s_i) \left( \hat{Q}_{\pi_\theta}(s_i,a_i) - \hat{V}_{\pi_\theta}(s_i) \right) \right] . \end{align*} Notice that the gradient of the PPO objective with respect to the policy parameter $\theta$ can also be expressed in the following compact form using the clipped ratio \begin{align*} \nabla_\theta L^{\text{ppo}} &= \frac1n \sum_{i=1}^n \left[ \omega_i \bm{1}_{\omega_i \ne 1\pm {\epsilon}} \nabla_\theta \log \pi_\theta(a_i|s_i) \hat{A}(s_i,a_i) \right] \\ &= \frac1n \sum_{i=1}^n \bigg[ \omega_i \bm{1}_{\omega_i \ne 1\pm {\epsilon}} \cdot \nabla_\theta \log \pi_\theta(a_i|s_i) \\ & \hspace{5em} \cdot \left( \hat{Q}_{\pi_\theta}(s_i,a_i) - \hat{V}_{\pi_\theta}(s_i) \right) \bigg] . \end{align*} This closely resembles the policy gradient estimators in Eq.~\ref{eq:g_v} and Eq.~\ref{eq:g_b}, except for the importance weights $\omega_i \bm{1}_{\omega_i \ne 1\pm {\epsilon}}$. The indicator function $\bm{1}_{\omega_i \ne 1\pm {\epsilon}}$ occurs due to the way AutoGrad works. When a sample point is clipped, it does not contribute to the final gradient calculation, as the AutoGrad engine regards $1 \pm {\epsilon}$ as constants without gradients. Hence, PPO can be thought of as adaptively dropping sample points to avoid extreme gradient values. In practice, only a small fraction of sample points are clipped and the remaining $\omega_i$ are close to 1. Therefore, we can still leverage an empirical variant of Eq.~\ref{eq:g_squared} to fit a baseline function other than $\hat{V}_{\pi_\theta}$, in the hope of reducing the final gradient variance. \subsubsection{PPO + Coordinate-wise CV} \label{sec:method_ppo_sub2} When using the coordinate-wise control variates (i.e., the corresponding vector-valued baseline function), conceptually, a separate clipped objective is defined for each parameter coordinate. Note the additional subscript $j$ relative to the previous equations. For any $i \in \{1,2,\ldots,n\}$ and $j \in \{1,2,\ldots,d\}$, we analogously have the following set of equations. \begin{align*} \omega_{ij} = \omega_j(a_i|s_i) = \begin{cases} \left[ \frac{\pi_\theta(a_i|s_i)}{\pi^{\text{old}}(a_i|s_i)} \right]^{1+{\epsilon}} & \text{if $\hat{A}_j(s_i,a_i) \ge 0$;} \\ \vspace{-6pt} \\ \left[ \frac{\pi_\theta(a_i|s_i)}{\pi^{\text{old}}(a_i|s_i)} \right]_{1-{\epsilon}} & \text{if $\hat{A}_j(s_i,a_i) < 0$.} \end{cases} \end{align*} \vspace{-5pt} \begin{align} \label{eq:a_j} \hat{A}_j(s_i,a_i) = \hat{Q}(s_i,a_i) - c_{j,\phi}(s_i). \end{align} \vspace{-5pt} \begin{align} \label{eq:l_ppo_j} L^{\text{ppo}}_j = \frac1n \sum_{i=1}^n \left[\omega_j(a_i|s_i) \hat{A}_j(s_i,a_i) \right] . \end{align} This looks cumbersome to compute, as constructing $d$ separate loss functions and back-propagating through each would be computationally too expensive. Thankfully, if we inspect the formula for the new objectives' gradients, we notice that the $\frac{\partial}{\partial\theta_j} \log \pi_\theta $ part only needs to be computed once.
\begin{align*} \frac{\partial L^{\text{ppo}}}{\partial \theta_j} &= \frac1n \sum_{i=1}^n \left[ \omega_{ij} \bm{1}_{\omega_{ij} \ne 1\pm {\epsilon}} \frac{\partial}{\partial\theta_j} \log \pi_\theta(a_i|s_i) \hat{A}_j(s_i,a_i) \right] \\ &= \frac1n \sum_{i=1}^n \bigg[ \omega_{ij} \bm{1}_{\omega_{ij} \ne 1\pm {\epsilon}} \cdot \frac{\partial}{\partial\theta_j} \log \pi_\theta(a_i|s_i) \\ & \hspace{5em} \cdot \left( \hat{Q}_{\pi_\theta}(s_i,a_i) - c_{j,\phi}(s_i) \right) \bigg] . \end{align*} Leveraging efficient per-example AutoGrad implementations in deep learning libraries such as PyTorch \citep{paszke2019pytorch}, we can compute $\nabla \log \pi_\theta(a_i|s_i)$ for all $(s_i,a_i)$ examples in one backward pass. Then the policy gradient is simply the matrix multiplication between $\nabla \log \pi_\theta$ and the per-coordinate advantages. Putting everything together, the modified PPO algorithm with coordinate-wise CV is outlined in Alg.~\ref{alg:ppo}. There is one caveat here: In theory, the samples used to fit the baseline function and to optimize the policy need to be independent to make the policy gradient unbiased. However, in practical implementations of PPO \cite{schulman2017proximal,liu2018action}, this requirement is often loosened since (1) the state-action values are computed \emph{before} the policy optimization; (2) mini-batching is employed to train the baselines, the value function and the policy function jointly or separately, and the stochasticity in the process might decorrelate the data points; (3) PPO is often interpreted as a policy optimization algorithm rather than a vanilla policy gradient method, and it works as long as local policy improvement can be achieved. We indeed find during our experimentation that the choice of dependent or independent samples for baseline and policy training does not significantly affect performance. \section{Related work} \paragraph{Policy gradient (PG) methods.} There is a large body of literature on policy gradient methods \citep{williams1992simple,sutton2000policy,konda2000actor,kakade2002natural,peters2006policy, mnih2016asynchronous,schulman2015trust,schulman2017proximal,gu2017interpolated,fakoor2020p3o}. They are a class of reinforcement learning algorithms that leverage gradients on a parameterized policy to perform local policy search. In particular, the Policy Gradient Theorem proves the correctness of the general approach \citep{sutton2000policy}. The REINFORCE algorithm \citep{williams1992simple} can be thought of as PG with Monte Carlo estimated episodic return. The actor-critic variants (e.g., Advantage Actor-Critic) \citep{konda2000actor,mnih2016asynchronous} learn a value function approximation at the same time as policy learning, and adopt bootstrapped value estimates rather than (fully) Monte Carlo estimates to compute the gradients, which has led to better learning efficiency. Generalized Advantage Estimation (GAE) considers a geometric average of the bootstrapped values from different time steps as a low variance estimator for the advantage function \citep{schulman2015high}, in a way similar to TD($\lambda$) \citep{sutton2018reinforcement}. Trust Region Policy Optimization \citep{schulman2015trust} and Proximal Policy Optimization \citep{schulman2017proximal} derive variants of PG by extending ideas from Conservative Policy Iteration \citep{kakade2002approximately} and Natural Policy Gradient \citep{kakade2002natural} into the deep learning regime.
\vspace{-5pt} \paragraph{Variance reduction with control variates.} The control variates method has long been used in Monte Carlo simulation \citep{rubinstein1985efficiency,nelson1990control,glynn2002some} and finance \citep{glasserman2003monte}. Its application in reinforcement learning or Markov decision processes dates back to at least \cite{weaver2001optimal,greensmith2004variance}. Although the most common choice of baseline in practice is the value function \cite{schulman2017proximal,mnih2016asynchronous}, it is known that the optimal scalar-valued state-dependent baseline is the Q values weighted by the squared gradient norms of the policy function \citep{weaver2001optimal,greensmith2004variance,zhao2011analysis,peters2006policy}. Many have followed this line of work to construct better control variates. For example, \citet{gu2017q}, \citet{liu2018action}, \citet{grathwohl2018backpropagation} and \citet{wu2018variance} developed \emph{(state-)action-dependent} control variates to reduce the variance further, since an action-dependent CV can better correlate with the original PG estimator. The action-dependent CVs were later more carefully analyzed by \citet{tucker2018mirage}. Recently, \citet{cheng2020trajectory} extended action-dependent CVs to trajectory-wise CVs by exploiting the temporal structure. An earlier report \citep{pankov2018reward} was motivated similarly and presented a main result similar to that of \citet{cheng2020trajectory}. Nevertheless, these works consider scalar-valued baseline functions, i.e., a single control variate for the whole PG. \citet{huang2020importance} showed that many policy gradient estimators can be derived from finite differences of importance-sampling evaluation estimators, and proposed a general form of PG that subsumes \citep{cheng2020trajectory} as a special case. Their additional term takes the form of the gradient of state-action baselines, which can be viewed as a vector-valued CV. An early work \citep{peters2006policy} described the possibility of closed-form coordinate-wise baselines in the context of robotics and \emph{shallow} policy parameterization (in fact, a linear function as the mean of a Gaussian). Our paper differs from these in that (1) we demonstrate the effectiveness of coordinate-wise CVs in the context of deep neural nets and a modern policy optimization algorithm (PPO) rather than vanilla policy gradients; (2) we study \emph{fitted} baselines instead of closed-form baselines; (3) we investigate the possibility of novel layer-wise CVs specifically for neural nets, which sit in between coordinate-wise and scalar CVs. \vspace{-5pt} \paragraph{Other variance reduction techniques.} Besides the control variates method, there exist other tools to build low variance gradient estimators. Rao-Blackwellization is a classic information-theoretic variance reduction technique that has been applied in estimating gradients of discrete distributions \citep{liu2019rao}. Others seek to apply well-established stochastic optimization techniques from the convex optimization literature, such as the Stochastic Variance Reduced Gradient (SVRG) \citep{johnson2013accelerating,reddi2016stochastic} and the Stochastic Average Gradient (SAG) \citep{schmidt2017minimizing}, to policy optimization \citep{papini2018stochastic} and deep Q-learning \citep{anschel2017averaged}. Variance reduction is also possible through action clipping if we allow the final estimator to be biased \citep{cheng2019control}.
AVEC \citep{flet2021learning} proposed to use the residual variance as an objective function for the critic, which leads to better value function estimates and a lower-variance gradient estimator. \section{Supplementary experiment details} \subsection{Hyper-parameters for Sec. 4.2} We list the hyper-parameters of the PPO algorithm that are shared across all methods in Tab.~\ref{tab:hyperparam}. Note that Humanoid-v2 is harder than the other environments, hence we set a few hyper-parameters differently (number of time-steps, parallel processes, learning rate, and PPO optimization parameters). The choices of the hyper-parameters that are unrelated to our proposal, such as the clipping parameter, GAE-$\lambda$, and the loss coefficients, follow the original PPO paper. \begin{table}[h] \caption{Hyper-parameters that are shared across methods.} \centering \begin{tabular}{lll} \toprule \bfseries Hyper-parameters & \bfseries Other env. & \bfseries Humanoid-v2 \\ \midrule Total num. env steps & 1,000,000 & 3,000,000 \\ Num. env steps per update & 2048 & 1024 \\ Num. parallel env & 1 & 4 \\ Discount factor $\gamma$ & 0.99 & 0.99 \\ GAE-$\lambda$ & 0.95 & 0.95 \\ Entropy coefficient & 0 & 0 \\ Value loss coefficient & 0.5 & 0.5 \\ Max grad norm & 0.5 & 0.5 \\ Optimizer & ADAM & ADAM \\ Learning rate (linearly annealed) & 0.0003 $\rightarrow$ 0 & 0.0001 $\rightarrow$ 0 \\ Num. mini-batches per PPO epoch & 32 & 64 \\ Num. PPO epochs per update & 10 & 10 \\ PPO clip parameter & 0.2 & 0.2 \\ Network architecture & \multicolumn{2}{l}{Linear[64] - Linear[64] - Linear[out]} \\ Network activation & TanH & TanH \\ \bottomrule \end{tabular} \label{tab:hyperparam} \end{table} In Tab.~\ref{tab:hyperparam_baseline}, we also list the per-environment hyper-parameters $(\lambda, \rho)$ accompanying the results in the main text Sec. 4, along with the environment observation and action space sizes. The number of policy net parameters (number of coordinates) depends on the sizes of these two spaces. We adopt a two-hidden-layer fully connected network architecture for all the policy, value and baseline functions. The hidden layer width is 64. Therefore the number of policy parameters is $\text{Obs. size} \times 64 + 64 + 64\times 64 + 64 + 64 \times \text{Act. size} + \text{Act. size} \times 2$, counting the weights, the biases and the action log-std parameters. \begin{table}[h] \caption{Details about each environment and regularization hyper-parameters for the Sec.~4.2 results regarding fitting Scalar, Layer-wise and Coordinate-wise CV with $L_{\text{baseline}}$.} \centering \begin{tabular}{llllllll} \toprule & {Ant} & {HalfCheetah} & {Hopper} & {Humanoid} & {HumanoidStand} & {Swimmer} & {Walker2d} \\ \midrule Obs. space & 111 & 17 & 11 & 376 & 376 & 8 & 17 \\ Action space & 8 & 6 & 3 & 17 & 17 & 2 & 6 \\ \#policy param & 11856 & 5708 & 5126 & 29410 & 29410 & 4868 & 5708 \\ \midrule $\lambda$ & 0.01 & 0.01 & 0.01 & 0.1 & 0.0 & 0.0 & 0.01 \\ $\rho$ & 0.1 & 0.1 & 0.05 & 0.01 & 0.0 & 0.01 & 0.0 \\ \bottomrule \end{tabular} \label{tab:hyperparam_baseline} \end{table} \clearpage \section{Additional results} \subsection{Mean squared error plot for Sec. 4.1} Sec.~4.1 contains a table showing the variance estimates. Here we also show Mean Squared Error (MSE) plots of the gradient estimates, varying the size of the sample. The true policy gradient is approximated by a separate large sample of $3\times 10^5$ transitions and the learned value function baseline. Then the MSE is calculated against the approximated true gradient.
As we can see, the MSE also becomes smaller with more advanced CVs, suggesting more accurate gradient estimation. \begin{figure*}[h] \centering \includegraphics[width=0.5\textwidth]{fig/fig_mse.pdf} \caption{MSE of gradient estimators with different CVs.} \end{figure*} \subsection{Space and time comparison} We take Humanoid-v2 as an example to investigate the space and time overhead of coordinate-wise CVs. Compared with the Value CV, the more sophisticated Scalar, Layer and Coord CVs involve storing and training an extra neural network. The number of output units of this neural net depends on the CV used. The Layer CV baseline has 7 outputs because there are 7 parameter tensors in the policy network. The number of outputs of the Coord CV equals the number of coordinates of the policy network, which is determined by the state and action dimensions. We observe that the time overhead of the Coord CV is manageable thanks to the efficient per-example AutoGrad implementation: only 0.6 hours of extra training time compared with the Value CV (a relative increase of $0.6/2.19 \approx 27\%$). \begin{table}[h] \caption{Training space and time comparison of PPO with value, scalar, layer-wise and coordinate-wise CVs for the Humanoid-v2 environment, measured with an nVidia GTX 1080 GPU.} \centering \begin{tabular}{lllll} \toprule \bfseries{Humanoid} & Value CV & Scalar CV & Layer CV & Coord CV \\ \midrule Num. baseline outputs & 1 & 1 & 7 & 29410 \\ Frames per second & 380 & 309 & 305 & 299 \\ Total training time & 2.19 hours & 2.67 hours & 2.73 hours & 2.78 hours \\ \bottomrule \end{tabular} \label{tab:time} \end{table} \subsection{Effect of regularization parameters $\lambda$ and $\rho$} We conduct an ablation study on the regularization parameters $\lambda$ and $\rho$ in the baseline fitting loss. $\lambda$ interpolates between fitting the minimum empirical variance baseline and the value function baseline, while $\rho$ tunes the proximal regularization strength. \begin{table*}[h] \centering \caption{The effect of baseline fitting hyper-parameters $(\lambda, \rho)$ on performance. We report the mean episodic returns of \emph{all episodes} during training.
All results are the average of 4 random seeds.} \small \begin{tabular}{l|l|lllll} \toprule \bfseries Ant & $(0.01, 0.1)$ & $(0.01, 0.0)$ & $(0.01, 0.05)$ & $(0.0, 0.1)$ & $(0.1, 0.1)$ \\ \midrule Scalar CV & 210.25 & 179.88 & 189.66 & 194.40 & 146.54 \\ Layer-wise CV & 189.53 & 134.69 & 106.40 & 156.53 & 87.90 \\ Coord-wise CV & 323.67 & 235.23 & 253.60 & 284.54 & 222.92 \\ \bottomrule \toprule \bfseries HalfCheetah & $(0.01, 0.1)$ & $(0.01, 0.0)$ & $(0.01, 0.01)$ & $(0.0, 0.1)$ & $(0.1, 0.1)$ \\ \midrule Scalar CV & 1399.89 & 1554.35 & 1738.73 & 1308.23 & 1298.71 \\ Layer-wise CV & 1833.37 & 1808.32 & 1499.38 & 1311.79 & 1591.76 \\ Coord-wise CV & 1694.47 & 1337.72 & 1391.58 & 1332.84 & 1785.43 \\ \bottomrule \toprule \bfseries Hopper & $(0.01, 0.05)$ & $(0.01, 0.01)$ & $(0.01, 0.1)$ & $(0.0, 0.01)$ & $(0.0, 0.1)$ \\ \midrule Scalar CV & 1160.30 & 1412.54 & 1094.03 & 1146.21 & 1183.94 \\ Layer-wise CV & 1395.81 & 1228.40 & 1290.87 & 1311.79 & 1365.25 \\ Coord-wise CV & 1408.58 & 1481.50 & 1403.77 & 1379.06 & 1495.21 \\ \bottomrule \toprule \bfseries Humanoid & $(0.1, 0.01)$ & $(0.1, 0.05)$ & $(0.05, 0.0)$ & $(0.01, 0.01)$ & $(0.01, 0.05)$ \\ \midrule Scalar CV & 1034.93 & 1034.47 & 1094.03 & 1022.11 & 991.22 \\ Layer-wise CV & 902.80 & 824.81 & 885.59 & 885.32 & 848.48 \\ Coord-wise CV & 1056.05 & 991.96 & 1007.88 & 1119.19 & 1012.50 \\ \bottomrule \toprule \bfseries HumanoidStandup & $(0.0, 0.0)$ & $(0.0, 0.01)$ & $(0.01, 0.0)$ & $(0.01, 0.01)$ & $(0.1, 0.01)$ \\ \midrule Scalar CV & 122945 & 126393 & 110376 & 122647 & 128471 \\ Layer-wise CV & 120230 & 107631 & 105924 & 111034 & 113181 \\ Coord-wise CV & 128267 & 100111 & 115444 & 110293 & 108881 \\ \bottomrule \toprule \bfseries Swimmer & $(0.0, 0.01)$ & $(0.0, 0.0)$ & $(0.01, 0.0)$ & $(0.01, 0.01)$ & - \\ \midrule Scalar CV & 82.24 & 74.31 & 76.99 & 77.57 & - \\ Layer-wise CV & 84.31 & 72.94 & 60.84 & 73.79 & - \\ Coord-wise CV & 91.04 & 81.63 & 83.40 & 69.50 & - \\ \bottomrule \toprule \bfseries Walker2d & $(0.01, 0.0)$ & $(0.0, 0.0)$ & $(0.0, 0.01)$ & $(0.01, 0.01)$ & $(0.01, 0.1)$ \\ \midrule Scalar CV & 1337.64 & 1202.55 & 1403.35 & 1373.67 & 1363.48 \\ Layer-wise CV & 1325.74 & 958.75 & 1406.74 & 1428.29 & 1330.60 \\ Coord-wise CV & 1455.91 & 1440.88 & 1198.60 & 1301.73 & 1288.50 \\ \bottomrule \end{tabular} \label{tab:ablation} \end{table*} \section{Derivations} \subsection{Derivation of Eq.~8} Here is a more detailed derivation of the optimal state-dependent baseline. First, we know that minimizing the variance ${\mathbb{V}}[{\mathbf{g}}^b]$ is equivalent to minimizing the expected gradient square term. \begin{align*} \argmin_b {\mathbb{V}}[{\mathbf{g}}^b] &= \argmin_b \sum_{j=1}^d \mathbb{E}[{{\mathbf{g}}^b_j}^2] \text{\qquad where} \\ \sum_{j=1}^d \mathbb{E}[{{\mathbf{g}}^b_j}^2] \tag{Linearity of expectation} &= \mathbb{E}[\sum_{j=1}^d {{\mathbf{g}}^b_j}^2] = \mathbb{E} \sum_{j=1}^d \big( \frac{\partial}{\partial \theta_j} \log \pi_\theta({\mathbf{a}}|{\mathbf{s}}) \big)^2 \big( \hat{Q}_{\pi_\theta}({\mathbf{s}}, {\mathbf{a}}) - b({\mathbf{s}}) \big)^2 \\ & = \mathbb{E}_{\mathbf{s}} \mathbb{E}_{{\mathbf{a}}|{\mathbf{s}}} \left[ \| \nabla_{\theta} \log \pi_\theta({\mathbf{a}} | {\mathbf{s}}) \|^2 \big( \hat{Q}_{\pi_\theta}({\mathbf{s}}, {\mathbf{a}}) - b({\mathbf{s}}) \big)^2 \right] . 
\tag{Iterated expectation \& definition of $\nabla$} \end{align*} Consider the case where we can freely choose $b({\mathbf{s}})$ for every state, which implies $\min_b \mathbb{E}_{\mathbf{s}} \mathbb{E}_{{\mathbf{a}}|{\mathbf{s}}}[\cdots] = \mathbb{E}_{\mathbf{s}} [ \min_{b({\mathbf{s}})} \mathbb{E}_{{\mathbf{a}}|{\mathbf{s}}} [\cdots]]$. Then the optimal solution should satisfy the first-order necessary condition: \begin{align*} &0 = \frac{\partial}{\partial b({\mathbf{s}})} \mathbb{E}_{{\mathbf{a}}|{\mathbf{s}}} \left[ \cdots \right] = 2 \mathbb{E}_{{\mathbf{a}}|{\mathbf{s}}} \left[ \| \nabla_{\theta} \log \pi_\theta({\mathbf{a}} | {\mathbf{s}}) \|^2 \big( b({\mathbf{s}}) - \hat{Q}_{\pi_\theta}({\mathbf{s}}, {\mathbf{a}}) \big) \right] \\ \implies\quad &\mathbb{E}_{{\mathbf{a}}|{\mathbf{s}}} \left[ \| \nabla_{\theta} \log \pi_\theta({\mathbf{a}} | {\mathbf{s}}) \|^2 b({\mathbf{s}}) \right] = \mathbb{E}_{{\mathbf{a}}|{\mathbf{s}}} \left[ \| \nabla_{\theta} \log \pi_\theta({\mathbf{a}} | {\mathbf{s}}) \|^2 \hat{Q}_{\pi_\theta}({\mathbf{s}}, {\mathbf{a}}) \right] \\ \implies\quad &b^*({\mathbf{s}}) = \frac{\mathbb{E}_{{\mathbf{a}}|{\mathbf{s}} \sim \pi_\theta} \left[ \| \nabla_{\theta} \log \pi_\theta({\mathbf{a}} | {\mathbf{s}}) \|^2 \hat{Q}_{\pi_\theta}({\mathbf{s}}, {\mathbf{a}}) \right] } {\mathbb{E}_{{\mathbf{a}}|{\mathbf{s}} \sim \pi_\theta} \| \nabla_{\theta} \log \pi_\theta({\mathbf{a}} | {\mathbf{s}}) \|^2 } . \end{align*} \subsection{Derivation of Eq.~10} The loss function for the baseline fitting is derived as follows. Similarly, we start by minimizing the variance, then add regularization to make it better suited to neural net fitting in practice. \begin{align*} \argmin_{\phi} {\mathbb{V}}[{{\mathbf{g}}^{c_\phi}}] = \argmin_{\phi} \mathbb{E}[\sum_{j=1}^d {{\mathbf{g}}^c_j}^2] = \argmin_{\phi} \mathbb{E} \sum_{j=1}^d \big( \frac{\partial}{\partial \theta_j} \log \pi_\theta({\mathbf{a}}|{\mathbf{s}}) \big)^2 \big( \hat{Q}_{\pi_\theta}({\mathbf{s}}, {\mathbf{a}}) - c_j({\mathbf{s}}) \big)^2 . \end{align*} The objective can be approximated by a Monte Carlo sampled version with $n$ state-action pairs, \begin{align*} \min_{\phi} \frac1{nd} \sum_{i=1}^n \sum_{j=1}^d \big( \frac{\partial}{\partial \theta_j} \log \pi_\theta({\mathbf{a}}_i|{\mathbf{s}}_i) \big)^2 \big( \hat{Q}_{\pi_\theta}({\mathbf{s}}_i, {\mathbf{a}}_i) - c_{j,\phi}({\mathbf{s}}_i) \big)^2 . \end{align*} Notice that this is similar to the usual value function fitting except for the $\big( \frac{\partial}{\partial \theta_j} \log \pi_\theta({\mathbf{a}}_i|{\mathbf{s}}_i) \big)^2$ weights. We therefore introduce the hyper-parameter $\lambda$ to interpolate between fitting an optimal baseline and fitting a value function. To normalize the numerical scale to match the usual value function learning, we divide the $\big( \frac{\partial}{\partial \theta_j} \log \pi_\theta({\mathbf{a}}_i|{\mathbf{s}}_i) \big)^2$ coefficient by its batch mean, such that \begin{align*} \frac1{nd} & \sum_{i=1}^n \sum_{j=1}^d \Big( (1-\lambda) \overline{ \big( \frac{\partial}{\partial \theta_j} \log \pi_\theta({\mathbf{a}}_i|{\mathbf{s}}_i) \big)^2 } + \lambda \Big) = 1.
\end{align*} Furthermore, inspired by PPO and TRPO, we add a constraint to encourage conservative updates to the baseline: \begin{align*} \min_\phi \frac1{nd} & \sum_{i=1}^n \sum_{j=1}^d \Big( (1-\lambda) \overline{ \big( \frac{\partial}{\partial \theta_j} \log \pi_\theta({\mathbf{a}}_i|{\mathbf{s}}_i) \big)^2 } + \lambda \Big) \big( \hat{Q}_{\pi_\theta}({\mathbf{s}}_i, {\mathbf{a}}_i) - c_{j,\phi}({\mathbf{s}}_i) \big)^2 \\ & \text{s.t.\quad} \sum_{i,j} \big( c_{j,\phi}({\mathbf{s}}_i) - c_{j,\phi_{\text{old}}}({\mathbf{s}}_i) \big)^2 \le \delta. \end{align*} The above problem can be turned into an unconstrained one with an appropriate Lagrange multiplier ($\rho$). We pick a fixed $\rho$, resulting in the loss function of Eq.~10, which is trained by a few steps of mini-batch gradient descent.
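As a quick numerical sanity check of the first-order condition above (our addition, not part of the paper), the weighted-average formula can be compared against other constant baselines on synthetic data:
\begin{verbatim}
# Check that b* = E[w Q] / E[w], with w the squared gradient norm, minimizes
# E[w (Q - b)^2] over constant baselines b. Here w and Q are synthetic and
# deliberately correlated so that b* differs from the plain mean of Q.
import numpy as np

rng = np.random.default_rng(1)
w = rng.exponential(size=100_000)          # stand-in squared gradient norms
Q = 2.0 * w + rng.normal(size=100_000)     # stand-in return estimates

b_star = (w * Q).mean() / w.mean()
objective = lambda b: (w * (Q - b) ** 2).mean()
for b in (0.0, Q.mean(), b_star):
    print(b, objective(b))                 # b_star attains the smallest value
\end{verbatim}
With correlated weights, the plain mean of $Q$ (the value-function-style baseline) is visibly suboptimal, which mirrors the motivation for the weighted regression objective of Eq.~10.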
\section{Introduction} Many classic theorems of relativity are obtained by positing a number of local conditions on the geometry of spacetime. These geometric conditions are inequalities imposed, by fiat, on certain contractions of the Einstein or Ricci tensor. With the use of Einstein's equations, these geometric conditions become energy conditions that, supposedly, represent certain energetic characteristics of matter residing in spacetime. It is now understood, however, that these local energy conditions are violated by a number of classical matter models and, moreover, that violations are ubiquitous in the context of quantum field theory in both flat and curved spacetime.\footnote{See \cite{C 14} for a foundational perspective on the status of energy conditions.} To put it another way, the classic theorems aforementioned rely on assumptions not satisfied in contexts considered physically relevant. In view of this, we might wish to ask, for certain theorems of interest, whether they can be formulated with weaker energy conditions. The purpose of this paper is two-fold: to show that Hawking's area theorem can be strengthened in this way, and, with the semi-classical context in mind, to interpret the result presented. The definitions used here follow Wald \cite{W 84}. \\ \\ It is instructive, before delving into the area theorem, to consider the well known singularity theorems.\footnote{See \cite{W 84} for an introduction.} There is, it seems, a common template to these theorems. Their assumptions usually include an energy condition, a restriction on the causal properties of the spacetime, and an initial or boundary condition, and their conclusions almost always involve no more than the failure of non-spacelike geodesic completeness. Raychaudhuri's equation describing geodesic congruences is very useful in many proofs of such theorems. In the four-dimensional null irrotational case it reads \begin{equation} \frac{d\theta}{d\lambda} = -\frac{1}{2} \theta^2 - \sigma^2 - R_{ab}k^a k^b \end{equation} with \(\lambda\) the affine parameter, \(\sigma\) the shear, \(\theta\) the expansion and \(R_{ab}k^a k^b\) the Ricci tensor twice contracted with a null vector \(k^a\) tangent to the geodesic. Assuming the null convergence condition, \(R_{ab}k^a k^b \geq 0\), or, with Einstein's equations in four dimensions, the null energy condition (NEC), \(T_{ab}k^a k^b\geq 0\), it follows that if the expansion \(\theta\) satisfies \(\theta(\lambda_0) < 0\), then \(\theta \to -\infty \) within finite affine parameter \(\lambda \in (\lambda_0,\infty)\). This behavior is sometimes referred to as geodesic focusing. The onset of geodesic focusing signals the failure of certain geodesics to satisfy certain properties which, in other circumstances, are associated with them. In the null case, a null geodesic focusing to the future of a point signals the geodesic's failure to remain on the boundary of the causal future of that point, and in the timelike case, focusing signals the geodesic's failure to maximize proper time. Many proofs of singularity theorems work by setting up a contradiction under the assumption that all null or timelike geodesics are complete. The rough template runs as follows. Assume that all null (timelike) geodesics are complete and deduce, under the energy and boundary or initial conditions, the onset of geodesic focusing. Then, combining the causal restriction and the initial or boundary condition, show that the focusing produced leads to a contradiction.
Deduce, therefore, that not all null (timelike) geodesics can be complete. \\ \\ Tipler \cite{T 78} was among the first to show that singularity theorems may be strengthened by way of weaker energy conditions. He defined these weaker energy conditions as non-local restrictions on the integral, along certain types of null or timelike geodesics, of various contractions of the Ricci or Einstein tensor. He showed that these conditions were sufficient to cause focusing, and indeed this made it possible to strengthen certain singularity theorems without major amendments to the original arguments. His observation was developed over many years, and there arose a number of weaker energy conditions falling under the umbrella term of average energy conditions. One example that continues to generate interest is the average null energy condition (ANEC), which, roughly speaking, is the requirement that \[\int_\gamma R_{ab} k^a k^b d\lambda \geq 0 \] for some suitable class of null geodesics \(\{ \gamma \}\) with \(k^a \) a null vector tangent to the geodesic.\footnote{More care in definitions is taken when we come to formulate the result. } \\ \\ Though mathematically weaker, these average energy conditions still have a physical interpretation that is murky at best. Over the course of a long list of studies, it has been found that many of the average energy conditions allowing for theorem-strengthening are violated by classical matter models, and, less straightforwardly, in the context of quantum field theory in curved spacetime (QFTCS).\footnote{See \cite{C 14, V, F 05} for an expression of this fact and links to some of the relevant references.} To put it another way, many classic theorems of relativity do not, at present, cover a whole host of classical and quantum matter models of physical interest. Efforts to improve this situation are, naturally, still ongoing. \\ \\ In the QFTCS context, there is growing evidence that the extent of the violation of certain energy conditions is, in some sense, restricted. There is a whole body of work dedicated to making this more precise. The idea is to produce certain kinds of inequalities that represent the spatiotemporal constraints that (contractions of) renormalized stress energy tensors in various contexts of QFTCS obey.\footnote{See \cite{F 05} for an introduction and links to various relevant references.} These inequalities are known as quantum energy inequalities (QEI), and, in the best of cases, they have been used either to prove the ANEC in certain circumstances or to constrain the properties of certain spacetime scenarios otherwise associated with violations of certain more standard energy conditions. Examples include: Ford and Roman's study of the properties of traversable wormholes \cite{FR 96}, Fewster, Olum, and Pfenning's QEI-based proof of the ANEC under certain circumstances \cite{Few}, and Kontou and Olum's QEI-based proof under different circumstances \cite{Kontou}. \\ \\ Yet despite providing valuable insights, QEI have not yet permitted the extension of certain classic relativity theorems to the semi-classical context. Galloway and Fewster \cite{FG 11} recently proposed a study aiming to provide a step in this direction. They formulated versions of Hawking's cosmological and Penrose's collapse singularity theorems based on energy conditions allowing for, respectively, violations of the strong and null energy conditions. Though inspired by QEI methods, the conditions that stand in as energy conditions in their singularity theorems are not, strictly speaking, QEI.
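To make the focusing mechanism behind such averaged conditions concrete, the following numerical sketch (ours, with purely illustrative values, not part of the source) integrates the Riccati-type equation $\dot z = z^2/s + r$ that reappears in the lemma quoted below, using an everywhere-negative $r$ that nevertheless satisfies a damped integral condition; the blow-up of $z=-\theta$ at finite parameter is precisely the focusing behavior discussed above:

\begin{verbatim}
import numpy as np

# Illustrative values: s = 2 matches the null Raychaudhuri equation, and
# r(t) = -0.1 violates the pointwise condition r >= 0 everywhere; yet with
# c = 1 and z(0) = 1 the damped condition
#   z0 - c/2 + int_0^infty exp(-2ct/s) r(t) dt = 1 - 0.5 - 0.1 = 0.4 > 0
# holds, so finite-time blow-up (focusing) is predicted.
s, c, z0 = 2.0, 1.0, 1.0
r = lambda t: -0.1

t, z, dt = 0.0, z0, 1e-4
while z < 1e6 and t < 100.0:       # stop once z has clearly diverged
    z += dt * (z ** 2 / s + r(t))  # forward Euler step of z' = z^2/s + r
    t += dt

print("z exceeded 1e6 at t ~ %.3f" % t)  # ~2.15, despite r < 0 everywhere
\end{verbatim}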
Returning to QEI: it is worth noting that the relationship between QEI and energy conditions remains poorly understood, with only a few definitive results available. One example is \cite{FR 03}, where Fewster and Roman show that the existence of QEI is not necessary for the ANEC to be satisfied. \\ \\ Galloway and Fewster's arguments rely on lemmas that establish sufficient conditions for focusing, which they use to strengthen the singularity theorems of, respectively, Penrose and Hawking. Moreover, they do so without having to make any essential amendments to the original spirit of the proof of these theorems, and they also consider a specific matter model to illustrate how the NEC may be violated whilst satisfying their weakened energy conditions. The specific lemma that we shall use is the following. \begin{lemma} Consider the initial value problem for \(z(t)\) \[ \dot{z} = \frac{z^2}{s} + r \] where \(r(t)\) is continuous on \([0,\infty)\), \(z(0)=z_0\) and \(s>0\) is constant. If there exists \(c \geq 0\) such that \[z_0 -\frac{c}{2} + \liminf_{T\to \infty} \int_0^T e^{-2ct/s}r(t) dt >0\] then the initial value problem has no solution on \([0,\infty)\), where `no solution' means \(z(t) \to \infty\) as \(t\to t^-_*\) for some \(t_* < \infty\). \end{lemma} \section{A stronger area theorem} Hawking's area theorem \cite{H} is often considered to be one of the most important results in black hole theory. It describes a fundamental property of dynamical black holes, it underlies black hole thermodynamics and it bounds the amount of radiation that can be emitted upon black hole collisions. Consider Ashtekar and Krishnan's remarks in their recent, widely cited review of generalized black holes \cite{AK}. \begin{quotation} For fully dynamical black holes, apart from the `topological censorship' results which restrict the horizon topology [...], there has essentially been only one major result in exact general relativity. This is the celebrated area theorem proved by Hawking in the early seventies [...]: If matter satisfies the null energy condition, the area of the black hole event horizon can never decrease. This theorem has been extremely influential because of its similarity with the second law of thermodynamics. \end{quotation} The precise statement of the area theorem depends on a number of definitions for which there are a number of possible choices. The definitions and the proof used in the theorem below are identical to those used by Wald \cite{W 84}. This choice is made for reasons of expediency. Note, however, that the following arguments apply to other available formulations of the area theorem. Consider now the main result of this article. \begin{definition} A spacetime \((M,g)\) satisfies the \textit{damped averaged null energy condition} (dANEC) if along each future complete affinely parametrized null geodesic \(\gamma:[0,\infty)\to M\), there exists a constant \(c \geq 0\) such that \[ \liminf_{T\to \infty} \int_0^T e^{-ct} Ric(\gamma', \gamma') dt - \frac{c}{2} >0 \] where \(\gamma'\) is a tangent vector for \(\gamma\). \end{definition} \begin{theorem} Let \((M,g_{ab})\) be a four-dimensional strongly asymptotically predictable spacetime that satisfies the dANEC. Let \(\Sigma_1\) and \(\Sigma_2\) be spacelike Cauchy surfaces for the globally hyperbolic region \(\tilde{V}\) with \(\Sigma_2 \subset I^+(\Sigma_1)\) and let \(\mathcal{H}_{1(2)}=H \cap \Sigma_{1(2)}\), where \(H\) denotes the event horizon, i.e., the boundary of the black hole region of \((M,g_{ab})\).
Then the area of \(\mathcal{H}_2\) is greater than or equal to the area of \(\mathcal{H}_1\). \end{theorem} \begin{proof} The argument is a straightforward application of Galloway and Fewster's Lemma 1.1 to the original argument by Hawking, which here follows the formulation of \cite{W 84}. We start by showing that the expansion \(\theta\) of the null generators of \(H\) is everywhere non-negative. \\ \indent Suppose \(\theta <0\) at \(p\in H\). Let \(\Sigma\) be a spacelike Cauchy surface for \(\tilde{V}\) passing through \(p\) and consider the two-surface \(\mathcal{H}=H\cap \Sigma\). Since \(\theta<0\) at \(p\), we can deform \(\mathcal{H}\) outward in a neighborhood of \(p\) to obtain a surface \(\mathcal{H}'\) on \(\Sigma\) which enters \(J^-(\mathcal{J}^+)\) and has \(\theta<0\) everywhere in \(J^-(\mathcal{J}^+)\). However, by the same argument as in proposition 12.2.2 of \cite{W 84}, this leads to a contradiction as follows. Let \(K\subset \Sigma\) be the closed region lying between \(\mathcal{H}\) and \(\mathcal{H}'\), and let \(q\in \mathcal{J}^+\) with \(q\in \dot{J}^+(K)\). Then the null geodesic generator of \(\dot{J}^+(K)\) on which \(q\) lies must meet \(\mathcal{H}'\) orthogonally. \\ \indent However, this is impossible since, by \(\theta<0\) on \(\mathcal{H}'\), the null generator of \(\dot{J}^+(K)\), say \(\gamma\), will have a conjugate point before reaching \(q\). This follows from considering Raychaudhuri's equation for irrotational null congruences, equation (1), and identifying it with the equation of Lemma 1.1. That is, take \(s=2\), \(z=-\theta(t)\), \(r(t)=Ric(\gamma', \gamma')+ \sigma^2\) where \(\theta(t)\) is the null expansion relevant to the null geodesic \(\gamma\). We now have an initial value problem as in Lemma 1.1 with \(z(0)=-\theta(0)>0\). By the dANEC and the non-negativity of \(\sigma^2\), it follows that the conditions of Lemma 1.1 are satisfied, and so a conjugate point occurs within finite affine parameter from \(\gamma(0)\). \\ \indent It now follows that \(\theta\geq0\). By standard properties \cite{W 84}, each \(p\in \mathcal{H}_1\) lies on a future inextendible null geodesic, \(\eta\), contained in \(H\). Since \(\Sigma_2\) is a Cauchy surface, \(\eta\) must intersect \(\Sigma_2\) at a point \(q\in \mathcal{H}_2\). Thus, we obtain a natural map from \(\mathcal{H}_1\) into a portion of \(\mathcal{H}_2\). Since \(\theta\geq0\), the area of the portion of \(\mathcal{H}_2\) given by the image of \(\mathcal{H}_1\) under this map must be at least as large as the area of \(\mathcal{H}_1\). In addition, since the map need not be onto, the area of \(\mathcal{H}_2\) may be even larger. Thus, the area of \(\mathcal{H}_2\) cannot be smaller than that of \(\mathcal{H}_1\). \end{proof} \begin{remark} We have chosen an energy condition based on the work of Galloway and Fewster because we find it to be both state of the art and neatly amenable to classical field models violating the NEC. In section 6 of their paper \cite{FG 11}, they construct such a model which, by inessential modifications, is straightforwardly applicable to theorem 2.1.
We note that other conditions similar in form to the ANEC for semi-complete geodesics could have been used, e.g., Roman's condition \cite{R 88} in his version of Penrose's theorem.\end{remark} \begin{remark} Recent studies \cite{CDHG} have shown that there are, in both Hawking's \cite{H} and Wald's \cite{W 84} formulations of the area theorem, lacunae in the form of unstated or undesirable assumptions of differentiability of the horizon. These deficiencies have been overcome in formulations of much more sophisticated area theorems \cite{CDHG}. Indeed, the authors in \cite{CDHG} mention that the assumptions of the area theorem in \cite{W 84} are satisfied under their conditions. It is worth noting that the strengthening offered here generalizes to these superior area theorems. \end{remark} \begin{remark} The exponential damping in the integrand shows that boundary effects are crucial in determining whether area non-decrease obtains. Provided positive contributions to the null contractions of the Ricci tensor occur near the horizon, the condition may be satisfied even if negative contributions persist for arbitrarily long segments of the null geodesic away from the horizon. That is: the area may be non-decreasing even if there are large violations of the standard ANEC for semi-complete geodesics. \end{remark} \begin{remark} A number of other standard theorems about black holes can be strengthened in the way described. The list includes results describing the location of trapped surfaces, marginally trapped surfaces, apparent horizons and so on. Our focus on the area theorem is owed to the obvious tension it generates with the idea of Hawking radiation. \end{remark} \begin{remark} As a final point, this conclusion seems to dovetail nicely with what emerges from the generalized black holes framework.\footnote{See \cite{J} for an introduction.} In that framework, area non-decrease results are obtained for trapping, isolated and dynamical horizons upon assuming that the NEC holds within a neighborhood of the relevant horizon. We note, then, that the dANEC will be satisfied if sufficiently positive contributions to the integrand occur in the immediate vicinity of the horizon. With that being said, the relation between global and quasi-local types of horizons is yet to be fully understood, and so these area non-decrease behaviors ought not to be identified. The parallel is nevertheless worth underlining. \end{remark} \section{Discussion} There is now a certain conventional wisdom regarding the tension between Hawking radiation and the area theorem. Consider, for instance, the remarks made by Visser \cite{V} and, respectively, Bousso and Engelhardt \cite{BE 15}. \begin{quotation} The very fact that Hawking evaporation occurs at all violates the area increase theorem...for classical black holes. This implies that the quantum process underlying the Hawking evaporation process must also induce a violation of one or more of the input assumptions used in proving the classical area increase theorem. The only input assumption that seems vulnerable to quantum violation is the assumed applicability of the null energy condition. \end{quotation} \begin{quotation} Hawking's theorem holds in spacetimes obeying the null curvature condition, \(R_{ab}k^a k^b \geq0\) for any null vector \(k^a\). This will be the case if the Einstein equations are obeyed with a stress tensor satisfying the NEC, \(T_{ab}k^a k^b \geq0\).
The NEC is satisfied by ordinary classical matter, but it is violated by valid quantum states (e.g., in the Standard Model). In particular, the NEC fails in a neighborhood of a black hole horizon when Hawking radiation is emitted. Indeed, the area of the event horizon of an evaporating black hole decreases, violating the Hawking area law. \end{quotation} Galloway and Fewster show that their energy condition leads, under the relevant conditions, to null geodesic incompleteness. Their argument is classical and standard except for the particular form of the energy condition that is used. One may consider whether their results provide evidence in favor of singularities occurring in the semi-classical context, and, likewise, whether the area theorem is relevant to the study of semi-classical black holes. We recall, on this point, that a precise and rigorous understanding of the sense in which the area theorem fails in the semi-classical context is still unavailable. Nevertheless, in view of the almost unchallenged view that black holes do in fact radiate in various semi-classical contexts, there are at least two kinds of possible attitudes: either none of the standard arguments are fully applicable and we must await further progress, or the standard arguments apply in the sense that it is only the energy condition component of the various theorems that needs generalization to the QFTCS context. In the former case, the theorem above provides no particularly new information. As for the latter, there seem to be two possibilities. There may arise conflicting results whereby a model satisfies an energy condition permitting an area theorem (similar, perhaps, to the one above) despite being one for which evaporation is expected. In that case, then, the typical argument in favor of evaporation cannot be sustained. The other possibility is that everything remains harmonious in the sense that models satisfying energy conditions that lead to an area theorem end up being precisely those in which area decrease by evaporation is not expected. Results in either direction would be fruitful. \\ \\ The view adopted by the author is that neither case seems to be substantiated by precise and rigorous arguments, and, moreover, that current understanding is still some way away from identifying exactly what in the area theorem fails semi-classically. Further understanding of this issue is likely to be provided by a better understanding of a number of issues including, for instance, the type of energy conditions that are suitable in the context of QFTCS, the effects of back-reaction, the trans-Planckian problem, and the various links between event horizons and analogous entities occurring in the generalized black holes framework. \section*{Acknowledgements} I thank the Ruth and Nevil Mott scholarship and the AHRC for funding this research. I thank Prof. Erik Curiel for his guidance and for writing the paper that provided the initial inspiration for this work \cite{C 14}. I also thank my supervisor, Prof. Harvey Brown, for his support, and last but not least, I thank the reviewers for their insightful suggestions. \newpage
\section{Introduction} Jet suppression~\cite{Bjorken} is considered an excellent probe of QCD matter created in ultra-relativistic heavy ion collisions~\cite{Gyulassy,DBLecture,Wiedemann2013,Vitev2010}. Specifically, charged hadrons and D mesons are often in the focus of both theoretical and experimental research, since they represent the most direct probes of light and heavy flavor in QCD matter. However, while D meson suppression is indeed a clear charm quark probe, this is not the case for light hadrons. That is, while experimentally measured prompt D mesons are exclusively composed of charm quarks~\cite{ALICE_preliminary}, light hadrons are composed of both light quarks and gluons~\cite{Vitev0912}. Furthermore, gluons are known to have larger jet energy loss than light and heavy quarks, while light and charm quarks are expected to have similar suppressions~\cite{MD_PRC}. It is therefore clearly expected that charged hadron suppression should be significantly larger than D meson suppression. However, preliminary ALICE measurements~\cite{ALICE_h,ALICE_preliminary} surprisingly show that charged hadrons and D mesons have the same $R_{AA}$. We here denote this surprising observation as the ``heavy flavor puzzle at LHC'', and explaining this puzzle is the main goal of this paper. We note that perhaps the most straightforward explanation of the puzzle would be that, contrary to pQCD expectations, gluons and light quarks in fact lose the same amount of energy in the medium created in ultra-relativistic heavy ion collisions. In fact, the same possibility was also suspected in the context of RHIC experiments~\cite{Gyulassy_viewpoint,SE_puzzle}, where similar suppressions were observed for pions and single electrons; this has led some theorists to seek explanations outside conventional QCD~\cite{FNG,NGT,GC,HGB}. The main goal of this paper is to analyze the phenomena behind the ``heavy flavor puzzle at LHC'' and investigate whether pQCD is able to explain such unexpected experimental data. \section{Results and discussion} To analyze the puzzle described in the previous section, we will here use our recently developed theoretical formalism, which is outlined in detail in~\cite{MD_LHSupp_2013}. The procedure is based on {\it i)} jet energy loss in a finite size dynamical QCD medium~\cite{MD_PRC,DH_PRL,MD_PRC2012,MD_Coll}, {\it ii)} finite magnetic mass effects~\cite{MD_MagnMass}, {\it iii)} running coupling~\cite{MD_LHSupp_2013}, {\it iv)} multigluon fluctuations~\cite{GLV_suppress}, {\it v)} path-length fluctuations~\cite{WHDG,Dainese} and {\it vi)} the most up-to-date production~\cite{Cacciari:2012,Vitev0912} and fragmentation functions~\cite{DSS}. Note that the computational procedure uses no free parameters, i.e., the parameters stated below correspond to the standard literature values. We consider a QGP of temperature $T{\,=\,}304$\,MeV (as extracted by ALICE~\cite{Wilde2012}), with $n_f{\,=\,}2.5$ effective light quark flavors. The perturbative QCD scale is taken to be $\Lambda_{QCD}=0.2$~GeV. For the light quarks we assume that their mass is dominated by the thermal mass $M{\,=\,}\mu_E/\sqrt{6}$, and the gluon mass is $m_g=\mu_E/\sqrt{2}$~\cite{DG_TM}. Here the Debye mass $\mu_E \approx 0.9$~GeV is obtained by self-consistently solving Eq.~(3) from~\cite{MD_LHSupp_2013} (see also~\cite{Peshier2006}), and the magnetic mass $\mu_M$ is taken as $0.4 \, \mu_E < \mu_M < 0.6 \, \mu_E$~\cite{Maezawa,Bak}. The charm and bottom masses are, respectively, $M=1.2$~GeV and $M=4.75$~GeV.
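For concreteness (our own worked arithmetic, using the stated formulas and the quoted value $\mu_E \approx 0.9$~GeV), the thermal masses evaluate to \[ M = \frac{\mu_E}{\sqrt{6}} \approx 0.37~\mathrm{GeV}, \qquad m_g = \frac{\mu_E}{\sqrt{2}} \approx 0.64~\mathrm{GeV}. \]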
For charm and bottom, the initial quark spectrum, $E_i d^3\sigma(Q)/dp_i^3$, is computed at next-to-leading order using the code from~\cite{Cacciari:2012,FONLL}; for gluons and light quarks, the initial distributions are computed at next-to-leading order as in~\cite{Vitev0912}. For light hadrons, we use DSS fragmentation functions~\cite{DSS}. For D mesons we use BCFY fragmentation functions~\cite{BCFY}. Path length distributions are extracted from~\cite{Dainese}, while fragmentation functions are implemented according to~\cite{Vitev06}. \begin{figure} \epsfig{file=QuarkRaa.eps,width=2.5in,height=2.2in,clip=5,angle=0} \vspace*{-0.3cm} \caption{{\bf Suppression of quark and gluon jets.} Momentum dependence of the jet suppression is shown for charm quarks (the full curve), light quarks (the dashed curve) and gluons (the dot-dashed curve). The electric to magnetic mass ratio is fixed to $\mu_M/\mu_E =0.5$.} \label{QuarkELossRaa} \end{figure} \begin{figure} \epsfig{file=Hadron_Gluon_Light_Ratio.eps,width=2.5in,height=2.2in,clip=5,angle=0} \vspace*{-0.3cm} \caption{{\bf Ratio of gluon to light quark contribution in the initial distributions of charged hadrons.} The figure shows the ratio of the gluon to the light quark contribution in the initial distributions of charged hadrons, as a function of momentum.} \label{GLRatio} \end{figure} We start by quantitatively reproducing the expectations summarized in the introduction, in order to obtain a clear view of the relevant hierarchies for the suppression and the initial distributions. Fig.~\ref{QuarkELossRaa} shows the comparison of the suppressions for light quark, charm quark and gluon jets. We see that the suppression of gluon jets is significantly larger than the corresponding suppression of quark jets, while the suppression predictions for light and charm quarks are similar. Furthermore, in Fig.~\ref{GLRatio} we show that both light quarks and gluons significantly contribute to the charged hadron production - in fact, in the lower momentum range, gluons dominate over light quarks; therefore both the gluon and the light quark contributions have to be taken into account when analyzing charged hadron suppression. On the other hand, D mesons present a clear charm quark probe, since the feed-down from B mesons is subtracted from the experimental data~\cite{ALICE_preliminary}. \begin{figure*} \epsfig{file=DH_DataTheoryComparison.eps,width=5in,height=2.3in,clip=5,angle=0} \vspace*{-0.4cm} \caption{{\bf Momentum dependence of charged hadron and D meson $R_{AA}$.} The left panel shows together the experimentally measured 0-5\% central 2.76~TeV Pb+Pb ALICE preliminary $R_{AA}$ data for charged hadrons~\cite{ALICE_h} (the red circles) and D mesons~\cite{ALICE_preliminary} (the blue triangles). The right panel shows the comparison of the light hadron suppression predictions (the gray band with full-curve boundaries) with the D meson suppression predictions (the gray band with dashed-curve boundaries). Both gray regions correspond to $0.4 < \mu_M/\mu_E < 0.6$, where the upper (lower) boundary on each band corresponds to $\mu_M/\mu_E = 0.6$ ($\mu_M/\mu_E = 0.4$).} \label{TheoryVsData} \end{figure*} As discussed above, Figures~\ref{QuarkELossRaa} and~\ref{GLRatio} lead to the expectation that the charged hadron suppression should be significantly larger than the D meson suppression.
Surprisingly, this expectation is not confirmed by the experimental data, which are shown in the left panel of Fig.~\ref{TheoryVsData}; these data clearly suggest the same suppression for both charged hadrons and D mesons. We can also calculate these $R_{AA}$s through the finite size dynamical energy loss formalism~\cite{MD_LHSupp_2013}, which is shown in the right panel of Fig.~\ref{TheoryVsData}. Even more surprisingly, these calculations are in accordance with the experimental data, i.e., they show the same suppression patterns for light hadrons and D mesons. Moreover, we see that the theoretical predictions even reproduce the experimentally observed smaller suppression of D mesons compared to charged hadrons in the lower momentum range. This difference is the consequence of the ``dead cone effect''~\cite{Kharzeev}, as can be seen in Fig.~\ref{QuarkELossRaa}. Consequently, we see that our theoretical predictions show a very good agreement with the experimental data, which is in apparent contradiction with the qualitative expectations discussed above; we will below concentrate on finding the explanation for these unexpected results. \begin{figure} \epsfig{file=DmesonCharmRaa.eps,width=2.5in,height=2.3in,clip=5,angle=0} \vspace*{-0.3cm} \caption{{\bf Comparison of charm quark and D meson suppression predictions.} The figure shows a comparison of the charm quark suppression predictions (the full curve) with the D meson suppression predictions (the dashed curve), as a function of momentum. The electric to magnetic mass ratio is fixed to $\mu_M/\mu_E =0.5$. } \label{HeavyFlavorRaa} \end{figure} We start by asking how fragmentation functions modify light hadron and D meson suppressions, since these functions define the transfer from the parton to the hadron level. We first analyze how fragmentation functions modify the D meson suppression, compared to the bare charm quark suppression. In Fig.~\ref{HeavyFlavorRaa} we see that there is a negligible difference between these two suppression patterns, so that D meson fragmentation does not modify the bare charm quark suppression. Consequently, the D meson suppression is indeed a genuine probe of the charm quark suppression in the QCD medium. \begin{figure*} \epsfig{file=HadronQuarkRaa.eps,width=5in,height=2.3in,clip=5,angle=0} \vspace*{-0.4cm} \caption{{\bf Comparison of the light flavor suppression predictions.} The left panel shows the comparison of light hadron suppression predictions (the full curve) with light quark (the dashed curve) and gluon (the dot-dashed curve) suppression predictions, as a function of momentum. On the right panel, the dashed curve shows what would be the light hadron suppression if only light quarks contributed to light hadrons. The dot-dashed curve shows what would be the light hadron suppression if only gluons contributed to light hadrons, while the full curve shows the actual hadron suppression predictions. On each panel, the electric to magnetic mass ratio is fixed to $\mu_M/\mu_E =0.5$.} \label{LightFlavorRaa} \end{figure*} However, there is a significantly more complex interplay between suppression and fragmentation in charged hadrons. In the left panel of Fig.~\ref{LightFlavorRaa}, we compare the light hadron suppression with the bare light quark and gluon suppression patterns. Surprisingly, we see that the light hadron suppression pattern almost exactly coincides with the bare light quark suppression; a toy numerical sketch of this parton-to-hadron interplay is given below.
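To illustrate schematically how fragmentation can reshape parton-level suppression at the hadron level, consider the following toy sketch (ours, not the actual calculation of this paper; the power-law spectra, the quenching factors, and the fragmentation shapes are all illustrative assumptions):

\begin{verbatim}
import numpy as np

p = np.linspace(10.0, 50.0, 5)    # hadron momentum grid [GeV], illustrative
z = np.linspace(0.05, 1.0, 400)   # fragmentation momentum fraction
dz = z[1] - z[0]

spec_q = lambda pt: pt ** -6.0                          # toy light-quark spectrum
spec_g = lambda pt: 3.0 * pt ** -7.0                    # toy gluon spectrum (steeper)
raa_q  = lambda pt: np.minimum(0.3 + 0.004 * pt, 1.0)   # toy quark-level R_AA
raa_g  = lambda pt: np.minimum(0.15 + 0.004 * pt, 1.0)  # toy gluon-level R_AA (stronger)
frag_q = lambda zz: zz * (1.0 - zz)                     # harder quark fragmentation
frag_g = lambda zz: zz ** 0.5 * (1.0 - zz) ** 3         # softer gluon fragmentation

def hadron_spectrum(quenched):
    """Convolve (optionally quenched) parton spectra with fragmentation."""
    total = np.zeros_like(p)
    for spec, raa, frag in ((spec_q, raa_q, frag_q), (spec_g, raa_g, frag_g)):
        for k, ph in enumerate(p):
            pt = ph / z                       # parton momentum feeding each z
            w = raa(pt) if quenched else 1.0  # parton-level quenching weight
            total[k] += np.sum(frag(z) * w * spec(pt) / z) * dz
    return total

raa_h = hadron_spectrum(True) / hadron_spectrum(False)
for ph, val in zip(p, raa_h):
    print("p = %4.1f GeV  hadron R_AA = %.3f  bare quark R_AA = %.3f"
          % (ph, val, raa_q(ph)))
\end{verbatim}

Because the convolution samples the steeply falling parton spectra at momenta above the hadron momentum, the hadron-level $R_{AA}$ need not sit at the naive gluon-weighted average; tuning such toy inputs shows how it can land close to the bare quark suppression, which is the interplay quantified in Fig.~\ref{LightFlavorRaa}.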
This coincidence may suggest that gluon jets do not contribute to the light hadron suppression, which is however clearly inconsistent with the significant (even dominant) gluon contribution in light hadrons (see Fig.~\ref{GLRatio}). To further investigate this, in the right panel of Fig.~\ref{LightFlavorRaa}, we show what would be the light hadron suppression if hadrons were composed only of light quark jets (the dashed curve), or only of gluon jets (the dot-dashed curve). We see that, as expected from Fig.~\ref{GLRatio}, the actual light hadron suppression is clearly in between the two above-mentioned suppression alternatives, so that both light quarks and gluons indeed significantly contribute to the light hadron suppression. However, by comparing the two panels in Fig.~\ref{LightFlavorRaa}, we see that light hadron fragmentation functions modify the bare light quark and gluon suppressions so that, coincidentally, their ``resultant'' charged hadron suppression almost identically reproduces the bare light quark suppression. Consequently, the heavy flavor puzzle at LHC is a consequence of a specific combination of the suppression and fragmentation patterns for light partons, and it does not require invoking an assumption of the same energy loss for light partons. \section{Conclusions} A major theoretical goal in relativistic heavy ion physics is to develop a framework that is able to consistently explain both light and heavy flavor experimental data. We here analyzed the comparison of charged hadron and D meson suppression data in central 2.76 TeV Pb+Pb collisions at LHC, which leads to a surprising puzzle. To analyze this puzzle, we used a dynamical energy loss formalism. While the solution of this puzzle is inherently quantitative, it can be qualitatively summarized in the following way: Despite the dominant gluon contribution in the charged hadron production, LHC charged hadron suppression turns out to be a genuine probe of bare light quark suppression. The main effect responsible for this key result is the distortion of the bare suppression patterns by the jet fragmentation. Furthermore, the D meson suppression correctly represents charm quark suppression, and bare charm and light quark suppressions are very similar. Taken together, these results in fact explain the observed puzzle, i.e., the fact that light hadron and D meson suppressions are measured to be the same at LHC. Therefore, the explanation of the puzzle follows directly from pQCD calculations of the energy loss and fragmentation. Furthermore, these calculations directly relate the bare quark suppressions to the experimentally observed charged hadron suppressions. Consequently, the heavy flavor puzzle at LHC is not only a coincidental combination of suppression and fragmentation patterns, but also their serendipitous interplay, which can substantially simplify the interpretation of the relevant experiments. {\em Acknowledgments:} This work is supported by Marie Curie International Reintegration Grant within the $7^{th}$ European Community Framework Programme (PIRG08-GA-2010-276913) and by the Ministry of Education, Science and Technological Development of the Republic of Serbia, under projects No. ON171004 and ON173052. I thank I. Vitev and Z. Kang for providing the initial light flavor distributions and useful discussions. I also thank M. Cacciari for useful discussion on heavy flavor production and decay processes. I thank the ALICE Collaboration for providing the shown preliminary data, and thank M. Stratmann and Z.
Kang for help with DSS fragmentation functions.
\section{\label{Introduction}Introduction} Quantum refrigerators are solid-state devices with huge potential benefits in technology. With no moving parts and of microscopic size, they could easily be integrated into existing technology, such as cellphones and computers, to enhance their performance by utilizing the waste heat energy they produce. As always, the technological frontier is supported by a backbone of theoretical framework, which in recent years has seen many advancements, see, e.g.,~\cite{PhysRevE.91.050102,PhysRevB.85.075412,PhysRevLett.110.256801,PhysRevE.64.056130,PhysRevLett.105.130401,PhysRevLett.108.070604}. In addition to the technological possibilities they present, quantum refrigerators are excellent tools for providing insight into the unique features of open quantum systems. For a review of stochastic thermodynamics and the formalism used to treat quantum refrigerators see, e.g.,~\cite{0034-4885-75-12-126001} and \cite{doi:10.1146/annurev-physchem-040513-103724}. The quantum absorption refrigerator is a version of these general machines, based on producing a steady-state heat flow from a cold to a hot reservoir, driven by absorption from an external heat reservoir. Key tools for understanding the operation of these refrigerators, when approaching the limiting temperature of absolute zero, are the laws of thermodynamics. In this article we have studied one such device that appeared to violate the dynamic version of the third law of thermodynamics (the unattainability principle), which states that one cannot cool a system to absolute zero in a finite amount of time. A recent publication by Cleuren \textit{et al.}~\cite{PhysRevLett.108.120603} presented a novel model based on two electronic baths coupled together via a system of quantum dots and driven by an external photon source. The article generated some controversy due to its apparent violation of the unattainability principle, and several authors \cite{PhysRevLett.109.248901,PhysRevLett.109.248902,PhysRevLett.109.248903,PhysRevLett.112.048901} proposed explanations for this violation. However, we find that the discussion remained inconclusive, and we will return to this later in the article. We will begin by giving a brief presentation of the quantum refrigerator model, as introduced by Cleuren \textit{et al.}~\cite{PhysRevLett.108.120603}, and its thermodynamic properties. Then we will summarize and comment on the discussion that followed. Finally we will present a simple modification, based only on the fact that the energy levels of metals are discrete when treated quantum mechanically, which becomes important at temperatures $ T\lesssim\Delta $, where $ \Delta $ is the level spacing. (We measure temperature in energy units, setting the Boltzmann constant $k_B=1$.) Our modification upholds the third law, while it simultaneously reproduces the results from the original model down to the limit of $ T\sim\Delta $. In essence, we want to make the point that the unattainability principle is only valid when applied to a quantum description of a system. \subsection{\label{model}The Model} \begin{figure}[b] \centering \includegraphics[width=0.8\linewidth]{real_model.pdf} \caption{A schematic of the model shown in real space. A small piece of metal with temperature $ T_R $ is coupled to a larger piece with temperature $ T_L>T_R $. Four quantum dots form two channels for electron transport between the metals. The arrows indicate the desired direction of the net particle current to achieve cooling of the right metal lead.
The distance between the two channels is too large for any Coulomb interaction to take place between them.} \label{fig:real_model} \end{figure} \begin{figure}[ht] \centering \includegraphics[width=\linewidth]{model.pdf} \caption{A hot metal lead ($ T_L $) is coupled to a cold one ($ T_R $) via two spatially separated pairs of quantum dots, which form two channels for electron transport between the leads. We consider the case where $ \mu_L=\mu_R=\mu $, and the energy levels of the quantum dots are symmetric about the chemical potential ($ \epsilon_2-\mu=\mu-\epsilon_1 \to \epsilon_1=-\epsilon_2$). Schematic adapted from \cite{PhysRevLett.108.120603}.} \label{fig:model} \end{figure} The quantum refrigerator model proposed in \cite{PhysRevLett.108.120603} is shown schematically in real space in Fig.~\ref{fig:real_model} and in energy space in Fig.~\ref{fig:model}. Here we will briefly explain its operating protocol. It consists of two metal leads and four quantum dots; the large and hot lead with temperature $ T_L $ is coupled to the small, cold lead with temperature $ T_R $, via the set of quantum dots. We assume that each quantum dot is highly confined, and is thus associated with a single energy level, since the other levels are far outside the energy-range of the system. These four levels are marked in Fig.~\ref{fig:model}. The quantum dots form two channels, as illustrated in Fig.~\ref{fig:real_model}, where the energy levels $ \epsilon_2 $ ($ \epsilon_1 $) and $ \epsilon_2+\epsilon_g $ ($ \epsilon_1-\epsilon_g $) are coupled together in channel 2 (channel 1). The two channels are spatially separated, therefore we can safely ignore any Coulomb interaction between the electrons in channel 1 and 2. The basic idea is to move cold electrons (i.e., with energy less than $ \mu $) from the hot lead into the cold lead via channel 1, while simultaneously moving hot electrons (energy greater than the chemical potential $ \mu $) from the cold lead to the hot lead via channel 2. This transport of electrons will thus cool the right lead by injecting cold and extracting hot electrons. Naturally the transport will also heat up the left lead, but since we assume that it is a large piece of metal with a high heat capacity, the heat absorbed will not result in a measurable change in $ T_L $. We can obtain the desired particle flow direction by coupling the quantum dot system to a bosonic bath which induces transitions between the quantum dots of each pair, i.e., between $ \epsilon_1 $ and $ \epsilon_1-\epsilon_g $ in channel 1, and between $ \epsilon_2 $ and $ \epsilon_2+\epsilon_g $ in channel 2. The bosonic bath can be photons from an external source, and/or phonons from the device. In this discussion we will consider it to be a photon bath with temperature $ T_S $. In Ref.~\cite{PhysRevLett.108.120603} the photon bath is taken to be the Sun with a temperature $ T_S\simeq6000$~K, and we will follow this in the sense that we will assume that it is the largest energy scale in the system. In any case the transition rates between the quantum dots are proportional to the probability of finding a boson with energy equal to the energy-difference between the two quantum dot levels, which is given by the Planck distribution $ n(E) $. The rates are thus given by \begin{equation} \label{eq:rates1} k_{\uparrow}^{\epsilon_g} =\frac{\Gamma_s}{e^{\epsilon_g/T_S}-1}, \quad k_{\downarrow}^{\epsilon_g}=\frac{\Gamma_s}{1-e^{-\epsilon_g/T_S}}.
\end{equation} Here $ k_{\uparrow} $ and $ k_{\downarrow} $ are the rates for upwards and downwards transitions in energy, respectively. The difference between them is that $ k_{\downarrow} $ contains an additional term for spontaneous emission. The transition rate for electron transfer from the metal to an empty quantum dot level is proportional to the probability of finding an electron in the same energy level in the metal, which is given by the Fermi-Dirac distribution $ f(E) $. For the inverse transition to take place there has to be an available energy level in the metal, which has a probability proportional to $ 1-f(E) $. Thus the transition rates between quantum dot and metal are \begin{equation} \label{eq:rates2} k^{E}_{l\to d} =\frac{\Gamma}{e^{(E-\mu)/T}+1}, \quad k^{E}_{d\to l}=\frac{\Gamma}{e^{(\mu-E)/T}+1}. \end{equation} For transitions involving the right lead the temperature $ T=T_R $, while for the left lead $ T=T_L $. Notice that in general $ \Gamma\neq\Gamma_s $. These are the constants that set the timescale of the transitions and depend on the specific details of the device. As in Ref.~\cite{PhysRevLett.108.120603}, we will consider the strongly coupled case where the energies of the quantum dots are symmetric about the chemical potential ($ \epsilon_2-\mu=\mu-\epsilon_1$). We can therefore choose to measure all energies relative to $ \mu=0 $, and combine the two parameters $ \epsilon_2=-\epsilon_1=\epsilon $. We can now introduce three distinct occupation probabilities per channel. Since the two quantum dots in the same channel are close to each other in space we assume that the Coulomb repulsion between electrons prevents simultaneous occupation of the right and left quantum dot. For channel 1 we then have the probabilities $ P^{(1)}_L,P^{(1)}_R$, and $P^{(1)}_0$, which represent the probability of finding an electron in the left quantum dot with energy $ -(\epsilon+\epsilon_g) $, in the right quantum dot with energy $ -\epsilon $, and in neither quantum dot, respectively. A master equation describing the time-evolution of the occupation probabilities in channel 1 can thus be formulated: \begin{equation} \label{master_eq} \dot{\mathbf{P}}^{(1)}=\hat{M}^{(1)} \mathbf{P}^{(1)}\, , \quad \mathbf{P}^{(1)} \equiv \begin{bmatrix} P^{(1)}_0 \\ P^{(1)}_L \\ P^{(1)}_R \end{bmatrix}, \end{equation} where the transition matrix $ \hat{M}^{(1)} $ is given by \begin{equation} \nonumber \hat{M}^{(1)} \! \! \!= \! \! \! \begin{bmatrix}\! -k_{l\to d}^{-(\epsilon+\epsilon_g)}-k_{l\to d}^{-\epsilon} & k_{d\to l}^{-(\epsilon+\epsilon_g)} & k_{d\to l}^{-\epsilon} \\\\ k_{l\to d}^{-(\epsilon+\epsilon_g)} & -k_{d\to l}^{-(\epsilon+\epsilon_g)}-k_{\uparrow}^{\epsilon_g} & k_{\downarrow}^{\epsilon_g} \\\\ k_{l\to d}^{-\epsilon} & k_{\uparrow}^{\epsilon_g} & -k_{d\to l}^{-\epsilon}-k_{\downarrow}^{\epsilon_g} \! \! \end{bmatrix} \! \! . \end{equation} We are interested in the steady state of the system, where the probabilities do not change as a function of time. To find this state we set $ \dot{\mathbf{P}}^{(1)}=\mathbf{0} $, and solve Eq.~(\ref{master_eq}). By doing this we obtain the steady state probability vector $\mathbf{P}^{(1)}(\epsilon,\epsilon_g,T_R,T_L)$, where we consider $ \Gamma,\Gamma_s $ and $ T_S $ as constants. A similar procedure gives us the steady-state probability vector for channel 2 as well.
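As a minimal numerical sketch (ours; the parameter values are illustrative, with all energies and temperatures in the same units and $k_B=1$), the steady state of Eq.~(\ref{master_eq}) can be obtained by replacing one balance equation with the normalization condition:

\begin{verbatim}
import numpy as np

Gamma, Gamma_s = 1.0, 1.0          # transition-rate scales (illustrative)
T_L, T_R, T_S = 20.0, 1.0, 6000.0  # lead and photon-bath temperatures
eps, eps_g = 0.5, 100.0            # dot level and photon gap (illustrative)

k_up   = lambda w: Gamma_s / (np.exp(w / T_S) - 1.0)   # photon absorption
k_down = lambda w: Gamma_s / (1.0 - np.exp(-w / T_S))  # stimulated + spontaneous
k_ld   = lambda E, T: Gamma / (np.exp(E / T) + 1.0)    # lead -> dot (mu = 0)
k_dl   = lambda E, T: Gamma / (np.exp(-E / T) + 1.0)   # dot -> lead (mu = 0)

# Channel-1 rate matrix over (P_0, P_L, P_R): the left dot at -(eps+eps_g)
# couples to the hot lead, the right dot at -eps couples to the cold lead.
eL, eR = -(eps + eps_g), -eps
M = np.array([
    [-(k_ld(eL, T_L) + k_ld(eR, T_R)), k_dl(eL, T_L), k_dl(eR, T_R)],
    [k_ld(eL, T_L), -(k_dl(eL, T_L) + k_up(eps_g)), k_down(eps_g)],
    [k_ld(eR, T_R), k_up(eps_g), -(k_dl(eR, T_R) + k_down(eps_g))],
])

A = np.vstack([M[:2], np.ones(3)])   # replace last balance eq. by sum(P) = 1
P0, PL, PR = np.linalg.solve(A, np.array([0.0, 0.0, 1.0]))
print(P0, PL, PR)
\end{verbatim}

The particle currents and the cooling power defined next follow directly from this steady-state vector.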
The particle current between the right dot in the lower level and the cold lead can be written as \begin{equation} J^{(1)}= P^{(1)}_Rk_{d\to l}^{-\epsilon} - P^{(1)}_0k_{l\to d}^{-\epsilon} \, , \end{equation} and the current through the upper level is \begin{equation} J^{(2)}= P^{(2)}_Rk_{d\to l}^{\epsilon} - P^{(2)}_0k_{l\to d}^{\epsilon} \, . \end{equation} The cooling power, i.e., the heat transported \textit{out of} the right lead per unit time, can now be defined as: \begin{equation} \dot{Q}_R=(-\epsilon-\mu)(-J^{(1)}) + (\epsilon-\mu)(-J^{(2)}). \end{equation} Since the energy levels are symmetric about $ \mu $, we can set $ \mu=0 $, and we obtain the cooling power for the refrigerator model: \begin{equation} \dot{Q}_R=\epsilon(J^{(1)}-J^{(2)}). \end{equation} Optimized cooling is attained by varying $ \epsilon(T_R) $ and $ \epsilon_g(T_R) $ as a function of $ T_R $ (when $ T_S $ and $ T_L $ are kept constant). It can be shown (see Ref.~\cite{PhysRevLett.108.120603} for details) that the cooling power in the limit of low $ T_R $ is given by \begin{equation} \lim\limits_{T_R\to 0}\dot{Q}_R\propto T_R . \label{eq:Q_lowT} \end{equation} When working at an energy scale where $\epsilon_g\ll T_S $ we have $ k_{\uparrow}\simeq k_{\downarrow} $. In this situation we can get a better understanding of the system and when cooling will occur by considering the transitions in channel 2. There the energy levels are situated above $ \mu $, and we have \[ 0< f(E)< 1/2, \quad 1/2<1-f(E)<1. \] Therefore the rate from lead to dot will always be less than the rate from dot to lead, $ k_{l\to d}^{E}< k_{d\to l}^{E} $, for a given energy $ E $. The requirement for cooling to take place in this situation is that $ f(\epsilon+\epsilon_g)<f(\epsilon) $, i.e., we require $ (\epsilon+\epsilon_g)/T_L>\epsilon/T_R $. We then have \begin{equation} \left. \begin{array}{l} k^{\epsilon+\epsilon_g}_{d\to l} > k^{\epsilon}_{d\to l} \\ k^{\epsilon+\epsilon_g}_{l\to d} < k^{\epsilon}_{l\to d} \\ k_{l\to d}^{E}< k_{d\to l}^{E} \end{array}\right\} \Rightarrow k_{d\to l}^{\epsilon+\epsilon_g}>k_{d\to l}^{\epsilon}>k_{l\to d}^{\epsilon}>k_{l\to d}^{\epsilon+\epsilon_g} \, . \label{rate_inequalities} \end{equation} When $ k_{\uparrow}\simeq k_{\downarrow} $ we know that the occupation probability $ P_L^{(2)}\simeq P_R^{(2)} = P $, and thus $ P_0^{(2)}=(1-2P) $. Using the inequalities shown in Eq.~(\ref{rate_inequalities}) we now consider two different states of the system. First assume that there is an electron in the quantum dot system; it can either exit into the left lead or the right lead, where the currents are $k^{\epsilon+\epsilon_g}_{d\to l}P_L^{(2)} $ and $ k^{\epsilon}_{d\to l}P_R^{(2)} $, respectively. The difference is \[ P\big(k^{\epsilon+\epsilon_g}_{d\to l}- k^{\epsilon}_{d\to l} \big)>0 \] which tells us that it is more likely for the electron to exit into the left lead. Next we assume the quantum dot system is unoccupied; an electron can enter from the left lead or the right lead, with currents $k^{\epsilon+\epsilon_g}_{l\to d}P_0^{(2)} $ and $k^{\epsilon}_{l\to d}P_0^{(2)} $, respectively. The difference is now \[ (1-2P)\big(k^{\epsilon+\epsilon_g}_{l\to d}-k^{\epsilon}_{l\to d}\big)<0 \] indicating that it is more likely that an electron enters from the right lead. Above the chemical potential, electrons entering from the right lead and exiting into the left lead correspond to a net cooling of the right lead, which is our desired effect.
A similar analysis can be done for channel 1, where the corresponding result of net transport from the left to the right lead is obtained. \subsection{\label{unatt}The Unattainability Principle} The unattainability principle states that one cannot cool a system to absolute zero in a finite amount of time~\cite{PhysRevE.85.061126}. A system with heat capacity $ C_V=dQ/dT $ and cooling power $ \dot{Q}\equiv dQ/dt $ has a cooling rate given by \begin{equation} \frac{dT}{dt}=\frac{\dot{Q}}{C_V}. \end{equation} If we assume that $ C_V $ and $ \dot{Q} $ scale with temperature to the power of $ \kappa $ and $ \lambda $, respectively, we have \begin{equation} \frac{dT}{dt}\propto T^{\lambda-\kappa}. \label{eq:unattainability} \end{equation} For $ \alpha\equiv\lambda-\kappa < 1 $ the unattainability principle is violated~\cite{PhysRevLett.109.248901}, and cooling to absolute zero is possible in finite time. By inspecting Eq.~(\ref{eq:Q_lowT}) we find that $ \lambda=1 $. The heat capacity of the metal lead as $ T_R\to 0 $ is dominated by the electronic heat capacity, which is proportional to the temperature $ C_V\propto T_R $ (see chapter 7.H.2 of Ref.~\cite{reichl1980modern} or an equivalent textbook), and therefore $ \kappa=1 $. The end result is that $ \dot{T}_R\propto T_R^0 $, in violation of the unattainability principle. \subsection{\label{comments}Comments} Levy \textit{et al.}~\cite{PhysRevLett.109.248901} were the first to point out that because the refrigerator presented in \cite{PhysRevLett.108.120603} has a cooling power of $ \dot{Q}\propto T_R $ and a heat capacity of $ C_V\propto T_R $ in the limit of $ T_R\to 0 ~\mathrm{K} $, its cooling rate is given by \begin{equation} \frac{dT(t)}{dt}=\frac{\dot{Q}}{C_V}\propto T_R^0=\text{const} . \end{equation} This enables cooling to absolute zero in a finite amount of time. In the original model proposed by Cleuren \textit{et al.} the quantum dot system consisted of only two quantum dots, with the levels $ \epsilon_1 $ ($ \epsilon_1-\epsilon_g $) and $ \epsilon_2 $ ($ \epsilon_2+\epsilon_g $) being two adjacent levels within the right (left) quantum dot. Levy \textit{et al.} suggest that the violation of the third law may be due to the neglect of internal transitions within a single dot. This suggestion was refuted by Cleuren \textit{et al.}~\cite{PhysRevLett.109.248902}, who stated that the model could also be constructed using two pairs of spatially separated quantum dots, as we have done here. Their own explanation for the violation was that the quantum master equation they utilized does not take into account coherent effects, and the broadening of the linewidth of the quantum dot energy levels was ignored. Both of these effects become important in the low-temperature limit. Allahverdyan \textit{et al.}~\cite{PhysRevLett.109.248903} suggested that the violation occurs since the weak-coupling master equation used by Cleuren \textit{et al.} is limited at low temperatures. They state that one can justify taking the limit, $ T_R\to0 $, for such an equation only while simultaneously reducing the coupling between the quantum dot system and heat reservoirs, $ \gamma \to 0 $. A concrete analysis of the low-temperature behavior of the cooling power is, however, not given. Finally, Entin-Wohlman \textit{et al.}~\cite{PhysRevLett.112.048901} considered a simplified version of the original model, where only a single channel contributes to the electron transfer.
They assume that boson-assisted hopping is the dominant form of electronic transport \cite{PhysRevB.85.075412} (an assumption we will also make later in the article). If we remove channel 1 from our model and only consider channel 2, we obtain the same system as considered in \cite{PhysRevB.85.075412}. Using Fermi's golden rule they find that the heat current is exponentially small for $ \epsilon_2-\mu \gg T_R $. They go on to state that the violation of the third law comes from allowing the levels $ \epsilon_1 $ and $ \epsilon_2 $ to approach the chemical potential linearly as a function of temperature, and claim that this is unnecessary and complicates the setup. In our opinion, the linear temperature dependence of the energy levels $ \epsilon_1 $ and $ \epsilon_2 $ in the quantum dots coupled to the cold lead is an essential feature -- it arises from the optimization of the cooling power suggested in Ref.~\cite{PhysRevLett.108.120603}, but not implemented in Ref.~\cite{PhysRevLett.112.048901}. \section{\label{discretization}Discretization of the Model} One of the assumptions of the model proposed is that there is a continuous spectrum of energy states in the metal leads. Thus the electrons are transferred elastically between the quantum dots and the metals. We will now introduce a simple discretized modification of the original model, and show that the unattainability principle will then be restored. In our model, we will assume an even spacing between the energy levels. \begin{figure}[t] \centering \includegraphics[width=0.7\linewidth]{d_model.pdf} \caption{The continuous states of the metal are replaced by a discrete spectrum with a constant energy-spacing $ \Delta $. The asymmetry between states above and below $ \mu $ is modeled by the parameter $ \delta $. For $ \delta=\Delta/2 $ the chemical potential lies exactly in the middle of two energy-levels. The $ j $th [$ i $th] level below [above] $ \mu $ is given by $ \epsilon_j=\delta-j\Delta $ [$ \epsilon_i=\delta+(i-1)\Delta $].} \label{fig:d_model} \end{figure} We also introduce the parameter $ \delta $ to quantify the asymmetry about the chemical potential $ \mu $, see Fig.~\ref{fig:d_model}. If $ \delta=\Delta/2 $ the energy levels are symmetrically distributed about $ \mu $. As long as the quantum dot and metal energy levels do not exactly overlap, the transitions are now inelastic and require absorption/emission of phonons. \subsection{Cooling Power} We can set up a master equation for the dynamics in channel 1, as in Eq.~(\ref{master_eq}), but now for the discrete system. The rate-matrix is almost identical, but since we allow for phonon-assisted transitions, the rates between the quantum dots and the discrete levels of the right lead are given by a sum of all possible emission and absorption transitions. We will use $ \epsilon_{n} $/$ \epsilon_{m}$ to denote the $ n $th/$ m $th level in the metal lead, above/below the quantum dot level $ \epsilon_1 $. We also introduce $ \omega_n=\epsilon_{n}-\epsilon_1 $ and $ \omega_m=\epsilon_1-\epsilon_{m} $ to represent the phonon frequencies associated with transitions between these levels. For transitions from the lead to the dot, $ \epsilon_n $ and $ \epsilon_m $ are the energies associated with emission and absorption processes, respectively, while for dot-to-lead transitions the association is opposite. 
The matrix elements change from $ k^{\epsilon_1}_{d\to l}\to k^{d,\epsilon_1}_{d\to l} $ and $ k^{\epsilon_1}_{l\to d}\to k^{d,\epsilon_1}_{l\to d} $, where we use the superscript $ d $ to indicate that it is the transition rate for the discrete model. These rates are then sums of all possible emission and absorption processes, and can be written as \begin{align} k^{d,\epsilon_1}_{d\to l}&=\overbrace{\sum\limits_{m}k^{\epsilon_m}_{d\to l}}^{\text{emission}}+\overbrace{\sum\limits_{n}k^{\epsilon_n}_{d\to l}}^{\text{absorption}}\, , \nonumber \\ k^{d,\epsilon_1}_{l\to d}&=\underbrace{\sum\limits_{m}k^{\epsilon_m}_{l\to d}}_{\text{absorption}}+\underbrace{\sum\limits_{n}k^{\epsilon_n}_{l\to d}}_{\text{emission}} \end{align} where the emission and absorption rates are given by \begin{align} k^{\epsilon_n}_{d\to l}&=\Gamma\big[1-f(\epsilon_n)\big]n(\omega_n)\omega_n^2 \, , \nonumber \\ k^{\epsilon_m}_{d\to l}&= \Gamma\big[1-f(\epsilon_m)\big]\big[n(\omega_m)+1\big]\omega_m^2 \, , \nonumber \\ k^{\epsilon_m}_{l\to d}&=\Gamma f(\epsilon_m)n(\omega_m)\omega_m^2 \, , \nonumber \\ k^{\epsilon_n}_{l\to d}&=\Gamma f(\epsilon_n)\big[n(\omega_n)+1\big]\omega_n^2 \, . \end{align} Here $ n(\omega)=(e^{\omega/T_R}-1)^{-1} $ is the Planck distribution, which gives the thermal occupation of phonon modes with energy $ \omega $, and $ f(\epsilon)=(e^{\epsilon/T_R}+1)^{-1} $ is the Fermi-Dirac distribution, which tells us the probability of finding an occupied state at $ \epsilon $. We assume a 3D phonon density of states, thus the rates have to be multiplied by an $ \omega^2 $ factor. We have absorbed all other constants from the DOS into the $ \Gamma $ introduced earlier. The transitions between the left quantum dot and the hot left lead, i.e., the rates involving $ -(\epsilon+\epsilon_g) $, remain unchanged since we still consider this to be a large metal piece with a quasi-continuous energy spectrum. Again, we solve the master equation in the steady state and obtain the occupation probability vector $ \mathbf{P}^{(1)} $, but now for the discrete model. With this we can find the particle current in channel 1 for the discrete model, \begin{equation} J_d^{(1)}=P^{(1)}_Rk^{d,\epsilon_1}_{d\to l} - P^{(1)}_0k^{d,\epsilon_1}_{l\to d}. \end{equation} Thus we can write the part of the cooling power associated with channel 1 as \begin{align} \dot{Q}_R^{(1)}= & P^{(1)}_R\bigg(\sum\limits_{m}k^{\epsilon_m}_{d\to l}\epsilon_m+\sum\limits_{n}k^{\epsilon_n}_{d\to l}\epsilon_n\bigg) \nonumber \\ - & P^{(1)}_0\bigg(\sum\limits_{m}k^{\epsilon_m}_{l\to d}\epsilon_m+\sum\limits_{n}k^{\epsilon_n}_{l\to d}\epsilon_n\bigg). \end{align} A similar analysis can be applied to channel 2, providing its corresponding cooling power $ \dot{Q}_R^{(2)} $. Thus the total cooling power can be written as \begin{equation} \dot{Q}_R=\dot{Q}_R^{(1)}+\dot{Q}_R^{(2)} \label{discrete_Q}. \end{equation} It should be noted that in the limit of $ T_R\to0 $ only the two levels $ \delta $ and $ \delta-\Delta $ will contribute to the total cooling power since all levels above $ \delta $ will be unoccupied and all levels below $ \delta-\Delta $ will be occupied. We can now numerically optimize Eq.~(\ref{discrete_Q}), with respect to the two parameters $ \epsilon $ and $ \epsilon_g $, while keeping $ T_L $ and $ T_S $ constant. Note that $ \epsilon_m $ and $ \epsilon_n $ are determined from $ \epsilon=-\epsilon_1=\epsilon_2 $ and are not free parameters.
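As a sketch (ours; the truncation of the lead spectrum and all parameter values are illustrative assumptions), the summed phonon-assisted rates between the dot level $\epsilon_1$ and the discrete lead levels can be accumulated as follows:

\begin{verbatim}
import numpy as np

T_R, Gamma = 0.5, 1.0     # cold-lead temperature and rate scale (illustrative)
Delta, delta = 1.0, 0.5   # level spacing and asymmetry (delta = Delta/2)
eps1 = -0.7               # dot level, chosen not to overlap any lead level

n_B = lambda w: 1.0 / (np.exp(w / T_R) - 1.0)  # Planck occupation
f   = lambda E: 1.0 / (np.exp(E / T_R) + 1.0)  # Fermi-Dirac occupation (mu = 0)

# Truncated lead spectrum: eps_i = delta + (i-1)*Delta above mu and
# eps_j = delta - j*Delta below mu, as in the discretized level scheme above.
levels = np.concatenate([delta + np.arange(0, 30) * Delta,
                         delta - np.arange(1, 31) * Delta])

k_dl_sum = k_ld_sum = 0.0
for E in levels:
    w = abs(E - eps1)                 # phonon energy for this transition
    if E > eps1:                      # lead level above the dot level
        k_dl_sum += Gamma * (1 - f(E)) * n_B(w) * w ** 2          # absorption
        k_ld_sum += Gamma * f(E) * (n_B(w) + 1.0) * w ** 2        # emission
    else:                             # lead level below the dot level
        k_dl_sum += Gamma * (1 - f(E)) * (n_B(w) + 1.0) * w ** 2  # emission
        k_ld_sum += Gamma * f(E) * n_B(w) * w ** 2                # absorption

print(k_dl_sum, k_ld_sum)  # truncated k^{d,eps_1}_{d->l}, k^{d,eps_1}_{l->d}
\end{verbatim}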
When $ \epsilon_g\gg\epsilon$, the optimal energy of the quantum dot levels $ \pm(\epsilon_g+\epsilon) $ is independent of $ \epsilon $ and therefore also independent of $ T_R $ (the only influence of $ T_R $ on those levels comes via the coupling to the levels $ \pm \epsilon $). This in turn makes the optimal cooling power $ \dot{Q}_R $ approximately independent of $ \epsilon_g $. Hence the only free parameter for optimization is $ \epsilon(T_R) $. The plot of the optimized cooling power as a function of $ T_R $ is shown in Fig.~\ref{fig:cooling_power}. For simplicity we set $ \delta=\Delta/2 $ and find that the optimized cooling power as $ T_R\to 0~\mathrm{K} $ is given by \begin{equation} \dot{Q}_R\propto e^{-\Delta/2T_R}, \quad T_R\to 0. \end{equation} \subsection{Heat Capacity} The heat capacity of a Fermi gas with temperature-independent chemical potential $\mu$ can be expressed as \begin{equation} C_V=\frac{dU}{dT}=\int\limits_{0}^{\infty}d\epsilon(\epsilon-\mu) D(\epsilon)\frac{\partial f(\epsilon)}{\partial T} \, . \end{equation} Here $ D(\epsilon) $ is the density of states (which is a constant in our case), and $ f(\epsilon) $ is the Fermi-Dirac distribution. When going from the continuous to the discrete description we have to replace the integral by a sum, and the continuous variable $ \epsilon $ by the discretized states $ n\Delta $. \begin{equation} C_V=\sum\limits_{n=0}^{\infty}\bigg(\frac{n\Delta-\mu}{T_R}\bigg)^2\frac{e^{(n\Delta-\mu)/T_R}}{(e^{(n\Delta-\mu)/T_R}+1)^2} \label{discrete_C} \end{equation} This sum can easily be determined numerically, but to gain additional insight we can consider the heat capacity for a two-level system. As $ T_R\to0 $ the levels $ \delta $ and $ \delta-\Delta $ will be the only relevant levels. We can write the grand canonical partition function for the two-level system as \begin{equation} \Xi=1+e^{-\beta\delta}+e^{-\beta(\delta-\Delta)}+e^{-\beta(2\delta-\Delta)} \end{equation} The energy can be written as \begin{equation} U=\frac{1}{\Xi}\sum\limits_{i}H_ie^{-\beta H_i} \end{equation} where $ H_i $ is the energy of the state $ i $. From this we obtain the heat capacity $ C_V=dU/dT $: \begin{eqnarray} \label{heatcap} && C_V= - \frac{\Delta^2 A+ \delta^2 B +\Delta \delta \, C}{T_R^2 \left(1 + e^{-\beta\delta} +e^{-\beta(\delta-\Delta)} + e^{-\beta(2\delta-\Delta)}\right)^2}, \nonumber \\&& A=e^{\beta(\Delta-3\delta)} + e^{\beta(\Delta-\delta)}+2e^{\beta(\Delta-2\delta)}\, ,\nonumber \\&& B=e^{\beta(\Delta-3\delta)} + e^{\beta(2\Delta-3\delta)} +e^{\beta(\Delta-\delta)} +4e^{\beta(\Delta-2\delta)}+e^{-\beta\delta}, \nonumber \\&& C=2e^{\beta(\Delta-3\delta)}+2e^{\beta(\Delta-\delta)}+4e^{\beta(\Delta-2\delta)}\, . \end{eqnarray} This expression is greatly simplified at $ \delta=\Delta/2 $, i.e., a symmetric distribution of energy levels above and below $ \mu $. In this case we obtain: \begin{equation} C_V=\frac{dU}{dT}=2\bigg(\frac{\Delta}{2T_R}\bigg)^2\frac{e^{\Delta/2T_R}}{(e^{\Delta/2T_R}+1)^2}, \end{equation} and from this result we find that in the limit of $ T_R\to0 $ the heat capacity is \begin{equation} C_V=2\bigg(\frac{\Delta}{2T_R}\bigg)^2e^{-\Delta/2T_R}, \quad T_R\to 0.
\label{heatcap_lowT} \end{equation} Although this is only true for $ \delta=\Delta/2 $, we see from the general equation for the heat capacity given in Eq.~(\ref{heatcap}) that the factor of $ T_R^{-2} $ is present for all terms, and we have found numerically that the dominating exponential terms in the optimized cooling power, Eq.~(\ref{discrete_Q}), and the heat capacity, Eq.~(\ref{heatcap_lowT}), always cancel each other as $ T_R\to 0 $. \section{\label{results}Results} \begin{figure}[b] \centering \includegraphics[width=1\linewidth]{cooling_power.pdf} \caption{Graph of the optimized cooling power $ \dot{Q}_R $ as a function of the dimensionless variable $ T_R/\Delta $. The dashed line is the result from the continuous model while the solid line is the result from the discrete model. For temperatures $ T_R\gtrsim\Delta $ the discrete model reproduces the linear cooling power of the continuous model. However, for temperatures $ T_R\lesssim \Delta $ the cooling power changes to an exponential form. Parameters used: $ \Gamma=\Gamma_s=1 $, $ T_L=20 $~K, $ T_S=6000$~K, $ \epsilon_g= 100$~K, $ \Delta=1$~K, and $ \delta=\Delta/2 $.} \label{fig:cooling_power} \end{figure} \begin{figure}[ht] \centering \includegraphics[width=1\linewidth]{cooling_rate.pdf} \caption{Plot of the optimized cooling rate, $dT_R/dt $, as a function of $ T_R/\Delta $. Again we see that the discrete model (solid line) reproduces the third-law-violating constant rate of temperature change of the continuous model (dashed line) for $ T_R \gtrsim \Delta $. The inset shows $ C_V $ as a function of the same variable. When $T_R\lesssim \Delta/2 $ the heat capacity develops a feature similar to the Schottky anomaly, indicating that the main contribution to the heat capacity comes from the two levels ($ \delta $ and $\delta-\Delta $) closest to $ \mu=0 $. As a result, for $ T_R\lesssim\Delta $ the exponential term in $ \dot{Q}_R $ cancels the one in $ C_V $, and we are left with the $ T_R^2 $ term from the heat capacity. Parameters used: $ \Gamma=\Gamma_s=1 $, $ T_L=20$~K, $ T_S=6000$~K, $ \epsilon_g= 100$~K, $ \Delta=1$~K, and $ \delta=\Delta/2 $.} \label{fig:cooling_rate} \end{figure} We can now find the cooling rate $ dT_R/dt $ for the discrete system. In Fig.~\ref{fig:cooling_power} we have plotted the cooling power $ \dot{Q}_R $ as a function of the dimensionless variable $ T_R/\Delta $. The solid line is the result of our numerical calculations, while the dashed line is the result from the original model~\cite{PhysRevLett.108.120603}. We see that for $ T_R \gtrsim \Delta $ the discrete model reproduces the results from the original model, while when $ T_R\lesssim \Delta $ the result changes to an exponential form. The heat capacity $ C_V $ is shown as a function of the same dimensionless variable $ T_R/\Delta $ in Fig.~\ref{fig:cooling_rate}, inset. Again, it reproduces the results from the original model for $ T_R>\Delta $, but when $ T_R<\Delta/2 $ a Schottky-like feature appears, indicating that only the two levels closest to $ \mu=0 $ are participating in the dynamics. As discussed earlier, if the cooling rate can be written in the form of Eq.~(\ref{eq:unattainability}), the unattainability principle requires $ \alpha=\lambda-\kappa\geq1 $. In the original model with a continuous energy spectrum in the cold metal lead, it was found that $ \alpha=0 $.
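As a numerical check on the Schottky-like behavior of the inset, the following short Python sketch evaluates the discrete sum of Eq.~(\ref{discrete_C}) and compares it with the two-level formula; the truncation $ N $, the placement of $ \mu $ halfway between two levels, and $ k_B=1 $ are our illustrative assumptions.
\begin{verbatim}
import numpy as np

# Discrete heat capacity of Eq. (discrete_C) vs. the two-level formula
# (the latter is exact only for delta = Delta/2 and T_R -> 0).

def C_discrete(T, Delta=1.0, mu=10.5, N=10_000):
    x = (np.arange(N + 1) * Delta - mu) / T
    x = np.clip(x, -700, 700)
    # x^2 e^x/(e^x+1)^2 rewritten as x^2/(2 cosh(x/2))^2 for stability
    return np.sum(0.25 * x**2 / np.cosh(x / 2) ** 2)

def C_two_level(T, Delta=1.0):
    y = Delta / (2 * T)
    return 0.5 * y**2 / np.cosh(y / 2) ** 2

for T in (1.0, 0.5, 0.2, 0.1):
    print(T, C_discrete(T), C_two_level(T))
\end{verbatim}
For $ T_R\gtrsim\Delta $ many levels contribute and the two-level formula underestimates $ C_V $, while for $ T_R\ll\Delta $ the two expressions agree, consistent with the Schottky-like feature in the inset.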
Using our results from Eq.~(\ref{discrete_Q}) and Eq.~(\ref{discrete_C}), we find that for $ \delta=\Delta/2 $ the cooling rate is \begin{equation} \frac{dT_R}{dt}\propto \frac{\dot{Q}_R}{C_V}\propto T_R^2 \,, \quad T_R \to 0\, . \end{equation} We obtain $ \alpha=2 $, which implies that cooling to absolute zero is impossible in a finite amount of time, and the discrete model is thus consistent with the unattainability principle. The corresponding numerical result is shown in Fig.~\ref{fig:cooling_rate}, where we have plotted $ dT_R/dt $ as a function of $ T_R/\Delta $. Also here the result from the discrete model (solid line) reproduces the result from the original model (dashed line) for $ T_R\gtrsim \Delta $, but once $ T_R\lesssim \Delta $ it differs. For $ T_R\ll\Delta $ we find, by fitting the data to the function $ dT_R/dt=A\,T_R^B $, that $ dT_R/dt\propto T_R^2 $. Although the results from Eq.~(\ref{discrete_Q}) and Eq.~(\ref{discrete_C}) are only valid for $ \delta=\Delta/2 $, we find numerically that the exponential term in $ \dot{Q}_R $ always cancels with the one in $ C_V $, and since the $ T_R^{-2} $ term in $ C_V $ is always present, the cooling rate $ dT_R/dt\propto T_R^2 $ holds independently of the choice of $ \delta $. \section{\label{discussion}Discussion and conclusions} In conclusion, we have shown that our natural modification of the model proposed by Cleuren \textit{et al.} does not violate the dynamic version of the third law, and allows for the same cooling performance at temperatures $ T_R>\Delta $ as the original. This is a positive result, which tells us that the original model can be used to cool very efficiently down to the extreme limit of $ T_R\sim\Delta $, where the cooling power is quenched. Though we assumed a constant level spacing, $\Delta$, the low-temperature behavior of the cooling rate is insensitive to this assumption since at $T_R \to 0$ only the two levels closest to the chemical potential are important. The laws of thermodynamics are so general that they should apply both to classical and quantum systems. The third law, in particular, is a statement about the properties of a system as its temperature approaches absolute zero, and at low temperatures quantum effects become important. Quantum theory predicts that confined systems have discretized energy levels, and when the temperature $ T $ becomes comparable to the spacing between energy levels $ \Delta $, this discreteness needs to be taken into account. In \cite{PhysRevLett.108.120603} the authors use a continuous energy spectrum for the metal lead, disregarding the quantum discreteness. In the comments on the violation of the third law \cite{PhysRevLett.109.248901,PhysRevLett.109.248902,PhysRevLett.109.248903,PhysRevLett.112.048901}, the authors employ a heat capacity derived from quantum theory, and it is this mixing of classical and quantum descriptions that leads to the breaking of the unattainability principle. If instead we use a purely classical expression for the heat capacity, which would be a constant as given by the equipartition principle, the unattainability principle would be satisfied \cite{loebl1960third}. We have assumed that the cold metal lead equilibrates instantaneously after electron transfer. This equilibration depends on the size of the metal; larger volumes equilibrate more slowly, and the cooling would only occur in a finite volume within the metal. Not only does the size affect the equilibration time, but it also affects the spacing between the energy levels.
Larger volumes result in smaller spacing, and thus the system could be cooled to lower temperatures, since the discrete effects only become apparent when $ T_R\simeq \Delta $. Finding an optimal system size that balances these two effects would be beneficial. Our final assumption is that the hot left lead functions as a large heat bath and has no effect on the cooling rate. A recent article \cite{Masanes2014} has shown that in a cooling process the density of states of the left heat bath affects the cooling rate of quantum refrigerators. A refined model taking into account the properties of the left lead would give us additional insight into the nature of quantum refrigerators. \acknowledgments The research leading to these results has received funding from the European Union Seventh Framework Program (FP7/2007-2013) under grant agreement No. 308850 (INFERNOS). \input{quantum_refrigerator_paper_VBS_YG_1911.bbl} \end{document}
\section{The proof of Theorem \ref{rf} when case (a) of Lemma \ref{gluing} arises} \label{case a} Figure \ref{inv1} depicts an involution $\tau_1$ on $E_1 \times I$ under which $\partial_3 E_1\times I$ is invariant with its boundary components interchanged, and which satisfies $\tau_1(A_{11})=A_{12}$. Then $\tau_1$ extends to an involution of $X_1$ since its restriction to $\partial_3 E_1\times I = A_1$ coincides with the restriction to $A_1$ of the standard involution of $V_1$. Evidently $\tau_1(\partial_1 E_1 \times \{0\}) = \partial_2 E_1 \times \{1\}$ and $\tau_1(\partial_2 E_1 \times \{0\}) = \partial_1 E_1 \times \{1\}$. \begin{figure}[!ht] {\epsfxsize=4in \centerline{\epsfbox{inv1.ai}}\hspace{10mm}} \caption{$X_1$ and the involution $\tau_1$}\label{inv1} \end{figure} Figure \ref{inv2} depicts an involution $\tau_2$ on $E_2 \times I$ under which each of the annuli $\partial_3 E_2 \times I, A_{21}$, and $A_{22}$ is invariant. Further, it interchanges the components of $E_2 \times\partial I$, and, as in the previous paragraph, $\tau_2$ extends to an involution of $X_2$. Note that $\tau_2(\partial_j E_2 \times \{0\}) = \partial_j E_2 \times \{1\}$ for $j = 1, 2$. Next consider the orientation preserving involution $\tau_2' = f (\tau_1|P_1) f^{-1}$ on $P_2$. By construction we have $\tau_2'(\partial_j E_2 \times \{0\}) = \partial_j E_2 \times \{1\}$ for $j = 1, 2$, and therefore $\tau_2' = g (\tau_2|P_2) g^{-1}$ where $g: P_2 {\rightarrow} P_2$ is a homeomorphism whose restriction to $\partial P_2$ is isotopic to $1_{\partial P_2}$. The latter fact implies that $g$ is isotopic to a homeomorphism $g': P_2 {\rightarrow} P_2$ which commutes with $\tau_2|P_2$. Hence, $\tau_2'$ is isotopic to $\tau_2|P_2$ through orientation preserving involutions whose fixed point sets consist of two points. In particular, $\tau_1$ and $\tau_2$ can be pieced together to form an orientation preserving involution $\tau: M {\rightarrow} M$. For each slope $\gamma$ on $\partial M$, $\tau$ extends to an involution $\tau_\gamma$ of the associated Dehn filling $M(\gamma) = M \cup V_\gamma$, where $V_\gamma$ is the filling solid torus. Thurston's orbifold theorem applies to our situation and implies that $M(\gamma)$ has a geometric decomposition. In particular, $M(\alpha)$ is a Seifert fibred manifold whose base orbifold is of the form $S^2(2,3,4)$, a $2$-sphere with three cone points of orders $2, 3, 4$, respectively. It follows immediately from our constructions that $X_1(\beta)/ \tau_\beta$ and $X_2(\beta)/ \tau_\beta$ are $3$-balls. Thus $M(\beta)/ \tau_\beta = (X_1(\beta)/ \tau_\beta) \cup (X_2(\beta)/ \tau_\beta) \cong S^3$ and since $\partial M / \tau \cong S^2$, it follows that $M / \tau$ is a $3$-ball. More precisely, $M/\tau$ is an orbifold $(N,L^0)$, where $N$ is a 3-ball, $L^0$ is a properly embedded 1-manifold in $N$ that meets $\partial N$ in four points, and $M$ is the double branched cover of $(N,L^0)$. We will call $(N,L^0)$ a {\em tangle}, and if we choose some identification of $(\partial N,\partial L^0)$ with a standard model of ($S^2$, {\em four points}), then $(N,L^0)$ becomes a {\em marked} tangle. Capping off $\partial N$ with a 3-ball $B$ gives $N\cup_\partial B\cong S^3$. Then, if $\gamma$ is a slope on $\partial M$, we have $V_\gamma/\tau_\gamma \cong (B,T_\gamma)$, where $T_\gamma$ is the rational tangle in $B$ corresponding to the slope $\gamma$.
Hence \begin{equation*} \begin{split} M(\gamma)/\tau_\gamma & = (M/\tau) \cup (V_\gamma/\tau_\gamma)\\ & = (N,L^0) \cup (B,T_\gamma)\\ & = (S^3, L^0 (\gamma))\ , \end{split} \end{equation*} where $L^0(\gamma)$ is the link in $S^3$ obtained by capping off $L^0$ with the rational tangle $T_\gamma$. \begin{figure}[!ht] {\epsfxsize=4in \centerline{\epsfbox{inv2.ai}}\hspace{10mm}} \caption{$X_2$ and the involution $\tau_2$}\label{inv2} \end{figure} We now give a more detailed description of the tangle $(N,L^0)$. For $i=1,2$, let $B_i=V_i/ \tau_i$, $W_i=E_i\times I/\tau_i$, $Y_i=X_i/\tau_i$, and $Q_i=P_i/\tau_i$. Figure \ref{quo1} gives a detailed description of the branch sets in $B_i$, $W_i$, $Y_i$ with respect to the corresponding branched covering maps. Note that $N$ is the union of $Y_1, Y_2$, and a product region $R \cong Q_1 \times I$ from $Q_1$ to $Q_2$ which intersects the branch set $L^0$ of the cover $M {\rightarrow} N$ in a $2$-braid. In fact, it is clear from our constructions that we can think of the union $(L^0 \cap R) \cup (\partial N \cap R)$ as a ``$4$-braid'' in $R$ with two ``fat strands'' formed by $\partial N \cap R$. See Figure \ref{filling}(a). By an isotopy of $R$ fixing $Q_2$, and which keeps $R$, $Q_1$, and $Y_1$ invariant, we may untwist the crossings between the two fat strands in Figure \ref{filling}(a) so that the pair $(N, L^0)$ is as depicted in Figure \ref{filling}(b). \begin{figure}[!ht] {\epsfxsize=5in \centerline{\epsfbox{quo1.ai}}\hspace{10mm}} \caption{The branch sets in $B_i, W_i$, and $Y_i$}\label{quo1} \end{figure} The slope $\beta$ is the boundary slope of the planar surface $P$, and hence the rational tangle $T_\beta$ appears in Figure~\ref{filling}(b) as two short horizontal arcs in $B$ lying entirely in $Y_2(\beta) = X_2 (\beta)/\tau_\beta$. Since $\Delta (\alpha,\beta)=2$, $T_\alpha$ is a tangle of the form shown in Figure~\ref{filling6}(a). Recall that $M(\alpha)$ is a Seifert fibred manifold with base orbifold of type $S^2 (2,3,4)$, and is the double branched cover of $(S^3,L^0(\alpha))$. Write $L= L^0(\alpha)$. \begin{lemma} \label{type} $L$ is a Montesinos link of type $(\frac{p}{2}, \frac{q}{3}, \frac{r}{4})$. \end{lemma} {\noindent{\bf Proof.\hspace{2mm}}} By Thurston's orbifold theorem, the Seifert fibering of $M(\alpha)$ can be isotoped to be invariant under $\tau_\alpha$. Hence the quotient orbifold is Seifert fibered in the sense of Bonahon-Siebenmann, and so either $L$ is a Montesinos link or $S^3 \setminus L$ is Seifert fibred. From Figure~\ref{filling6}(a) we see that $L$ is a 2-component link with an unknotted component and linking number $\pm 1$. But the only link $L$ with this property such that $S^3 \setminus L$ is Seifert fibred is the Hopf link (see \cite{BM}), whose $2$-fold cover is $P^3$. Thus $L$ must be a Montesinos link. Since the base orbifold of $M(\alpha)$ is $S^2(2,3,4)$, $L$ has type $(\frac{p}{2}, \frac{q}{3}, \frac{r}{4})$ (cf.\ \S 12.D of \cite{BuZi}).{\hspace{2mm}{\small $\diamondsuit$}} It is easy to check that any Montesinos link $L$ of the type described in Lemma \ref{type} has two components, one of which, say $K_1$, is a trivial knot, and the other, $K_2$, a trefoil knot. Our goal is to use the particular nature of our situation to show that the branch set $L$ cannot be a Montesinos link of type $(\frac{p}{2}, \frac{q}{3}, \frac{r}{4})$, and thus derive a contradiction.
{From} Figure \ref{filling}, we see that $L^0$ has a closed, unknotted component, which must be the component $K_1$ of the Montesinos link of type $(\frac{p}{2}, \frac{q}{3}, \frac{r}{4})$ described above. Then $L^0 \setminus K_1 = K_2 \cap N$, which we denote by $K_2^0$. \begin{figure}[!ht] {\epsfxsize=5in \centerline{\epsfbox{filling.ai}}\hspace{10mm}} \caption{The branch set $L^0$ in $N$}\label{filling} \end{figure} Now delete $K_1$ from $N$ and let $U$ be the double branched cover of $N$ branched over $K_2^0$. Then $U$ is a compact, connected, orientable $3$-manifold with boundary a torus which can be identified with $\partial M$. In particular, if we consider $\alpha$ and $\beta$ as slopes on $\partial U$, then both $U(\alpha)$ and $U(\beta)$ are the lens space $L(3,1)$, since they are $2$-fold covers of $S^3$ branched over a trefoil knot. Hence the cyclic surgery theorem of \cite{CGLS} implies that $U$ is either a Seifert fibred space or a reducible manifold. \begin{lemma} $U$ is not a Seifert fibred space. \end{lemma} {\noindent{\bf Proof.\hspace{2mm}}} Suppose $U$ is a Seifert fibred space, with base surface $F$ and $n \geq 0$ exceptional fibres. If $F$ is non-orientable then $U$ contains a Klein bottle, hence $U(\alpha) \cong L(3,1)$ does also. But since non-orientable surfaces in $L(3,1)$ are non-separating, this implies that $H_1(L(3,1); \mathbb Z/2) \not \cong 0$, which is clearly false. Thus $F$ is orientable. If $U$ is a solid torus then clearly $U(\alpha) \cong U(\beta) \cong L(3,1)$ implies $\Delta(\alpha, \beta) \equiv 0$ (mod $3$), contradicting the fact that $\Delta(\alpha, \beta) = 2$. Thus we assume that $U$ is not a solid torus, and take $\phi \in H_1(\partial U)$ to be the slope on $\partial U$ of a Seifert fibre. Then $U(\phi)$ is reducible \cite{Hl} so $d = \Delta(\alpha, \phi) > 0$, and $U(\alpha)$ is a Seifert fibred space with base surface $F$ capped off with a disk, and $n$ or $n+1$ exceptional fibres, according as $d = 1$ or $d > 1$. Since $U(\alpha)$ is a lens space and $U$ is not a solid torus, we must have that $F$ is a disk, $n = 2$, and $d = 1$. Similarly $\Delta (\beta, \phi) = 1$. In particular, without loss of generality $\beta = \alpha + 2\phi$ in $H_1(\partial U)$. The base orbifold of $U$ is of the form $D^2(p,q)$, with $p,q > 1$. Then $H_1(U)$ is the abelian group defined by generators $x,y$ and the single relation $px + qy = 0$. Suppose $\alpha \mapsto ax + by$ in $H_1(U)$. Then $H_1(U(\alpha))$ is presented by the matrix $\left( \begin{array}{cc} p & a \\ q & b \end{array} \right)$. Similarly, since $\phi \mapsto px$ in $H_1(U)$, $H_1(U(\beta))$ is presented by $\left( \begin{array}{cc} p & a + 2p\\ q & b \end{array} \right)$. But the determinants of these matrices differ by $2pq \geq 8$, so they cannot both be $3$ in absolute value. This completes the proof of the lemma. {\hspace{2mm}{\small $\diamondsuit$}} Thus $U$ is reducible, say $U \cong V \# W$ where $\partial V = \partial U$ and $W \not \cong S^3$ is closed. Consideration of $U(\alpha)$ and $U(\beta)$ shows that $W \cong L(3,1)$ and $V(\alpha) \cong V(\beta) \cong S^3$, and so Theorem 2 of \cite{GL1} implies that $V \cong S^1 \times D^2$. It follows that any simple closed curve in $\partial U$ which represents either $\alpha$ or $\beta$ is isotopic to the core curve of $V$. Let $\lambda \in H_1(\partial U)$ denote the meridional slope of $V$. Then $\{\beta, \lambda\}$ is a basis of $H_1(\partial U)$ and up to changing the sign of $\alpha$ we have $\alpha=\beta \pm 2\lambda$.
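The determinant computation in the lemma above is also easy to verify symbolically; the following small script (a check we add here, using SymPy) confirms that the determinants of the two presentation matrices differ by $2pq$.
\begin{verbatim}
import sympy as sp

# Presentation matrices of H_1(U(alpha)) and H_1(U(beta)) from the lemma
p, q, a, b = sp.symbols('p q a b')
M_alpha = sp.Matrix([[p, a], [q, b]])
M_beta = sp.Matrix([[p, a + 2*p], [q, b]])
print(sp.expand(M_beta.det() - M_alpha.det()))  # -> -2*p*q
\end{verbatim}
Since $p, q \geq 2$, the two determinants differ by at least $8$ in absolute value and so cannot both be $\pm 3$.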
Since $U\cong (S^1 \times D^2)\,\#\, L(3,1)$, we can find a homeomorphism between the pair $(N,K_2^0)$ and the tangle shown in Figure~\ref{NK20}(a), with the $\beta$, $\alpha$, and $\lambda$ fillings shown in Figures~\ref{NK20}(b), (c) and (d), respectively. (We show the case $\alpha = \beta +2\lambda$; the other possibility can be handled similarly.) \begin{figure}[!ht] {\epsfxsize=4in \centerline{\epsfbox{NK20.ai}}\hspace{10mm}} \caption{} \label{NK20} \end{figure} Recall that in Figure \ref{filling}(b), the slope $\beta$ corresponds to the rational tangle consisting of two short ``horizontal'' arcs in the filling ball $B$. It follows that under the homeomorphism from the tangle shown in Figure~\ref{NK20}(a) to $(N,K_2^0)$ shown in Figure~\ref{filling}(b), the tangle $T_\alpha$ (resp. $T_\lambda$) is sent to a rational tangle of the form shown in Figure~\ref{filling6}(a) (resp. \ref{filling6}(b)). {From} Figure~\ref{NK20}(d) we see that $L^0(\lambda)$ is a link of three components $K_1 \cup O_1 \cup K_3$, where $O_1$ is a trivial knot which bounds a disk $D$ disjoint from $K_3$ and which intersects $\partial N$ in a single arc; see Figure~\ref{filling6}(b). Push the arc $O_1\cap B$ with its two endpoints fixed into ${\partial} B$ along $D$, and let $O_1^*$ be the resulting knot (see part (c) of Figure \ref{filling6}). Then there is a disk $D_*$ (which is a subdisk of $D$) satisfying the following conditions: \begin{itemize} \item[(1)] $\partial D_*=O_1^*$. \item[(2)] $D_*$ is disjoint from $K_3$. \item[(3)] The interior of $D_*$ is disjoint from $B$. \end{itemize} \begin{figure}[!ht] {\epsfxsize=6in \centerline{\epsfbox{filling6.ai}}\hspace{10mm}} \caption{The tangle fillings $N(\alpha)$ and $N(\lambda)$.} \label{filling6} \end{figure} \noindent Perusal of Figure~\ref{filling6}(c) shows that the following condition is also achievable. \begin{itemize} \item[(4)] $D_* \cap Q_2$ has a single arc component, and this arc component connects the two boundary components of $Q_2$ and is outermost in $D_*$ amongst the components of $D_* \cap Q_2$. \end{itemize} Among all disks in $S^3$ which satisfy conditions (1)--(4), we may assume that $D_*$ has been chosen so that \begin{itemize} \item[(5)] $D_* \cap Q_2$ has the minimal number of components. \end{itemize} \begin{claim}\label{parallel} Suppose that $D_* \cap Q_2$ has circle components. Then each such circle separates $K_3 \cap Q_2$ from $\partial Q_2$ in $Q_2$. \end{claim} {\noindent{\bf Proof.\hspace{2mm}}} Let $\delta$ be a circle component of $D_* \cap Q_2$. Then $\delta$ is essential in $Q_2 \setminus (Q_2 \cap K_3)$, for if it bounds a disk $D_0$ in $Q_2 \setminus (Q_2 \cap K_3)$, then an innermost component of $D_* \cap D_0 \subset D_* \cap Q_2$ will bound a disk $D_1 \subset D_0$. We can surger $D_*$ using $D_1$ to get a new disk satisfying conditions (1)--(4) above, but with fewer components of intersection with $Q_2$ than $D_*$, contrary to assumption (5). Next, since the arc component of $D_* \cap Q_2$ connects the two boundary components of $Q_2$, $\delta$ cannot separate the two boundary components of $Q_2$ from each other. Lastly, suppose that $\delta$ separates the two points of $Q_2 \cap K_3$. Then $\delta$ is isotopic to a meridian curve of $K_3$ in $S^3$. But this is impossible since $\delta$ also bounds a disk in $D_*$ and is therefore null-homologous in $S^3 \setminus K_3$.
The claim follows.{\hspace{2mm}{\small $\diamondsuit$}} \begin{figure}[!ht] {\epsfxsize=4in \centerline{\epsfbox{filling7a.ai}}\hspace{10mm}} \caption{Capping off the $4$-braid to obtain a trivial link}\label{filling7} \end{figure} It follows from Claim \ref{parallel} that there are two disjoint arcs in $Q_2$: one, say $\sigma_1$, connects the two points of $Q_2 \cap K_3$ and is disjoint from $D_*$; the other is $\sigma_2 = D_* \cap Q_2$. Hence we obtain a ``2-bridge link'' of two components -- one fat, one thin -- in $S^3$ by capping off the ``$4$-braid'' in $R$ with $\sigma_1$ and $\sigma_2$ in $Y_2 \subset Y_2(\beta)$ and with $K_3 \cap Y_1 \subset Y_1$ and $\partial N \cap Y_1 \subset Y_1$ in the $3$-ball $Y_1(\beta)$ (see Figure \ref{filling7}(a)). Furthermore, since the disk $D_*$ gives a disk bounded by the ``fat knot'' which is disjoint from the ``thin knot'', the link is a trivial link. \begin{figure}[!ht] {\epsfxsize=4in \centerline{\epsfbox{filling8a.ai}}\hspace{10mm}} \caption{The pair $(N, L^0)$ and the filling tangle $T_{\alpha}$}\label{filling8} \end{figure} \begin{figure}[!ht] {\epsfxsize=4in \centerline{\epsfbox{filling8b.ai}}\hspace{10mm}} \caption{The two possible $T_{\alpha}$}\label{filling8b} \end{figure} Now it follows from the standard presentation of a $2$-bridge link as a $4$-plat (see \S 12.B of \cite{BuZi}), that there is an isotopy of $R$, fixed on the ends $Q_1,Q_2$ and on the two fat strands, taking the ``4-braid'' to one of the form shown in Figure~\ref{filling7}(b). Hence $(N,L^0)$ has the form shown in Figure~\ref{filling8}(a). The filling rational tangle $T_{\alpha}$ is of the form shown in Figure~\ref{filling8}(b). Since the component $K_2^0 (\alpha)$ of $L^0(\alpha) =L$ has to be a trefoil, there are only two possibilities for the number of twists in $T_\alpha$; see Figure \ref{filling8b}. The two corresponding possibilities for $L$ are shown in Figure~\ref{filling9}. But these are Montesinos links of the form $(\frac13, \frac{-3}8,\frac{m}2)$ and $(\frac13, \frac{-5}8, \frac{m}2)$, respectively. \begin{figure}[!ht] {\epsfxsize=5in \centerline{\epsfbox{filling9.ai}}\hspace{10mm}} \caption{$L$ as a Montesinos link of the type $(\frac{1}{3}, \frac{-3}{8}, \frac{m}{2})$ or $(\frac13, \frac{-5}{8}, \frac{m}{2})$}\label{filling9} \end{figure} This final contradiction completes the proof of Theorem~\ref{rf} under the assumptions of case (a) of Lemma \ref{gluing}.{\hspace{2mm}{\small $\diamondsuit$}} \section{The proof of Theorem \ref{rf} when case (b) of Lemma \ref{gluing} arises} In this case we choose an involution $\tau_1$ on $E_1 \times I$ as shown in Figure \ref{inv3}. Then $\tau_1(\partial_3 E_1 \times \{j\}) = \partial_3 E_1 \times \{j\}$, $\tau_1(\partial_1 E_1\times \{j\}) = \partial_2 E_1\times \{j\}$ ($j = 0,1$), and the restriction of $\tau_1$ to $\partial_3 E_1 \times I$ extends to an involution of $V_1$ whose fixed point set is a core circle of this solid torus. Thus we obtain an involution $\tau_1$ on $X_1$. The quotient of $V_1$ by $\tau_1$ is a solid torus $B_1$ whose core circle is the branch set. Further, $A_1 / \tau_1$ is a longitudinal annulus of $B_1$. The quotient of $E_1 \times I$ by $\tau_1$ is also a solid torus $W_1$ in which $(\partial_3 E_1 \times I) / \tau_1$ is a longitudinal annulus. Figure \ref{inv3} depicts $W_1$ and its branch set. It follows that the pair $(Y_1 = X_1/ \tau_1, \hbox{branch set of } \tau_1)$ is identical to the analogous pair in Section~\ref{case a} (see Figure~\ref{quo1}).
Next we take $\tau_2$ to be the same involution on $X_2$ as that used in Section~\ref{case a}. An argument similar to the one used in that section shows that $\tau_1$ and $\tau_2$ can be pieced together to form an involution $\tau$ on $M$. {From} the previous paragraph we see that the quotient $N = M / \tau$ and its branch set are the same as those in Section~\ref{case a}. Hence the argument of that section can be used from here on to obtain a contradiction. This completes the proof of Theorem \ref{rf} in case (b). {\hspace{2mm}{\small $\diamondsuit$}} \begin{figure}[!ht] {\epsfxsize=5.5in \centerline{\epsfbox{inv3.ai}}\hspace{10mm}} \caption{Involution on $E_1\times I$}\label{inv3} \end{figure}
\section{Introduction} \label{sec:1} Let $s_k(n)$ denote the sum of the digits in the base-$k$ expansion of the non-negative integer $n$. Although we only consider $k=2$, our results can be easily extended to all integers $k\ge 2$. Put $u_n=(-1)^{s_2(n)}$. In other words, $u_n$ is equal to $1$ if the binary expansion of $n$ has an even number of $1$'s, and is equal to $-1$ otherwise. This is the so-called Thue-Morse sequence with values $\pm 1$. We study infinite products of the form \begin{equation*}f(b,c):=\prod_{n = 1}^\infty\left(\frac{n+b}{n+c}\right)^{u_n}.\end{equation*} The only known non-trivial value of $f$ (up to the relations $f(b,b)=1$ and $f(b,c)=1/f(c,b)$) seems to be \begin{equation*}f\left(\frac 12,1\right)=\sqrt 2 \end{equation*} which is the famous Woods-Robbins identity \cite{R,W}. Several infinite products inspired by this identity were discovered afterwards (see, e.g., \cite{AS2,ASo}), but none of them involve the sequence $u_n$. In this paper we compute another value of $f$, namely, \begin{equation*}f\left(\frac 14,\frac 34\right)=\frac 32.\end{equation*} In Sect.~\ref{sec:2} we look at properties of the function $f$ and introduce a related function $h$. In Sect.~\ref{sec:3} we study the analytical properties of $h$. In Sect.~\ref{sec:4} we try to find infinite products of the form $\prod R(n)^{u_n}$ admitting a closed-form value, with $R$ a rational function. This paper forms the basis for the paper \cite{ARS}. While the purpose of \cite{ARS} is to compute new products of the forms $\prod R(n)^{u_n}$ and $\prod R(n)^{t_n}$, $t_n$ being the Thue-Morse sequence with values $0,1$, we restrict ourselves in this paper to studying products of the form $\prod R(n)^{u_n}$ in greater depth. \section{General Properties of $f$ and a New Function $h$} \label{sec:2} We start with the following result on convergence. \begin{lemma}\label{num-den} Let $R \in\mathbb C(x)$ be a rational function such that the values $R(n)$ are defined and non-zero for integers $n \geq 1$. Then, the infinite product $\prod_n R(n)^{u_n}$ converges if and only if the numerator and the denominator of $R$ have the same degree and the same leading coefficient. \end{lemma} \begin{proof} \iffalse If the infinite product converges, then $R(n)$ must tend to $1$ when $n$ tends to infinity. Thus the numerator and the denominator of $R$ have the same degree and the same leading coefficient. Now suppose that the numerator and the denominator of $R$ have the same leading coefficient and the same degree. Decomposing them into factors of degree $1$, it suffices, for proving that the infinite product converges, to show that infinite products of the form \begin{equation*}\prod_{n\ge 1} \left(\frac{n+b}{n+c}\right)^{u_n}\end{equation*} converge for complex numbers $b$ and $c$ such that $n+b$ and $n+c$ do not vanish for any $n \geq 1$. Since the general factor of such a product tends to $1$, it is equivalent, grouping the factors pairwise, to prove that the product \begin{equation*}\prod_{n\ge 1} \left[\left(\frac{2n+b}{2n+c}\right)^{u_{2n}} \left(\frac{2n+1+b}{2n+1+c}\right)^{u_{2n+1}}\right]\end{equation*} converges. Since $u_{2n} = u_n$ and $u_{2n+1} = - u_n$ we only need to prove that the infinite product \begin{equation*}\prod_{n\ge 1} \left(\frac{(2n+b)(2n+1+c)}{(2n+c)(2n+1+b)}\right)^{u_n}\end{equation*} converges.
Taking the (principal determination of the) logarithm, we see that \begin{equation*}\log\left(\frac{(2n+b)(2n+1+c)}{(2n+c)(2n+1+b)}\right) = O\left(\frac{1}{n^2}\right)\end{equation*} which gives the convergence result. \fi See \cite{ARS}, Lemma 2.1. \qed \end{proof} Hence $f(b,c)$ converges for any $b,c\in\mathbb C\setminus\{-1,-2,-3,\dots\}$. Using the definition of $u_n$ we see that $f$ satisfies the following properties. \iffalse \begin{svgraybox} \textbf{Properties of $f$~} For any $b,c,d\in\mathbb C\setminus\{-1,-2,-3,\dots\}$, \begin{enumerate} \item\label{p1} $f(b,b)=1$. \item\label{p2} $f(b,c)f(c,d)=f(b,d)$. \item\label{p3} $\displaystyle f(b,c)=\left(\frac{c+1}{b+1}\right)f\left(\frac b2,\frac c2\right)f\left(\frac{c+1}{2},\frac{b+1}{2}\right)$. \end{enumerate} \end{svgraybox} \fi \begin{lemma}\label{f} For any $b,c,d\in\mathbb C\setminus\{-1,-2,-3,\dots\}$, \begin{enumerate} \item\label{p1} $f(b,b)=1$, \item\label{p2} $f(b,c)f(c,d)=f(b,d)$, \item\label{p3} $\displaystyle f(b,c)=\left(\frac{c+1}{b+1}\right)f\left(\frac b2,\frac c2\right)f\left(\frac{c+1}{2},\frac{b+1}{2}\right)$. \end{enumerate} \end{lemma} \begin{proof} The only non-trivial claim is part \ref{p3}. To see why it is true, note that $u_{2n} = u_n$ and $u_{2n+1} = - u_n$, so that \begin{align*} f(b,c) &= \prod_{n = 1}^\infty\left(\frac{n+b}{n+c}\right)^{u_n}\\ &=\left(\frac{1+c}{1+b}\right)\prod_{n = 1}^\infty\left(\frac{2n+b}{2n+c}\right)^{u_n}\prod_{n=1}^\infty\left(\frac{2n+1+c}{2n+1+b}\right)^{u_n}\\ &=\left(\frac{1+c}{1+b}\right)\prod_{n = 1}^\infty\left(\frac{n+\frac b2}{n+\frac c2}\right)^{u_n}\prod_{n=1}^\infty\left(\frac{n+\frac{c+1}{2}}{n+\frac{b+1}{2}}\right)^{u_n}\\ &=\left(\frac{c+1}{b+1}\right)f\left(\frac b2,\frac c2\right)f\left(\frac{c+1}{2},\frac{b+1}{2}\right) \end{align*} as desired. \qed \end{proof} One can ask the natural question: is $f$ the unique function satisfying these properties? What if we impose some continuity/analyticity conditions? Using the first two parts of Lemma~\ref{f} we get \begin{equation*}f(b,c)f(d,e)=\frac{f(b,c)f(c,d)f(d,e)f(d,c)}{f(c,d)f(d,c)}=\frac{f(b,e)f(d,c)}{f(c,c)}=f(b,e)f(d,c).\end{equation*} Hence the third part may be re-written as \begin{equation}\label{f-h}f(b,c)=\displaystyle\frac{\displaystyle f\left(\frac b2,\frac{b+1}{2}\right)}{b+1}\bigg/ \frac{\displaystyle f\left(\frac c2,\frac{c+1}{2}\right)}{c+1}.\end{equation} This motivates the following definition. \begin{definition} Define the function \begin{equation}\label{hdef} h(x):=f\left(\frac x2,\frac{x+1}{2}\right).\end{equation} \end{definition} Then Eqs.~\eqref{f-h} and \eqref{hdef} give the following result. \begin{lemma}\label{ftoh} For any $b,c\in\mathbb C\setminus\{-1,-2,-3,\dots\}$, \begin{equation}\label{f-h-1}f(b,c)=\frac{c+1}{b+1}\cdot\frac{h(b)}{h(c)}.\end{equation} \end{lemma} So understanding $f$ is equivalent to understanding $h$, in the sense that each function can be completely evaluated in terms of the other. Moreover, taking $c=b+\frac 12$ in Eq.~\eqref{f-h-1} and then using Eq.~\eqref{hdef} gives the following result. \begin{lemma}\label{fe} The function $h$ defined by Eq. \eqref{hdef} satisfies the functional equation \begin{equation}\label{FE2} h(x)=\frac{x+1}{x+\frac 32}h\left(x+\frac 12\right)h(2x).\end{equation} \end{lemma} Again one may ask: is $h$ the unique solution to Eq.~\eqref{FE2}? What about monotonic/continuous/smooth solutions? An approximate plot of $h$ is given in Fig.~\ref{hx} with the infinite product truncated at $n = 100$. 
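The truncated product behind this plot is easy to compute. The following Python sketch (ours; the function names are not from any standard library) implements $u_n$ via the binary digit sum and evaluates the partial product up to a cutoff $N$.
\begin{verbatim}
import math

# u_n = (-1)^{s_2(n)} via the binary digit sum, and the truncation
# h(x) ~ prod_{n=1}^{N} ((2n+x)/(2n+1+x))^{u_n}.

def u(n: int) -> int:
    return -1 if bin(n).count('1') % 2 else 1

def h_trunc(x: float, N: int = 100) -> float:
    p = 1.0
    for n in range(1, N + 1):
        p *= ((2 * n + x) / (2 * n + 1 + x)) ** u(n)
    return p

# h(1) should approach sqrt(2), by the Woods-Robbins identity recalled
# in Sect. 1, and h(0) the constant ~ 1.62816 discussed in Sect. 5.
print(h_trunc(1.0, 10**6), math.sqrt(2))
print(h_trunc(0.0, 10**6))
\end{verbatim}
The summation-by-parts estimate in the proof of Lemma~\ref{lem:1} below shows that the truncation error is $O(1/N)$, so $N=10^6$ already gives about six correct digits.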
\begin{figure}[h] \centering \sidecaption \includegraphics[scale=.45]{hext} \caption{Approximate plot of $h(x)$} \label{hx} \end{figure} \section{Analytical Properties of $h$} \label{sec:3} The following lemma forms the basis for the results in this section. \begin{lemma} \label{lem:1} For $b,c\in(-1,\infty)$, \begin{enumerate} \item\label{l1} if $b=c$, then $f(b,c)=1$; \item\label{l2} if $b>c$, then \[\left(\frac{c+1}{b+1}\right)^2<f(b,c)<1;\] \item\label{l3} if $b<c$, then \[1<f(b,c)<\left(\frac{c+1}{b+1}\right)^2.\] \end{enumerate} \end{lemma} \begin{proof} Using Lemma~\ref{f} it suffices to prove the second statement. Let $b>c>-1$ and put \begin{equation}\label{notation}a_n=\log\left(\frac{n+b}{n+c}\right),\quad S_N=\sum_{n=1}^Na_nu_n,\quad U_N=\sum_{n=1}^Nu_n.\end{equation} Note that $a_n$ is positive and strictly decreasing to $0$. Using $s_2(2n)+s_2(2n+1)\equiv 1\pmod 2$ it follows that $U_n\in\{-2,-1,0\}$ and $U_n\equiv n\pmod 2$ for each $n$. Using summation by parts, \begin{equation*}S_N=a_{N+1}U_N+\sum_{n=1}^NU_n(a_n-a_{n+1}).\end{equation*} So $-2a_1<S_N<0$ for large $N$. Exponentiating and taking $N\to\infty$ gives the desired result. \qed\end{proof} Lemmas \ref{ftoh}-\ref{lem:1} immediately imply the following results. \begin{theorem}\label{t} $h(x)/(x+1)$ is strictly decreasing on $(-1,\infty)$ and $h(x)(x+1)$ is strictly increasing on $(-1,\infty)$.\end{theorem} \begin{theorem} For $b,c\in(-1,\infty)$, $f(b,c)$ is strictly decreasing in $b$ and strictly increasing in $c$.\end{theorem} \begin{theorem}\label{hbound} For $x\in(-2,\infty)$, \[1<h(x)<\left(\frac{x+3}{x+2}\right)^2.\]\end{theorem} We now give some results on differentiability. \begin{theorem}\label{smooth} $h(x)$ is smooth on $(-2,\infty)$.\end{theorem} \begin{proof} Recall the definition of $h$: \[h(x)=\prod_{n=1}^\infty\left(\frac{2n+x}{2n+1+x}\right)^{u_n}.\] Then taking $b=x/2$ and $c=(x+1)/2$ in Eqs.~\eqref{notation} shows that the sequence $S_N$ of smooth functions on $(-2,\infty)$ converges pointwise to $\log h$. Differentiating with respect to $x$ gives \[S_N'=\sum_{n=1}^N\frac{u_n}{(2n+x)(2n+1+x)}=\sum_{n=1}^Nu_n\left(\frac{1}{2n+x}-\frac{1}{2n+1+x}\right).\] Hence \begin{align*}\left|S_N'-S_M'\right| &\le\sum_{n=M+1}^N\left(\frac{1}{2n+x}-\frac{1}{2n+1+x}\right)\\ &\le\sum_{n=M+1}^N\left(\frac{1}{2n-1+x}-\frac{1}{2n+1+x}\right)\\ &=\frac{1}{2M+1+x}-\frac{1}{2N+1+x}\\ &<\frac{1}{2M-1} \to 0 \end{align*} as $M\to\infty$, for any $x\in (-2,\infty)$ and $N>M$. Thus $S_N'$ converges uniformly on $(-2,\infty)$, which shows that $\log h$, hence $h$, is differentiable on $(-2,\infty)$. Now suppose that derivatives of $h$ up to order $k$ exist for some $k\ge 1$. Note that \[S_N^{(k+1)}=(-1)^kk!\sum_{n=1}^Nu_n\left(\frac{1}{(2n+x)^{k+1}}-\frac{1}{(2n+1+x)^{k+1}}\right).\] As before, \begin{align*}\left|S_N^{(k+1)}-S_M^{(k+1)}\right| &\le k!\sum_{n=M+1}^N\left(\frac{1}{(2n+x)^{k+1}}-\frac{1}{(2n+1+x)^{k+1}}\right)\\ &\le k!\sum_{n=M+1}^N\left(\frac{1}{(2n-1+x)^{k+1}}-\frac{1}{(2n+1+x)^{k+1}}\right)\\ &=\frac{k!}{(2M+1+x)^{k+1}}-\frac{k!}{(2N+1+x)^{k+1}}\\ &<\frac{k!}{(2M-1)^{k+1}} \to 0 \end{align*} as $M\to\infty$, for any $x\in (-2,\infty)$ and $N>M$. Hence $S_N^{(k+1)}$ converges uniformly on $(-2,\infty)$, i.e., $h^{(k)}$ is differentiable on $(-2,\infty)$.
Therefore, by induction, $h$ has derivatives of all orders on $(-2,\infty)$. \qed\end{proof} \begin{theorem}\label{series} Let $a\ge 0$. Then \begin{equation*}\log h(x)=\log h(a)+\sum_{k=1}^\infty\frac{(-1)^{k-1}}{k}\left(\sum_{n=2}^\infty\frac{u_n}{(n+a)^k}\right)(x-a)^k\end{equation*} for $x\in[a-1,a+1]$. \end{theorem} \begin{proof} Let $H(x)=\log h(x)$. By Theorem~\ref{smooth}, \[H^{(k+1)}(x)=(-1)^kk!\sum_{n=2}^\infty\frac{u_n}{(n+x)^{k+1}}.\] Hence \[|H^{(k+1)}(x)|\le k!\sum_{n=2}^\infty\frac{1}{|n+x|^{k+1}}\le k!\sum_{n=2}^\infty\frac{1}{(n+a-1)^{k+1}}\] for $x\in[a-1,a+1]$. So by Taylor's inequality, the remainder of the degree-$k$ Taylor polynomial of $H(x)$ is bounded in absolute value by \[\frac{1}{k+1}\left(\sum_{n=2}^\infty\frac{1}{(n+a-1)^{k+1}}\right)|x-a|^{k+1}\] which tends to $0$ as $k\to\infty$, since $a\ge 0$ and $|x-a|\le 1$. Therefore $H(x)$ equals its Taylor expansion about $a$ for $x$ in the given range. \qed\end{proof} \section{Infinite Products} \label{sec:4} Recall that \begin{equation*}f(b,c)=\prod_{n=1}^\infty\left(\frac{n+b}{n+c}\right)^{u_n}.\end{equation*} From Lemma~\ref{f} we see that \begin{equation}\label{3-1}\prod_{n=1}^\infty\left(\frac{(n+b)(n+\frac{b+1}{2})(n+\frac c2)}{(n+c)(n+\frac{c+1}{2})(n+\frac b2)}\right)^{u_n}=\frac{c+1}{b+1}\end{equation} for any $b,c\neq -1,-2,-3,\dots$, and if $b,c\neq 0,-1,-2,\dots$, then \begin{equation}\label{3-2}\prod_{n=0}^\infty\left(\frac{(n+b)(n+\frac{b+1}{2})(n+\frac c2)}{(n+c)(n+\frac{c+1}{2})(n+\frac b2)}\right)^{u_n}=1.\end{equation} Some interesting identities can be obtained from Eqs.~\eqref{3-1}--\eqref{3-2}. For example, in Eq.~\eqref{3-1}, taking $c=(b+1)/2$ gives \begin{equation}\label{3-3}\prod_{n=1}^\infty\left(\frac{(n+b)(n+\frac{b+1}{4})}{(n+\frac{b+3}{4})(n+\frac b2)}\right)^{u_n}=\frac{b+3}{2(b+1)}\end{equation} while taking $b=0$ gives \begin{equation}\label{3-4}\prod_{n=1}^\infty\left(\frac{(n+\frac 12)(n+\frac c2)}{(n+c)(n+\frac{c+1}{2})}\right)^{u_n}=c+1\end{equation} for any $b,c\neq -1,-2,-3,\dots$. We now turn our attention to the functional equation Eq.~\eqref{FE2}. Recall that it reads \begin{equation*}h(x)=\frac{x+1}{x+\frac 32}h\left(x+\frac 12\right)h(2x).\end{equation*} Taking $x=0$ gives \begin{equation*}h(0)=\frac 23h\left(\frac 12\right)h(0).\end{equation*} Since $1<h(0)<9/4$ by Theorem~\ref{hbound}, cancelling $h(0)$ from both sides gives $h(1/2)=3/2$.
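A quick numerical sanity check of this value (ours; the cutoff $N$ is arbitrary):
\begin{verbatim}
# Check h(1/2) = 3/2 via the truncated product
# h(1/2) ~ prod_{n=1}^{N} ((4n+1)/(4n+3))^{u_n}, u_n = (-1)^{s_2(n)}.

def u(n: int) -> int:
    return -1 if bin(n).count('1') % 2 else 1

p, N = 1.0, 10**6
for n in range(1, N + 1):
    p *= ((4 * n + 1) / (4 * n + 3)) ** u(n)
print(p)  # close to 1.5; the truncation error is O(1/N)
\end{verbatim}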
This shows that \begin{equation}\label{b=0}\prod_{n=0}^\infty\left(\frac{4n+3}{4n+1}\right)^{u_n}=2.\end{equation} Next, taking $x=1/2$ in Eq.~\eqref{FE2} gives \begin{equation*}h\left(\frac 12\right)=\frac 34h(1)^2\end{equation*} hence $h(1)=\sqrt 2$ (since $1<h(1)<16/9$ by Theorem~\ref{hbound}) and we recover the Woods-Robbins product \begin{equation}\prod_{n=0}^\infty\left(\frac{2n+2}{2n+1}\right)^{u_n}=\sqrt{2}.\end{equation} Similarly, taking $x=-1/2$ in Eq.~\eqref{FE2} gives \begin{equation*}h\left(-\frac 12\right)=\frac 12h(0)h(-1)=\frac 12f\left(0,\frac 12\right)f\left(-\frac 12,0\right)=\frac 12f\left(-\frac 12,\frac 12\right),\end{equation*} i.e., \begin{equation}\label{b=-1/2}\prod_{n=1}^\infty\left(\frac{(4n-1)(2n+1)}{(4n+1)(2n-1)}\right)^{u_n}=\frac 12.\end{equation} Taking $x=1$ in Eq.~\eqref{FE2} gives \begin{equation*}h(1)=\frac 45h\left(\frac 32\right)h(2)\end{equation*} hence $h(3/2)h(2)=5\sqrt 2/4$ and this gives \begin{equation}\label{b=1}\prod_{n=0}^\infty\left(\frac{(4n+3)(2n+2)}{(4n+5)(2n+3)}\right)^{u_n}=\frac{1}{\sqrt 2}.\end{equation} Taking $x=3/2$ in Eq.~\eqref{FE2} and using the previous result gives \begin{equation*}h(2)^2h(3)=\frac{3}{\sqrt 2}\end{equation*} which is equivalent to \begin{equation}\label{b=3/2}\prod_{n=0}^\infty\left(\frac{(2n+2)(n+1)}{(2n+3)(n+2)}\right)^{u_n}=\frac{1}{\sqrt 2}.\end{equation} Eqs.~\eqref{b=0}--\eqref{b=3/2} can also be combined in pairs to obtain other identities. \iffalse\begin{equation}&\prod_{n\ge 0}\left(\frac{(2n+1)(n+1)}{(2n+3)(n+2)}\right)^{u_n}=\frac{1}{2}\\ &\prod_{n\ge 0}\left(\frac{(4n+5)(n+1)}{(4n+3)(n+2)}\right)^{u_n}=1\\ &\prod_{n\ge 0}\left(\frac{(4n+5)(n+1)}{(4n+1)(n+2)}\right)^{u_n}=2\\ &\prod_{n\ge 0}\left(\frac{(4n+5)(2n+3)}{(4n+1)(2n+2)}\right)^{u_n}=2\sqrt 2\\ &\prod_{n\ge 0}\left(\frac{(4n+5)(2n+3)}{(4n+1)(2n+1)}\right)^{u_n}=4\end{equation} or combined together: \begin{multline}\prod_{n\ge 0}\left(\frac{n+1}{n+2}\right)^{u_n}=\prod_{n\ge 0}\left(\frac{4n+3}{4n+5}\right)^{u_n}=2\prod_{n\ge 0}\left(\frac{4n+1}{4n+5}\right)^{u_n}\\=\frac{1}{\sqrt 2}\prod_{n\ge 0}\left(\frac{2n+3}{2n+2}\right)^{u_n}=\frac{1}{2}\prod_{n\ge 0}\left(\frac{2n+3}{2n+1}\right)^{u_n}\end{multline}\fi \section{Concluding Remarks} \label{sec:5} The quantity $h(0)\approx 1.62816$ appears to be of interest \cite{A,AS1}. It is not known whether its value is irrational or transcendental. We give the following explanation as to why $h(0)$ might behave specially. Note that the only way non-trivial cancellation occurs in the functional equation Eq.~\eqref{FE2} is when $x=0$. Likewise, non-trivial cancellation occurs in Eq.~\eqref{f-h} or property \ref{p3} in Lemma~\ref{f} only for $(b,c)=(0,1/2)$ and $(1/2,0)$. That is, the victim of any such cancellation is always $h(0)$ or $h(0)^{-1}$. So one must look for other ways to understand $h(0)$. Using the only two known values $h(1/2)=3/2$ and $h(1)=\sqrt 2$, the following expressions for $h(0)$ can be obtained from Theorem~\ref{series}.
\begin{itemize} \item By taking $x=0$ and $a=1$, \[h(0)=\sqrt 2\exp\left(-\sum_{k=1}^\infty\frac 1k\sum_{n=2}^\infty\frac{u_n}{(n+1)^k}\right).\] \item By taking $x=1$ and $a=0$, \[h(0)=\sqrt 2\exp\left(\sum_{k=1}^\infty\frac{(-1)^k}{k}\sum_{n=2}^\infty\frac{u_n}{n^k}\right).\] \item By taking $x=0$ and $a=1/2$, \[h(0)=\frac 32\exp\left(\sum_{k=1}^\infty\frac 1k\sum_{n=2}^\infty\frac{u_{2n+1}}{(2n+1)^k}\right) .\] \item By taking $x=1/2$ and $a=0$, \[h(0)=\frac 32\exp\left(\sum_{k=1}^\infty\frac{(-1)^k}{k}\sum_{n=2}^\infty\frac{u_{2n}}{(2n)^k}\right) .\] \end{itemize} The Dirichlet series \[\sum_{n=0}^\infty\frac{u_n}{(n+1)^k}\quad\hbox{and}\quad\sum_{n=1}^\infty\frac{u_n}{n^k}\] appearing in the above expressions were studied by Allouche and Cohen \cite{AC}. \begin{acknowledgement} This work is part of a larger joint work \cite{ARS} with Professors Jean-Paul Allouche and Jeffrey Shallit. I thank the professors for helpful discussions and comments. I also thank the anonymous referees for their feedback. \end{acknowledgement} \iffalse Here is another point of view. The two functions, $(x+1)^{-1}h(x)$, which is strictly decreasing, and $(x+1)h(x)$, which is strictly increasing, in Theorem \ref{t} meet at $x=0$, which behaves like a ``pivot'' point for $h(x)$, as suggested by the following approximate plots. \begin{figure} \centering \begin{subfigure}[b]{0.48\textwidth} \includegraphics[width=\textwidth]{05} \caption{$(x+1)^{0.05}h(x)$} \end{subfigure}\quad \begin{subfigure}[b]{0.48\textwidth} \includegraphics[width=\textwidth]{1} \caption{$(x+1)^{0.1}h(x)$} \end{subfigure} \end{figure} \begin{figure} \centering \begin{subfigure}[b]{0.48\textwidth} \includegraphics[width=\textwidth]{17} \caption{$(x+1)^{0.17}h(x)$} \end{subfigure}\quad \begin{subfigure}[b]{0.48\textwidth} \includegraphics[width=\textwidth]{2} \caption{$(x+1)^{0.2}h(x)$} \end{subfigure} \end{figure} We see that as $0<a<1$ varies, the extrema of $(x+1)^ah(x)$ seem to be approaching the point $x=0$.\fi
\input{referenc} \end{document}
\section{Introduction} In Hermitian geometry, the vanishing of the torsion of the Hermitian connection means that the metric is K\"ahler. In complex Finsler geometry, since the torsion of the Chern-Finsler connection has a horizontal part and a mixed part, the situation for a complex Finsler metric to be K\"ahler is quite different. There are three notions, namely the strongly K\"ahler, K\"ahler, and weakly K\"ahler metrics, corresponding to the vanishing of different parts of the torsion of the Chern-Finsler connection \cite{AP}. However, the relations among the three K\"ahler definitions seem rather subtle. Chen and Shen proved that a K\"ahler Finsler metric is actually a strongly K\"ahler Finsler metric \cite{CS}. The well-known examples of smooth complex Finsler metrics, the Kobayashi and Carath\'eodory metrics, which agree on a bounded strictly convex domain, are weakly K\"ahler metrics \cite{AP}. It is open whether they are K\"ahler or not. In \cite{XZh2}, Xia and Zhong wrote: ``{\em we also do not know whether there exists a weakly K\"ahler Finsler metric which is not a K\"ahler Finsler metric.}'' Thus the problem: {\em does there exist a weakly K\"ahler Finsler metric which is not K\"ahler}, remains open. In this paper, we give an affirmative answer to this open problem. More precisely, we construct a family of complex Randers metrics which are weakly K\"ahler but not K\"ahler. In fact, we prove the following classification theorem. \begin{theorem} A unitary invariant complex Randers metric $F=\sqrt{r\phi(t,s)}$ defined on a domain $D \subset \mathbb{C}^n$, where $r:=|v|^2$, $t:=|z|^2$, $s:=\frac{|\langle z,v \rangle |^2}{r}$, $z\in D$, and $v\in T_zD$, is weakly K\"ahler if and only if \[\phi=\Big (\sqrt{f(t)+\frac{tf^{'}(t)-f(t)}{2t}s}+\sqrt{\frac{tf^{'}(t)+f(t)}{2t}s}\Big )^2\] or \[\phi=f(t)+f^\prime(t)s\] where $f(t)$ is a positive smooth function. \end{theorem} \begin{remark} (1) When $\phi=f(t)+f^{\prime}(t)s$, it is easy to see that the metric $F$ is a Hermitian-K\"ahler metric. (2) In \cite{Zhong}, Zhong proved that a unitary invariant complex Finsler metric $F$ is K\"ahler if and only if it is Hermitian-K\"ahler. When $$\phi=\Big (\sqrt{f(t)+\frac{tf^{'}(t)-f(t)}{2t}s}+\sqrt{\frac{tf^{'}(t)+f(t)}{2t}s}\Big )^2,$$ the complex Finsler metric $F=\sqrt{r\phi(t,s)}$ is obviously not Hermitian, and thus $F$ is weakly K\"ahler but not K\"ahler. \end{remark} As we know, the uniformization theorem in Hermitian geometry tells us that a complete, simply connected K\"ahler manifold of constant holomorphic (sectional) curvature is isometric to one of the standard models: the Fubini-Study metric on $\mathbb{C}P^n$, the flat metric on $\mathbb{C}^n$, and the Bergman metric on the unit ball in $\mathbb{C}^n$ \cite{Tian}. In complex Finsler geometry, one natural problem is to classify the weakly K\"ahler or K\"ahler Finsler metrics with constant holomorphic curvature. Roughly speaking, this problem is too ambitious to solve if one does not impose any condition on the metrics. In \cite{XZh1}, Xia and Zhong classified the unitary invariant weakly complex Berwald metrics of constant holomorphic curvature. However, if we assume the metrics are of unitary invariant Randers type, we can obtain a complete classification theorem which is similar to the uniformization theorem in Hermitian geometry. In this paper, we prove the following uniformization theorem.
As is well known, the uniformization theorem in Hermitian geometry tells us that a complete, simply connected K\"ahler manifold of constant holomorphic (sectional) curvature is isometric to one of the standard models: the Fubini-Study metric on $\mathbb{C}P^n$, the flat metric on $\mathbb{C}^n$ or the Bergman metric on the unit ball in $\mathbb{C}^n$ \cite{Tian}. In complex Finsler geometry, one natural problem is to classify the weakly K\"ahler or K\"ahler Finsler metrics with constant holomorphic curvature. Roughly speaking, this problem is too ambitious to solve if one does not impose any condition on the metrics. In \cite{XZh1}, Xia and Zhong classified the unitary invariant weakly complex Berwald metrics of constant holomorphic curvature. However, if we restrict to unitary invariant metrics of Randers type, we can obtain a complete classification theorem which is similar to the uniformization theorem in Hermitian geometry. In this paper, we prove the following uniformization theorem. \begin{theorem} \label{mt} Let $F=\sqrt{r\phi(t,s)}$ be a unitary invariant complex Randers metric defined on a domain $D\subset \mathbb{C}^n$, where $r:=|v|^2$, $t:=|z|^2$, $s:=\frac{|\langle z,v \rangle |^2}{r}$. Assume $F$ is not a Hermitian metric. Then $F$ is a weakly K\"ahler Finsler metric with constant holomorphic curvature $k$ if and only if one of the following holds: \begin{enumerate} \item $k=4$, $\phi=(\sqrt{\frac{t}{c^2+t^2}-\frac{t^2}{(c^2+t^2)^2}s}+\sqrt{\frac{c^2}{(c^2+t^2)^2}s})^2$ defined on $D=\mathbb{C}^n\setminus \{0\}$; \item $k=0$, $\phi=c(\sqrt{t}+\sqrt{s})^2$ defined on $D=\mathbb{C}^n$; \item $k=-4$, $\phi=(\sqrt{\frac{t}{c^2-t^2}+\frac{t^2}{(c^2-t^2)^2}s}+\sqrt{\frac{c^2}{(c^2-t^2)^2}s})^2$ defined on $D=\{z: |z|<\sqrt{c}\}$, \end{enumerate} where $c$ is a positive constant. \end{theorem} \begin{remark} It is hoped that the above theorem can be generalized to more general complex Randers metrics. \end{remark} The rest of this article is organized as follows. In Section 2, we recall some notation and formulas in complex Finsler geometry. In Section 3, we introduce the quantities $U$ and $W$ to simplify the weakly K\"ahler equation for unitary invariant complex Finsler metrics. We also obtain the formula for the holomorphic curvature of unitary invariant complex Finsler metrics under the weakly K\"ahler condition. In Section 4, we give examples of unitary invariant complex Randers metrics which are weakly K\"ahler but not K\"ahler, and Theorem \ref{mt} is proved. \section{Preliminaries} Let $M$ be an $n$-dimensional complex manifold. The canonical complex structure $J$ acts on the complexified tangent bundle $T_{\mathbb{C}}M$ so that \[T_{\mathbb{C}}M=T^{1,0}M \oplus T^{0,1}M,\] where $T^{1,0}M$ is called the holomorphic tangent bundle. Let $\{z^1, \dots, z^n\}$ be local complex coordinates on $M$, where $z^\alpha=x^\alpha+ix^{n+\alpha}$, $1\leq\alpha\leq n$, and let $\{x^1,\dots,x^n,x^{n+1},\dots,x^{2n}\}$ be the corresponding local real coordinates on $M$. Let \[\frac{\partial }{\partial z^\alpha}:=\frac{1}{2}(\frac{\partial}{\partial x^\alpha}-\sqrt{-1}\frac{\partial}{\partial x^{\alpha+n}} ), \quad \frac{\partial }{\partial \bar{z}^\alpha}:=\frac{1}{2}(\frac{\partial}{\partial x^\alpha}+\sqrt{-1}\frac{\partial}{\partial x^{\alpha+n}} ).\] The sets $\{\frac{\partial}{\partial z^1}, \dots, \frac{\partial}{\partial z^n} \}$ and $\{\frac{\partial}{\partial \bar{z}^1}, \dots, \frac{\partial}{\partial \bar{z}^n} \}$ are local frames of $T^{1,0}M$ and $T^{0,1}M$, respectively. Thus for any $(z, v)\in T^{1,0}M$, the vector $v$ can be expressed as $v=v^\alpha \frac{\partial}{\partial z^\alpha}$. Similar to the Hermitian metric in complex geometry, a complex Finsler metric is defined as follows. \begin{definition}(See \cite{AP}) A complex Finsler metric $F$ on a complex manifold $M$ is a continuous function $F: T^{1,0}M\rightarrow [0, +\infty)$ satisfying \begin{enumerate} \item $G(z,v)=F^2(z,v)\geq 0$ and $G(z,v)=0$ if and only if $v=0$; \item $G(z,\lambda v)=|\lambda|^2G(z,v)$ for all $(z,v)\in T^{1,0}M$ and $\lambda\in \mathbb{C}$; \item if $G$ is smooth on $T^{1,0}M\setminus\{0\}$, we say that $F$ is a smooth complex Finsler metric. \end{enumerate} \end{definition} \begin{definition} A complex Finsler metric $F$ is called strongly pseudo-convex if the Levi matrix \[(G_{\alpha\bar{\beta}})=(\frac{\partial ^2 G}{\partial v^{\alpha}\partial \bar{v}^{\beta}})\] is positive definite on $T^{1,0}M\setminus\{0\}$. The tensor $G_{\alpha\bar{\beta}}dz^\alpha\otimes d\bar{z}^\beta$ is called the fundamental tensor.
\end{definition} There are several complex Finsler connections in complex Finsler geometry, such as the Chern-Finsler connection and the complex Berwald connection. For convenience, we use the Chern-Finsler connection to derive the holomorphic curvature. For a function $G(z,v)$ defined on $T^{1,0}M$, we denote \[G_{\alpha}:=\frac{\partial G}{\partial v^{\alpha}},\quad G_{;\alpha}:=\frac{\partial G}{\partial z^{\alpha}},\quad G_{\alpha;\bar{\beta}}:=\frac{\partial^2 G}{\partial v^\alpha\partial \bar{z}^{\beta}}.\] Let \[\frac{\delta}{\delta z^\alpha}:=\frac{\partial}{\partial z^\alpha}-N^{\beta}_{\alpha}\frac{\partial}{\partial v^{\beta}},\quad \delta v^\alpha:=dv^{\alpha}+N^{\alpha}_{\beta}dz^\beta,\quad N^\alpha_\beta:=G^{\alpha\bar{\gamma}}G_{\bar{\gamma};\beta};\] then the tangent bundle of $T^{1,0}M$ splits into horizontal and vertical parts, i.e.: \[T_{\mathbb{C}}\tilde{M}=\mathcal{H}\oplus \bar{\mathcal{H}}\oplus \mathcal{V}\oplus\bar{\mathcal{V}},\] where $\tilde{M}=T^{1,0}M$, $\mathcal{H}=\text{span}\{\frac{\delta}{\delta z^{\alpha}}\}$ and $\mathcal{V}=\text{span}\{\frac{\partial}{\partial v^{\alpha}}\}$. The Chern-Finsler connection 1-forms are given by \[\omega^\alpha_\beta=G^{\alpha\bar{\gamma}}\partial G_{\beta\bar{\gamma}}=\Gamma^\alpha_{\beta;\gamma}dz^\gamma+C^{\alpha}_{\beta\gamma}\delta v^{\gamma},\] where the connection coefficients can be written as \[\Gamma^\alpha_{\beta; \gamma}:=G^{\alpha\bar{\eta}}\frac{\delta G_{\beta\bar{\eta}}}{\delta z^\gamma}, \quad C^\alpha_{\beta\gamma}:=G^{\alpha\bar{\eta}}\frac{\partial G_{\beta\bar{\eta}}}{\partial v^\gamma}.\] The curvature 2-forms $\Omega^\alpha_\beta$ are \[\Omega^\alpha_\beta:=\bar{\partial }\omega^\alpha_\beta=R^\alpha_{\beta;\gamma\bar{\eta}}dz^\gamma\wedge d\bar{z}^\eta+S^\alpha_{\beta\gamma;\bar{\eta}}\delta v^\gamma\wedge d\bar{z}^\eta+P^\alpha_{\beta\bar{\eta};\gamma}dz^\gamma\wedge \delta\bar{v}^\eta+Q^\alpha_{\beta\gamma\bar{\eta}}\delta v^\gamma\wedge \delta\bar{v}^\eta, \] where \begin{eqnarray*} R^\alpha_{\beta;\gamma\bar{\eta} }& = & -\delta_{\bar{\eta}}(\Gamma^\alpha_{\beta;\gamma})-C^\alpha_{\beta\mu}\delta_{\bar{\eta}}(N^\mu_\gamma),\\ S^\alpha_{\beta\gamma;\bar{\eta}}& = & -\delta_{\bar{\eta}}(C^\alpha_{\beta\gamma}),\\ P^\alpha_{\beta\bar{\eta};\gamma}& = & -\dot{\partial}_{\bar{\eta}}(\Gamma^\alpha_{\beta;\gamma})-C^\alpha_{\beta\mu}\dot{\partial}_{\bar{\eta}}(N^\mu_\gamma),\\ Q^\alpha_{\beta\gamma\bar{\eta}}& = &-\dot{\partial}_{\bar{\eta}}(C^\alpha_{\beta\gamma}). \end{eqnarray*} \begin{definition} (See \cite{AP}) The holomorphic curvature of a complex Finsler metric $F$ is defined as \[K_F(v):=\frac{2}{G^2}G_{\alpha}R^\alpha_{\beta;\gamma\bar{\eta} }v^{\beta}v^{\gamma}\bar{v}^{\eta}=-\frac{2}{G^2}G_{\alpha}\delta_{\bar{\gamma}}(N^\alpha_{\beta})v^\beta\bar{v}^{\gamma} .\] \end{definition} Similar to the Hermitian metric in complex geometry, the $(2,0)$-torsion of the Chern-Finsler connection is \[\theta(X,Y):=\nabla _{X}Y-\nabla_{Y}X-[X,Y],\ \ X, Y\in \mathcal{H}\oplus\mathcal{V}. \] The K\"ahler condition means that the above $(2,0)$-torsion vanishes. \begin{definition} (See \cite{AP}) Let $F$ be a complex Finsler metric, and let $\chi=v^\alpha\frac{\delta}{\delta z^{\alpha}}\in \Gamma (\mathcal{H})$ be the radial horizontal field. Then $F$ is \begin{enumerate} \item strongly K\"ahler, if $\theta(X,Y)=0$ for any $X, Y\in \mathcal{H}$; \item K\"ahler, if $\theta(X,\chi)=0$ for any $X\in \mathcal{H}$; \item weakly K\"ahler, if $\langle \theta(X,\chi), \chi\rangle=0$ for any $X\in \mathcal{H}$.
\end{enumerate} \end{definition} In local coordinates, the torsion $\theta$ is given by \[\theta=(\Gamma^{\alpha}_{\beta;\gamma}dz^\beta\wedge dz^{\gamma}+C^\alpha_{\beta\gamma}\delta v^{\beta}\wedge dz^{\gamma})\otimes \frac{\delta}{\delta z^{\alpha}}.\] Hence $F$ is a strongly K\"ahler metric if and only if $\Gamma^{\alpha}_{\beta;\gamma}=\Gamma^{\alpha}_{\gamma;\beta}$; $F$ is a K\"ahler metric if and only if $\Gamma^{\alpha}_{\beta;\gamma}v^{\gamma}=\Gamma^{\alpha}_{\gamma;\beta}v^{\gamma}$; $F$ is a weakly K\"ahler metric if and only if $G_{\alpha}\Gamma^{\alpha}_{\beta;\gamma}v^{\gamma}=G_{\alpha}\Gamma^{\alpha}_{\gamma;\beta}v^{\gamma}$. \section{Unitary invariant Finsler metrics} In complex Finsler geometry, canonical complex Finsler metrics are lacking compared with Hermitian geometry. In real Finsler geometry, the third author introduced the spherically symmetric Finsler metrics, which are invariant under any rotation in $\mathbb{R}^n$ \cite{Zhou}. Similarly, in \cite{Zhong}, Zhong introduced the unitary invariant complex Finsler metrics, which are invariant under any unitary action in $\mathbb{C}^n$, and investigated the Chern-Finsler connection and the holomorphic curvature of such metrics. In this section, we recall some results from \cite{Zhong} and simplify some formulas by introducing $U$ and $W$. \begin{definition} (See \cite{Zhong}) A complex Finsler metric $F$ on a domain $D \subset \mathbb{C}^n$ is called unitary invariant if $F$ satisfies \[F(Az, Av)=F(z,v)\] for any $(z,v)\in T^{1,0}D$ and $A\in U(n)$, where $U(n)$ denotes the group of unitary matrices over the complex number field $\mathbb{C}$. \end{definition} \begin{theorem} (See \cite{XZh1}) Let $F$ be a strongly pseudo-convex complex Finsler metric defined on a domain $D\subset \mathbb{C}^n$. The metric $F$ is unitary invariant if and only if there exists a smooth function $\phi(t,s): [0, +\infty)\times [0,+\infty)\rightarrow (0,+\infty)$ such that \[F(z,v)=\sqrt{r\phi(t,s)}, \quad r:=|v|^2,\ t:=|z|^2,\ s:=\frac{|\langle z,v \rangle |^2}{r}\] for every $(z,v)\in T^{1,0}D$. \end{theorem} The fundamental tensor of $F=\sqrt{r\phi(t,s)}$ can be written as \[G_{\alpha\bar{\beta}}=(\phi-s\phi_s)\delta_{\alpha\bar{\beta}}+r\phi_{ss}s_{\alpha}s_{\bar\beta}+\phi_s \bar{z}^\alpha z^\beta.\] The determinant of the matrix $(G_{\alpha\bar{\beta}})$ is \[\det(G_{\alpha\bar{\beta}})=\{(\phi-s\phi_s)[\phi+(t-s)\phi_s]+s(t-s)\phi\phi_{ss}\}(\phi-s\phi_s)^{n-2}.\] \begin{proposition} (See \cite{Zhong}) The unitary invariant complex Finsler metric $F=\sqrt{r\phi(t,s)}$ is strongly pseudo-convex on a domain $D\subset \mathbb{C}^n$ if and only if \[\phi-s\phi_s>0, \quad (\phi-s\phi_s)[\phi+(t-s)\phi_s]+s(t-s)\phi\phi_{ss}>0.\] \end{proposition}
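As a quick illustration of the proposition, consider the Hermitian case $\phi=f(t)+f^{\prime}(t)s$. Then $\phi-s\phi_s=f$, $\phi+(t-s)\phi_s=f+tf^{\prime}$ and $\phi_{ss}=0$, so strong pseudo-convexity reduces to
\[f(t)>0, \qquad f(t)+tf^{\prime}(t)>0,\]
which is exactly the condition $a(t)+ta^{\prime}(t)>0$ appearing in Zhong's K\"ahler classification recalled below.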
In \cite{Zhong}, Zhong showed that a strongly pseudo-convex complex Finsler metric $F=\sqrt{r\phi(t,s)}$ defined on a domain $D\subset \mathbb{C}^n$ is a K\"ahler Finsler metric if and only if $\phi(t,s)=a(t)+a^{\prime}(t)s$, where $a(t)$ is a positive smooth function satisfying $a(t)+ta^{\prime}(t)>0$. For the weakly K\"ahler case, Zhong obtained the following result. \begin{theorem} (See \cite{Zhong}) The unitary invariant complex Finsler metric $F=\sqrt{r\phi(t,s)}$ defined on a domain $D\subset \mathbb{C}^n$ is weakly K\"ahler if and only if \begin{equation}\label{Kaeq}(\phi-s\phi_s)[\phi+(t-s)\phi_s][\phi_s-\phi_t+s(\phi_{st}+\phi_{ss})]+s(t-s)\phi_{ss}[\phi(\phi_s-\phi_t)+s\phi_s(\phi_t+\phi_s)]=0.\end{equation} \end{theorem} Actually, we can simplify equation (\ref{Kaeq}) and obtain the following theorem. \begin{theorem} \label{wkeq} A unitary invariant complex Finsler metric $F=\sqrt{r\phi(t,s)}$ defined on a domain $D\subset \mathbb{C}^n$ is weakly K\"ahler if and only if \begin{equation}\label{nKaeq} sU(U-t)W_s-s(U-t)U_sW-2(U-s)U_s=0,\end{equation} where $U=\frac{s\phi+s(t-s)\phi_s}{\phi}$ and $W=\frac{\phi_t+\phi_s}{\phi}$. \end{theorem} \begin{proof} Let $$U:=\frac{s\phi+s(t-s)\phi_s}{\phi}, \quad W:=\frac{\phi_t+\phi_s}{\phi}.$$ Then we have \begin{equation} \phi_s=\frac{U-s}{s(t-s)}\phi, \quad \phi_t=(W-\frac{U-s}{s(t-s)})\phi.\label{pspt} \end{equation} It is easy to see that \[\phi-s\phi_s=\frac{t-U}{t-s}\phi, \quad \phi+(t-s)\phi_s=\frac{U}{s}\phi.\] Obviously, $\phi_s-\phi_t=-W\phi+2\frac{U-s}{s(t-s)}\phi$ and $\phi_t+\phi_s=W\phi$, so that \[\phi(\phi_s-\phi_t)+s\phi_s(\phi_t+\phi_s)=\frac{sW(U-t)+2(U-s)}{s(t-s)}\phi^2.\] By using $\phi_{st}+\phi_{ss}=(\phi_t+\phi_s)_s=W_s\phi+W\phi_s$, we have \[\phi_s-\phi_t+s(\phi_{st}+\phi_{ss})=s W_s\phi+\frac{U-t}{t-s}W\phi+\frac{2(U-s)}{s(t-s)}\phi.\] Moreover, from the first equation of (\ref{pspt}), we compute \[\phi_{ss}=\frac{s(t-s)U_s+U(U-t)}{s^2(t-s)^2}\phi.\] Substituting the above equalities into the weakly K\"ahler equation (\ref{Kaeq}) yields equation (\ref{nKaeq}). \end{proof} \begin{lemma} \label{lem0} The quantities $U$ and $W$ in Theorem \ref{wkeq} satisfy the integrability condition \[s(U_t+U_s)=s^2(t-s)W_s+U.\] \end{lemma} \begin{proof} From the formulas for $U$ and $W$, we see that \[\phi_s=\frac{U-s}{s(t-s)}\phi, \quad \phi_t=(W-\frac{U-s}{s(t-s)})\phi.\] Then $\phi_{st}=\phi_{ts}$ implies the result. \end{proof}
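The weakly K\"ahler equation is easy to check symbolically. The following sketch (assuming the Python library SymPy is available; the concrete choice $f(t)=t$, i.e. the metric $\phi=(\sqrt{t}+\sqrt{s})^2$ of Theorem \ref{mt} with $c=1$, is only an illustration) verifies that the Randers solution announced in the introduction satisfies equation (\ref{Kaeq}):

\begin{verbatim}
# Sanity check of the weakly Kaehler equation (Kaeq) with SymPy.
# Illustrative choice f(t) = t, i.e. phi = (sqrt(t) + sqrt(s))^2.
import sympy as sp

t, s = sp.symbols('t s', positive=True)
f = t
g = (t*sp.diff(f, t) - f) / (2*t)   # g = (t f' - f)/(2t)
h = (t*sp.diff(f, t) + f) / (2*t)   # h = (t f' + f)/(2t)
phi = (sp.sqrt(f + g*s) + sp.sqrt(h*s))**2

ps, pt = sp.diff(phi, s), sp.diff(phi, t)
pss, pst = sp.diff(ps, s), sp.diff(ps, t)

# left-hand side of equation (Kaeq)
lhs = ((phi - s*ps)*(phi + (t - s)*ps)*(ps - pt + s*(pst + pss))
       + s*(t - s)*pss*(phi*(ps - pt) + s*ps*(pt + ps)))
print(sp.simplify(lhs))   # prints 0
\end{verbatim}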
\begin{theorem} \label{hct1} Let $F=\sqrt{r\phi(t,s)}$ be a unitary invariant complex Finsler metric defined on a domain $D\subset \mathbb{C}^n$ and let $K_F$ be its holomorphic curvature. Then, in terms of $\phi$, $K_F$ is given by \begin{eqnarray*} K_F(v)&=&-\frac{2}{\phi}\Big\{[s(\frac{\partial k_2}{\partial t}+\frac{\partial k_2}{\partial s})+k_2]+\frac{s\phi+s(t-s)\phi_s}{\phi}[s(\frac{\partial k_3}{\partial t}+\frac{\partial k_3}{\partial s})+2k_3]\Big\}, \end{eqnarray*} where \begin{eqnarray*} k_1&:=&(\phi-s\phi_s)[\phi+(t-s)\phi_s]+s(t-s)\phi\phi_{ss},\\ k_2&:=&\frac{1}{k_1}\{[\phi+(t-s)\phi_s+s(t-s)\phi_{ss}](\phi_t+\phi_s)-s[\phi+(t-s)\phi_s](\phi_{st}+\phi_{ss})\},\\ k_3&:=&\frac{1}{k_1}[\phi(\phi_{st}+\phi_{ss})-\phi_s(\phi_t+\phi_s)]. \end{eqnarray*} \end{theorem} \begin{proof} According to the definition of the holomorphic curvature, we have \begin{eqnarray} \label{hceq} K_F&=&-\frac{2}{G^2}G_{\alpha}\delta_{\bar{\gamma}}(N^\alpha_{\beta})v^\beta\bar{v}^{\gamma} \nonumber\\ &=&-\frac{2}{G^2}G_{\gamma}\delta_{\bar{\nu}}(2\mathbb{G}^\gamma)\bar{v}^{\nu} \nonumber \\ &=&-\frac{2}{G^2}G_{\gamma}\frac{\partial }{\partial \bar{z}^{\nu}}(2\mathbb{G}^\gamma)\bar{v}^{\nu}+\frac{2}{G^2}G_{\gamma}\bar{N}^\alpha_\nu\frac{\partial }{\partial \bar{v}^\alpha}(2\mathbb{G}^\gamma)\bar{v}^{\nu}, \end{eqnarray} where $\mathbb{G}^\gamma:=\frac{1}{2}N^\gamma_{\beta}v^\beta$. For the unitary invariant complex Finsler metric $F=\sqrt{r\phi(t,s)}$, the complex spray coefficients are \cite{Zhong} $$2\mathbb{G}^\gamma=N^\gamma_\beta v^\beta=k_2\overline{\langle z, v\rangle}v^\gamma+k_3\overline{\langle z, v\rangle}^2z^\gamma.$$ Therefore one needs to calculate \begin{eqnarray*} \frac{\partial}{\partial {\bar z^\nu}}(2\mathbb{G}^\gamma)&=&\frac{\partial k_2}{\partial t}\overline{\langle z, v\rangle}z^\nu v^\gamma+\frac{\partial k_2}{\partial s}sv^\nu v^\gamma+k_2v^\nu v^\gamma \\ &&+\frac{\partial k_3}{\partial t}\overline{\langle z, v\rangle}^2z^\nu z^\gamma +\frac{\partial k_3}{\partial s}s\overline{\langle z, v\rangle}v^\nu z^\gamma +2k_3\overline{\langle z, v\rangle}v^\nu z^\gamma \end{eqnarray*} and \begin{eqnarray*} \frac{\partial}{\partial {\bar z^\nu}}(2\mathbb{G}^\gamma)\bar{v}^\nu=r[s(\frac{\partial k_2}{\partial t}+\frac{\partial k_2}{\partial s})+k_2]v^\gamma+r[s(\frac{\partial k_3}{\partial t}+\frac{\partial k_3}{\partial s})+2k_3]\overline{\langle z, v\rangle}z^\gamma. \end{eqnarray*} Since $G=F^2=r\phi(t,s)$, we have $G_\gamma=\bar{v}^\gamma\phi+r\phi_s s_\gamma$, where $s_\gamma:=\frac{\partial s}{\partial v^\gamma}=-r^{-2}\bar{v}^\gamma |\langle z, v\rangle|^2+r^{-1}\langle z, v\rangle\bar{z}^\gamma$. Obviously, $s_\gamma v^\gamma=0$ and $s_\gamma z^\gamma=r^{-1}\langle z, v\rangle (t-s)$. So we have \begin{eqnarray}\label{hc1}-\frac{2}{G^2}G_{\gamma}\frac{\partial }{\partial \bar{z}^{\nu}}(2\mathbb{G}^\gamma)\bar{v}^{\nu}&=&-\frac{2}{r^2\phi^2}(\bar{v}^\gamma\phi+r\phi_s s_\gamma)\Big\{r[s(\frac{\partial k_2}{\partial t}+\frac{\partial k_2}{\partial s})+k_2]v^\gamma \nonumber\\ &&+r[s(\frac{\partial k_3}{\partial t}+\frac{\partial k_3}{\partial s})+2k_3]\overline{\langle z, v\rangle}z^\gamma\Big\} \nonumber \\ &=&-\frac{2}{\phi}\Big\{[s(\frac{\partial k_2}{\partial t}+\frac{\partial k_2}{\partial s})+k_2] \nonumber\\ &&+\frac{s\phi+s(t-s)\phi_s}{\phi}[s(\frac{\partial k_3}{\partial t}+\frac{\partial k_3}{\partial s})+2k_3]\Big\}. \end{eqnarray} On the other hand, it is necessary to compute \begin{eqnarray*} \frac{\partial }{\partial \bar{v}^\alpha}(2\mathbb{G}^\gamma)&=&\frac{\partial }{\partial \bar{v}^\alpha}(k_2\overline{\langle z, v\rangle}v^\gamma+k_3\overline{\langle z, v\rangle}^2z^\gamma)\\ &=&\frac{\partial k_2}{\partial s}s_{\bar{\alpha}}\overline{\langle z, v\rangle}v^\gamma+\frac{\partial k_3}{\partial s}s_{\bar{\alpha}}\overline{\langle z, v\rangle}^2z^\gamma \end{eqnarray*} and \begin{eqnarray*} \bar{N}^\alpha_\nu\frac{\partial }{\partial \bar{v}^\alpha}(2\mathbb{G}^\gamma)\bar{v}^{\nu}&=&2\bar{\mathbb{G}}^\alpha\frac{\partial }{\partial \bar{v}^\alpha}(2\mathbb{G}^\gamma)\\ &=&2(k_2\langle z,v\rangle \bar{v}^\alpha+k_3\langle z,v\rangle^2\bar{z}^\alpha)(\frac{\partial k_2}{\partial s}s_{\bar{\alpha}}\overline{\langle z, v\rangle}v^\gamma+\frac{\partial k_3}{\partial s}s_{\bar{\alpha}}\overline{\langle z, v\rangle}^2z^\gamma)\\ &=&rs^2(t-s)k_3[\frac{\partial k_2}{\partial s}v^\gamma+\frac{\partial k_3}{\partial s}\overline{\langle z, v\rangle}z^\gamma].
\end{eqnarray*} Thus we obtain \begin{eqnarray} \label{hc2} \frac{2}{G^2}G_{\gamma}\bar{N}^\alpha_\nu\frac{\partial }{\partial \bar{v}^\alpha}(2\mathbb{G}^\gamma)\bar{v}^{\nu}&=&\frac{2}{r^2\phi^2}(\bar{v}^\gamma\phi+r\phi_s s_\gamma)rs^2(t-s)k_3[\frac{\partial k_2}{\partial s}v^\gamma+\frac{\partial k_3}{\partial s}\overline{\langle z, v\rangle}z^\gamma] \nonumber\\ &=&\frac{2}{\phi}s^2(t-s)k_3[\frac{\partial k_2}{\partial s}+\frac{s\phi+s(t-s)\phi_s}{\phi}\frac{\partial k_3}{\partial s}]. \end{eqnarray} Plugging (\ref{hc1}) and (\ref{hc2}) into (\ref{hceq}) yields \begin{eqnarray*} K_F(v)&=&-\frac{2}{\phi}\Big\{[s(\frac{\partial k_2}{\partial t}+\frac{\partial k_2}{\partial s})+k_2]+\frac{s\phi+s(t-s)\phi_s}{\phi}[s(\frac{\partial k_3}{\partial t}+\frac{\partial k_3}{\partial s})+2k_3]\\ &&-s^2(t-s)k_3[\frac{\partial k_2}{\partial s}+\frac{s\phi+s(t-s)\phi_s}{\phi}\frac{\partial k_3}{\partial s}]\Big\}. \end{eqnarray*} By a direct computation, we notice that \[\frac{\partial k_2}{\partial s}+\frac{s\phi+s(t-s)\phi_s}{\phi}\frac{\partial k_3}{\partial s}=0.\] So the formula for $K_F(v)$ holds. \end{proof} \begin{remark} In \cite{WXZ}, Wang, Xia and Zhong also obtained the formula for the holomorphic curvature of unitary invariant complex Finsler metrics. \end{remark} When the metric is weakly K\"ahler, the holomorphic curvature has a nice formula in terms of $U$ and $W$. \begin{theorem} Let $F=\sqrt{r\phi(t,s)}$ be a unitary invariant complex Finsler metric defined on a domain $D\subset \mathbb{C}^n$. If $F$ is a weakly K\"ahler Finsler metric, then the holomorphic curvature $K_F$ is given by \[K_F=-\frac{2}{\phi}\{s(W_t+W_s)-s^2(t-s)\frac{W_s^2}{U_s}+W\},\] where $U=\frac{s\phi+s(t-s)\phi_s}{\phi}$ and $W=\frac{\phi_t+\phi_s}{\phi}$. \end{theorem} \begin{proof} From $$U=\frac{s\phi+s(t-s)\phi_s}{\phi}, \quad W=\frac{\phi_t+\phi_s}{\phi},$$ one can obtain \[\phi_s=\frac{U-s}{s(t-s)}\phi, \quad \phi_t=(W-\frac{U-s}{s(t-s)})\phi.\] Substituting the above equalities into the formulas for $k_1$, $k_2$ and $k_3$ in Theorem \ref{hct1} gives \[k_1=U_s\phi^2,\quad k_2=\frac{WU_s-UW_s}{U_s},\quad k_3=\frac{W_s}{U_s}.\] Imposing the weakly K\"ahler condition, i.e. \[sU(U-t)W_s-s(U-t)U_sW-2(U-s)U_s=0,\] one can simplify $k_2$ and $k_3$ to get \[k_2=-\frac{2(U-s)}{s(U-t)},\quad k_3=\frac{W-k_2}{U}=\frac{W}{U}+\frac{2(U-s)}{sU(U-t)}.\] Thus \[\frac{\partial k_2}{\partial t}=\frac{2(t-s)U_t-2(U-s)}{s(U-t)^2},\quad \frac{\partial k_2}{\partial s}=\frac{2s(t-s)U_s+2U(U-t)}{s^2(U-t)^2},\] \[\frac{\partial k_3}{\partial t}=(\frac{W}{U})_t+\frac{U_t}{U^2}k_2-\frac{1}{U}\frac{\partial k_2}{\partial t},\quad \frac{\partial k_3}{\partial s}=(\frac{W}{U})_s+\frac{U_s}{U^2}k_2-\frac{1}{U}\frac{\partial k_2}{\partial s}.\] According to the formula for the holomorphic curvature $K_F$ in Theorem \ref{hct1}, the above formulas involving $k_2$ and $k_3$ imply \[K_F=-\frac{2}{\phi}\Big\{s(W_t+W_s)-\frac{sW(U-t)+2(U-s)}{U(U-t)}(U_t+U_s)+\frac{2sW(U-t)+2(U-s)}{s(U-t)}\Big\}.\] From the integrability condition of Lemma \ref{lem0}, we have \[U_t+U_s=s(t-s)W_s+\frac{U}{s}.\] Therefore the holomorphic curvature $K_F$ can be simplified to \[K_F=-\frac{2}{\phi}\Big\{s(W_t+W_s)-\frac{s^2(t-s)WW_s}{U}-s(t-s)\frac{2(U-s)}{U(U-t)}W_s+W\Big\}.\] Notice that the weakly K\"ahler condition tells us that \[\frac{2(U-s)}{U(U-t)}=\frac{sW_s}{U_s}-\frac{sW}{U}.\] This leads to \[K_F=-\frac{2}{\phi}\{s(W_t+W_s)-s^2(t-s)\frac{W_s^2}{U_s}+W\},\] as claimed in the theorem.
\end{proof} \section{Weakly K\"ahler unitary invariant complex Finsler metrics of Randers type and a uniformization theorem} In 2009, Aldea and Munteanu initiated the study of complex Randers metrics, motivated by the notion of real Randers metrics \cite{AM}. Later, Chen and Shen gave the formula of the holomorphic curvature for complex Randers metrics and proved some rigidity theorems for them \cite{CS1}. \begin{definition} A complex Finsler metric $F$ on a complex manifold is called a complex Randers metric if $F$ can be written as $F=\alpha+|\beta|$, where $\alpha$ is a Hermitian metric and $\beta$ is a $(1,0)$-form, i.e. $\alpha=\sqrt{a_{i \bar{j}}(z) v^{i} \bar{v}^{j}}$, $\beta=b_{i}(z) v^{i}$. \end{definition} \begin{remark}(1) A complex Randers metric is not smooth along the directions $v=v^i\frac{\partial}{\partial z^i}$ with $b_iv^i=0$. (2) According to the above definition, it is easy to see that a unitary invariant Finsler metric $F=\sqrt{r\phi(t,s)}$ is a complex Randers metric if and only if \[\phi=\big(\sqrt{f(t)+g(t)s}+\sqrt{h(t)s} \big)^2,\] where $f(t), g(t), h(t)$ are smooth functions with $f(t)>0$ and $h(t)\geq0$. \end{remark} \begin{theorem} \label{wkr} A unitary invariant complex Randers metric $F=\sqrt{r\phi(t,s)}$ defined on a domain $D \subset \mathbb{C}^n$ is a weakly K\"ahler Finsler metric if and only if \[\phi=\Big (\sqrt{f(t)+\frac{tf^{\prime}(t)-f(t)}{2t}s}+\sqrt{\frac{tf^{\prime}(t)+f(t)}{2t}s}\Big )^2\] or \[\phi=f(t)+f^\prime(t)s,\] where $f(t)$ is a positive smooth function. \end{theorem} \begin{proof} The sufficiency follows by a direct computation. Now we prove the necessity. Since $F=\sqrt{r\phi(t,s)}$ is a weakly K\"ahler complex Randers metric, $\phi$ can be written as $$\phi=\big(\sqrt{f(t)+g(t)s}+\sqrt{h(t)s} \big)^2,$$ where $f(t), g(t), h(t)$ are smooth functions with $f(t)>0$ and $h(t)\geq 0$. Therefore \[U=\frac{s\phi+s(t-s)\phi_s}{\phi}=\frac{s(f+tg)+t\sqrt{s(f+gs)h}}{\sqrt{f+gs}(\sqrt{f+gs}+\sqrt{hs})},\] \[W=\frac{\phi_t+\phi_s}{\phi}=\frac{(h+sh^\prime)\sqrt{f+gs}+(f^\prime+g+sg^\prime)\sqrt{sh}}{(\sqrt{f+gs}+\sqrt{hs})\sqrt{s(f+gs)h}}.\] Substituting the above equalities into the weakly K\"ahler equation (\ref{nKaeq}), one obtains a rather complicated equation of the form \[\{A_0(t)+A_1(t)\sqrt{f+gs}\sqrt{s}+A_2(t)(\sqrt{s})^2+A_3(t)\sqrt{f+gs}(\sqrt{s})^3+A_4(t)(\sqrt{s})^4\}f(t)=0,\] where \[A_0(t)=tfh(tf^\prime-f-2tg).\] Notice that this must hold identically in $s$ (for fixed $t$, $s$ ranges over the interval $[0,t]$), thus \[A_0(t)=A_1(t)=A_2(t)=A_3(t)=A_4(t)=0.\] From $A_0(t)=0$ and the positivity of $f$, we know that one of the following two equations holds: \[h(t)=0\quad \text{or} \quad g(t)=\frac{tf^\prime(t)-f(t)}{2t}.\] If $h(t)=0$, then the complex Randers metric $F$ reduces to a Hermitian metric, and a weakly K\"ahler Hermitian metric is K\"ahler; hence \[\phi=f(t)+f^\prime(t)s.\] If $ g(t)=\frac{tf^\prime(t)-f(t)}{2t}$, substituting it into the weakly K\"ahler equation again and rearranging all the terms, one obtains \[\{B_1(t)\sqrt{f+gs}\sqrt{s}+B_2(t)(\sqrt{s})^2+B_3(t)\sqrt{f+gs}(\sqrt{s})^3+B_4(t)(\sqrt{s})^4\}f(t)=0,\] where \[B_1(t)=t\sqrt{h}(tf^\prime+f)(tf^\prime+f-2th).\] Therefore \[B_1(t)=B_2(t)=B_3(t)=B_4(t)=0.\] So $B_1(t)=0$ implies that one of the following three equations holds: \[h(t)=0,\quad tf^\prime(t)+f(t)=0\quad \text{or} \quad h(t)=\frac{tf^\prime(t)+f(t)}{2t}.\] The case $h(t)=0$ has been discussed above. If $tf^\prime(t)+f(t)=0$, i.e.
$f(t)=\frac{c}{t}$ with $c$ a constant, then \[\phi=\big(\sqrt{\frac{c(t-s)}{t^2}}+\sqrt{h(t)s}\big)^2,\] which means that the Hermitian part $\sqrt{\frac{c(t-s)}{t^2}}$ is not positive definite, so this case is excluded. Therefore we have \[h(t)=\frac{tf^\prime(t)+f(t)}{2t}.\] \end{proof} \begin{remark} It is easy to check that if $f(t)>0$ and $f^{\prime}(t)>0$, the weakly K\"ahler complex Randers metric in the above theorem is strongly pseudo-convex. \end{remark} Furthermore, we can prove the following uniformization theorem in complex Finsler geometry. \begin{theorem} Let $F=\sqrt{r\phi(t,s)}$ be a unitary invariant complex Randers metric defined on a domain $D\subset \mathbb{C}^n$. Assume $F$ is not a Hermitian metric. Then $F$ is a weakly K\"ahler Finsler metric with constant holomorphic curvature $k$ if and only if one of the following holds: \begin{enumerate} \item $k=4$, $\phi=(\sqrt{\frac{t}{c^2+t^2}-\frac{t^2}{(c^2+t^2)^2}s}+\sqrt{\frac{c^2}{(c^2+t^2)^2}s})^2$ defined on $D=\mathbb{C}^n\setminus \{0\}$; \item $k=0$, $\phi=c(\sqrt{t}+\sqrt{s})^2$ defined on $D=\mathbb{C}^n$; \item $k=-4$, $\phi=(\sqrt{\frac{t}{c^2-t^2}+\frac{t^2}{(c^2-t^2)^2}s}+\sqrt{\frac{c^2}{(c^2-t^2)^2}s})^2$ defined on $D=\{z: |z|<\sqrt{c}\}$, \end{enumerate} where $c$ is a positive constant. \end{theorem} \begin{proof} The sufficiency follows by a direct computation; we only need to prove the necessity. Since $F$ is not Hermitian, by Theorem \ref{wkr} the complex Randers metric $F$ can be written as \[\phi=\Big (\sqrt{f(t)+\frac{tf^{\prime}(t)-f(t)}{2t}s}+\sqrt{\frac{tf^{\prime}(t)+f(t)}{2t}s}\Big )^2.\] Then we have \[U=\frac{s(f+tg)+t\sqrt{s(f+gs)h}}{\sqrt{f+gs}(\sqrt{f+gs}+\sqrt{hs})},\quad W=\frac{(h+sh^\prime)\sqrt{f+gs}+(f^\prime+g+sg^\prime)\sqrt{sh}}{(\sqrt{f+gs}+\sqrt{hs})\sqrt{s(f+gs)h}},\] where $g=\frac{tf^\prime(t)-f(t)}{2t}$ and $h=\frac{tf^\prime(t)+f(t)}{2t}$. If the holomorphic curvature $k=0$, one gets \[K_F=-\frac{2}{\phi}\{s(W_t+W_s)-s^2(t-s)\frac{W_s^2}{U_s}+W\}=0.\] Plugging $U$ and $W$ into the above equation yields \[A_0(t)\sqrt{f+gs}+A_1(t)\sqrt{s}+\dots+A_7(t)(\sqrt{s})^7=0,\] where $A_0(t)=t^4\sqrt{h}f^2(tf^\prime+f)(tf^\prime-f)$. Hence $A_0(t)=0$. Since $F$ is not a Hermitian metric, $h\neq 0$, and the case $tf^\prime+f=0$ has been excluded in the proof of Theorem \ref{wkr}; hence \[tf^\prime-f=0.\] It implies that $f(t)=ct$ and $\phi=c(\sqrt{t}+\sqrt{s})^2$. If the holomorphic curvature $k=4$, one gets \[K_F=-\frac{2}{\phi}\{s(W_t+W_s)-s^2(t-s)\frac{W_s^2}{U_s}+W\}=4.\] Plugging $\phi$, $U$ and $W$ into the above equation yields \[B_0(t)\sqrt{f+gs}+B_1(t)\sqrt{s}+\dots+B_7(t)(\sqrt{s})^7=0,\] where $B_0(t)=t^4\sqrt{h}f^2(tf^\prime+f)(tf^\prime+2tf^2-f)$. Hence $B_0(t)=0$, and by the same reasoning \[tf^\prime+2tf^2-f=0.\] It implies that $f(t)=\frac{t}{c^2+t^2}$ and $\phi=(\sqrt{\frac{t}{c^2+t^2}-\frac{t^2}{(c^2+t^2)^2}s}+\sqrt{\frac{c^2}{(c^2+t^2)^2}s})^2$. If the holomorphic curvature $k=-4$, one gets \[K_F=-\frac{2}{\phi}\{s(W_t+W_s)-s^2(t-s)\frac{W_s^2}{U_s}+W\}=-4.\] Plugging $\phi$, $U$ and $W$ into the above equation yields \[C_0(t)\sqrt{f+gs}+C_1(t)\sqrt{s}+\dots+C_7(t)(\sqrt{s})^7=0,\] where $C_0(t)=t^4\sqrt{h}f^2(tf^\prime+f)(tf^\prime-2tf^2-f)$. Hence $C_0(t)=0$, and thus \[tf^\prime-2tf^2-f=0.\] It implies that $f(t)=\frac{t}{c^2-t^2}$ and $\phi=(\sqrt{\frac{t}{c^2-t^2}+\frac{t^2}{(c^2-t^2)^2}s}+\sqrt{\frac{c^2}{(c^2-t^2)^2}s})^2$. \end{proof}
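As a consistency check on the case $k=4$ (the other cases are analogous), one verifies directly that $f(t)=\frac{t}{c^2+t^2}$ solves $tf^{\prime}+2tf^2-f=0$: writing all three terms over the common denominator $(c^2+t^2)^2$,
\[tf^{\prime}=\frac{t(c^2-t^2)}{(c^2+t^2)^2},\qquad 2tf^2=\frac{2t^3}{(c^2+t^2)^2},\qquad f=\frac{t(c^2+t^2)}{(c^2+t^2)^2},\]
and the numerators satisfy $t(c^2-t^2)+2t^3-t(c^2+t^2)=0$.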
\section{Discussion} We presented StereoNet, the first real-time, high-quality end-to-end architecture for passive stereo matching. We started from the insight that a low resolution cost volume contains most of the information needed to generate high-precision disparity maps and to recover thin structures given enough training data. We demonstrated a subpixel precision of $1/30$th of a pixel, surpassing limits published in the literature. Our refinement approach hierarchically recovers high-frequency details using the color input as guide, drawing parallels to a data-driven joint bilateral upsampling operator. The main limitation of our approach is the lack of supervised training data: indeed, we showed that when enough examples are available, our method reaches state-of-the-art results. To mitigate this effect, our future work involves a combination of supervised and self-supervised learning \cite{activestereonet} to augment the training set. \section{Experiments} Here, we evaluate our system on several datasets and demonstrate that we achieve high quality results at a fraction of the computational cost required by the state of the art. \subsection{Datasets and Setup} We evaluated StereoNet quantitatively and qualitatively on three datasets: Scene Flow~\cite{mayer2016large}, KITTI 2012~\cite{geiger2012we} and KITTI 2015~\cite{Menze2015CVPR}. Scene Flow is a large synthetic stereo dataset suitable for deep learning models. The two KITTI datasets, while closer to a real-world setting, are too small for full end-to-end training. We followed previous end-to-end approaches by initially training on Scene Flow and then individually fine-tuning the resulting model on the KITTI datasets~\cite{kendall2017end,pang2017cascade}. Finally, we compare against prominent state-of-the-art methods in terms of both accuracy and runtime to show the viability of our approach in real-time scenarios. Additionally, we performed an ablation study on the Scene Flow dataset using four variants of our model. We evaluated setting the number of downsampling convolutions $K$ (detailed in Section~\ref{sec:costvolume}) to $3$ and $4$. This controls the resolution at which the cost volume is formed. The cost volume filtering is exponentially faster with more aggressive downsampling, but comes at the expense of progressively losing detail around thin structures and small objects. The refinement layers can bring back many of the fine details, but if the signal is completely missing from the cost volume, they are unlikely to recover it. Additionally, we evaluated using $K$ refinement layers to hierarchically recover the details at the different scales versus using a single refinement layer to upsample the cost volume output directly to the desired final resolution. \subsection{Subpixel Precision} The precision of a depth system is usually a crucial variable when choosing the right technology for a given application. A triangulation system with a baseline $b$, a focal length $f$ and a subpixel precision $\delta$ has an error $\epsilon$ which increases quadratically with the distance $Z$: $\epsilon = \frac{\delta Z^2}{bf}$ \cite{Szeliski_book}. Competing technologies such as Time-of-Flight (ToF) do not suffer from this issue, which makes them appealing for long range applications such as room scanning and reconstruction. Despite this, it has been demonstrated that multipath effects in ToF systems can distort geometry even in close-up tasks such as object scanning \cite{hyperdepth}.
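For intuition, the quadratic growth of the error is easy to tabulate. The following sketch evaluates $\epsilon = \frac{\delta Z^2}{bf}$ for a D415-like rig; the focal length of $1000$ pixels is an assumed nominal value (not taken from a datasheet), so the numbers are only indicative:

\begin{verbatim}
# Triangulation error eps = delta * Z^2 / (b * f).
# b = 0.055 m (55 mm, D415-like baseline); f = 1000 px is an
# assumed nominal focal length, so the numbers are indicative only.
b, f = 0.055, 1000.0

def depth_error(delta, z):
    """Expected depth error in meters for subpixel precision delta."""
    return delta * z**2 / (b * f)

for z in (1.0, 2.0, 3.0):
    print(f"Z={z:.0f} m: 0.25 px -> {100*depth_error(0.25, z):.1f} cm, "
          f"0.03 px -> {1000*depth_error(0.03, z):.1f} mm")
\end{verbatim}

At $Z=3$ m this gives roughly $4$ cm for a $0.25$-pixel matcher and about $5$ mm for a $0.03$-pixel matcher, consistent with the figures quoted in this section.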
Long-range precision remains one of the main arguments against stereo systems and in favor of ToF. Here we show that deep architectures are a breakthrough in terms of subpixel precision, and can therefore compete with other technologies not only at short distances but also at long range. Traditional stereo matching methods perform a discrete search followed by a parabolic interpolation to retrieve an accurate disparity. This method usually leads to a subpixel precision of $\sim 0.25$ pixels, which roughly corresponds to a $4.5$ cm error at $3$ m distance for a system with a $55$ mm baseline such as the Intel RealSense D415. \begin{figure}[t] \centering \includegraphics[width=\columnwidth]{images/cost_volume_exp.jpg} \caption{Subpixel precision in stereo matching. We demonstrate that StereoNet achieves a subpixel precision of $0.03$, which is one order of magnitude lower than traditional stereo approaches. The lower bound of traditional approaches was found to be $1/10$th of a pixel under realistic conditions (see \cite{pinggera2014}), which we indicate by the black line. Moreover, our method can run in real time on 720p images.} \label{fig:cost_volume_exp} \end{figure} To assess the precision of our method, we used the evaluation set of Scene Flow and computed the average error only for those pixels that were correctly matched at integer locations. Results are averaged over more than a hundred million pixels and are reported in Figure~\ref{fig:cost_volume_exp}. From this figure, it is important to note that: (1) the proposed method achieves a subpixel precision of $\mathbf{0.03}$, which is one order of magnitude lower than traditional stereo matching approaches such as \cite{bleyer2011patchmatch,fanello17_hashmatch,fanello2017ultrastereo}; (2) the refinement layers perform very similarly irrespective of the resolution of the cost volume; (3) without any refinement, the downsampled cost volume can still achieve a subpixel precision of $0.03$ in the low resolution output; however, the error increases, almost linearly, with the downsampling factor. Note that a subpixel precision of $0.03$ means that the expected error is less than $5$ mm at $3$ m distance from the camera (Intel RealSense D415). This result makes triangulation systems very appealing and comparable with ToF technology, without suffering from multipath effects. \subsection{Quantitative Results} We now evaluate the model on standard benchmarks, demonstrating the effectiveness of the proposed method and the trade-offs between the resolution of the cost volume and the precision obtained. \paragraph{\textbf{SceneFlow.}} Although this data is synthetically generated, the evaluation sequences are very challenging due to the presence of occlusions, thin structures and large disparities. We evaluated our model and report the end-point error (EPE) in Table~\ref{tab:flyingthings3d}. \begin{figure}[t] \centering \includegraphics[width=0.9\columnwidth]{images/results.png} \caption{Qualitative results on the FlyingThings3D test set. The proposed two-stage architecture is able to recover very fine details despite the low resolution at which we form the cost volume. } \label{fig:results} \end{figure} A single, unrefined model, i.e. using only the cost volume output at 1/8 of the resolution, achieves an EPE of $2.48$, which is better than the full model presented in \cite{kendall2017end}, which reaches an EPE of $2.51$.
Notice that our unrefined model has ${\sim}360$k parameters and runs in $12$ ms at the $960 \times 540$ input resolution, whereas \cite{kendall2017end} uses $3.5$ million parameters with a runtime of $950$ ms at the same resolution. Our best, multi-scale architecture achieves a state-of-the-art error of $1.1$, which is also lower than the one reported in very recent methods such as~\cite{pang2017cascade}. Qualitative examples can be found in Figure~\ref{fig:results}. Notice how the method recovers very challenging fine details. \begin{figure}[t] \centering \includegraphics[width=0.9\columnwidth]{images/costvolume_qualitative.jpg} \caption{Cost volume comparisons. A cost volume at 1/16 resolution already contains the information required to produce high quality disparity maps. This is evident in that, after refinement, we recover challenging thin structures and the overall end-point error (EPE) is below one pixel.} \label{fig:cost_volume_qual} \end{figure} One last consideration regards the resolution of the cost volume. On the one hand, we showed that a coarse cost volume already carries all the information needed to retrieve very high subpixel precision, i.e. high disparity resolution. On the other hand, downsampling the image may lead to a loss in spatial resolution; therefore, thin structures cannot be reconstructed if the output of the cost volume is very coarse. Here we demonstrate that a volume at 1/16 of the resolution is powerful enough to recover very challenging small objects. Indeed, in Figure~\ref{fig:cost_volume_qual} we compare the output of the three cost volumes at 1/4, 1/8 and 1/16 resolution, where we also applied the refinement layers. We can observe that the fine structures that are missed in the 1/16 resolution disparity map are correctly recovered by the upsampling strategy we propose. The cost volume at 1/4 is not necessary to achieve compelling results, and this is an important finding for mobile applications. As shown in the previous subsection, even at low resolution the network achieves a subpixel precision of $1/30$th of a pixel. However, we also want to highlight that, to achieve state-of-the-art precision on multiple benchmarks, the cost volume resolution becomes an important factor, as demonstrated in Table~\ref{tab:flyingthings3d}. \begin{table}[t] \centering \begin{tabularx}{\textwidth}{|X|YYYY|} \hline & EPE all & EPE nocc & EPE all, unref & EPE nocc, unref\\ \hline 8x, multi & $\mathbf{1.101}$ & 0.768 & 2.512 & 1.795 \\ 8x, single & 1.532 & 1.058 & 2.486 & 1.784 \\ 16x, multi & 1.525 & 1.140 & 3.764 & 2.912 \\ 16x, single & 1.974 & 1.476 & 3.558 & 2.773 \\ \hline \scriptsize{CG-Net Fast} \cite{kendall2017end} & 7.27 & - & - & - \\ \scriptsize{CG-Net Full} \cite{kendall2017end} & 2.51 & - & - & - \\ CRL \cite{pang2017cascade} & 1.32 & - & - & - \\ \hline \end{tabularx} \caption{Quantitative evaluation on SceneFlow. We achieve state-of-the-art results compared to recent deep learning methods. We compare four variants of our model, which vary in the resolution at which the cost volume is formed (8x vs 16x) and the number of refinement layers (multiple vs single).} \label{tab:flyingthings3d} \end{table} \begin{figure}[t] \centering \includegraphics[width=0.9\linewidth]{images/kitti.jpg} \caption{Qualitative results on KITTI 2012 and KITTI 2015. Notice how our method preserves edges and recovers details compared to the fast method of \cite{sgm-net}.
State-of-the-art methods are one order of magnitude slower than the proposed approach.} \label{fig:kitti12_15} \end{figure} \begin{table}[t] \centering \begin{tabularx}{1.0\textwidth}{|X|YYYYY|} \hline & Out-Noc & Out-All & Avg-Noc & Avg-All & Runtime\\ \hline \scriptsize{StereoNet} & 4.91 & 6.02 & 0.8 & 0.9 & 0.015s\\ \hline \scriptsize{CG-Net} \cite{kendall2017end} & 2.71 & 3.46 & 0.6 & 0.7 & 0.9s \\ \scriptsize{MC-CNN \cite{zbontar2016stereo}} & 3.9 & 5.45 & 0.7 & 0.9 & 67s\\ \scriptsize{SGM-Net \cite{sgm-net}} & 3.6 & 5.15 & 0.7 & 0.9 & 67s\\ \hline \end{tabularx} \caption{Quantitative evaluation on KITTI 2012. For StereoNet we used a model with a downsampling factor of $8$ and $3$ refinement levels. We report the percentage of pixels with an error larger than $2$ pixels, as well as the overall EPE, both in non-occluded regions (Noc) and over all pixels (All).} \label{tab:kitti12} \end{table} \paragraph{\textbf{Kitti.}} KITTI is a prominent stereo benchmark that was captured by driving a car equipped with cameras and a laser scanner~\cite{geiger2012we}. The dataset is very challenging due to its huge variability, reflections, overexposed areas and, more importantly, the lack of a large training set. Despite this, we provide the results on KITTI 2012 in Table~\ref{tab:kitti12}. Our model uses a downsampling factor of $8$ for the cost volume and $3$ refinement steps. Among the top-performing methods, we compare to three significant ones. The current state of the art \cite{kendall2017end} achieves an EPE of $0.6$, but it has a running time of $0.9$ seconds per image and uses a multi-scale cost volume and several 3D deconvolutions. The earlier deep learning-based stereo matching approach of \cite{zbontar2016stereo} takes $67$ seconds per image and has a higher error ($0.9$) than our method, which runs at $0.015$ s per stereo pair. SGM-Net \cite{sgm-net} has an error comparable to ours. Although we do not reach state-of-the-art results, we believe that the produced disparity maps are very compelling, as shown in Figure~\ref{fig:kitti12_15}, bottom. We analyzed the sources of error in our model and found that most of the wrong estimates occur around reflections, which result in a wrong disparity prediction, as well as in occluded regions, which have no correspondence in the other view. These areas cannot be explained by the data, and the problem can then be formulated as an inpainting task, which our model is not trained for. The state of the art \cite{pang2017cascade} uses an hourglass-like architecture in its refinement step, which has been shown to be very effective for inpainting \cite{hourglass-inpainting}. This is certainly a valid solution to handle those invalid areas; however, it requires significant additional computational resources. We believe that the simplicity of the proposed architecture offers important insights and can lead to interesting directions for overcoming the current limitations. Similarly, we evaluated our algorithm on KITTI 2015 and report the results in Table~\ref{tab:kitti15}, where similar considerations can be made. In Figure~\ref{fig:kitti12_15}, top, we show some examples from the test data.
\begin{table}[t] \centering \begin{tabularx}{0.9\textwidth}{|X|YYYY|} \hline & D1-bg & D1-fg & D1-all & Runtime\\ \hline \scriptsize{StereoNet} & 4.30 & 7.45 & 4.83& 0.015s\\ \hline \scriptsize{CRL \cite{pang2017cascade}} & 2.48 & 3.59 & 2.67 & 0.5s \\ \scriptsize{CG-Net Full} \cite{kendall2017end} & 2.21 & 6.16 & 2.87 & 0.9s \\ \scriptsize{MC-CNN \cite{zbontar2016stereo}} & 2.89 & 8.88 & 3.89 & 67s\\ \scriptsize{SGM-Net \cite{sgm-net}} & 2.66 & 8.64 & 3.66 & 67s\\ \hline \end{tabularx} \caption{Quantitative evaluation on KITTI 2015. For StereoNet we used a model with a downsampling factor of $8$ and $3$ refinement levels. We report the percentage of pixels with an error larger than $1$ pixel in background regions (bg), foreground regions (fg), and over all pixels.} \label{tab:kitti15} \end{table} \begin{figure}[t] \centering \includegraphics[width=0.9\columnwidth]{images/analysis.png} \caption{Runtime analysis of StereoNet: breakdown of the running time. Notice how most of the time is spent at the last level of refinement.} \label{fig:breakdown} \end{figure} \subsection{Running Time Analysis} We conclude this section with a breakdown of the running time of our algorithm. Readers interested in real-time applications will find it useful to understand where the bottlenecks are. The current algorithm runs at $60$ fps on an NVIDIA Titan X; Fig.~\ref{fig:breakdown} shows a breakdown of the whole running time. Notice how feature extraction, volume formation and filtering take less than half of the whole computation ($41\%$), while the most time-consuming step is the refinement stage: the last level of refinement, performed at full resolution, uses $38\%$ of the computation. \section{Introduction} Stereo matching is a classical computer vision problem that is concerned with estimating depth from two slightly displaced images. Depth estimation has recently been projected to the center stage with the rising interest in virtual and augmented reality \cite{holoportation}. It is at the heart of many tasks, from 3D reconstruction to localization and tracking \cite{kinectfusion}. Its applications span otherwise disparate research and product areas, including indoor mapping and architecture, autonomous cars, and human body and face tracking. Active depth sensors like the Microsoft Kinect provide high quality depth maps and have not only revolutionized computer vision research \cite{dou16,dou17,holoportation,fanello2013one,taylor17}, but also play an important role in consumer-level applications. These active depth sensors have become very popular in recent years with the release of many other consumer devices, such as the Intel RealSense series, the structured-light sensor on the iPhone X, and time-of-flight cameras such as the Kinect V2. With the rise of Augmented Reality (AR) applications on mobile devices, there is a growing need for algorithms capable of predicting precise depth under a tight computational budget. With the exception of the iPhone X, all smartphones on the market can only rely on single or dual RGB streams. The release of sparse tracking and mapping tools like ARKit and ARCore impressively demonstrates coarse and sparse geometry estimation on mobile devices. However, they lack dense depth estimation and therefore cannot enable exciting AR applications such as occlusion handling or precise interaction of virtual objects with the real world. Depth estimation using a single moving camera, akin to \cite{pradeep2013}, or dual cameras naturally became a requirement from the industry to scale AR to millions of users.
The state of the art in passive depth relies on stereo triangulation between two (rectified) RGB images. This has historically been dominated by CRF-based approaches. These techniques obtain very good results but are computationally slow. Inference in these models amounts to solving a generally NP-hard problem, forcing practitioners in many cases to use solvers whose runtime is in the range of seconds \cite{meanfield} or to resort to approximate solutions \cite{fanello17_hashmatch,fanello2017ultrastereo,patchCollider,sos}. Additionally, these techniques typically suffer in the presence of textureless regions, occlusions, repetitive patterns, thin structures, and reflective surfaces. The field has slowly been transitioning and, since \cite{zbontar2015computing}, has started to use deep features, mostly as unary potentials, to further advance the state of the art. Recently, deep architectures have demonstrated a high level of accuracy at predicting depth from passive stereo data \cite{mayer2016large,ilg2017flownet,kendall2017end,pang2017cascade}. Despite these significant advances, the proposed methods require vast amounts of processing power and memory. For instance, \cite{kendall2017end} have $3.5$ million parameters in their network and reach a throughput of about $0.95$ images per second on $960 \times 540$ images, and \cite{pang2017cascade} takes $0.5$ s to produce a single disparity map on a high-end GPU. In this paper we present StereoNet, a novel deep architecture that generates state-of-the-art 720p depth maps at $60$ Hz on high-end GPUs. Based on our insight that deep architectures are very good at inferring matches at extremely high subpixel precision, we demonstrate that a very low resolution cost volume is sufficient to achieve a depth precision comparable to a traditional stereo matching system that operates at full resolution. To achieve spatial precision, we apply edge-aware filtering stages in a multi-scale manner to deliver a high quality output. In summary, the main contributions of this work are the following: \begin{enumerate} \item We show that the subpixel matching precision of a deep architecture is an order of magnitude higher than that of ``traditional'' stereo approaches. \item We demonstrate that the high subpixel precision of the network allows us to achieve the depth precision of traditional stereo matching with a very low resolution cost volume, resulting in an extremely efficient algorithm. \item We show that previous work that introduced cost volumes in deep architectures was over-parameterized for the task, and that exploiting this significantly helps reduce the runtime and memory footprint of the system at little cost in accuracy. \item We introduce a new hierarchical depth-refinement layer that is capable of performing high-quality, edge-preserving upsampling. \item Finally, we demonstrate that the proposed system reaches compelling results on several benchmarks while being real-time on high-end GPU architectures. \end{enumerate} \section{StereoNet algorithm} \subsection{Preliminaries} Given a pair of input images, we aim to train an end-to-end disparity prediction pipeline. One approach to training such a pipeline is to leverage a generic encoder-decoder network. An encoder distills the input through a series of contracting layers to a bottleneck that captures the details most relevant to the task in training, and the decoder reconstructs the output from the representation captured in the bottleneck layer through a series of expanding layers.
While this approach has been widely successful across various problems, including depth prediction~\cite{mayer2016large,ilg2017flownet,pang2017cascade}, it lacks several qualities we care about in a stereo algorithm. First of all, it does not capture any geometric intuition about the stereo matching problem. Stereo prediction is first and foremost a correspondence matching problem, so we aimed to design an algorithm that can be adapted without retraining to different stereo cameras with varying resolutions and baselines. Secondly, we note that similar approaches are evidently overparameterized for problems where the prediction is a pixel-to-pixel mapping that does not involve any warping of the input, and are thus likely to overfit. \begin{figure}[t] \centering \includegraphics[width=0.9\columnwidth]{images/architecture.png} \caption{Model architecture. A two-stage approach is proposed: first we extract image features at a lower resolution using a Siamese network. We then build a cost volume at that resolution by matching the features along the scanlines, giving us a coarse disparity estimate. We finally refine the results hierarchically to recover small details and thin structures.} \label{fig:architecture} \end{figure} Our approach to stereo matching incorporates a design that leverages the problem structure and classical approaches to tackle it, akin to~\cite{kendall2017end}, while producing edge-preserving output using compact, context-aware pixel-to-pixel refinement networks. An overview of the architecture of our model is illustrated in Figure~\ref{fig:architecture} and detailed in the following sections. \subsection{Coarse Prediction: Cost Volume Filtering} \label{sec:costvolume} Stereo systems in general solve a correspondence problem. The problem classically boils down to forming a disparity map by finding a pixel-to-pixel match between two rectified images along their scanlines. The desire for a smooth and edge-preserving solution led to approaches like cost volume filtering~\cite{hosni2013fast}, which explicitly model the matching problem by forming and processing a 3D volume that jointly solves across all candidate disparities at each pixel. While \cite{hosni2013fast} directly used color values for the matching, we compute a feature representation at each pixel that is used for matching. \subsubsection{Feature Network} The first step of the pipeline finds a meaningful representation of image patches that can be accurately matched in the later stages. We recall that stereo suffers from textureless regions, and traditional methods address this issue by aggregating the cost over large windows. We replicate the same behavior in the network by making sure the features are extracted from a large receptive field. In particular, we use a feature network with shared weights between the two input images (also known as a Siamese network). We first aggressively downsample the input images using $K$ $5\times5$ convolutions with a stride of 2, keeping the number of channels at 32 throughout the downsampling. In our experiments we set $K$ to 3 or 4. We then apply 6 residual blocks~\cite{he2016deep} that employ $3\times3$ convolutions, batch-normalization~\cite{ioffe2015batch}, and leaky ReLU activations ($\alpha=0.2$)~\cite{maas2013rectifier}. Finally, this is processed using a final layer with a $3\times3$ convolution that does not use batch-normalization or activation. The output is a 32-dimensional feature vector at each pixel in the downsampled image.
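A minimal sketch of this feature tower is given below (TensorFlow/Keras is assumed, since our implementation uses TensorFlow; the single-convolution residual block is one plausible reading of the description above, and any layer choice not stated in the text is an assumption, not the released implementation):

\begin{verbatim}
import tensorflow as tf

def feature_tower(K=3):
    """Siamese feature extractor: K strided 5x5 convs, 6 residual
    blocks (3x3 conv, BN, leaky ReLU 0.2), final 3x3 conv."""
    inp = tf.keras.Input(shape=(None, None, 3))
    x = inp
    for _ in range(K):   # aggressive downsampling, 32 channels
        x = tf.keras.layers.Conv2D(32, 5, strides=2, padding='same')(x)
    for _ in range(6):   # residual blocks
        y = tf.keras.layers.Conv2D(32, 3, padding='same')(x)
        y = tf.keras.layers.BatchNormalization()(y)
        y = tf.keras.layers.LeakyReLU(0.2)(y)
        x = tf.keras.layers.Add()([x, y])
    # final conv: no batch-norm, no activation
    x = tf.keras.layers.Conv2D(32, 3, padding='same')(x)
    return tf.keras.Model(inp, x)

tower = feature_tower(K=3)   # shared between left and right images
\end{verbatim}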
This low resolution representation is important for two reasons: (1) it has a large receptive field, which is useful for textureless regions; (2) it keeps the feature vectors compact. \subsubsection{Cost Volume} At this point, we form a cost volume at the coarse resolution by taking the difference between the feature vector of a pixel and the feature vectors of the matching candidates. We noted that asymmetric representations in general performed well, and concatenating the two vectors achieved similar results in our experiments. At this stage, a traditional stereo method would use a winner-takes-all (WTA) approach that picks the disparity with the lowest Euclidean distance between the two feature vectors. Instead, here we let the network learn the right metric by running multiple convolutions followed by non-linearities. In particular, to aggregate context across the spatial domain as well as the disparity domain, we filter the cost volume with four 3D convolutions with a filter size of $3 \times 3 \times 3$, batch-normalization, and leaky ReLU activations. A final $3 \times 3 \times 3$ convolutional layer that does not use batch-normalization or activation is then applied, and the filtering layers produce a 1-dimensional output at each pixel and candidate disparity. For an input image of size $W \times H$ and evaluating a maximum of $D$ candidate disparities, our cost volume is of size $W / 2^K \times H / 2^K \times (D + 1) / 2^K$ for $K$ downsampling layers. In our design of StereoNet we targeted a compact approach with a small memory footprint that can potentially be deployed to mobile platforms. Unlike~\cite{kendall2017end}, who form a feature representation at quarter resolution and aggregate cost volumes across multiple levels, we note that most of the time and compute is spent matching at higher resolutions, while most of the performance gain comes from matching at lower resolutions. We validate this claim in our experiments and show that the performance loss is not significant in light of the speed gain. The reason for this is that the network achieves an order of magnitude higher subpixel precision than traditional stereo matching approaches. Therefore, matching at higher resolutions is not needed.
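The difference-based volume itself is straightforward to form; the sketch below (same TensorFlow assumption as before, with the right feature map shifted along the scanline for each candidate disparity) is one way to do it:

\begin{verbatim}
def cost_volume(feat_l, feat_r, max_disp):
    """Difference cost volume at the coarse resolution.
    feat_l, feat_r: [B, H, W, C] outputs of the shared tower."""
    slices = []
    for d in range(max_disp + 1):
        if d == 0:
            diff = feat_l - feat_r
        else:
            # shift the right features d pixels to the right,
            # zero-padding the left border
            shifted = tf.pad(feat_r[:, :, :-d, :],
                             [[0, 0], [0, 0], [d, 0], [0, 0]])
            diff = feat_l - shifted
        slices.append(diff)
    return tf.stack(slices, axis=1)   # [B, D+1, H, W, C]
\end{verbatim}

The $[B, D{+}1, H, W, C]$ layout feeds directly into the $3\times3\times3$ convolutions described above, which reduce the channel dimension to a single cost per pixel and candidate disparity.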
\subsubsection{Differentiable $\arg\min$} We would typically select the disparity with the minimum cost at each pixel in the filtered cost volume using $\arg\min$. For a pixel $i$ and a cost function over disparity values $C(d)$, the selected disparity value $d_i$ is defined as: \begin{equation} d_i = \arg\min_d C_i(d). \end{equation} \noindent This, however, fails to learn, since $\arg\min$ is a non-differentiable function. We considered two differentiable variants in our approach. The first is soft $\arg\min$, which was originally proposed in~\cite{chapelle2010gradient} and was used in~\cite{kendall2017end}. Effectively, the selected disparity is a softmax-weighted combination of all the disparity values: \begin{equation} d_i = \sum_{d=1}^{D} d \cdot \frac{\exp(-C_i(d))}{\sum_{d'} \exp(-C_i(d'))}. \end{equation} \noindent The second differentiable variant is a probabilistic selection that samples from the softmax distribution over the costs: \begin{equation} d_i = d, \text{ where } d \sim \frac{\exp(-C_{i}(d))}{\sum_{d'} \exp(-C_{i}(d'))}. \end{equation} \noindent Differentiating through the sampling process uses gradient estimation techniques to learn the distribution of disparities by minimizing the expected loss of the stochastic process. While this technique has roots in policy gradient approaches in reinforcement learning~\cite{williams1992simple}, it was recently formulated as stochastic computation graphs in~\cite{schulman2015gradient} and applied to RANSAC-based camera localization in~\cite{brachmann2017dsac}. Additionally, the parallel between the two differentiable variants we discussed is akin to that between soft and hard attention networks~\cite{xu2015show}. Unfortunately, the probabilistic approach significantly underperformed in our experiments, even with various variance reduction techniques~\cite{xu2015show}. We expect that this is because it preserves hard selections. This trait is arguably critical in many applications, but in our model it is superseded by the ability of soft $\arg\min$ to regress subpixel-accurate values. This conclusion is supported by the literature on continuous action spaces in reinforcement learning~\cite{lillicrap2015continuous}. The soft $\arg\min$ selection was consequently faster to converge and easier to optimize, and it is what we chose to use in our experiments.
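In code, soft $\arg\min$ is a few lines; the sketch below assumes the filtered cost volume has shape $[B, D{+}1, H, W]$ (one scalar cost per pixel and candidate disparity):

\begin{verbatim}
def soft_argmin(cost_volume):
    """Subpixel disparity from a filtered cost volume
    of shape [B, D+1, H, W]."""
    # softmax over negated costs: low cost -> high probability
    prob = tf.nn.softmax(-cost_volume, axis=1)
    d = tf.cast(tf.range(tf.shape(cost_volume)[1]), tf.float32)
    d = tf.reshape(d, [1, -1, 1, 1])
    # probability-weighted average of the disparity values
    return tf.reduce_sum(prob * d, axis=1)   # [B, H, W]
\end{verbatim}

Because the output is a weighted average rather than a hard index, gradients flow to every disparity hypothesis, which is what enables the subpixel precision reported in the experiments.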
In our experiments we evaluated hierarchically refining the output with a cascade of the described network, as well as applying a single refinement that upsamples the coarse output to the full resolution in one shot. Figure~\ref{fig:refinement} illustrates the output of the refinement layer at each level of the hierarchy as well as the residuals added at each level to recover the high-frequency details. The behavior of this network is reminiscent of joint bilateral upsampling~\cite{kopf2007joint}, and indeed we believe this network is a learned edge-aware upsampling function that leverages a guide image. \subsection{Loss Function} We train StereoNet in a fully supervised manner using groundtruth-labeled stereo data. We minimize the hierarchical loss function: \begin{equation} L = \sum_k \sum_i \rho\left(d_i^k - \hat{d}_i\right), \end{equation} \noindent where $d_i^k$ is the predicted disparity at pixel $i$ at the $k$-th refinement level, with $k=0$ denoting the output pre-refinement, and $\hat{d}_i$ is the groundtruth disparity at the same pixel. The predicted disparity map is always bilinearly upsampled to match the groundtruth resolution. Finally, $\rho(\cdot)$ is the two-parameter robust function from~\cite{barron2017more} with its parameters set as $\alpha=1$ and $c=2$, approximating a smoothed L1 loss.
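With these parameter choices, the general robust function of~\cite{barron2017more} reduces to a Charbonnier-like penalty; a small sketch of the training loss under this assumption (the closed form below holds for $\alpha=1$ only):

\begin{verbatim}
import torch

def robust_rho(x, c=2.0):
    # Barron's two-parameter robust function evaluated at alpha = 1:
    # rho(x) = sqrt((x/c)^2 + 1) - 1, a smoothed L1 (Charbonnier) penalty.
    return torch.sqrt((x / c) ** 2 + 1.0) - 1.0

def hierarchical_loss(preds, gt):
    # preds: disparity maps, one per refinement level k, each already
    # bilinearly upsampled to the groundtruth resolution.
    return sum(robust_rho(d - gt).sum() for d in preds)
\end{verbatim}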
\subsection{Implementation details} We implemented and trained StereoNet using TensorFlow~\cite{abadi2016tensorflow}. All our experiments were optimized using RMSProp~\cite{hinton2012neural} with an exponentially-decaying learning rate initially set to $1\mathrm{e}{-3}$. Input data is first normalized to the range $[-1, 1]$. Unlike~\cite{kendall2017end}, our smaller model allows us to use a batch size of 1 without cropping the input. Our network needs around $150k$ iterations to reach convergence. We found that training with the left and right disparity maps of an image pair at the same time significantly sped up convergence. On smaller datasets where training from scratch would be futile, we fine-tuned the pre-trained model for an additional $50k$ iterations. \section{Related Work} Depth from stereo has been studied for a long time and we refer the interested reader to \cite{scharstein2002taxonomy,hamzah2016literature} for a survey. Correspondence search for stereo is a challenging problem and has traditionally been divided into global and local approaches. Global approaches formulate a cost function over the image that is traditionally optimized using approaches such as Belief Propagation or Graph Cuts~\cite{besse2014pmbp,felzenszwalb2006efficient,klaus2006segment,kolmogorov2001computing}. Instead, local stereo matching methods (e.g. \cite{bleyer2011patchmatch}) center a support window on a pixel in the reference frame and then displace this window in the second image until the point of highest correlation is found. A major challenge for local stereo matching is to define the optimal size of the support window. On the one hand, the window needs to be large enough to capture a sufficient amount of texture; on the other hand, it needs to be small enough to avoid aggregating wrong disparity values, which can lead to the well-known edge-fattening effect at disparity discontinuities. To avoid this trade-off, adaptive support approaches weigh the influence of each pixel inside the support region based on, e.g., its color similarity to the central pixel. Interestingly, adaptive support-weight approaches were cast as cost-volume filtering in~\cite{hosni2013fast}: a three-dimensional cost volume is constructed by computing the per-pixel matching costs at all possible disparity levels. This cost volume is then filtered with a weighted average filter. The filtering propagates local information in the spatial and depth domains, producing a depth map that preserves edges across object discontinuities. For triangulation-based stereo matching systems, the accuracy of depth is directly linked to the precision with which the corresponding pixel in the other image can be located. Therefore, previous work strives to perform matching with sub-pixel precision. The complexity of most algorithms scales linearly with the number of disparities evaluated; so, while one approach is to build a large cost volume with very fine-grained disparity steps, this is computationally infeasible. Many algorithms therefore start with discrete matching and then refine these matches by fitting a local curve, such as a parabola, to the cost function between the discrete disparity candidates (see e.g. \cite{yang2007,Nehab2005}). Other works are based on continuous optimization strategies \cite{Ranftl2012} or on phase correlation \cite{Sanger88}.
It was shown in \cite{pinggera2014} that under realistic conditions the bound for sub-pixel precision is $1/10$th of a pixel, while the theoretical limit under noise-free conditions was found to be 10 times lower \cite{Delon2007}. We demonstrate that this conventional wisdom does not hold for learning-based approaches, and we can achieve a sub-pixel precision of $1/30$th of a pixel. Recent work has progressed to using end-to-end learning for stereo matching. Various approaches combined a learned patch embedding or matching cost with global optimization approaches like semiglobal matching (SGM) for refinement~\cite{zagoruyko2015learning}. \cite{chen2015deep} learn a multi-scale embedding model followed by an MRF. \cite{zbontar2016stereo,zbontar2015computing} learn to match image patches followed by SGM. \cite{luo2016efficient} learn to match patches using a Siamese feature network and optimize globally with SGM as well. \cite{shaked2017improved} use a multi-stage approach where a highway network architecture is first used to compute the matching costs and then another network is used in postprocessing to aggregate and pool costs. Other works attempted to solve the stereo matching problem end-to-end without postprocessing. \cite{mayer2016large,ilg2017flownet} train end-to-end an encoder-decoder network for disparity and flow estimation, achieving state-of-the-art results on existing and new benchmarks. Other end-to-end approaches used multiple refinement stages that converge to the right disparity hypotheses. \cite{gidaris2017detect} proposed a generic architecture for labeling problems, including depth estimation, that is trained end-to-end to predict and refine the output. \cite{pang2017cascade} proposed a cascaded approach to refine predicted depth iteratively. Iterative refinement approaches, while showing good performance on various benchmarks, tend to require a considerable amount of computational resources. More closely related to our work, \cite{kendall2017end} used the concept of cost-volume filtering but trained both the features and the filters end-to-end, achieving impressive results. DeepStereo~\cite{flynn2016deepstereo} used a plane-sweep volume to synthesize novel views from multi-view stereo input. Contrary to prior work, we are interested in an end-to-end learned stereo pipeline that can run in real time; therefore, we start from a very low-resolution cost volume, which is then upsampled with learned, edge-aware filters.
\section{Introduction} Deep neural networks (DNNs) are complex models with potentially millions of trainable parameters. Due to this intrinsic aspect, it is challenging to interpret them and to understand why a classifier makes a certain decision. However, this understanding is paramount for critical tasks, such as diagnosis assisted by artificial intelligence (AI). Layer-wise relevance propagation\cite{LRP} (LRP) is a technique designed to create heatmaps for deep classifiers. Heatmaps are graphics that explain the model's behavior by making explicit how each part of an input image affects the DNN output. We can create a heatmap for an input image and one class. If positive, the values in the map (i.e. relevance) indicate how strongly the classifier associated the image regions with the class. If negative, they represent areas that decreased the classifier confidence in the class (e.g., regions associated with the other possible classes). LRP is a technique created to improve DNN explainability, and, if bias is present in the figure background and affects the classification process, LRP will reveal it. Background features can divert the focus of a classifier, reducing its ability to interpret the useful information in an image. They can carry information that either conflicts with what we wish to classify, confounding the classifier, or that correlates with the classes, biasing the classifier. If a background bias is also present during the evaluation of DNNs, it artificially improves performance metrics. This scenario favors shortcut learning\cite{ShortcutLearning}, a condition where deep neural networks learn decision rules that perform well on standard benchmarks, but poorly on real-world applications. Image segmentation is the task of semantically partitioning a figure, assigning a category to each of its pixels. The process allows differentiation between objects and identification of the background. Segmentation is a powerful tool to address the problem of background bias. When applied as a preprocessing step prior to classification, it allows the removal of the image background, preventing the classifier from learning from its features. Therefore, for classification problems that benefit from segmentation, a common pipeline has two DNNs. The first DNN segments the important regions of the image. Its output is used to erase the background, creating a segmented image that the second DNN classifies.
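As an illustration, a minimal PyTorch sketch of this conventional two-DNN pipeline follows; the module names and the binarization threshold are placeholders, not a prescribed implementation:

\begin{verbatim}
import torch

def segment_then_classify(x, segmenter, classifier, threshold=0.5):
    # x: batch of images (B, C, H, W). The segmenter outputs one
    # foreground logit per pixel; thresholding yields a binary mask.
    mask = (torch.sigmoid(segmenter(x)) > threshold).float()
    # Erase the background, then classify the segmented image.
    return classifier(x * mask)
\end{verbatim}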
In this work, we propose a drastically different strategy for performing image segmentation before classification. We created a novel DNN architecture that is capable of implicitly segmenting its input. It does not produce a segmentation mask, but it learns to precisely select the image regions considered for the classification process. We call this model the Implicit Segmentation Neural Network, or ISNet. After training with images and segmentation targets, the architecture learns how to classify unsegmented images as if they had previously been segmented by another DNN. During the training procedure, an ISNet comprises a classifier and another structure, which we call the ``LRP block''. The novel structure shares parameters with the classifier, and is linked to it via multiple skip connections. Its job is to perform Layer-wise Relevance Propagation through the classifier, producing heatmaps for all possible classes. Besides proposing a strategy to implement LRP rules as DNN layers (which form the LRP block), the ISNet introduces the concept of relevance segmentation in the heatmaps. That is, the ISNet learns to minimize the classifier's undesired/background relevance, forcing it to focus only on the regions of interest, resulting in implicit image segmentation. A major advantage of the ISNet architecture is that, after the training procedure, the LRP block can be removed or inactivated without altering the classifier's behavior. It will still focus only on the regions of interest. Consequently, the ISNet presents no additional computational cost at run-time relative to a common classifier. In contrast to our proposal, the accuracy of a standard classification neural network will drop significantly if it is trained with segmented images and later receives unsegmented samples at run-time. The ISNet allows a single model to perform the task of two neural networks, a segmenter and a classifier, yielding substantial savings in memory and processing time. This benefit is even more pronounced for models deployed on portable or less capable devices. Furthermore, the architecture is very flexible. Virtually any classification neural network can be converted to an ISNet. In this work, we opted for a Densely Connected Convolutional Network\cite{DenseNet} (DenseNet) as the classifier. Currently, the most popular open databases of COVID-19 chest X-rays contain either no or few COVID-19 negative/control cases\cite{GitCovidSet}\textsuperscript{,}\cite{BrixiaSet}. Due to this limitation, to date most studies have resorted to mixed-source datasets to train DNNs to differentiate between COVID-19 patients, healthy people, and patients with other diseases (e.g. non-COVID-19 pneumonia). In these datasets, different classes come from different databases assembled in distinct hospitals and cities. This raises concerns that the dissimilar sources could contain different biases, which may help DNNs to classify the images according to their source dataset, rather than according to the disease symptoms. One study\cite{critic} accurately determined the source datasets after removing all lung regions from the images, proving the presence of background bias. A review\cite{ShortcutCovid} concluded that, if models are allowed to analyze the whole X-ray or a rectangular bounding box around the lungs, they tend to strongly focus on areas outside of the lungs. Thus, they fail to generalize or to achieve satisfactory performance on external datasets (i.e., datasets with dissimilar sources in relation to the training images). Moreover, the review identified that the problem is a cause of shortcut learning\cite{ShortcutLearning}. A paper\cite{NatureCovidBias} utilized external datasets to evaluate traditional COVID-19 detection DNNs, whose reported results had been highly positive, and demonstrated a strong drop in their performance. For these reasons, researchers have resorted to testing on external datasets to understand the bias and generalization capability of DNNs trained on mixed databases. The approach shows reduced and more realistic performances relative to the standard evaluation methodologies \cite{ShortcutCovid}\textsuperscript{,}\cite{bassi2021covid19}\textsuperscript{,}\cite{covidSegmentation}. A common conclusion of these works is that lung segmentation improves generalization capability, reducing the bias induced by mixed training datasets\cite{bassi2021covid19}\textsuperscript{,}\cite{covidSegmentation}. Therefore, we believe that the ISNet is suited to the task of COVID-19 detection using the usual mixed datasets of chest X-rays.
To test the precision of the ISNet architecture's implicit segmentation, we trained it on a mixed dataset based on one of the largest open COVID-19 chest X-ray databases\cite{BrixiaSet}, with the objective of distinguishing COVID-19 positive cases, normal images, and non-COVID-19 pneumonia. The two diseases have similar symptoms, making their differentiation non-trivial, and both produce signs in chest X-rays. Examples of COVID-19 signs are bilateral radiographic abnormalities, ground-glass opacity, and interstitial abnormalities\cite{clinicalCOVID}. Examples of pneumonia signs are poorly defined small centrilobular nodules, airspace consolidation, and bilateral asymmetric ground-glass opacity\cite{clinicalPneumonia}. We evaluate the optimized DNN in a cross-dataset approach, using images collected from external locations in relation to the training samples. This evaluation methodology can also be described as testing with an out-of-distribution (ood) dataset\cite{ShortcutCovid}, and it allows us to understand whether ignoring the image background increases generalization performance. We present more details about the databases in Section \ref{dataset}. To evaluate the general applicability of the ISNet architecture, and its ability to achieve implicit segmentation in diverse domains, we trained the DNN for a second task: facial attribute estimation. We utilized 30000 samples from the CelebA database\cite{celebA}, which is composed of natural images, in three channels, displaying people in a wide variety of poses, and containing background clutter. Using these in-the-wild images, the ISNet will classify facial features while implicitly segmenting the face region, which can assume many shapes, sizes, and locations in the figures. Facial attribute estimation is a multi-label classification problem, and it is fundamentally different from COVID-19 detection in chest X-rays. Since a paper\cite{celebA} indicated that the classification of more attributes causes the classifiers to naturally focus more on the faces, we opted to classify only three facial attributes (rosy cheeks, high cheekbones and smiling), increasing the implicit segmentation difficulty. Past works did not necessarily search for, or find, an accuracy improvement when using face segmentation before facial attribute estimation. However, they pointed out that background features may influence classifiers\cite{FaceBias}\textsuperscript{,}\cite{FacesSmallSets}. Therefore, segmentation can reduce bias and improve security, which is crucial when the classified attributes will be used for user identification\cite{FaceBias}. In this work, we utilize the facial attribute estimation task to assess the ISNet segmentation capability in evaluating natural images, and in addressing a challenging segmentation problem. For this reason, we do not employ a machine learning pipeline that localizes, aligns, and crops around the face before performing classification (a common approach to increase accuracy\cite{celebA}). As a more extreme test for the ISNet, we also trained it on artificially biased versions of the datasets, for both COVID-19 detection and facial attribute estimation. We added different geometrical shapes to the image backgrounds, according to their classes. A standard classifier trained on this dataset will show increased performance on images containing the shapes, but will lose accuracy for samples without the artificial bias, clearly displaying the effects of shortcut learning.
If the ISNet implicit segmentation is successful, it should produce the same result in both cases, and the analysis of its heatmaps will show no LRP relevance over the artificial shapes. Finally, in all tasks we compared the ISNet containing a DenseNet121 classifier to a standard segmentation-classification pipeline with 2 DNNs, comprising a U-Net\cite{unet} segmenter followed by a DenseNet121 classifier. We also compared our proposal to a standalone DenseNet121, without prior image segmentation. The ISNet PyTorch implementation is publicly available at https://github.com/PedroRASB/ISNet. \section{Results} \subsection{COVID-19 Detection} We analyzed the DNNs on the test dataset for COVID-19 detection, with the three classes (COVID-19, normal and non-COVID-19 pneumonia) collected from distinct localities/datasets in relation to the training database. Table \ref{performance} shows class and average precision, recall, F1-Score and specificity for the trained DNNs (ISNet, U-Net followed by DenseNet121 and standalone DenseNet121). Results are reported in the following format: mean +/-std, [95\% HDI]. Mean refers to the metric's mean value, according to its probability distribution, and std to its standard deviation. 95\% HDI indicates the 95\% highest density interval, defined as an interval containing 95\% of the metric's probability mass. Furthermore, any point inside the interval must have a probability density that is higher than that of any point outside. Table \ref{performance} indicates in bold text the models' balanced accuracy (i.e., average recall) and macro-averaged F1-Score (maF1). The ISNet, alternative segmentation-classification pipeline, and DenseNet121 achieved 0.938, 0.833 and 0.808 AUC, respectively. \begin{table} \centering \caption{Performance metrics for the deep neural networks in COVID-19 detection} \label{performance} \begin{tabular}{|l|l|l|l|l|} \hline Class & Precision & Recall & F1-Score & Specificity \\ \hline \multicolumn{5}{|c|}{\textbf{ISNet}} \\ \hline Normal & \begin{tabular}[c]{@{}l@{}}0.381 +/-0.016,\\{[}0.35,0.413]\end{tabular} & \begin{tabular}[c]{@{}l@{}}0.917 +/-0.014,\\{[}0.889,0.944]\end{tabular} & \begin{tabular}[c]{@{}l@{}}0.538 +/-0.017,\\{[}0.505,0.571]\end{tabular} & \begin{tabular}[c]{@{}l@{}}0.804 +/-0.007,\\{[}0.789,0.818]\end{tabular} \\ \hline Pneumonia & \begin{tabular}[c]{@{}l@{}}0.928 +/-0.009,\\ {[}0.91,0.946]\end{tabular} & \begin{tabular}[c]{@{}l@{}}0.556 +/-0.014,\\{[}0.529,0.583]\end{tabular} & \begin{tabular}[c]{@{}l@{}}0.696 +/-0.012,\\{[}0.673,0.718]\end{tabular} & \begin{tabular}[c]{@{}l@{}}0.97 +/-0.004,\\{[}0.963,0.978]\end{tabular} \\ \hline COVID-19 & \begin{tabular}[c]{@{}l@{}}0.921 +/-0.007,\\{[}0.908,0.934]\end{tabular} & \begin{tabular}[c]{@{}l@{}}0.92 +/-0.007,\\{[}0.906,0.934]\end{tabular} & \begin{tabular}[c]{@{}l@{}}0.92 +/-0.005,\\{[}0.91,0.93]\end{tabular} & \begin{tabular}[c]{@{}l@{}}0.928 +/-0.006,\\{[}0.916,0.94]\end{tabular} \\ \hline \begin{tabular}[c]{@{}l@{}}Macro-average\end{tabular} & \begin{tabular}[c]{@{}l@{}}0.743 +/-0.007,\\{[}0.73,0.756]\end{tabular} & \begin{tabular}[c]{@{}l@{}}\textbf{0.798 +/-0.007},\\{[}0.783,0.811]\end{tabular} & \begin{tabular}[c]{@{}l@{}}\textbf{0.718 +/-0.008}, \\{[}0.702,0.735]\end{tabular} & \begin{tabular}[c]{@{}l@{}}0.901 +/-0.003,\\{[}0.894,0.907]\end{tabular} \\ \hline \multicolumn{5}{|c|}{\textbf{U-Net segmenter followed by DenseNet121 classifier}} \\ \hline Normal & \begin{tabular}[c]{@{}l@{}}0.446 +/-0.019,\\{[}0.408,0.483]\end{tabular} & \begin{tabular}[c]{@{}l@{}}0.796 
+/-0.021,\\{[}0.756,0.837]\end{tabular} & \begin{tabular}[c]{@{}l@{}}0.571 +/-0.018,\\{[}0.535,0.607]\end{tabular} & \begin{tabular}[c]{@{}l@{}}0.869 +/-0.006,\\{[}0.857,0.882]\end{tabular} \\ \hline Pneumonia & \begin{tabular}[c]{@{}l@{}}0.791 +/-0.015,\\{[}0.763,0.82]\end{tabular} & \begin{tabular}[c]{@{}l@{}}0.466 +/-0.014,\\{[}0.439,0.494]\end{tabular} & \begin{tabular}[c]{@{}l@{}}0.586 +/-0.013,\\{[}0.561,0.611]\end{tabular} & \begin{tabular}[c]{@{}l@{}}0.915 +/-0.006,\\{[}0.903,0.928]\end{tabular} \\ \hline COVID-19 & \begin{tabular}[c]{@{}l@{}}0.723 +/-0.011,\\{[}0.702,0.744]\end{tabular} & \begin{tabular}[c]{@{}l@{}}0.838 +/-0.009,\\{[}0.819,0.856]\end{tabular} & \begin{tabular}[c]{@{}l@{}}0.776 +/-0.008,\\{[}0.76,0.792]\end{tabular} & \begin{tabular}[c]{@{}l@{}}0.708 +/-0.011,\\{[}0.686,0.73]\end{tabular} \\ \hline \begin{tabular}[c]{@{}l@{}}Macro-average\end{tabular} & \begin{tabular}[c]{@{}l@{}}0.653 +/-0.009,\\{[}0.636,0.67]\end{tabular} & \begin{tabular}[c]{@{}l@{}}\textbf{0.7 +/-0.009},\\{[}0.683,0.717]\end{tabular} & \begin{tabular}[c]{@{}l@{}}\textbf{0.645 +/-0.009},\\{[}0.626,0.663]\end{tabular} & \begin{tabular}[c]{@{}l@{}}0.831 +/-0.004,\\{[}0.823,0.839]\end{tabular} \\ \hline \multicolumn{5}{|c|}{\textbf{DenseNet121 without segmentation}} \\ \hline Normal & \begin{tabular}[c]{@{}l@{}}0.364 +/-0.02,\\{[}0.324,0.402]\end{tabular} & \begin{tabular}[c]{@{}l@{}}0.57 +/-0.026,\\{[}0.518,0.618]\end{tabular} & \begin{tabular}[c]{@{}l@{}}0.444 +/-0.02,\\{[}0.403,0.482]\end{tabular} & \begin{tabular}[c]{@{}l@{}}0.869 +/-0.006,\\{[}0.856,0.881]\end{tabular} \\ \hline Pneumonia & \begin{tabular}[c]{@{}l@{}}0.827 +/-0.018,\\{[}0.792,0.861]\end{tabular} & \begin{tabular}[c]{@{}l@{}}0.294 +/-0.013,\\{[}0.27,0.32]\end{tabular} & \begin{tabular}[c]{@{}l@{}}0.434 +/-0.015,\\{[}0.405,0.463]\end{tabular} & \begin{tabular}[c]{@{}l@{}}0.958 +/-0.005,\\{[}0.949,0.967]\end{tabular} \\ \hline COVID-19 & \begin{tabular}[c]{@{}l@{}}0.649 +/-0.01,\\{[}0.629,0.67]\end{tabular} & \begin{tabular}[c]{@{}l@{}}0.916 +/-0.007,\\{[}0.902,0.93]\end{tabular} & \begin{tabular}[c]{@{}l@{}}0.76 +/-0.008,\\{[}0.744,0.775]\end{tabular} & \begin{tabular}[c]{@{}l@{}}0.549 +/-0.012,\\{[}0.525,0.573]\end{tabular} \\ \hline \begin{tabular}[c]{@{}l@{}}Macro-average\end{tabular} & \begin{tabular}[c]{@{}l@{}}0.614 +/-0.009,\\{[}0.594,0.631]\end{tabular} & \begin{tabular}[c]{@{}l@{}}\textbf{0.594 +/-0.01},\\{[}0.574,0.612]\end{tabular} & \begin{tabular}[c]{@{}l@{}}\textbf{0.546 +/-0.01},\\{[}0.527,0.565]\end{tabular} & \begin{tabular}[c]{@{}l@{}}0.792 +/-0.004,\\{[}0.784,0.8]\end{tabular} \\ \hline \end{tabular} \end{table} The ISNet obtained the highest results for all macro-averaged test metrics in Table \ref{performance}, followed by the alternative segmentation-classification pipeline, and finally by the model without segmentation. We also observe that the different models' 95\% highest density intervals do not overlap for any average performance measurement. Consequently, the probability of the models having equivalent performances is minute. Relative to the alternative segmentation methodology, the ISNet achieved a 7.3\% increment in macro-averaged F1-Score. In relation to the DenseNet121 without segmentation, the difference rises to 17.2\%. In this work, our segmentation targets were automatically generated by a U-Net, trained for lung segmentation in another study\cite{bassi2021covid19}.
Therefore, we believe that the performances achieved by the ISNet and by the alternative segmentation-classification pipeline could increase even more, given a dataset containing a large number of chest X-rays accompanied by manually segmented lung regions. The reported results may seem worse than those of other studies in the field of COVID-19 detection, which report remarkably high performances, strongly surpassing expert radiologists (e.g., F1-Scores close to 100\%). However, evidence suggests that, currently, such results are obtained when the training and test datasets come from the same distribution/sources (i.e., datasets that are independent and identically distributed, or iid)\cite{ShortcutCovid}. Moreover, studies show that these strong performances may be boosted by bias and shortcut learning, preventing the neural networks from generalizing, or from achieving comparable results in the real world\cite{ShortcutCovid}\textsuperscript{,}\cite{bassi2021covid19}\textsuperscript{,}\cite{NatureCovidBias}. Instead, the performances obtained in this study are in accordance with other works that evaluate their DNNs on external databases (i.e., out-of-distribution or ood datasets, whose image sources are diverse in relation to the ones that generate the training samples)\cite{ShortcutCovid}\textsuperscript{,}\cite{bassi2021covid19}\textsuperscript{,}\cite{NatureCovidBias}. For example, an article\cite{NatureCovidBias} reported an AUC of 0.786 +/-0.025 on an external dataset, when evaluating a DenseNet121 used for COVID-19 detection without lung segmentation. Here, the DenseNet121 obtained 0.808 AUC, which falls within their reported confidence interval. Another paper\cite{bassi2021covid19} evaluates COVID-19 detection and utilizes lung segmentation before classification with a DenseNet201. They achieved a mean maF1 of 0.754, with a 95\% HDI of [0.687,0.82], also evaluating their model on an external dataset. We observe that the ISNet maF1 95\% HDI, [0.702,0.735], fits inside their reported 95\% HDI. We must note that the aforementioned studies use different databases. Thus, caution is required when directly comparing the numerical results. We also trained the ISNet with the artificially biased dataset (altered with polygons representing the X-ray class), and tested it on the external test dataset, either also artificially biased or in its original version. For both versions, the neural network produced exactly the same confusion matrix. The model's results were very similar to those of the ISNet trained on the standard dataset (Table \ref{performance}), presenting a macro-average F1-Score of 0.717 +/-0.01 (and [0.699,0.736] 95\% HDI). If we perform the same procedure using a DenseNet121 without segmentation, it achieves 0.775 +/-0.008 ([0.768,0.796] 95\% HDI) maF1 on the biased test dataset, but the score drops to 0.434 +/-0.01 ([0.414,0.454] 95\% HDI) for the original test database. Consequently, we see that the polygons in the background strongly divert the standard classifier's focus, causing clear shortcut learning. However, they have no effect on the ISNet, demonstrating the effectiveness of its implicit segmentation. Moreover, the ISNet is resistant to image flipping, rotations, and translations. These operations, used as data augmentation, did not negatively affect the training procedure, nor did they worsen the validation error during preliminary tests with an augmented validation dataset.
To illustrate that it is not possible to simply train a model with segmented images and then use it without segmentation at run-time, we tested the alternative segmentation methodology after removing the U-Net from its pipeline. That is, we simulated a DenseNet121 trained on segmented images and used to classify unsegmented ones. As expected, this resulted in a dramatic performance drop: maF1 was reduced from 0.645 +/-0.009 to 0.217 +/-0.003 (changing its 95\% HDI from [0.626,0.663] to [0.211,0.224]). The test shows the advantage of implicit segmentation, as it eliminates the need for a segmentation model at run-time. As a final note, the classification of chest X-rays as COVID-19, normal or pneumonia constitutes a multi-class, single-label problem. To find interval estimates for the classification performance metrics, we employed Bayesian estimation. We utilized a Bayesian model\cite{bayesianEstimator} that takes a confusion matrix (displayed in Table \ref{confusion}) as input, and estimates the posterior probability distribution of metrics like class and average precision, recall and F1-Score. The model was expanded in a subsequent study\cite{bassi2021covid19}, to also estimate class and average specificity. For the posteriors' estimation we used Markov chain Monte Carlo (MCMC), implemented with the Python library PyMC3\cite{pymc}. We employed the No-U-Turn Sampler\cite{NUTS}, using 4 chains and 110000 samples, with the first 10000 used for tuning. The estimated distributions allow us to report the means, standard deviations, and 95\% highest density intervals of the performance metrics. The estimation Monte Carlo error is below $10^{-4}$ for all scores. We utilized a macro-averaging and pairwise approach to calculate the area under the ROC curve (AUC), a technique developed for multi-class single-label problems\cite{MulticlassAUC}. We do not present interval estimates for this metric, because there is no established methodology to calculate them (its authors\cite{MulticlassAUC} suggest bootstrapping, but this would not be feasible with the deep models and cross-dataset evaluation used in this study).
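A minimal PyMC3 sketch of this kind of estimation follows; the row-wise Dirichlet-multinomial parameterization and the variable names are our illustrative assumptions, not necessarily the exact model of~\cite{bayesianEstimator}:

\begin{verbatim}
import numpy as np
import pymc3 as pm

# ISNet confusion matrix (true x predicted), as reported in this paper.
cm = np.array([[341, 19, 10],
               [466, 721, 108],
               [85, 35, 1395]])

with pm.Model():
    for i in range(3):
        # Per-class distribution over predicted labels, flat prior.
        p = pm.Dirichlet(f"p_{i}", a=np.ones(3))
        pm.Multinomial(f"row_{i}", n=cm[i].sum(), p=p, observed=cm[i])
        pm.Deterministic(f"recall_{i}", p[i])  # diagonal mass = recall
    # 4 chains, 10000 tuning steps + 100000 kept draws (NUTS by default).
    trace = pm.sample(draws=100000, tune=10000, chains=4)
# 95% HDIs can then be read with, e.g., arviz.hdi(trace, hdi_prob=0.95).
\end{verbatim}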
\begin{table}[h] \centering \caption{Confusion matrices for the deep neural networks in COVID-19 detection} \label{confusion} \begin{tabular}{|llll|} \hline \multicolumn{1}{|l|}{\multirow{2}{*}{True Class}} & \multicolumn{3}{l|}{Predicted Class} \\ \cline{2-4} \multicolumn{1}{|l|}{} & \multicolumn{1}{l|}{Normal} & \multicolumn{1}{l|}{Pneumonia} & COVID-19 \\ \hline \multicolumn{4}{|c|}{\textbf{ISNet}} \\ \hline \multicolumn{1}{|l|}{Normal} & \multicolumn{1}{l|}{341} & \multicolumn{1}{l|}{19} & 10 \\ \hline \multicolumn{1}{|l|}{Pneumonia} & \multicolumn{1}{l|}{466} & \multicolumn{1}{l|}{721} & 108 \\ \hline \multicolumn{1}{|l|}{COVID-19} & \multicolumn{1}{l|}{85} & \multicolumn{1}{l|}{35} & 1395 \\ \hline \multicolumn{4}{|c|}{\textbf{U-Net + DenseNet121}} \\ \hline \multicolumn{1}{|l|}{Normal} & \multicolumn{1}{l|}{296} & \multicolumn{1}{l|}{9} & 65 \\ \hline \multicolumn{1}{|l|}{Pneumonia} & \multicolumn{1}{l|}{271} & \multicolumn{1}{l|}{604} & 420 \\ \hline \multicolumn{1}{|l|}{COVID-19} & \multicolumn{1}{l|}{95} & \multicolumn{1}{l|}{149} & 1271 \\ \hline \multicolumn{4}{|c|}{\textbf{DenseNet121}} \\ \hline \multicolumn{1}{|l|}{Normal} & \multicolumn{1}{l|}{211} & \multicolumn{1}{l|}{16} & 143 \\ \hline \multicolumn{1}{|l|}{Pneumonia} & \multicolumn{1}{l|}{306} & \multicolumn{1}{l|}{381} & 608 \\ \hline \multicolumn{1}{|l|}{COVID-19} & \multicolumn{1}{l|}{63} & \multicolumn{1}{l|}{62} & 1390 \\ \hline \end{tabular} \end{table} \subsection{Facial Attribute Estimation} Table \ref{FacesPerformance} shows the test performance of the three DNNs in the facial attribute estimation task, analyzing three attributes (rosy cheeks, high cheekbones and smiling). The table cells report the metrics' mean and error, considering 95\% confidence. The ISNet, alternative segmentation-classification pipeline, and DenseNet121 achieved 0.95 +/-0.008, 0.954 +/-0.007 and 0.952 +/-0.008 macro-average AUC, respectively. We observe that, in this task, the three models have similar performances, presenting strong overlap in the 95\% confidence intervals for every average score in Table \ref{FacesPerformance}, and for AUC. 
\begin{table}[!h] \centering \caption{Performance metrics for the deep neural networks in facial attribute estimation} \begin{tabular}{|llll|} \hline \multicolumn{1}{|l|}{Class} & \multicolumn{1}{l|}{Precision} & \multicolumn{1}{l|}{Recall} & F1-Score \\ \hline \multicolumn{4}{|c|}{\textbf{ISNet}} \\ \hline \multicolumn{1}{|l|}{Rosy Cheeks} & \multicolumn{1}{l|}{0.647 +/-0.05} & \multicolumn{1}{l|}{0.686 +/-0.048} & 0.666 +/-0.049 \\ \hline \multicolumn{1}{|l|}{High Cheekbones} & \multicolumn{1}{l|}{0.876 +/-0.018} & \multicolumn{1}{l|}{0.81 +/-0.022} & 0.842 +/-0.02 \\ \hline \multicolumn{1}{|l|}{Smiling} & \multicolumn{1}{l|}{0.937 +/-0.013} & \multicolumn{1}{l|}{0.892 +/-0.017} & 0.914 +/-0.015 \\ \hline \multicolumn{1}{|l|}{Mean} & \multicolumn{1}{l|}{0.82 +/-0.027} & \multicolumn{1}{l|}{\textbf{0.796 +/-0.029}} & \textbf{0.807 +/-0.028} \\ \hline \multicolumn{4}{|c|}{\textbf{U-Net segmenter followed by DenseNet121 classifier}} \\ \hline \multicolumn{1}{|l|}{Rosy Cheeks} & \multicolumn{1}{l|}{0.615 +/-0.048} & \multicolumn{1}{l|}{0.707 +/-0.045} & 0.658 +/-0.047 \\ \hline \multicolumn{1}{|l|}{High Cheekbones} & \multicolumn{1}{l|}{0.863 +/-0.018} & \multicolumn{1}{l|}{0.843 +/-0.019} & 0.853 +/-0.019 \\ \hline \multicolumn{1}{|l|}{Smiling} & \multicolumn{1}{l|}{0.905 +/-0.016} & \multicolumn{1}{l|}{0.912 +/-0.015} & 0.908 +/-0.016 \\ \hline \multicolumn{1}{|l|}{Mean} & \multicolumn{1}{l|}{0.794 +/-0.027} & \multicolumn{1}{l|}{\textbf{0.821 +/-0.026}} & \textbf{0.806 +/-0.027} \\ \hline \multicolumn{4}{|c|}{\textbf{DenseNet121 without segmentation}} \\ \hline \multicolumn{1}{|l|}{Rosy Cheeks} & \multicolumn{1}{l|}{0.624 +/-0.05} & \multicolumn{1}{l|}{0.671 +/-0.048} & 0.647 +/-0.049 \\ \hline \multicolumn{1}{|l|}{High Cheekbones} & \multicolumn{1}{l|}{0.876 +/-0.018} & \multicolumn{1}{l|}{0.822 +/-0.021} & 0.848 +/-0.02 \\ \hline \multicolumn{1}{|l|}{Smiling} & \multicolumn{1}{l|}{0.926 +/-0.014} & \multicolumn{1}{l|}{0.897 +/-0.016} & 0.911 +/-0.015 \\ \hline \multicolumn{1}{|l|}{Mean} & \multicolumn{1}{l|}{0.809 +/-0.027} & \multicolumn{1}{l|}{\textbf{0.797 +/-0.028}} & \textbf{0.802 +/-0.028} \\ \hline \end{tabular} \label{FacesPerformance} \end{table} Training the ISNet on the artificially biased version of the CelebA\cite{celebA} dataset yielded the same outcome as in the COVID-19 detection task: the proposed model completely ignored the polygons added to the background. The new ISNet achieved the same macro-average F1-Score (0.803 +/-0.027) and balanced accuracy (0.815 +/-0.027) for the original test dataset and for its biased version. Unsurprisingly, a common DenseNet121 trained under the same circumstances did not: it generated 0.974 +/-0.012 maF1 on the biased database, and 0.641 +/-0.054 on the original. We again observe that the strong background bias distracts common classifiers, and causes clear shortcut learning. However, it does not affect the ISNet, due to the successful implicit segmentation. Therefore, we see that the ISNet represents a security improvement for facial attribute estimation. Furthermore, the ISNet ignores undesired features even when analyzing natural images, with cluttered backgrounds and a wide variety of segmentation mask shapes, locations, and sizes. In facial attribute estimation, removing the U-Net before testing the alternative segmentation methodology causes a devastating effect, as it did in COVID-19 detection: it reduced maF1 from 0.806 +/-0.027 to 0.543 +/-0.191. Facial attribute estimation is a multi-class, multi-label classification problem. Therefore, the previously utilized Bayesian model\cite{bayesianEstimator} is not adequate for it. To create the interval estimates in Table \ref{FacesPerformance}, we employ the Wilson score interval. Since we are working with a multi-label task, we can calculate the AUC 95\% confidence interval with an established non-parametric approach\cite{DeLongAUC}.
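For reference, a small sketch of the Wilson score interval used above, for a binomial proportion (e.g., a per-class recall computed over $n$ test samples); the critical value $z=1.96$ corresponds to 95\% confidence:

\begin{verbatim}
import math

def wilson_interval(successes, n, z=1.96):
    # Wilson score interval for a binomial proportion.
    p = successes / n
    center = (p + z * z / (2 * n)) / (1 + z * z / n)
    half = (z / (1 + z * z / n)) * math.sqrt(
        p * (1 - p) / n + z * z / (4 * n * n))
    return center - half, center + half
\end{verbatim}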
\subsection{Speed and Efficiency Analysis} All training procedures were executed on an NVIDIA RTX 3080 graphics processing unit (GPU) using mixed precision. In facial attribute estimation, an epoch considered 24183 training images and 2993 validation samples, loaded in mini-batches of 10 images. An epoch took about 1300 s for the ISNet, 320 s for the alternative segmentation model, and 240 s for the standalone DenseNet121. It is worth noting that the alternative pipeline also required training a U-Net, which needed 400 epochs of about 180 s. Considering this additional training procedure, the ISNet training time was about 82\% longer than the alternative segmentation-classification pipeline's. During training, the ISNet is slower than a traditional pipeline of segmentation and classification. The model's biggest limitation may be the need to generate one heatmap per class. Treating the different heatmap calculations as the propagation of different batch elements allows parallelism, but training time and memory consumption still increase with more classes. The worst-case scenario for this problem occurs if memory is not sufficient for the batch approach. In this case, the heatmaps need to be created in series, causing training time to increase linearly with the number of classes. Therefore, more efficient implementations of the LRP block are a promising path for future developments. At run-time, the ISNet shows clear benefits, because the LRP block can be removed from the model, leaving only the classifier. With an NVIDIA RTX 3080 GPU employing mixed precision and using mini-batches of 10 images, the standard pipeline of a U-Net followed by a DenseNet121 classifies an average of 207 samples per second, while the DenseNet121-based ISNet classifies an average of 353. Utilizing the same GPU and mini-batch size, but without mixed precision, the alternative pipeline classifies 143 samples per second, and the ISNet, 298. Therefore, the ISNet is about 70\% to 108\% faster in this configuration. However, there is an even greater advantage regarding the model's size: the ISNet has about 8M parameters (the same as the DenseNet121), while the combined U-Net and DenseNet121 have 39M. Thus, the model size is reduced by a factor of almost 5. Naturally, the smaller the classifier model in relation to the segmentation network, the stronger the performance benefit provided by the ISNet architecture. \subsection{Heatmap Analysis} Besides being necessary for the calculation of the heatmap loss, the heatmaps created by our LRP block can be analyzed, improving the explainability of our models. Figure \ref{maps} presents test images and the associated heatmaps for the two classification problems, according to the three implemented DNNs. For better visualization, we normalized the maps presented in this section, dividing them by their standard deviations. In the X-ray heatmaps, red (positive relevance) indicates areas more associated with COVID-19, and blue (negative relevance) indicates areas associated with normal or non-COVID-19 pneumonia. In facial attribute estimation, red and blue colors indicate positive and negative relevance for the attribute rosy cheeks.
In every case, the whiter/brighter a heatmap region, the less the classifier focused on it. We observe that both the ISNet and the alternative segmentation model (U-Net followed by DenseNet121) successfully kept the relevance inside the key areas (lungs/faces), with little to no background attention. The same is not true for the model without segmentation. \begin{figure}[!h] \includegraphics[width=0.9\textwidth]{ISNetHeatmaps.png} \centering \caption{Heatmaps for the three deep neural networks. Red colors indicate association with the class COVID-19 (X-rays) or rosy cheeks (photographs)} \label{maps} \end{figure} The first two rows of Figure \ref{maps} show COVID-19 positive X-rays, correctly classified. The third row has an image from a healthy patient, misclassified only by the model without segmentation, which predicted COVID-19. In this image, we observe that the DenseNet121 strongly focuses on text present in the X-ray's upper-right region, and associates it with COVID-19. Interestingly, similar marks were common in the COVID-19 images that we used for training, which may explain the association. In the other two X-rays' no-segmentation heatmaps, we see background attention on, but not limited to, the markings outside of the lungs (upper-left corners). This problem is not observed in the heatmaps of the ISNet or of the alternative segmentation-classification pipeline. If the model without segmentation misclassified many X-rays as COVID-19 due to text and markings in the images, its COVID-19 class specificity should be smaller than the other models'. Indeed, in Table \ref{performance} we observe a strong drop in COVID-19 specificity when segmentation is not employed (e.g., we have 0.928 +/-0.006 for the ISNet, and only 0.549 +/-0.012 for the DenseNet121). The two photographs in Figure \ref{maps} were correctly classified by the three models. Both are labeled as containing the three considered attributes. In the facial attribute estimation task, we see that the common classifier pays some attention to body features outside of the face, and to the remaining background. However, most of the model's focus is on the face. This observation, and the weak impact of segmentation on the task's performance scores (see Table \ref{FacesPerformance}), may indicate that background bias does not produce a strong effect on model performance in facial attribute estimation using the CelebA\cite{celebA} database. The two photographs in Figure \ref{maps} exemplify the wide diversity of segmentation mask sizes in the CelebA dataset. We observe that, classifying both the small and the large face, the ISNet precisely kept the model's focus inside the region of interest. Comparing the two models with segmentation, the attention of the ISNet seems to be more concentrated, while the alternative model appears to have its relevance more distributed over the lungs or faces. Especially in COVID-19 detection, there is more relevance along the lungs' borders in the alternative segmentation model's heatmaps, possibly indicating a source of overfitting. The model may have learned to consider the shape of the lungs' borders/segmentation masks when distinguishing between the classes in the training dataset. Figure \ref{triangle} shows heatmaps for the artificially biased test datasets, considering the ISNet and the DenseNet121 (without segmentation), both trained with artificially biased databases. Again, red colors indicate positive relevance for COVID-19 or rosy cheeks. The images were correctly classified.
A triangle indicating the COVID-19 class is in the X-ray's top-left corner. The photograph has positive labels for the three attributes. Thus, it has three geometrical shapes: a square indicates rosy cheeks, a triangle indicates smiling, and a circle represents high cheekbones. For the DNN without segmentation, the triangle and square are clearly associated with COVID-19 and rosy cheeks, respectively. They are very pronounced in the heatmaps, evidencing that they strongly influenced classification. This scenario explains the classifiers' strong performances on the biased databases. In the photograph's no-segmentation heatmap, the square is the most visible shape because it indicates rosy cheeks, the class chosen for relevance propagation in this example. The ISNet shows no relevance over the geometrical shapes, indicating the success of its implicit segmentation in both applications. Accordingly, the artificial background bias did not affect the model's classification results. \begin{figure}[!h] \includegraphics[width=0.7\textwidth]{BiasedMaps.png} \centering \caption{Heatmaps for a positive COVID-19 X-ray and a photograph, both images extracted from the artificially biased test datasets. The geometrical shapes indicate the classes. Red colors indicate association with the class COVID-19 (X-ray) or rosy cheeks (photograph)} \label{triangle} \end{figure} \section{Discussion} It is common to think about DNNs as black boxes. This is because it is challenging to interpret powerful models with millions of trainable parameters. Not truly understanding how a classifier makes a decision is a matter of concern, particularly when dealing with crucial choices, as in the field of AI-assisted diagnosis, or in applications in which security is essential. Layer-wise Relevance Propagation shows us how each region inside an image affects the classifier's choice, producing a relevance heatmap. The ISNet goes a step further and allows us to explicitly segment the relevance, choosing where the classifier shall focus and which image regions it must ignore, resulting in implicit image segmentation. The ISNet architecture is flexible and easily applicable to any type of classification neural network through which Layer-wise Relevance Propagation is possible. A sequence of two DNNs (a segmenter and a classifier) is commonly used to solve tasks that can benefit from image segmentation as a preprocessing stage before classification. The ISNet introduces implicit segmentation, eliminating the need for a dedicated DNN segmenter without adding any new parameters or computational cost to the classifier at run-time. By replacing a standard pipeline (a U-Net followed by a DenseNet121) with an ISNet based on the same DenseNet121 classifier, we obtain a model that is about 70\% to 108\% faster at run-time, and has almost 80\% fewer parameters. This benefit would be even more pronounced for smaller classification neural networks. The limitation of the ISNet is its training time. For the DenseNet121-based model, the training time was 82\% longer than the standard segmentation-classification pipeline's. The proposed model creates heatmaps for different classes in parallel, but its training time still increases with the number of classes in the classification problem. More efficient implementations may reduce this limitation. However, in its current formulation, the ISNet exchanges training time for run-time performance.
This trade-off may be very profitable, given that DNNs can be trained on powerful computers and later deployed on less expensive or portable devices. Our results show that the ISNet's superior run-time performance does not come at the price of accuracy. Indeed, the DenseNet121-based ISNet matched the DenseNet121 preceded by a U-Net, and a standalone DenseNet121, in facial attribute estimation. In COVID-19 detection, it surpassed both alternative DNNs. The most popular datasets used to classify COVID-19 are mixed, with different classes coming from dissimilar sources. Therefore, lung segmentation can prevent the classifier from learning bias in the image background, a danger that has been demonstrated by other studies\cite{bassiCovid}\textsuperscript{,}\cite{critic}\textsuperscript{,}\cite{NatureCovidBias}\textsuperscript{,}\cite{ShortcutCovid}. In this work, we trained DNNs to classify chest X-rays as COVID-19, non-COVID-19 pneumonia, or healthy. Trained on a mixed database and evaluated on an external dataset (with distinct image sources, allowing a better measurement of DNN generalization capacity), the ISNet achieved 0.798 +/-0.007 mean balanced accuracy and 0.718 +/-0.008 macro-average F1-Score, while the standard segmentation methodology achieved 0.7 +/-0.009 and 0.645 +/-0.009, and the model without segmentation reached 0.594 +/-0.01 and 0.546 +/-0.01. The ISNet results for the task of COVID-19 detection are promising, and in accordance with other studies that employed lung segmentation and evaluated their DNNs on external databases, a technique that is required to reduce the effects of background bias and shortcut learning on the reported performances\cite{ShortcutCovid}\textsuperscript{,}\cite{bassi2021covid19}\textsuperscript{,}\cite{NatureCovidBias}. However, clinical tests are needed to ensure adequate real-world performance. In both tasks, the ISNet's implicit segmentation ability allowed it to ignore even purposefully added bias in the image background, and its heatmaps reveal a precise segmentation of relevance, according to the lungs or the faces. Thus, we observe that the ISNet was successful when analyzing biomedical and natural images, the latter containing both background clutter and a wide variety of segmentation target configurations. The accurate implicit segmentation achieved in such diverse applications highlights the general applicability of the new architecture. The use of an LRP-based architecture during training naturally creates a DNN that combines a contracting (classifier) and an expanding (LRP block) path, and links them via multiple skip connections. This structure is capable of combining context information, which is extracted by the classifier's later layers, with the high resolution found in its earlier feature maps. Therefore, the ISNet can use segmentation masks to learn how to precisely control the classifier's focus. The resulting run-time model efficiently uses the potential and flexibility of a deep neural network. The proposed architecture is a way to reduce bias and to increase confidence in the results achieved by a DNN, without any additional computational cost at run-time. Therefore, we believe that the ISNet is useful for the task of COVID-19 detection, for other mixed-dataset scenarios, and for improving security in facial attribute estimation. More generally, we believe that the ISNet can also be advantageous for other problems that would benefit from image segmentation prior to classification.
\section{Methods} \subsection{Layer-wise Relevance Propagation} \label{LRP} Deep neural networks are not easy to interpret. This is because they are complex and nonlinear structures with millions of trainable parameters. Layer-wise relevance propagation\cite{LRP} (LRP) is an explanation technique tailored for deep models. It was created to solve the challenging task of providing heatmaps to interpret DNNs. Heatmaps are graphics that show how a neural network distributes its attention in the input space. For each class, a heatmap explains if an image region has a positive or negative, strong or weak influence (relevance) on the classifier confidence for that class. LRP is based on the almost-conservative propagation of a quantity called relevance through the DNN layers, starting from one of the network output neurons and ending at the input layer, where the heatmap is produced. The meaning of the relevance in the heatmap is determined by the choice of the output neuron where the propagation starts. Positive values indicate that an input image pixel was associated with the class predicted by the chosen output neuron, while negative values indicate areas that reduce the classifier confidence in the class (e.g., regions that the classifier related to other classes in a multi-class, single-label problem). Furthermore, high (absolute) values of relevance show input features that were important for the classifier decision, i.e. the heatmap makes the classifier's attention explicit. Previous studies used LRP to understand which X-ray features were important for a DNN classifying COVID-19 patients, pneumonia patients, and healthy people\cite{bassi2021covid19}\textsuperscript{,}\cite{FirstPaperCovid}. One study used lung segmentation as a preprocessing step and found a strong correlation between lung areas with high LRP relevance for the COVID-19 class and regions where radiologists identified severe COVID-19 symptoms\cite{bassi2021covid19}. Both studies used training datasets from mixed sources and observed a concerning quantity of relevance outside of the lungs when segmentation was not used. In particular, LRP revealed that words exclusively present in the COVID-19 class strongly attracted the classifier's attention, which associated them with COVID-19. Figure \ref{seduto} is one example, containing the word ``SEDUTO'' and the letters ``DX'' in its upper corners. In the corresponding heatmap, the words' red coloration indicates their high relevance for the COVID-19 class. \begin{figure}[h] \includegraphics[width=0.5\textwidth]{Seduto.png} \centering \caption{COVID-19 positive chest X-ray and respective LRP heatmap, where red regions show positive relevance for the COVID-19 class and blue areas show negative relevance (image parts associated with other classes, i.e. normal or pneumonia). Figure extracted from~\cite{bassi2021covid19}} \label{seduto} \end{figure} LRP starts from the output of the neural network. After choosing one output neuron (according to its predicted class), we can define its relevance as equal to its output value (prior to nonlinear activation), and set the relevance of all other last-layer neurons to zero. Then, LRP uses different rules to propagate the relevance through each DNN layer, one at a time, until it reaches the model's input. The choice of the set of rules influences the heatmap's interpretability and the stability of the relevance propagation\cite{LRPBook}. Here, we will briefly explain the rules and procedures that were important for the creation of the ISNet.
The most basic rule is called LRP-0. We define the k-th output of a fully-connected layer ($z_{k}$), before activation, as: \begin{equation} z_{k}=b_{k}+\sum_{j} w_{jk}a_{j} \end{equation} where $w_{jk}$ represents the layer's weight connecting its input $j$ to its output $k$, $b_{k}$ the bias for output $k$, and $a_{j}$ the value of its input $j$. LRP-0 propagates the relevance from the layer output, $R_{k}$, to its input, $R_{j}$, according to the following equation\cite{LRPBook}: \begin{equation} R_{j}=\sum_{k}\frac{w_{jk}a_{j}}{z_{k}}R_{k} \end{equation} Analyzing the fraction above, we see that LRP-0 redistributes the relevance from the layer's k-th output ($R_{k}$) to its inputs ($j$) according to how much they contributed to the layer's k-th activation ($z_{k}$). A second rule, LRP-$\epsilon$, changes LRP-0 to improve the stability of the relevance propagation, adding a small constant, $\epsilon$, to $z_{k}$. With $sign(\cdot)$ denoting a function that evaluates to 1 for positive or zero arguments, and to -1 otherwise, LRP-$\epsilon$ is defined as: \begin{equation} R_{j}=\sum_{k}\frac{w_{jk}a_{j}}{z_{k}+sign(z_{k})\epsilon}R_{k} \end{equation} LRP is a scalable technique that can be efficiently implemented. A four-step procedure exists to rapidly execute the aforementioned LRP rules in a fully connected or convolutional layer with a ReLU activation function\cite{LRPBook} (see Section \ref{layers}). It is possible to adopt a different rule for the DNN input layer, taking into account the nature of the input space. For images, a rule called Z$^{B}$\cite{LRPBook} considers the maximum and minimum pixel values allowed in the figures. Average and sum pooling, as linear operations, can be treated similarly to a convolutional layer. We can either handle max pooling as sum pooling, or adopt a winner-takes-all strategy, propagating all of the layer's output relevance to the inputs that were actually selected by the max pooling operation. Finally, batch normalization layers can be merged with neighboring convolutional or fully connected layers, defining a single equivalent layer, through which we propagate the relevance \cite{BNLRP}. With rules defined for the most common DNN layers, LRP constitutes a very flexible technique, which is scalable and applicable to virtually any neural network architecture. For example, for the Keras Python library, implementations for many popular deep learning models are available in~\cite{innvestigate}.
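The four-step procedure mentioned above can be expressed compactly with automatic differentiation; below is a minimal PyTorch sketch of LRP-$\epsilon$ for a single fully connected or convolutional layer with ReLU activation (the $\epsilon$ value is illustrative):

\begin{verbatim}
import torch

def lrp_epsilon(layer, a, R, eps=1e-2):
    # Propagates relevance R from the layer's output back to its
    # input a, following the four-step LRP-epsilon procedure.
    a = a.clone().detach().requires_grad_(True)
    z = layer(a)                                # step 1: forward pass
    sgn = torch.where(z >= 0, torch.ones_like(z), -torch.ones_like(z))
    z = z + eps * sgn                           # epsilon stabilization
    s = (R / z).detach()                        # step 2: element-wise ratio
    (z * s).sum().backward()                    # step 3: backward pass
    return a * a.grad                           # step 4: input relevance
\end{verbatim}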
Moreover, if a classification architecture is chosen as the ISNet base, it will exactly define the run-time model. We only alter the original architecture during the training stage. Layer-wise Relevance Propagation was created to improve DNN interpretability\cite{LRP}. With the ISNet, we introduce an additional function for it by using LRP to explicitly control a DNN's attention. LRP heatmaps show the relevance of each input feature (e.g. pixels) for the DNN classification. During training, we therefore created a method to penalize undesired attention in the maps, forcing the classifier to minimize it and thus to consider only the relevant image regions. Using segmentation masks to define the important areas, the ISNet segments LRP relevance, minimizing the focus on the image background and thus performing implicit image segmentation. To minimize undesired LRP relevance (both positive and negative) during training, we propose a multitask loss function, comprising two terms: the traditional classification loss (which compares the supervised labels with the DNN classification output, e.g. cross-entropy) and a new loss term (which we call 'heatmap loss') that contrasts the classifier's LRP heatmaps with segmentation targets. We precisely define the ISNet loss function in Section \ref{loss}. During training, if a gold standard of segmentation targets is not available for the entire dataset, a pretrained segmenter (e.g. U-Net) can create the segmentation labels. The training-time ISNet is defined by two modules: a classifier (e.g. DenseNet\cite{DenseNet}, ResNet\cite{ResNet}, VGG\cite{VGG}) and an LRP block. The LRP block is a DNN created as a mirror image of the classifier. Its layers implement the LRP propagation rules, using shared weights and skip connections with the equivalent classifier layers to propagate relevance through them. Thus, the LRP block produces LRP heatmaps. The advantage of defining the LRP relevance propagation process itself as a neural network is that the optimizers in standard deep learning libraries can automatically backpropagate the heatmap loss through the LRP block. Therefore, we avoid the task of manually defining LRP backpropagation rules, which could require changes in the standard optimization algorithms, making the ISNet much more complicated to implement and less practical. All parameters in the LRP block are shared with the classifier. It therefore generates the same heatmaps as those created with traditional LRP. Moreover, due to parameter sharing, after backpropagation of the heatmap loss through the LRP block, the optimization of the block's parameters is automatically reflected in the classifier. Section \ref{layers} defines all the types of LRP block layers (also referred to as LRP layers) that we created, which suffice to perform LRP in a modern, efficient, and complex classifier: a Densely Connected Convolutional Network\cite{DenseNet}. For ISNets to work properly, we need to produce one heatmap for each possible class, starting the LRP relevance propagation at the output neuron that classifies it. We cannot minimize unwanted LRP relevance for a single class (e.g. the winning one). Imagine that we have a bias in the image background, associated with class C, and we minimize only the unwanted LRP relevance for class C. In this case, the classifier can negatively associate all other classes with the bias, using it to lower their outputs, making the class C output neuron the winning one.
This negative association is expressed as negative relevance in the other classes' heatmaps. Consequently, the penalization of positive and negative unwanted relevance in all maps is a solution to the problem. Fortunately, using the LRP block to perform relevance propagation provides a simple and efficient strategy for creating multiple heatmaps. They are treated as different samples inside a mini-batch, being processed in parallel, and the relevance propagations for different classes do not interact with each other. Figure \ref{SimpleISNet} presents a simple ISNet example, whose classifier comprises two convolutional layers, L1 and L2, followed by a linear layer, L3. $\bm{x}_{i}$ indicates the input of classifier layer Li. With LRPi being the layer in the LRP block responsible for performing the relevance propagation through Li, its output, $\bm{R}_{i-1}$, is the relevance at the input of layer Li. $\bm{y'}$ is the classifier output for one of the classes (with all other outputs set to zero), which will be used to generate the heatmap associated with it. To perform the propagation, LRPi shares parameters with Li. \begin{figure}[h] \includegraphics[width=0.5\textwidth]{ISNetSimple.png} \centering \caption{Representation of an ISNet for a classifier containing 3 layers. The classifier is on the left, and the equivalent LRP block on the right} \label{SimpleISNet} \end{figure} After training the ISNet (i.e., the classifier and the LRP block, simultaneously minimizing the respective loss terms), the LRP block can be removed. The classifier will continue to automatically perform implicit segmentation at run-time. As an additional benefit, the LRP heatmaps may be used before removal for their original function, that is, to interpret the classifier's decisions. In this work, we used a DenseNet121 as the ISNet classifier, because it is a modern and very deep model (121 layers) with strong potential in the field of lung disease classification\cite{chexnet}. Moreover, the efficiency of DenseNets makes them good candidates for becoming ISNet classifiers. Even though the DenseNet121 has more than 100 layers, it only uses around 8M trainable parameters, much fewer than a VGG with 16 layers (138M parameters) and even fewer than a U-Net (about 31M parameters). Due to its definition, an ISNet can be created with any classification neural network that is analyzable by LRP and that has enough power for the desired segmentation task. As LRP can be applied to virtually any DNN and already has implementations for most modern classifiers\cite{innvestigate} (even for recurrent networks), our approach has strong flexibility. \subsection{ISNet Loss Function} \label{loss} The minimization of unwanted LRP relevance is fundamental for the ISNet, so we begin by explaining the loss function that penalizes it. The ISNet loss, $L_{IS}$, comprises two terms: a classification loss, $L_{C}$, which penalizes the classifier's outputs in relation to the labels, and a term that quantifies the amount of unwanted attention present in the heatmaps created by the LRP block. We call this term the 'heatmap loss', $L_{H}$. \begin{equation} L_{IS}=(1-P)L_{C}+P L_{H} \end{equation} The hyperparameter $P$ in the above equation balances the influence of the two losses in the gradient and parameter updates. Its value must lie between 0 and 1, with larger values increasing the strength of the heatmap loss. If P is too small, the network will not minimize $L_{H}$ and implicit segmentation will not be performed.
However, if P is too high, the model may ignore the classification task and move towards a zero solution, minimizing its parameters to produce zero-valued heatmaps, a simplistic way of reducing $L_{H}$. To tune P, we advise a search through different values, followed by evaluation of the validation error, as the ideal choice may change for different models and tasks. Nevertheless, fine adjustment of the parameter does not seem to strongly affect model performance. Although a long-term increase of $L_{C}$ indicates that P is too high, even with adequate values of P we observed $L_{C}$ increasing during a small number of epochs at the beginning of the training process, after which the two losses dropped simultaneously. $L_{C}$ is a standard loss function for classification. Here, we used cross-entropy or binary cross-entropy (for a single-label or multi-label task, respectively), averaged across the mini-batch samples. We will now explain the calculation of the second term, $L_{H}$. First, some definitions are needed: we denote as $\bm{H}$ a tensor containing the heatmaps for all classes, created for a single input image or for a mini-batch of B images (in this paper we use bold letters to indicate tensors). For a multiclass classification problem with K classes and input images of shape (C,Y,X), where C is the number of channels, Y the image height, and X its width, $\bm{H}$ assumes the following shape: (B,K,C,Y,X). The second dimension (class) indicates at which output neuron the relevance propagation began. So, in the k-th heatmap, the classifier associated positive relevance with class k. The mini-batch and class dimensions were merged inside the LRP block, but we separate them before loss calculation. $\bm{M}$ is a tensor containing the segmentation targets (masks) for each figure in the mini-batch. A mask separates the image foreground and background. It is defined as a single-channel image with shape (Y,X), valued 1 in the regions of interest and 0 in areas of undesired attention. Having B masks, one for each mini-batch image, we just repeat them in the channel and class dimensions for $\bm{M}$ to match the shape of $\bm{H}$. Heatmap loss calculation starts with a normalization of the heatmaps tensor, dividing it by the square root of its elements' variance plus a small $e$ value (e.g., $10^{-4}$). The constant $e$ avoids division by zero when the heatmap variance vanishes (effectively enforcing $0/0=0$ and preventing indeterminate results). Afterwards, we substitute the tensor elements with their absolute values, $abs(\cdot)$, since the training procedure must minimize both the positive and the negative undesired relevance in the heatmaps: \begin{equation} \bm{H'}=abs(\frac{\bm{H}}{\sqrt{Var(\bm{H})+e}}) \label{eqabs} \end{equation} The normalization's objective is to account for relevance absorption. Although LRP performs an almost conservative propagation, relevance is absorbed by the stabilizer element in LRP-$\epsilon$ and by the layers' bias parameters. However, it is important to use a normalization procedure that does not shift the zero value in the heatmaps. The next step in heatmap loss calculation is zeroing the relevance inside the regions of interest in the $\bm{H'}$ heatmaps. This is because the $L_{H}$ minimization process should not affect the classifier's ability to analyze these regions. With $\bm{1}$ being a tensor whose elements are 1, with the same shape as $\bm{H}$ and $\bm{M}$, we create the inverted masks tensor, $\bm{1}-\bm{M}$, whose values are 1 in the region of undesired attention and 0 in the important image areas.
Thus, an element-wise multiplication of $\bm{H'}$ with the inverted masks generates a new tensor, $\bm{UH'}$, containing heatmaps that show only the undesired relevance: \begin{equation} \bm{UH'}=(\bm{1}-\bm{M}) \odot \bm{H'}= (\bm{1}-\bm{M}) \odot abs(\frac{\bm{H}}{\sqrt{Var(\bm{H})+e}})=[UH'_{bkcyx}] \end{equation} Summing the elements in $\bm{UH'}$ over all image-related dimensions (C,Y,X) reduces the 5-dimensional tensor to a 2-dimensional one, $\bm{uh'}$, which contains only the dimensions related to batch and class, (B,K). Therefore, the positive real value $uh'_{bk}$ is a measure of the total undesired relevance in the heatmap of mini-batch image b, with the LRP relevance propagation starting at class k. \begin{equation} uh'_{bk}=\sum_{c=1}^{C}\sum_{y=1}^{Y}\sum_{x=1}^{X}UH'_{bkcyx} \end{equation} Cross-entropy is the most common error function for image segmentation. It therefore seemed a sensible choice for the $uh'_{bk}$ penalization. As the cross-entropy's target for the undesired relevance, the natural choice is zero, which represents a model not considering background features during classification. Under this specific condition, the cross-entropy function, $CE(\cdot)$, for a scalar, $x$, can be expressed as: \begin{equation} CE(x)=-log(1-x) \end{equation} Which evaluates to 0 when $x=0$, and to infinity when $x=1$. Consequently, we must choose a function, $f(\cdot)$, to map the $uh'_{bk}$ elements to the interval $[0,1[$ before applying cross-entropy. Some design requirements for $f(\cdot)$, considering positive arguments ($uh'_{bk}\geq 0$), are: it must be monotonically increasing, differentiable, bounded between 0 and 1, and it must map 0 to 0. The sigmoid function, a common activation function for image segmentation, cannot be used, since it evaluates to 0.5 for a null input. After practical tests with some candidates, we decided to use the following function: \begin{equation} f(x)=\frac{x}{x+E} \end{equation} Where $E$ is a hyperparameter controlling the function's slope and how fast it saturates to 1. The ISNet does not appear to be very sensitive to this parameter. Instead of a computationally expensive grid search for the ideal E and P hyperparameters, we therefore suggest the following rule of thumb, which was effective for different tasks and ISNet classifiers during our tests: obtain the average $uh'_{bk}$ value for a few images at the start of the training process, and set $E$ to roughly this value divided by a factor between 10 and 100. This choice makes $f(uh'_{bk})$ high at the start of the training procedure (about 0.9 to 0.99), but not excessively saturated, facilitating gradient descent. Figure \ref{act} shows the behavior of $f(x)$ for $E=1$, and the red dot marks a possible early $uh'_{bk}$ value, according to the proposed rule. \begin{figure} \centering \begin{subfigure}[b]{0.45\textwidth} \centering \includegraphics[width=\textwidth]{Activation.png} \caption{} \label{act} \end{subfigure} \hfill \begin{subfigure}[b]{0.45\textwidth} \centering \includegraphics[width=\textwidth]{Gx.png} \caption{} \label{g(x)} \end{subfigure} \caption{Behavior of $f(x)=x/(x+E)$ (\textbf{a}) and $g(x)=-log(1-f(x))$ (\textbf{b}), with hyperparameter E=1. The red dot marks the position x=10} \label{fig:three graphs} \end{figure} Finally, we can calculate $g(uh'_{bk})$, the cross-entropy between $f(uh'_{bk})$ and 0, and average it over all $uh'_{bk}$ elements in $\bm{uh'}$, resulting in the scalar $L_{H}$.
Considering the B mini-batch images and K classes, we have: \begin{gather} g(uh'_{bk})=CE(f(uh'_{bk}))=-log(1-\frac{uh'_{bk}}{uh'_{bk}+E})\\ L_{H}=\frac{1}{B.K}\sum_{b=1}^{B}\sum_{k=1}^{K}g(uh'_{bk})=\frac{1}{B.K}\sum_{b=1}^{B}\sum_{k=1}^{K}-log(1-\frac{uh'_{bk}}{uh'_{bk}+E}) \end{gather} Figure \ref{g(x)} plots $g(uh'_{bk})$ for $E=1$ and positive $uh'_{bk}$ values (positivity is ensured by the absolute value operation in equation \ref{eqabs}). As expected, the function monotonically increases with the undesired relevance in a heatmap, presenting a minimum at 0, when there is an absence of focus on the image background. As the cross-entropy diverges when its argument $f(uh'_{bk})$ reaches 1, we clamp $f(uh'_{bk})$ between 0 and $1-10^{-4}$ for numerical stability. \subsection{LRP Block Layers} \label{layers} Besides defining the LRP relevance propagation rules as neural network layers, we also need to modify their execution methodology to train an ISNet. For each layer in the classifier, a corresponding LRP layer is added to the LRP block to perform the relevance propagation through it. All LRP layers are based on an efficient LRP implementation in four steps. Considering the propagation of relevance through a classifier convolutional or dense layer L with ReLU activation, the implementation is defined as\cite{LRPBook}: \begin{enumerate} \item Forward pass the layer L input, $\bm{x}_{L}$, through layer L, generating its output tensor $\bm{z}$ (without activation). \item For LRP-$\epsilon$, modify each $\bm{z}$ tensor element ($z$) by adding $sign(z)\epsilon$ to it. Defining the layer L output relevance as $\bm{R}_{L}$, perform its element-wise division by $\bm{z}$: $\bm{s}=\bm{R}_{L}/\bm{z}$. \item Backward pass: backpropagate the quantity $\bm{s}$ through the layer L, generating the tensor $\bm{c}$. \item Obtain the relevance at the input of layer L by performing an element-wise product between $\bm{c}$ and the layer input values, $\bm{x}_{L}$ (i.e. the output of layer L-1): $\bm{R}_{L-1}=\bm{x}_{L} \odot \bm{c}$. \end{enumerate} In the ISNet, we apply the following changes to the algorithm above: In Step 1, we perform a forward pass through a copy of layer L, which shares weights with the original (i.e. they have the same parameters). The input for the operation is layer L's original input, $\bm{x}_{L}$ (i.e. the output of layer L-1), captured by a skip connection between the LRP block and the classifier. Although the bias parameters are also shared, we do not directly optimize them for the $L_{H}$ minimization (in PyTorch we accomplish this with the detach function on the LRP layer's shared bias). Since the biases are responsible for relevance absorption during propagation\cite{LRP}, this choice prevents the training process from increasing them to enforce an overall reduction of relevance. As in the original method, we do not use an activation function in this step. Step 2 is unchanged. We note that LRP-0 can be unstable for ISNet training. Therefore, we opted to base the LRP layers we implemented on LRP-$\epsilon$, choosing $\epsilon=10^{-2}$. For Step 3, we propose a fundamental modification because using a backward pass during the forward propagation through the LRP block could cause conflicts in common deep learning libraries, complicating the subsequent backpropagation of the heatmap loss. Consequently, we substitute the backward pass in Step 3 with an equivalent operation: the forward propagation of $\bm{s}$ through a transposed version of layer L.
If L is fully connected, its transposed counterpart is another linear layer, whose weights are the transpose of the original ones. Thus, Step 3 becomes: $\bm{c}=\bm{W}^{T} \cdot \bm{s}$. Similarly, convolutional layers have transposed convolutional layers as their counterpart, using the same padding, stride, and kernel size. As in Step 1, the transposed layer shares weights with layer L, but this time it does not use bias parameters. Step 4 is unchanged and, as in Step 1, $\bm{x}_{L}$ is carried by the skip connection. The obtained relevance value, $\bm{R}_{L-1}$, is forwarded to the next LRP block layer, which will perform the relevance propagation for the classifier layer L-1. Using italics to emphasize our changes to the original algorithm, we summarize the relevance propagation through the LRP layer corresponding to a convolutional or fully connected classifier layer L, which uses ReLU activation, as follows: \begin{enumerate} \item Forward pass the layer L input, $\bm{x}_{L}$, through \textit{a copy of} layer L, generating its output tensor $\bm{z}$ (without activation). \textit{Use parameter sharing, use the detach() function on the biases. Get $\bm{x}_{L}$ via a skip connection with layer L.} \item For LRP-$\epsilon$, modify each $\bm{z}$ tensor element ($z$) by adding $sign(z)\epsilon$ to it. Defining the layer L output relevance as $\bm{R}_{L}$, perform its element-wise division by $\bm{z}$: $\bm{s}=\bm{R}_{L}/\bm{z}$. \textit{\item Forward pass the quantity $\bm{s}$ through a transposed version of the layer L (linear layer with transposed weights or transposed convolution), generating the tensor $\bm{c}$. Use parameter sharing, but set biases to 0.} \item Obtain the relevance at the input of layer L by performing an element-wise product between $\bm{c}$ and the layer input values, $\bm{x}_{L}$ (i.e. the output of layer L-1): $\bm{R}_{L-1}=\bm{x}_{L} \odot \bm{c}$. \end{enumerate} The above-defined convolutional and fully connected LRP layers are equivalent to the traditional LRP-0/LRP-$\epsilon$ propagation rules. Therefore, the proposed execution methodology is not detrimental to the explanatory ability of LRP. In the following subsections, we explain the LRP layers for other common operations in DNNs, namely, pooling layers and batch normalization. Finally, we explain the implementation of the LRP Z$^{B}$ rule for the first convolutional or fully connected layer in the classifier. \subsubsection{LRP Layer for Pooling Layers} \label{pool} As linear operations, sum pooling and average pooling can be treated like convolutional layers, whose corresponding LRP layer was defined in section \ref{layers}. However, since pooling operations do not have trainable parameters, no weight sharing is needed. Furthermore, the $\bm{z}$ tensor in Step 1 can be explicitly defined as the pooling output (obtained via a skip connection). We represent convolutional kernels with a tensor $\bm{K}$ of shape (C1,C2,H,W), where C1 is the number of input channels, C2 the number of output channels, W the kernel width, and H its height. Then, for a convolution to be equivalent to pooling, C1 and C2 both equal the pooling operation's number of channels, while W and H are defined by its kernel size. Naturally, the two layers use the same padding and stride parameters.
To represent sum pooling, the convolutional kernel elements, $k_{c1,c2,h,w}$, are defined as constants: \begin{equation} k_{c1,c2,h,w}= \begin{cases} 1,& \text{if }c1=c2\\ 0,& \text{otherwise} \end{cases} \label{weights} \end{equation} And considering average pooling with kernel size of (H,W), we have: \begin{equation} k_{c1,c2,h,w}= \begin{cases} \frac{1}{H.W},& \text{if }c1=c2\\ 0,& \text{otherwise} \end{cases} \end{equation} For max pooling, the equivalent LRP layer adopts a winner-takes-all strategy, distributing all the relevance to the layer inputs that were chosen and propagated by the pooling operation. Therefore, we change the LRP four-step procedure for convolutional layers, substituting the forward pass of $\bm{s}$ through a transposed convolution (Step 3) with its propagation through a MaxUnpool layer. This operation, available in the PyTorch library, uses the indices of the maximal values propagated by max pooling to calculate its partial inverse, which sets all non-maximal inputs to zero. As in average/sum pooling, no parameter sharing is needed, and $\bm{z}$ in Step 1 can be directly obtained via a skip connection. \subsubsection{LRP Layer for Batch Normalization} \label{BN} Batch normalization (BN) layers are linear operations that can be fused with an adjacent convolutional or linear layer during relevance propagation, creating a single equivalent convolutional/fully connected layer. Thus, we calculate the parameters of the equivalent layer and create its corresponding LRP layer with the methodology presented in section \ref{layers}. One study\cite{BNLRP} presented equations to fuse batch normalization with an adjacent convolutional or linear layer. To analyze a convolution, followed by BN, and then by a ReLU activation function (a configuration used in DenseNets), we define: $\bm{K}$ of shape (C1,C2,H,W) as the convolution weights/kernels; $\bm{B}$ as the convolutional bias, of shape (C2); $\bm{\gamma}$ as batch normalization weights, of shape (C2); $\bm{\beta}$ as the BN bias, also of size (C2); $\bm{\mu}$ the per-channel (C2) mean of the batch-normalization inputs, and $\bm{\sigma}$ its standard deviation, defined as the square root of the input's variance plus a small value for numerical stability (a parameter of the BN layer). After replicating the tensors $\bm{\gamma}$ and $\bm{\sigma}$ in the C1, W, and H dimensions to match the shape of $\bm{K}$, we can define the equivalent convolutional layer weights, $\bm{K'}$, with element-wise divisions and multiplications: \begin{equation} \label{BN1} \bm{K'}=\frac{\bm{K} \odot \bm{\gamma}}{\bm{\sigma}} \end{equation} While the equivalent convolution bias, $\bm{B'}$, is given by the following element-wise operations: \begin{equation} \label{BN2} \bm{B'}=\bm{\beta}+\bm{\gamma} \odot \frac{\bm{B}-\bm{\mu}}{\bm{\sigma}} \end{equation} DenseNets also present BN layers between pooling and ReLU layers, so an adjacent convolutional or linear layer is not available. Observing that the preceding pooling operation is also a linear operation (not followed by any activation function), we can fuse it with batch normalization. If we have average or sum pooling, we simply calculate its equivalent convolution, as explained in Section \ref{pool}, and perform the fusion according to equations \ref{BN1} and \ref{BN2}. In the case of max pooling, we start by imagining a convolutional layer performing identity mapping before the batch normalization.
This imaginary layer has no bias or padding, its stride is unitary, and its weights, $\bm{K}$, have shape (C,C,1,1), where C is the max pooling number of channels. An element $k_{c1,c2,h,w}$ is then given by equation \ref{weights}. Thus, we can fuse the batch normalization with the identity convolution according to equations \ref{BN1} and \ref{BN2}, creating a BN equivalent convolution. The LRP layer for relevance propagation through the sequence of MaxPool, BN, and ReLU is then defined by the 4-step procedure below: \begin{enumerate} \item Forward pass the MaxPool output through the BN equivalent convolution, generating its output tensor $\bm{z}$ (without activation). Use parameter sharing, use the detach() function on the biases, and obtain the MaxPool output via a skip connection. \item For LRP-$\epsilon$, modify each $\bm{z}$ tensor element ($z$) by adding $sign(z)\epsilon$ to it. Defining the output relevance of the ReLU activation as $\bm{R}_{L}$, perform its element-wise division by $\bm{z}$: $\bm{s}=\bm{R}_{L}/\bm{z}$. \item Forward pass the quantity $\bm{s}$ through a transposed version of the BN equivalent convolution (transposed convolution), generating a quantity $\bm{t}$. Employ parameter sharing, but set biases to 0. Then perform a MaxUnpool operation, using $\bm{t}$ as input, creating the tensor $\bm{c}$. \item Obtain the relevance at the input of the MaxPooling layer by performing an element-wise product between $\bm{c}$ and the MaxPool input values, $\bm{x}_{L}$ (i.e. the output of layer L-1): $\bm{R}_{L-1}=\bm{x}_{L} \odot \bm{c}$. \end{enumerate} This methodology generates the same result as propagating the relevance first in the BN equivalent convolution and then in the max pooling LRP layer, but it has a lower computational cost. \subsubsection{LRP Layer for Z$^{B}$ LRP Rule} The LRP Z$^{B}$ rule\cite{LRPZb} considers the DNN input range in its formulation, and is used for the first layer in the neural network. For a convolutional or fully connected layer, it begins with a separation of the positive and negative elements of its weights, $\bm{W}$, and bias, $\bm{B}$. With $\bm{0}$ being a tensor of zeros: \begin{gather} \bm{W}^{+}=max(\bm{0},\bm{W})\\ \bm{B}^{+}=max(\bm{0},\bm{B})\\ \bm{W}^{-}=min(\bm{0},\bm{W})\\ \bm{B}^{-}=min(\bm{0},\bm{B}) \end{gather} The parameters define three new layers: weights $l.\bm{W}^{+}$ and biases $l.\bm{B}^{+}$ define layer L$^{+}$, where $l$ is the lowest pixel value possible (in the common case where $l=0$, L$^{+}$ is ignored); weights $h.\bm{W}^{-}$ and biases $h.\bm{B}^{-}$ produce layer L$^{-}$, where h is the highest allowed input. The original parameters, $\bm{W}$ and $\bm{B}$, define a copy of the original layer L. The four-step procedure for convolutional/fully connected layers (defined in Section \ref{layers}) is then changed to: \begin{enumerate} \item Forward pass the layer L input, $\bm{x}_{L}$, once for each of the three layers (L$^{+}$, L$^{-}$, and a copy of L), producing $\bm{z}^{+}$, $\bm{z}^{-}$ and $\bm{z}^{original}$ (without activation). The classifier layer L shares parameters with the three instances. The bias values are not optimized to reduce the heatmap loss. Use a skip connection with layer L to obtain $\bm{x}_{L}$. Combine the three results in the following manner: $\bm{z}=\bm{z}^{original}-\bm{z}^{+}-\bm{z}^{-}$. \item Modify each $\bm{z}$ tensor element ($z$) by adding $sign(z)\epsilon$ to it.
Defining the layer L output relevance as $\bm{R}_{L}$, perform its element-wise division by $\bm{z}$: $\bm{s}=\bm{R}_{L}/\bm{z}$. \item Forward pass the quantity $\bm{s}$ through transposed versions of L, L$^{+}$, and L$^{-}$, creating the tensors $\bm{c}^{original}$, $\bm{c}^{+}$ and $\bm{c}^{-}$, respectively. Use parameter sharing, but set biases to 0. \item With $\bm{x}_{L}$ being the layer L input values, obtain the relevance at the input of layer L, $\bm{R}_{L-1}$, with: $\bm{R}_{L-1}=\bm{x}_{L} \odot \bm{c}^{original} -\bm{c}^{+}-\bm{c}^{-}$. \end{enumerate} If batch normalization follows the first classifier layer, placed before the ReLU nonlinear activation, the above procedure can implement the Z$^{B}$ rule for the equivalent convolutional/fully connected layer (the result of fusing the convolutional/linear layer with BN), explained in Section \ref{BN}. \subsubsection{Dropout} The dropout operation can be defined as the random removal of layer input elements during the training procedure, and LRP block layers can automatically deal with it. For a layer L preceded by dropout, its inputs $\bm{x}_{L}$ shall have some zero values caused by the operation. According to the element-wise multiplication $\bm{R}_{L-1}=\bm{x}_{L} \odot \bm{c}$ (see Step 4 of the procedure in Section \ref{layers}), these null values will also make their respective relevances zero, thus being accounted for in the LRP propagation. \subsection{Explaining the Implicit Segmentation Precision} Having defined the LRP layers of the LRP block, we can visualize the entire ISNet structure. For a common feed-forward classifier (without skip connections in its structure), a layer in the middle of the corresponding LRP block, propagating relevance through the classifier layer L, has 3 connections: one with the previous LRP layer, the second with the following LRP layer, and a skip connection with the classifier layer L. The first carries the relevance that can be seen at the layer L output, $\bm{R}_{L}$; the second brings the LRP layer result, $\bm{R}_{L-1}$ (equivalent to the relevance at the layer L input), to the next layer in the LRP block; and the last carries, from the classifier layer L, the information required for relevance propagation, e.g., L's input, $\bm{x}_{L}$, if it is convolutional, or its output, $\bm{x}_{L+1}$, in the case of pooling. The figure below exemplifies the ISNet architecture for a simple classifier, comprising the following sequence of layers: $L1 \rightarrow L2 \rightarrow L3 \rightarrow L4 \rightarrow L5$, where $L1$ and $L3$ are defined by $Convolution \rightarrow ReLU$, while $L2$ and $L4$ are max pooling operations, and $L5$ is fully connected. The LRP layer propagating relevance through layer $Li$ is named $LRPi$. In Figure \ref{ISNet}, $\bm{y'}$ is the classifier output $\bm{y}$ for one class, with all other outputs made zero. One map shall be created for each class, with each LRP propagation being executed in parallel, in a strategy analogous to mini-batch processing. \begin{figure}[h] \includegraphics[width=0.5\textwidth]{ISNet.png} \centering \caption{Representation of an ISNet. The classifier is on the left, and the equivalent LRP block on the right. Pooling layers and their LRP counterparts are in red, convolutional in green, and linear in yellow} \label{ISNet} \end{figure} As expected, the ISNet structure has a contracting path, defined by the classifier, and an expanding path, the LRP block. We see that the LRP layers for pooling (in red) increase the size of their relevance inputs.
Afterwards, the LRP counterpart of a convolution (in green) reduces the number of channels in the relevance signal and combines it with information from an earlier, higher-resolution feature map, which is brought from the classifier. As a consequence of the LRP rules, we naturally have a structure that combines context information from later classifier feature maps with high resolution from the earlier maps, which suffered less down-sampling by pooling operations. Thus, skip connections between an expanding and a contracting path are used to combine information from later and earlier feature maps; this idea is behind state-of-the-art architectures used for image segmentation and object detection, such as the U-Net\cite{unet}, Feature Pyramid Networks\cite{FPN}, and YOLOv3\cite{yolov3}. Therefore, the concept should also allow the ISNet's implicit image segmentation to be precise. \subsection{Challenges and Implementation Notes} Fundamentally, the ISNet architecture restrains the classifier during training, conditioning it to analyze only the relevant part of the image. This is the case even when there are undesired background features that could allow the model to reduce the classification loss rapidly and easily. Therefore, any change in the ISNet architecture needs to be cautiously implemented, because it can create a new and unintended way for the classifier to minimize the heatmap loss. Namely, as a flexible model, the classifier will look for the easiest way to reduce $L_{H}$, which may be by ``cheating'' and finding a strategy to ``hide'' the undesired relevance in the heatmaps. We encountered an interesting example of this behavior when testing a preliminary version of the ISNet architecture. If we use a heatmap loss formulation that penalizes a ratio of the undesired relevance to the relevance inside the region of interest, the classifier may minimize this alternative loss by artificially boosting the desired relevance. The alternative loss was still based on the absolute value of the heatmaps (to minimize positive and negative undesired relevance), so the model created a fine and regular chessboard pattern inside the zones of interest, alternating between strong positive and strong negative relevance. The artificial pattern allows the region of interest to equally affect each class in the classifier, making it meaningless for the classification task. However, the high absolute values in the pattern strongly reduce the ratio in the alternative heatmap loss, allowing its minimization while the classifier focuses on the image background with a comparatively small quantity of absolute relevance. Regarding implementation, we need to be careful with in-place operations inside the classifier, since they may change values that will later be processed by the LRP block. Moreover, we explicitly chose to use deterministic operations during training in PyTorch. This was to ensure better correspondence between the LRP block and the classifier (which can be accomplished with a single command, use\_deterministic\_algorithms(True)). Finally, using PyTorch's forward hooks, we can easily store all relevant variables during the forward propagation through the classifier, allowing their later access by the LRP block. This is an effortless way to create the skip connections between the classifier and the block, even if the classifier is already defined and instantiated. To improve training stability, gradient clipping may be utilized, and excessively high learning rates should be avoided.
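To make these notes concrete, the sketch below illustrates, in PyTorch, the four-step LRP-$\epsilon$ propagation for a convolutional layer with ReLU activation (Section \ref{layers}) and the heatmap loss of Section \ref{loss}. It is a minimal sketch under simplifying assumptions (no dilation or groups, single layer, global variance normalization), not the exact code of our released implementation:
\begin{verbatim}
import torch
import torch.nn.functional as F

def lrp_conv_relu(layer, x, R, eps=1e-2):
    # Step 1: forward pass with shared weights; the bias is detached so
    # the heatmap loss cannot be lowered by inflating relevance absorption.
    bias = layer.bias.detach() if layer.bias is not None else None
    z = F.conv2d(x, layer.weight, bias,
                 stride=layer.stride, padding=layer.padding)
    # Step 2: LRP-epsilon stabilization and element-wise division.
    stab = torch.ones_like(z)
    stab[z < 0] = -1.0                        # sign(z), with sign(0) = 1
    s = R / (z + eps * stab)
    # Step 3: transposed convolution replaces the backward pass (no bias).
    c = F.conv_transpose2d(s, layer.weight, bias=None,
                           stride=layer.stride, padding=layer.padding)
    # Step 4: element-wise product with the layer input.
    return x * c

def heatmap_loss(H, M, E=10.0, e=1e-4):
    # H: heatmaps, shape (B, K, C, Y, X); M: masks broadcastable to H,
    # valued 1 inside the regions of interest and 0 elsewhere.
    Hn = (H / torch.sqrt(H.var() + e)).abs()    # normalize, absolute value
    u = ((1.0 - M) * Hn).sum(dim=(2, 3, 4))     # undesired relevance, (B, K)
    f = (u / (u + E)).clamp(0.0, 1.0 - 1e-4)    # map to [0, 1)
    return -torch.log(1.0 - f).mean()           # cross-entropy against 0

# Combined loss, with 0 < P < 1:
# L_IS = (1 - P) * classification_loss + P * heatmap_loss(H, M)
\end{verbatim}
In the actual LRP block, such operations are wrapped as layers with shared parameters, so that standard optimizers backpropagate the heatmap loss through them automatically.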
\subsection{DenseNet based ISNet and Classifier Skip Connections} The Densely Connected Convolutional Network (DenseNet) is characterized by the dense block, a structure where each layer receives, as input, the concatenation of the feature maps produced by every previous layer in the block\cite{DenseNet} (i.e. the block presents skip connections between each layer and all preceding ones). For LRP relevance propagation, all the connections must be considered. Therefore, a mirror image, inside the ISNet LRP block, of a dense block with S skip connections will also have S internal skip connections, now propagating relevance. Naturally, we also have skip connections between the classifier and the LRP block, which carry information from layer L (e.g. its input, $\bm{x}_{L}$) to the LRP layer that performs its relevance propagation. In the case of a DenseNet, $\bm{x}_{L}$ is no longer defined as the output of classifier layer L-1, but rather as the concatenated outputs of all layers preceding L in its dense block. To understand the relevance skip connections, imagine that, in the classifier, a layer L propagates its output to layers L+i, with $i \in \{1,...,N\}$. We define $\bm{R}_{Inp(L+i),L}$ as the relevance at the input of layer L+i, considering only its input elements (or channels) connected to L. We define relevance at the input of layer L+i, considering all input channels, as $\bm{R}_{Inp(L+i)}$. $\bm{R}_{Inp(L+i)}$ (and $\bm{R}_{Inp(L+i),L}$) can be obtained with relevance propagation through L+i. Then, to further propagate the relevance through layer L, we set the relevance at L's output as $\bm{R}_{L}$, given by equation \ref{sumRel}. Proceeding with the propagation rules explained in section \ref{layers}, we can obtain $\bm{R}_{Inp(L)}$, the relevance at the input of layer L. \begin{equation} \bm{R}_{L}=\sum_{i=1}^{N}\bm{R}_{Inp(L+i),L} \label{sumRel} \end{equation} In a DenseNet, a layer L inside one of its dense blocks is defined as a nonlinear mapping $H_{L}(\cdot)$, which can in turn comprise a standard feed-forward sequence of other layers, e.g. ReLU activation, convolution, and batch normalization. Since there is an absence of skip connections inside one layer L, we propagate relevance through the sequence in the standard manner. Equation \ref{sumRel} is useful if $H_{L}(\cdot)$ is defined as sequences in the form $C_{L}(convolution) \rightarrow BN_{L} \rightarrow ReLU_{L}$, $C_{L} \rightarrow ReLU_{L}$, $C_{L} \rightarrow Pool_{L} \rightarrow ReLU_{L}$, or combinations of the previous sequences. Figure \ref{SimpleDense} presents a simple example, considering a sequence of 4 convolutional layers in the classifier, L0, L1, L2, and L3, each defined as $C \rightarrow ReLU$ and receiving the outputs of all previous layers. The flow of relevance is observable in the LRP block. Note that different connections carry different channels of the $\bm{R}_{Inp(L+i)}$ tensor, $\bm{R}_{Inp(L+i),L}$. In the Figure, we define the input of layer Li as $\bm{x}_{i}$, which is a concatenation (in the channels dimension) of the previous layers' outputs, $\bm{y}_{j}$. \begin{figure}[h] \includegraphics[width=0.5\textwidth]{DenseSimple.png} \centering \caption{Classifier and LRP block for 4 convolutional layers with skip connections in the classifier} \label{SimpleDense} \end{figure} The most common proposal for $H_{L}(\cdot)$ in DenseNets is: $BN^{1}_{L} \rightarrow ReLU^{1}_{L} \rightarrow C^{1}_{L} \rightarrow BN^{2}_{L} \rightarrow ReLU^{2}_{L} \rightarrow C^{2}_{L}$. 
In Section \ref{layers}, we defined LRP layers to propagate relevance in sequences ending with ReLU activation. As such, they cannot be directly applied to $H_{L}(\cdot)$, nor can we rely on equation \ref{sumRel}. Consider a dense block layer L, followed by N other layers, L+i ($i \in \{1,...,N\}$), which receive L's output. $\bm{R}_{L+i,L}^{ReLU1}$ is the relevance at the output of the first ReLU inside layer L+i ($ReLU^{1}_{L+i}$), considering only channels that came from layer L. Layer L+i starts by processing layer L's output through the sequence $BN^{1}_{L+i} \rightarrow ReLU^{1}_{L+i}$, producing a tensor that we shall call $\bm{y}_{L}^{L+i}$. We can define $\bm{y}_{L}$ as the concatenation, in the channels dimension, of the N $\bm{y}_{L}^{L+i}$ feature maps, one for each L+i layer. With these definitions, the procedure below calculates the relevance at the output of layer L's first ReLU activation, $\bm{R}_{L}^{ReLU1}$, based on the N relevances $\bm{R}_{L+i,L}^{ReLU1}$: \begin{enumerate} \item Fuse layer L's second convolution, $C^{2}_{L}$, with the first batch normalization operation in each of the N layers L+i, $BN^{1}_{L+i}$, obtaining equivalent convolutional kernels and biases according to equations \ref{BN1} and \ref{BN2}, respectively. Concatenate these parameters in their output channel dimension, creating the kernels and biases that define a single equivalent convolutional layer, which generates $\bm{y}_{L}$ from layer L's second ReLU output. Use it to recreate $\bm{y}_{L}$, but detach the bias parameters. \item Concatenate the N relevances $\bm{R}_{L+i,L}^{ReLU1}$ (for each L+i layer) in the channel dimension, producing $\bm{R}_{L}^{conc}$. This tensor can be seen as the relevance at the output of the equivalent convolution created in step 1. \item Propagate the relevance $\bm{R}_{L}^{conc}$ through the equivalent layer from Step 1, using the usual four-step procedure for convolutional layers (section \ref{layers}). The operation returns $\bm{R}_{L}^{ReLU2}$, the relevance at the output of the second ReLU activation in layer L. \item Obtain $\bm{R}_{L}^{ReLU1}$ by propagating $\bm{R}_{L}^{ReLU2}$ through layer L's sequence of operations $C^{1}_{L} \rightarrow BN^{2}_{L} \rightarrow ReLU^{2}_{L}$, using the procedure in Section \ref{BN}. $\bm{R}_{L}^{ReLU1}$ can be propagated through earlier layers in the dense block by repeating these four steps. \end{enumerate} A DenseNet transition layer L can be formed by the following sequence: $BN^{1}_{L} \rightarrow ReLU^{1}_{L} \rightarrow C^{1}_{L} \rightarrow ReLU^{2}_{L} \rightarrow AvgPool_{L}$. $ReLU^{2}_{L}$ is not part of its original configuration, but we added it to simplify the relevance propagation, as our rules are defined for layers with ReLU activation. It did not seem to have a detrimental effect on the model. The transition layer sits between two dense blocks. It receives all outputs from the layers in the first block, B, and propagates its own result to every layer of the next one, B+1. Therefore, a layer in block B naturally considers the transition layer among its consecutive N layers during Step 1 of the above procedure; the same is true for the $BN \rightarrow ReLU$ sequence following the last dense block in a DenseNet (every layer in the last block shall consider it in Step 1). The relevance propagation through the transition layer L must take into account all its skip connections with the block B+1, so we use a 4-step procedure similar to the one above. There are only two changes.
Since L ends in an average pooling instead of a convolution, we modify the fusion process in Step 1 to merge pooling and BN, a technique explained in Section \ref{BN}. Furthermore, in Step 4, we obtain $\bm{R}_{L}^{ReLU1}$ after propagating $\bm{R}_{L}^{ReLU2}$ through the transition layer $C^{1}_{L}\rightarrow ReLU^{2}_{L}$ sequence, using the rules defined in Section \ref{layers}. We can treat the max pooling layer at the beginning of the DenseNet similarly, considering its skip connections with the first dense block. The code to automatically generate an ISNet from a DenseNet, in PyTorch, is available at https://github.com/PedroRASB/ISNet. In this study, we use an ISNet created with a DenseNet121 classifier. Our main reason for this choice is the DenseNet's small number of trainable parameters in relation to its depth (fewer than 8M parameters and 121 layers). Moreover, the architecture was successfully employed for detecting COVID-19 and other lung diseases using chest X-rays\cite{bassiCovid}\textsuperscript{,}\cite{chexnet}. We utilized the same DNN for facial attribute estimation, as the DenseNet is known for its robust performance when analyzing natural images\cite{DenseNet}. We modified the original DenseNet121 classifier by substituting its last layer with a layer containing 3 neurons (representing the 3 possible classes: normal, pneumonia, and COVID-19, or rosy cheeks, high cheekbones, and smiling), preceded by 50\% dropout. \subsection{Datasets} \label{dataset} \subsubsection{COVID-19 Chest X-ray} In this study we employed the Brixia COVID-19 X-ray dataset\cite{BrixiaSet} as the source of the training and hold-out validation COVID-19 positive samples. It is one of the largest open databases regarding the disease, providing 4644 frontal X-rays showing symptoms of COVID-19 (i.e., samples to which the dataset authors assigned a severity score, the Brixia Score, higher than 0). All images were collected from the same hospital, ASST Spedali Civili di Brescia, Brescia, Italy. The patients' mean age is 62.4 years, with standard deviation of 13.6 years. They are 69.8\% male. We randomly assigned 75\% of the samples (3483 images) for training, and the remaining for hold-out validation. The two subdivisions were not allowed to have images from the same patients. The images of healthy and non-COVID-19 pneumonia patients in our training and hold-out validation datasets come from the CheXPert database\cite{irvin2019chexpert}. It is a large collection of chest X-rays, showing various lung diseases, assembled in the Stanford University Hospital, California, United States of America. The database's classification labels were created with natural language processing, utilizing radiological reports, and have an estimated accuracy surpassing 90\%\cite{irvin2019chexpert}. Considering class balance, we randomly gathered 4644 images of healthy patients, and 4644 of pneumonia patients. Samples were also randomly divided with 25\% for hold-out validation and 75\% for training, employing a patient split. Pneumonia patients have a mean age of 62.3 years, with a standard deviation of 18.7 years. They are 57.1\% male. Healthy patients have a mean age of 51.7 years, with standard deviation of 18.2, and are 56.3\% male. To avoid the effect of mixed dataset bias in our reported results, and to better understand how segmentation improves the DNNs' generalization capability, an external test database (with dissimilar sources in relation to the training dataset) was employed.
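Patient-level splits such as the ones described above can be implemented, for instance, with scikit-learn's GroupShuffleSplit; the snippet below is a hypothetical sketch (the image and patient-identifier arrays are placeholders), not our exact procedure:
\begin{verbatim}
import numpy as np
from sklearn.model_selection import GroupShuffleSplit

images = np.arange(100)                    # placeholder image indices
patient_ids = np.repeat(np.arange(25), 4)  # hypothetical: 4 images/patient
splitter = GroupShuffleSplit(n_splits=1, test_size=0.25, random_state=0)
train_idx, val_idx = next(splitter.split(images, groups=patient_ids))
# No patient appears in both subsets, preventing identity leakage
# between training and hold-out validation.
\end{verbatim}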
Regarding the COVID-19 class, we selected the dataset BIMCV COVID-19+\cite{BimcvSet} for evaluation. The database is also among the largest open sources of COVID-19 positive X-rays. The data was gathered from health departments in the Valencian healthcare system, Spain. Therefore, it is highly unlikely that this database shares patients with the Brixia dataset\cite{BrixiaSet}. It presents 1515 COVID-19 positive frontal X-rays. The corresponding patients have a mean age of 66 years, with standard deviation of 15.3 years. They are 59.6\% male. We also performed cross-dataset testing for the pneumonia and normal classes. We chose the ChestX-ray14 database\cite{chex14} as the source of test pneumonia X-rays. We utilized 1295 pneumonia images, corresponding to patients over 18 years old. They present a mean age of 48 years, with standard deviation of 15.5 years, and are 58.7\% male. The images were gathered from the National Institutes of Health Clinical Center, Bethesda, United States of America. Normal test images were extracted from a database assembled in Montgomery County, Maryland, USA (80 images), and Shenzhen, China (336 images)\cite{ChineseDataset1}. The healthy patients have a mean age of 36.1 years (and standard deviation of 12.3 years) and are 61.9\% male. The images in this database were manually labeled by radiologists, while the ChestX-ray14 dataset authors\cite{chex14} labeled their database with natural language processing, according to radiological reports (with an estimated accuracy over 90\%). For this reason, we preferred to extract the normal images from the Montgomery and Shenzhen database, and not from ChestX-ray14. The assembled database was chosen to resemble the characteristics of the most popular COVID-19 datasets: the use of dissimilar sources for different classes is a characteristic that increases the risk of dataset bias, making lung segmentation more important. The Brixia COVID-19 dataset presented only 11 images of patients younger than 20 years old, and the BIMCV COVID-19+ database has a single one. Thus, to avoid bias, patients under 18 were not included in the other classes, which had much higher proportions of pediatric patients in their original datasets. Both the ISNet and the alternative segmentation-classification pipeline require segmentation masks. The ISNet utilizes them to calculate the heatmap loss during the training procedure, while the alternative models need them to train the segmenter. Thus, we utilized a U-Net, previously trained for lung segmentation\cite{bassi2021covid19}, to create the segmentation targets. We applied a threshold of 0.4 on the U-Net output to produce binary masks, which are valued 1 in the lung regions and 0 everywhere else; the chosen threshold maximized the segmenter's validation performance (intersection over union or IoU). The U-Net was trained using 1263 chest X-rays distributed across the classes COVID-19 (327 images), healthy (327 images), pneumonia (327 images), and tuberculosis (282 images). The model achieved an IoU of 0.864 with the ground truth lung masks during testing. Please refer to\cite{bassi2021covid19} for a detailed explanation of the U-Net training procedure. Finally, to create an extreme test for the ISNet's segmentation performance, we artificially biased the described dataset: all COVID-19 images were marked with a triangle in their upper left corner, the normal class was marked with a square, and pneumonia with a circle. See figure \ref{triangle} for an example. 
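For reference, such markers can be stamped with a few lines of code. The sketch below (using PIL, with illustrative corner coordinates and sizes) conveys the idea; it is not necessarily the exact marking procedure we used:
\begin{verbatim}
from PIL import Image, ImageDraw

def mark_class(img, label):
    # Stamp a white, class-dependent shape (illustrative sizes/positions).
    draw = ImageDraw.Draw(img)
    if label == "covid":         # triangle in the upper-left corner
        draw.polygon([(5, 25), (15, 5), (25, 25)], fill=255)
    elif label == "normal":      # square
        draw.rectangle([5, 5, 25, 25], fill=255)
    elif label == "pneumonia":   # circle
        draw.ellipse([5, 5, 25, 25], fill=255)
    return img

marked = mark_class(Image.new("L", (224, 224)), "covid")
\end{verbatim}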
Our data augmentation procedure can sometimes remove the shapes from the training samples (with rotations and translations), which may attenuate the background bias effect. However, the same is true for many natural background features (e.g., text and markers close to the X-ray borders), and the effect of the artificial bias should still be very strong. A common classifier, analyzing unsegmented images, can easily learn to identify the shapes and use them to improve classification accuracy. Therefore, we train an ISNet on the biased dataset and evaluate it on test images with and without the geometric shapes. If it performs equally in both cases, we can conclude that adequate segmentation was achieved. \subsubsection{Facial Attribute Estimation} For the task of facial attribute estimation we employed images from the Large-scale CelebFaces Attributes Dataset, or CelebA\cite{celebA}. The database has 10000 identities, each one presenting 20 images. Binary labels were created by a professional labeling company, according to 40 facial attributes\cite{celebA}. As the database comprises in-the-wild images, the displayed faces show a wide variety of positions and sizes, and background clutter is present. The CelebAMask-HQ dataset\cite{CelebAMaskHQ} presents a subset of 30000 high-quality images from CelebA, cropped and aligned. Furthermore, these images have manually-created segmentation masks, which indicate the face regions\cite{CelebAMaskHQ}. For this study, we selected the CelebA images that originated CelebAMask-HQ. Afterwards, we created their segmentation masks by applying translation, rotation and resizing to the CelebAMask-HQ masks; with this operation we reverted the CelebAMask-HQ crop and align procedure (described in\cite{CelebAHQ}, appendix C). We employ the dataset split suggested by the CelebAMask-HQ dataset authors\cite{CelebAMaskHQ}, which assigns 24183 samples for training, 2993 for hold-out validation and 2824 for testing. The proposed splits are subsets of the official CelebA training, validation and test datasets. We chose to work with unaligned images that have strong background clutter, and whose face segmentation masks have a wide variety of shapes and positions. We think this setting constitutes a challenging test of the ISNet implicit segmentation capability. Moreover, the CelebA dataset authors\cite{celebA} show that their DNN naturally focuses more on the persons' faces when more facial attributes are classified. For this reason, we believe that the ISNet implicit segmentation task will be more difficult when it classifies a small number of attributes. Thus, we chose to work with 3 classes, to better assess the ISNet segmentation potential, and to better visualize the architectures' benefits. If we choose attributes that also have features outside of the face (e.g., gender), the ISNet architecture will change the natural classification strategy to one that ignores features outside of the face. This effect can be desirable or not, depending on the researcher's objective. Here we opted to classify three attributes that are exclusively present in the face: rosy cheeks, high cheekbones and smiling. The first two are considered identifying attributes, i.e., they can be used for user identification. Therefore, avoiding bias in their classification is a security concern. As in COVID-19 detection, we also created an artificially biased dataset for facial attribute estimation. The second application constitutes a multi-label problem.
Thus, we marked our images with a square in the lower-right corner, a circle in the lower-left, and a triangle in the upper-left, to indicate rosy cheeks, high cheekbones and smiling, respectively. See figure \ref{triangle} for an example of an image with three positive labels. \subsection{Alternative Models: U-Net Followed by DenseNet121 and Standalone DenseNet121} In this study, we compare the ISNet with a more traditional methodology to segment and classify images: a U-net (segmenter) followed by a DenseNet121 (classifier). The Densely Connected Convolutional Network is configured exactly as the one inside the ISNet (including a 50\% dropout before the output layer), while the U-Net uses its original architecture proposal, which can be seen in Figure 1 of\cite{unet}. We trained the segmenter beforehand, using the same datasets that would be later used for classification, and employing the segmentation masks as targets. The same targets were used in the ISNet training. Analyzing the U-Net validation performance, we found the best threshold to binarize its outputs. For COVID-19 detection the optimal value was 0.4, and for facial attribute estimation, 0.5. The U-Net architecture\cite{unet} and its variants are a common model choice for biomedical image segmentation, including lung segmentation\cite{bassi2021covid19}\textsuperscript{,}\cite{covidSegmentation}\textsuperscript{,}\cite{TuberculosisUNet}. It was created with this type of task in mind and so is designed to obtain strong performance even on small datasets. The U-net is a fully convolutional deep neural network with two parts. First, a contracting path, created with convolutions and pooling layers, captures the image's context information. Then, an expanding path uses transposed convolutions to enlarge the feature maps and perform precise localization. Skip connections between the paths allow the expanding path access to earlier high-resolution feature maps (before down-sampling), which carry accurate positional information. The combination of this information with the context captured in the later feature maps results in the U-net's ability to conduct precise image segmentation. As is commonly done, the U-Net parameters are kept frozen when training the DenseNet121 classifier. The pipeline for segmentation followed by classification can be summarized as follows: \begin{enumerate} \item The U-Net processes the X-ray and segments the lung region. \item We threshold the U-Net result and create a binary mask, where lungs are represented as 1 and the remaining image parts as 0. \item We use an element-wise multiplication between the X-ray and the mask to erase the image background. \item The DenseNet121 classifies the segmented image. \end{enumerate} Finally, we also train the classifier (DenseNet121) without segmentation, as a baseline. \subsection{Data Processing and Augmentation} For COVID-19 detection, we loaded the X-rays as grayscale images to avoid the influence of color variations in the dataset. We then performed histogram equalization to further reduce dataset bias. Next, we re-scaled the pixel values between 0 and 1, reshaped the images (which can assume various sizes in the database) to 224x224 and repeated them in 3 channels. The shape of (3,224,224) is the most common input size for the DenseNet121, reducing computational costs while still being fine enough to allow accurate segmentation and lung disease detection\cite{bassi2021covid19}\textsuperscript{,}\cite{chexnet}\textsuperscript{,}\cite{bassiCovid}.
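A minimal sketch of this preprocessing chain, using OpenCV and PyTorch (function and path names are illustrative), could look as follows:
\begin{verbatim}
import cv2
import numpy as np
import torch

def preprocess_xray(path):
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)  # load as grayscale
    img = cv2.equalizeHist(img)                   # histogram equalization
    img = cv2.resize(img, (224, 224))             # reshape to 224x224
    img = img.astype(np.float32) / 255.0          # re-scale to [0, 1]
    tensor = torch.from_numpy(img).unsqueeze(0)   # shape (1, 224, 224)
    return tensor.repeat(3, 1, 1)                 # repeat in 3 channels
\end{verbatim}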
Single-channel images can be used but, without a profound change in the classifier architecture, they would provide only marginal training-time benefits. Test images were made square (by the addition of black borders) before resizing to avoid any bias related to aspect ratio. The technique was not used for training because the model without segmentation could learn to identify the added borders. Since the DenseNet121 classifier is a very deep model and our dataset is small, we used data augmentation in the training dataset to avoid overfitting. The chosen procedure was: random translation (up to 28 pixels up/down, left/right), random rotations (between -40 and 40 degrees), and flipping (50\% chance). Besides preventing overfitting, the process also makes the DNN more robust to natural occurrences of these transformations. We used online augmentation and substituted the original samples with the augmented ones. For facial attribute estimation, we loaded the photographs in RGB, re-scaled between 0 and 1, and reshaped the images to 224x224, which is a common input size for the DenseNet121 and for natural image classification. We employed the same online image augmentation procedure used in the COVID-19 detection task. \subsection{Training Procedure} To train the ISNet, we first set the heatmap loss E hyperparameter to 10, according to the procedure described in Section \ref{loss}. We then analyzed the hold-out validation accuracy to find the ideal P parameter for the ISNet loss function. During this tuning process, we trained on the artificially biased datasets to ensure that P was high enough to produce adequate segmentation even in extreme cases. We looked for a model resulting in equal accuracy in the original validation set and in its artificially biased version. The P value that best balanced the two loss terms and provided the best results was 0.7 for both tasks, and we found little performance variation with similar values (e.g., 0.5, 0.6 and 0.8). During the hyperparameter tuning procedure, we noticed that using weight decay makes it much more complicated to find P. This is because it seems to strongly favor a zero solution (i.e. the network minimizing its parameters to generate null heatmaps, optimizing the heatmap loss and L2 penalization, but ignoring the classification task). Thus, we did not use L2 regularization. We employed a similar training procedure for the 3 DNNs, to allow a fair comparison. It started from a random parameter initialization. The chosen optimizer was stochastic gradient descent, using momentum of 0.99, and a mini-batch of 10 images. We employed gradient clipping to limit the gradient norm to 1, making the training procedure more stable. The learning rate was set to 10$^{-3}$. For the COVID-19 detection task we utilized cross-entropy as the classification loss, and for facial attribute estimation we used binary cross-entropy, as it is a multi-label problem. The network later evaluated on the test dataset is the one achieving the best validation performance during training. We trained all DNNs for 48 epochs in COVID-19 detection. We used 144 epochs for the ISNet in facial attribute estimation, but in this task we trained the alternative methodologies for 96 epochs, because they were already showing overfitting at that point. The classification thresholds for facial attribute estimation were chosen to maximize the validation macro-averaged F1 score (maF1) in the trained DNNs.
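The optimization setup above maps directly onto a short PyTorch loop. The sketch below, written for the single-label (cross-entropy) case, assumes hypothetical `model` and `train_loader` objects and the `heatmap_loss` function sketched earlier; it is an illustration, not our released training code:
\begin{verbatim}
import torch

P = 0.7
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.99)
for images, labels, masks in train_loader:   # hypothetical data loader
    optimizer.zero_grad()
    logits, heatmaps = model(images)         # classifier output, LRP heatmaps
    loss = (1 - P) * torch.nn.functional.cross_entropy(logits, labels) \
           + P * heatmap_loss(heatmaps, masks)
    loss.backward()
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
    optimizer.step()
\end{verbatim}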
We employed the same dataset, data augmentation and preprocessing steps used in the ISNet training procedure. We used stochastic gradient descent, with mini-batches of 10 samples and a momentum of 0.99. The learning rate was set to 10$^{-4}$ and we utilized the cross-entropy loss function. We trained using hold-out validation, until overfitting could be observed. Accordingly, we employed 600 epochs in the lung segmentation task, and in face segmentation, 400. The model achieved a test IoU of 0.893 in lung segmentation, and 0.925 in face segmentation. In both cases, the resulting segmentation masks seemed adequate upon visual inspection.

\section*{Data Availability}

In support of the findings of this study, the X-ray data from healthy and/or pneumonia positive subjects are available from the corresponding author of\cite{ChineseDataset1} upon reasonable request; in ChestX-ray14, https://paperswithcode.com/dataset/chestx-ray14; and in CheXpert, https://stanfordmlgroup.github.io/competitions/chexpert/. COVID-19 radiography data that support the findings of this study are available in the BrixIA COVID-19 project, https://brixia.github.io/; and in the BIMCV-COVID19+ database, https://bimcv.cipf.es/bimcv-projects/bimcv-covid19/. Images for facial attribute estimation, along with their segmentation masks, are available in the Large-scale CelebFaces Attributes (CelebA) Dataset, https://mmlab.ie.cuhk.edu.hk/projects/CelebA.html.

\section*{Code Availability}

The code containing the ISNet PyTorch implementation is available at https://github.com/PedroRASB/ISNet.
\section{Introduction}\label{S1}

We study the pointwise boundary differentiability for viscosity solutions of \begin{equation}\label{1.1} \left\{\begin{aligned} &u\in S^*(\lambda,\Lambda,f)&& ~~\mbox{in}~~\Omega;\\ &u=g&& ~~\mbox{on}~~\partial \Omega, \end{aligned}\right. \end{equation} where $\Omega\subset {R^n}$ is a bounded domain (i.e. a connected open set) and $S^*\left( {\lambda,\Lambda,f} \right)$ is the Pucci class (see \Cref{SP} for details). In this paper, solutions always mean viscosity solutions.

The boundary differentiability has been studied extensively for linear uniformly elliptic equations. Li and Wang \cite{MR2264018,MR2494685} proved the boundary differentiability on convex domains. Later, Li and Zhang \cite{MR3029135} extended the boundary differentiability to more general domains, called $\gamma$-convex domains (see also \Cref{r-1.2} for the definition). Both ``convexity'' and ``$\gamma$-convexity'' are global geometrical conditions. The pointwise regularity is more attractive than the local and global regularity because it clearly shows how the behavior of solutions is influenced by the coefficients and the prescribed data. Moreover, the assumptions for pointwise regularity are usually weaker. Huang, Li and Wang \cite{MR3167627} gave a pointwise geometrical condition under which they obtained the boundary pointwise differentiability. Huang, Li and Zhang \cite{MR3556828} also considered the pointwise differentiability and constructed a counterexample, demonstrating the necessity of the exterior $C^{1,\mathrm{Dini}}$ condition (see \Cref{de12}). However, all of the above results are obtained for linear equations.

In this paper, we prove the boundary pointwise differentiability for viscosity solutions of $\cref{1.1}$ under more general pointwise geometrical conditions. The following two theorems are our main results, which correspond to the cases of corner points and flat points in previous papers (see \cite{MR3556828}, \cite{MR2264018,MR2494685,MR3029135}).

\begin{theorem}\label{th11} Let $u$ be a viscosity solution of \begin{equation*} \left\{\begin{aligned} &u\in S^*(\lambda,\Lambda,f)&& ~~\mbox{in}~~\Omega\cap B_1;\\ &u=g&& ~~\mbox{on}~~\partial \Omega \cap B_1, \end{aligned}\right. \end{equation*} where $f\in L^{n}(\Omega\cap B_1)$ and $g$ is differentiable at $0\in \partial \Omega$. Suppose that $\Omega$ satisfies the exterior $C^1$ condition at $0$ and that there exists a cone $K$ with $0$ as the vertex such that $K\cap B_1\subset \Omega^c \cap R^n_+$. Then $u$ is differentiable at $0$ and $\nabla u(0)=\nabla g(0)$. \end{theorem}

\begin{remark}\label{re1.6} An open convex set $K\subset R^n$ is called a cone with $0$ as the vertex if $rx\in K $ for any $x\in K$ and $r>0$. \end{remark}

\begin{remark}\label{re1.4} For $u\in S^*(\lambda,\Lambda,f)$ and $f\in L^n$, only the $C^{\alpha}$ ($0<\alpha<1$) regularity can be obtained in the interior (see \cite[Chapter 4]{MR1351007} and \cite{MR882838}). In contrast, \Cref{th11} shows that one can obtain higher regularity on the boundary. \end{remark}

\begin{theorem}\label{th12} Let $u$ be a viscosity solution of \begin{equation*} \left\{\begin{aligned} &u\in S^*(\lambda,\Lambda,f)&& ~~\mbox{in}~~\Omega\cap B_1;\\ &u=g&& ~~\mbox{on}~~\partial \Omega \cap B_1, \end{aligned}\right. \end{equation*} where $f\in C^{-1,\mathrm{Dini}}(0)$ and $g\in C^{1,\mathrm{Dini}}(0)$ with $0\in \partial \Omega$. Suppose that $\Omega$ satisfies the exterior $C^{1,\mathrm{Dini}}$ condition and the interior ${C^1}$ condition at $0$.
Then $u$ is differentiable at $0$. \end{theorem}

\begin{remark}\label{re12} The geometrical conditions in \Cref{th11} and \Cref{th12} are more general than the ones used in previous studies. Indeed, if $\Omega$ is a $\gamma$-convex domain (including convex domains as special cases, see \Cref{r-1.2}) and $0\in \partial \Omega$ is a corner point, $\partial\Omega$ satisfies the conditions in \Cref{th11} (see \cite[Lemma 2.6]{MR3029135}); if $0$ is a flat point, $\partial\Omega$ satisfies the conditions in \Cref{th12} (see \cite[Lemma 2.7]{MR3029135}). Huang, Li and Zhang \cite{MR3556828} proposed a pointwise geometrical condition for the pointwise boundary differentiability, which is a special case of ours. In short, even for linear equations, all previous results on boundary differentiability are covered by this paper. \end{remark}

\begin{remark}\label{re-mod1} As for the gradient regularity at the boundary for fully nonlinear elliptic equations, Silvestre and Sirakov \cite{MR3246039} proved the boundary $C^{1,\alpha}$ regularity. That paper requires $\partial \Omega\in C^2$ since the method of flattening the boundary is used. This result was improved by Lian and Zhang \cite{MR4088470}, based on another technique that does not flatten the boundary. In the case of boundary differentiability, since the geometrical conditions from the interior and the exterior are not the same, the above techniques are not applicable, and we have to carry out the estimates more carefully in the proof. \end{remark}

As a corollary, we have

\begin{corollary}\label{c-1} Let $u$ be a viscosity solution of \begin{equation*} \left\{\begin{aligned} &u\in S^*(\lambda,\Lambda,f)&& ~~\mbox{in}~~\Omega;\\ &u=g&& ~~\mbox{on}~~\partial \Omega, \end{aligned}\right. \end{equation*} where $f\in C^{-1,\mathrm{Dini}}(\bar{\Omega})$ and $g\in C^{1,\mathrm{Dini}}(\bar{\Omega})$. Suppose that $\Omega$ is a $\gamma$-convex domain. Then $u$ is differentiable on the boundary $\partial \Omega$. That is, for any $x_0\in \partial \Omega$, there exist $l\in R^n$ and a modulus of continuity $\omega_{x_0}$ such that \begin{equation*} |u(x)-u(x_0)-l\cdot (x-x_0)|\leq |x-x_0|\omega_{x_0}(|x-x_0|). \end{equation*} \end{corollary}

The main idea of this paper is the following. To understand clearly how the geometry of $\partial \Omega$ influences the boundary differentiability at $0\in\partial \Omega$, we assume that $f,g\equiv 0$. Roughly speaking, if $\Omega$ occupies a smaller portion of a ball $B_r$ (e.g. $|\Omega\cap B_r|/|B_r|$ is smaller), the regularity of $u$ near $0$ is higher. In particular, if $\Omega$ is contained in a half ball (i.e. $\Omega\cap B_r\subset B_r^+$), it is well known that $u$ is Lipschitz continuous at $0$, which can be shown by constructing a proper barrier. The boundary differentiability can be regarded, in some sense, as an enhancement of the boundary Lipschitz regularity. The treatments are different under different geometrical conditions. In \Cref{th11}, since there exists a cone $K\subset \Omega^c$ in the half ball $B_1^{+}$, this leads to a higher regularity (than the Lipschitz regularity) of $u$, namely $\nabla u(0)=0$. If $\partial \Omega$ does not curve toward the exterior too much, $\nabla u(0)=0$ is retained. In \Cref{th12}, the exterior $C^{1,\mathrm{Dini}}$ condition is an appropriate perturbation of the half ball and guarantees the boundary Lipschitz regularity, i.e., the boundedness of $\nabla u(0)$.
In the concrete proof (see \Cref{th33}), the graph of $u$ is controlled by two hyperplanes. In addition, the interior $C^{1}$ condition results in the convergence of the two hyperplanes to one hyperplane, which implies the boundary differentiability of $u$ at $0$.

At the end of this section, we make some comments on the boundary differentiability, which is a critical case in the theory of boundary regularity. It is well known that the boundary regularity depends heavily on geometrical properties of the domain. For the continuity of solutions up to the boundary, a geometrical condition on the boundary from the exterior is enough. Famous examples are the Wiener criterion for the boundary continuity \cite{Wiener-1924}, the exterior cone condition for the boundary H\"{o}lder continuity (see \cite[Problem 2.12, Theorem 8.27, Corollary 9.28]{MR1814364}) and the exterior sphere condition for the boundary Lipschitz continuity (see \cite[Chapter 2.8]{MR1814364} and \cite[Lemma 1.2]{Safonov2008}), etc. However, for the boundary differentiability, a condition from the exterior alone is not enough (see the counterexample in \cite{MR3556828}), which is different from the boundary continuity. In addition, for higher order regularity, we need the same geometrical conditions on the boundary from both the interior and the exterior sides. For instance, the $C^{1,\alpha}$ smoothness of the boundary implies the boundary $C^{1,\alpha}$ regularity (see \cite[Theorem 8.33]{MR1814364}, \cite{MR4088470} and \cite{MR2853528}) and the $C^{2,\alpha}$ smoothness of the boundary implies the boundary $C^{2,\alpha}$ regularity (see \cite[Theorem 6.6]{MR1814364} and \cite{MR4088470}). In contrast, for the boundary differentiability, we do not need the same geometrical conditions from both sides.

In the next section, we gather some notations and preliminary results. In \Cref{Sth11}, we give the proof of \Cref{th11}, and \Cref{th12} will be proved in \Cref{Sth12}.

\section{Preliminaries}\label{SP}

\subsection{Notations and notions}\label{SP-1}

In this paper, we use standard notations. Let $\{e_i\}^{n}_{i=1}$ denote the standard basis of $R^n$ and $R^n_+=\{x\in R^n\big|x_n>0\}$. We may write $x=(x_1,...,x_n)=(x',x_n)$ for $x\in R^{n}$. Then $B_r(x_0)$ (or $B(x_0,r)$) denotes the open ball with radius $r$ and center $x_0$, and $B_r^+(x_0)=B_r(x_0)\cap R^n_+$. For simplicity, we denote $B_r=B_r(0)$ and $B_r^+=B^+_r(0)$. We also use the ball in $R^{n-1}$, i.e., $T_r(x_0)=B_r(x_0)\cap \left\{x_n=0\right\}$. Similarly, we may write $T_r=T_r(0)$ for short. In addition, we introduce two new notations for convenience: $\Omega_r=\Omega\cap B_r$ and $(\partial\Omega)_r=\partial\Omega\cap B_r$.

Next, we collect some notions with respect to fully nonlinear elliptic equations and viscosity solutions (see \cite{MR1351007,MR1376656,MR1118699} for details). We call $F:S^n\rightarrow R$ a fully nonlinear uniformly elliptic operator with ellipticity constants $0<\lambda\leq \Lambda$ if \begin{equation*} \lambda\|N\|\leq F(M+N)-F(M)\leq \Lambda\|N\|,~~~~\forall~M,N\in S^n,~N\geq 0, \end{equation*} where $S^n$ denotes the set of $n\times n$ symmetric matrices, $\|N\|$ is the spectral radius of $N$ and $N\geq 0$ means that $N$ is nonnegative definite. Throughout this article, a constant is called universal if it depends only on the dimension $n$ and the ellipticity constants $\lambda$ and $\Lambda$. Unless stated otherwise, the letters $C$ and $c$ always denote positive universal constants and $0<c<1$.
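As a simple illustration of the ellipticity condition above (a standard example, not specific to this paper), the Laplacian corresponds to $F(M)=\mathrm{tr}(M)$ and is uniformly elliptic with $\lambda=1$ and $\Lambda=n$: for any $M,N\in S^n$ with $N\geq 0$,
\begin{equation*}
F(M+N)-F(M)=\mathrm{tr}(N)\quad\mbox{and}\quad \|N\|\leq \mathrm{tr}(N)\leq n\|N\|,
\end{equation*}
since $\|N\|$ is the largest eigenvalue of $N$ and all eigenvalues of $N$ are nonnegative. The Pucci extremal operators introduced below are further typical examples.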
\begin{definition}\label{d-viscoF} Let $u\in C(\Omega)$ and $f\in L^{n}(\Omega)$. We say that $u$ is an $L^n$-viscosity subsolution (resp., supersolution) of \begin{equation}\label{FNE} F(D^2u)=f ~~~~\mbox{in}~\Omega, \end{equation} if \begin{equation*} \begin{aligned} \mathrm{ess}~\underset{y\to x}{\lim\sup}\left(F(D^2\varphi(y))-f(y)\right)\geq 0\\ \left(\mathrm{resp.},~\mathrm{ess}~\underset{y\to x}{\lim\inf}\left(F(D^2\varphi(y))-f(y)\right)\leq 0\right) \end{aligned} \end{equation*} whenever $\varphi\in W^{2,n}(\Omega)$ and $u-\varphi$ attains a local maximum (resp., minimum) at $x\in\Omega$. We call $u$ an $L^n$-viscosity solution of \cref{FNE} if it is both an $L^n$-viscosity subsolution and supersolution of \cref{FNE}. \end{definition}

\begin{remark}\label{r-14} In fact, one can define $L^p$-viscosity solutions for any $p>n/2$ \cite{MR1376656}. In this paper, we only deal with $L^n$-viscosity solutions, and a viscosity solution always means an $L^n$-viscosity solution. If $\varphi\in W^{2,n}(\Omega)$ is replaced by $\varphi\in C^2(\Omega)$, we arrive at the definition of a $C$-viscosity solution, which is adopted in \cite{MR1351007}. If all functions are continuous in their variables, these two notions of viscosity solutions are equivalent (see \cite[Proposition 2.9]{MR1376656}). \end{remark}

The following notions of the Pucci extremal operators and the Pucci class are our main concerns for the boundary differentiability.

\begin{definition}\label{d-Sf} For $M\in S^n$, denote its eigenvalues by $\lambda_i$ ($1\leq i\leq n$) and define the Pucci extremal operators: \begin{equation*} \begin{aligned} &\mathcal{M}^+(M,\lambda,\Lambda)=\Lambda\left(\sum_{\lambda_i>0}\lambda_i\right) +\lambda\left(\sum_{\lambda_i<0}\lambda_i\right), \\ &\mathcal{M}^-(M,\lambda,\Lambda)=\lambda\left(\sum_{\lambda_i>0}\lambda_i\right) +\Lambda\left(\sum_{\lambda_i<0}\lambda_i\right), \end{aligned} \end{equation*} which are two typical fully nonlinear uniformly elliptic operators. Then we can define the Pucci class as follows. We say that $u\in \underline{S}(\lambda,\Lambda,f)$ if $u$ is an $L^n$-viscosity subsolution of \begin{equation*} \mathcal{M}^+(D^2u,\lambda,\Lambda)= f. \end{equation*} Similarly, we say that $u\in \bar{S}(\lambda,\Lambda,f)$ if $u$ is an $L^n$-viscosity supersolution of \begin{equation*} \mathcal{M}^-(D^2u,\lambda,\Lambda)= f. \end{equation*} We also define \begin{equation}\label{SC2Sf} \begin{aligned} &S(\lambda,\Lambda,f)=\underline{S}(\lambda,\Lambda,f)\cap \bar{S}(\lambda,\Lambda,f),\\ &S^*(\lambda,\Lambda,f)=\underline{S}(\lambda,\Lambda,-|f|)\cap \bar{S}(\lambda,\Lambda,|f|). \end{aligned} \end{equation} Clearly, $S(\lambda,\Lambda,f)\subset S^*(\lambda,\Lambda,f)$ and $S(\lambda,\Lambda,0)=S^*(\lambda,\Lambda,0)$. One feature of the Pucci class is that any viscosity solution $u$ of \cref{FNE} belongs to $S(\lambda,\Lambda,f)$. \end{definition}

Now we present some geometrical conditions on the domain, under which we will prove the boundary differentiability. Recall that $\omega:[0,+\infty)\rightarrow [0,+\infty)$ is called a modulus of continuity if $\omega$ is a nonnegative nondecreasing function satisfying $\omega(r)\rightarrow 0$ as $r\rightarrow 0$. Moreover, a modulus of continuity $\omega$ is called a Dini function if it satisfies the following Dini condition for some $r_0>0$: \begin{equation}\label{deeq11} \int_{0}^{r_0}\frac{\omega(r)}{r}dr<\infty.
\end{equation}

\begin{definition}[\textbf{exterior $C^{1}$ ($C^{1,\mathrm{Dini}}$) condition}]\label{de12} We say that $\Omega$ satisfies the exterior $C^{1}$ condition at $x_0\in \partial \Omega$ if there exist $r_0>0$ and a coordinate system $\{x_1,...,x_n \}$ (isometric to the original coordinate system) such that $x_0=0$ in this coordinate system and \begin{equation}\label{deeq12} B_{r_0} \cap \{(x',x_n)\big|x_n <-|x'|\omega(|x'|)\} \subset B_{r_0}\cap \Omega^c, \end{equation} where $\omega$ is a modulus of continuity. In addition, if $\omega$ is a Dini function, we say that $\Omega$ satisfies the exterior $C^{1,\mathrm{Dini}}$ condition at $x_0$. \end{definition}

\begin{remark}\label{r-1.2} If $\Omega$ satisfies the exterior $C^{1, \mathrm{Dini}}$ condition at every $x_0\in \partial \Omega$ with the same $r_0$ and $\omega$, $\Omega$ is called a $\gamma$-convex domain (see \cite[Definition 1.1]{MR3029135}). If $\omega\equiv 0$, $\Omega$ is just a convex domain. \end{remark}

\begin{definition}[\textbf{interior ${C^1}$ condition}]\label{de14} We say that $\Omega$ satisfies the interior ${C^1}$ condition at ${x_0}\in \partial\Omega$ if there exist ${r_0}>0$ and a coordinate system $\left\{{{x_1},\ldots ,{x_n}}\right\}$ (isometric to the original coordinate system) such that ${x_0}=0$ in this coordinate system and \begin{equation}\label{deeq14} B_{r_0} \cap \{(x',x_n)\big|x_n >|x'|\omega(|x'|)\} \subset B_{r_0}\cap \Omega, \end{equation} where $\omega$ is a modulus of continuity. \end{definition}

We also need the following definition for the pointwise behavior of a function.

\begin{definition}\label{de15} Let $\Omega\subset {R^n}$ be a bounded set (not necessarily a domain) and $f$ be a function defined on $\Omega$. We say that $f$ is differentiable at $x_0\in \Omega$ if there exist $r_0>0$, $l\in R^n$ and a modulus of continuity $\omega$ such that \begin{equation*} |f(x)-f(x_0)-l\cdot (x-x_0)|\leq |x-x_0|\omega(|x-x_0|),~~\forall~x\in \Omega\cap B_{r_0}(x_0). \end{equation*} Then we denote $l$ by $\nabla f(x_0)$ and define \begin{equation*} \|f\|_{C^{1}(x_0)}=|f(x_0)|+|\nabla f(x_0)|. \end{equation*} If $\omega$ is a Dini function, we say that $f$ is $C^{1,\mathrm{Dini}}$ at $x_0$ or $f\in C^{1,\mathrm{Dini}}(x_0)$. Furthermore, $f$ is called $C^{-1,\mathrm{Dini}}$ at $x_0$ or $f\in C^{-1,\mathrm{Dini}}(x_0)$ if there exist $r_0>0$ and a Dini function $\omega$ such that \begin{equation*} \|f\|_{L^{n}(\Omega\cap B_{r}(x_0))}\leq \omega(r),~\forall ~0<r<r_0. \end{equation*} Finally, if $f\in C^{1,\mathrm{Dini}}(x_0)$ (or $f\in C^{-1,\mathrm{Dini}}(x_0)$) for any $x_0\in \Omega$ with the same $r_0$ and $\omega$, we say that $f\in C^{1,\mathrm{Dini}}(\Omega)$ (or $f\in C^{-1,\mathrm{Dini}}(\Omega)$). \end{definition}

\begin{remark}\label{re1.3} In this paper, we only use the above definition when $\Omega$ is the closure of a bounded domain or $\Omega$ is part of the boundary of a domain. \end{remark}

\begin{remark}\label{re1.5} Without loss of generality, we always assume that $r_0=1$ in \Crefrange{de12}{de15} throughout this paper. \end{remark}

\begin{remark}\label{re1.2} For a domain $\Omega$, we will use $\sigma_{\Omega}$ (resp., $\omega_{\Omega}$) to denote the corresponding modulus of continuity (resp., Dini function) in \Cref{de12} and \Cref{de14}. Similarly, for a function $f$, we will use $\sigma_{f}$ (resp., $\omega_{f}$) to denote the corresponding modulus of continuity (resp., Dini function) in \Cref{de15}.
\end{remark}

\subsection{Basic lemmas}\label{SP-2}

In this subsection, we introduce some basic lemmas which will be used in the proofs of our main results. The crucial one is the boundary pointwise $C^{1,\alpha}$ regularity at flat boundaries. The boundary $C^{1,\alpha}$ regularity was first proved by Krylov \cite{MR688919} for classical solutions, and the proof was further simplified by Caffarelli (see \cite[Theorem 9.31]{MR1814364} and \cite[Theorem 4.28]{MR787227}). For completeness, we give the detailed proof of the pointwise $C^{1,\alpha}$ regularity at a flat boundary by showing that the solution is controlled by two hyperplanes and is closer to one of them, with the aid of the Harnack inequality.

We need the following simple result to prove the boundary pointwise $C^{1,\alpha}$ regularity.

\begin{lemma}\label{l.bas} Let $u(e_n/2)\geq 1$ and $u\geq 0$ satisfy \begin{equation*} \left\{\begin{aligned} &u\in S(\lambda,\Lambda,0)&& ~~\mbox{in}~~B_1^+;\\ &u=0&& ~~\mbox{on}~~T_1. \end{aligned}\right. \end{equation*} Then \begin{equation*}\label{e.bas} u(x)\geq cx_n ~~\mbox{in}~~B^+_{1/2}, \end{equation*} where $0<c<1$ is a universal constant. \end{lemma}

\proof By the Harnack inequality \cite[Theorem 4.3]{MR1351007}, $u\geq c_0$ on $\partial B(e_n/4,1/8)$ for some positive universal constant $c_0$. Let \begin{equation*} v(x)=c_1\left(\left|x-e_n/4\right|^{-\alpha}-\left(1/4\right )^{-\alpha}\right), \end{equation*} where $c_1$ is small and $\alpha$ is large such that \begin{equation*} \left\{\begin{aligned} &\mathcal{M}^-(D^2v)\geq 0&&~~~~\mbox{in}~~~~B(e_n/4,1/4)\backslash \bar{B}(e_n/4,1/8);\\ &v=0&&~~~~\mbox{on}~~~~\partial B(e_n/4,1/4);\\ &v\leq c_0&&~~~~\mbox{on}~~~~\partial B(e_n/4,1/8). \end{aligned}\right. \end{equation*} (Indeed, a direct computation gives $\mathcal{M}^-(D^2v)=c_1\alpha|x-e_n/4|^{-\alpha-2}\left(\lambda(\alpha+1)-\Lambda(n-1)\right)$ in the annulus, so the first condition holds once $\alpha\geq \Lambda(n-1)/\lambda-1$.) By the comparison principle and a direct calculation, we have \begin{equation*}\label{e.bas-res1} u\geq v\geq cx_n~~\mbox{on}~~\{x'=0,0\leq x_n\leq 1/8\}. \end{equation*} By translating $v$ to proper positions (with $v(x', 0)=0$ for some $x'\in T_{1/2}$) and similar arguments, we obtain \begin{equation*}\label{e.bas-res2} u\geq v\geq cx_n~~\mbox{in}~~\left\{|x'|<1/2,0<x_n<1/8\right\}. \end{equation*} Hence, by the Harnack inequality again, \begin{equation*}\label{e.bas-res} u\geq cx_n~~\mbox{in}~~B^+_{1/2}. \end{equation*} \qed~\\

Now, we can prove the boundary pointwise $C^{1,\alpha}$ regularity at a flat boundary based on the above lemma.

\begin{lemma}\label{le21} Let $u$ be a viscosity solution of \begin{equation*} \left\{\begin{aligned} &u\in S(\lambda,\Lambda,0)&& ~~\mbox{in}~~B_1^+;\\ &u=0&& ~~\mbox{on}~~T_1. \end{aligned}\right. \end{equation*} Then $u$ is $C^{1,{\alpha_1}}$ at $0$, i.e., there exists a constant $a$ such that \begin{equation}\label{e.reg-res} |u(x)-ax_n|\leq C |x|^{1+{\alpha_1}}\|u\|_{L^{\infty }(B_1^+)}, ~~\forall ~x\in B_{1}^+ \end{equation} and \begin{equation*} |a|\leq C\|u\|_{L^{\infty }(B_1^+)}, \end{equation*} where $0<\alpha_1<1$ and $C$ are universal constants. \end{lemma}

\begin{remark}\label{re2.1} From now on, $\alpha_1$ is fixed. \end{remark}

\proof Without loss of generality, we assume that $\|u\|_{L^{\infty}(B_1^+)}=1$. To show \cref{e.reg-res}, we only need to prove the following: there exist a nonincreasing sequence $\{a_k\}$ ($k\geq 0$) and a nondecreasing sequence $\{b_k\}$ ($k\geq 0$) such that for all $k\geq 1$, \begin{equation}\label{e.reg-M} \begin{aligned} &b_kx_n\leq u\leq a_kx_n~~~~\mbox{in}~~~~B^+_{2^{-k}},\\ &0\leq a_k-b_k\leq (1-\mu)(a_{k-1}-b_{k-1}), \end{aligned} \end{equation} where $0<\mu<1/2$ is universal.
We prove the above by induction. Let $v(x)=2(1-|x+e_n|^{-n\Lambda/\lambda})$. Then $v$ satisfies \begin{equation*} \left\{\begin{aligned} &\mathcal{M}^+(D^2v)\leq 0&&~~~~\mbox{in}~~~~B^+_1;\\ &v\geq 0 &&~~~~\mbox{in}~~~~B^+_1;\\ &v\geq 1 &&~~~~\mbox{on}~~~~\partial B_1\backslash T_1;\\ &v(0)=0. \end{aligned}\right. \end{equation*} By the comparison principle and a direct calculation, \begin{equation*} -Cx_n\leq -v\leq u\leq v\leq Cx_n ~~\mbox{on}~~\{x'=0,0\leq x_n\leq 1/2\}. \end{equation*} By translating $v$ to proper positions (with $v(x', 0)=0$ for some $x'\in T_{1/2}$) and similar arguments, \begin{equation*} -Cx_n\leq u\leq Cx_n ~~\mbox{in}~~ B^+_{1/2}. \end{equation*} Thus, by taking $a_1=C, b_1=-C$ and $a_0=2C, b_0=-2C$, \cref{e.reg-M} holds for $k=1$ where $0<\mu<1/2$ is to be specified later. Assume that \cref{e.reg-M} holds for $k$ and we need to prove it for $k+1$. Since \cref{e.reg-M} holds for $k$, there are two possible cases: \begin{equation*} \begin{aligned} &\mbox{\textbf{Case 1}:}~~&&~~u(2^{-k-1}e_n)\geq \frac{a_k+b_k}{2}\cdot \frac{1}{2^{k+1}},\\ &\mbox{\textbf{Case 2}:}~~&&~~u(2^{-k-1}e_n)< \frac{a_k+b_k}{2}\cdot \frac{1}{2^{k+1}}. \end{aligned} \end{equation*} Without loss of generality, we suppose that \textbf{Case 1} holds. Let $w=u-b_kx_n$ and then \begin{equation*} \left\{\begin{aligned} &w\in S(\lambda,\Lambda,0)&& ~~\mbox{in}~~B_{2^{-k}}^+;\\ &w\geq 0&& ~~\mbox{in}~~B_{2^{-k}}^+;\\ &w(2^{-k-1}e_n)\geq \frac{a_k-b_k}{2}\cdot \frac{1}{2^{k+1}}. \end{aligned}\right. \end{equation*} From \Cref{l.bas} (the scaling version), for some universal constant $0<\mu<1$, \begin{equation*}\label{e.reg-est} w(x)\geq \mu(a_k-b_k)x_n~~\mbox{in}~~B_{2^{-k-1}}^+. \end{equation*} Thus, \begin{equation}\label{e.reg-est-u} u(x)\geq (b_k+\mu(a_k-b_k))x_n~~\mbox{in}~~B_{2^{-k-1}}^+. \end{equation} Let $a_{k+1}=a_k$ and $b_{k+1}=b_k+\mu(a_k-b_k)$. Then \begin{equation}\label{e.reg-ak} a_{k+1}-b_{k+1}=(1-\mu)(a_k-b_k). \end{equation} Hence, \cref{e.reg-M} holds for $k+1$. By induction, the proof is completed. \qed~\\ The next is a Hopf type lemma. \begin{lemma}\label{le-1} Let $u(e_n/2)\geq 1$ and $u\geq 0$ satisfy \begin{equation*} u\in S^{\ast}(\lambda,\Lambda,f)~~\mbox{in}~~B_1^+. \end{equation*} Then there exists a universal constant $0<\delta_0<1$ such that if $\|f\|_{L^n(B_1^+)}\leq \delta_0$, \begin{equation}\label{e-H-l} u(x)\geq cx_n-C\|f\|_{L^n(B_1^+)} ~~\mbox{in}~~B_{1/2}^+, \end{equation} where $c>0$ and $C$ are universal constants. \end{lemma} \proof By the interior Harnack inequality, there exists a universal constant $0<\delta_0<1$ such that \begin{equation*} \inf_{B(e_n/4,1/8)} u\geq 2\delta_0u(e_n/2)-\|f\|_{L^n(B_1^+)}\geq \delta_0. \end{equation*} Take \begin{equation*} v(x)=c_1\left(\left|x-e_n/4\right|^{-\alpha}-\left(1/4\right )^{-\alpha}\right). \end{equation*} As in the proof of \Cref{l.bas}, by choosing $c_1$ small and $\alpha$ large enough, \begin{equation*} \left\{\begin{aligned} &\mathcal{M}^-(D^2v)\geq 0&&~~~~\mbox{in}~~~~B(e_n/4,1/4)\backslash \bar{B}(e_n/4,1/8);\\ &v=0&&~~~~\mbox{on}~~~~\partial B(e_n/4,1/4);\\ &v\leq \delta_0&&~~~~\mbox{on}~~~~\partial B(e_n/4,1/8). \end{aligned}\right. \end{equation*} Hence, \begin{equation*}\label{e.bas-res1} u\geq v\geq cx_n~~\mbox{on}~~\{x'=0,0\leq x_n\leq 1/8\}. \end{equation*} Let $w=u-v$ and then \begin{equation*} \left\{\begin{aligned} \mathcal{M}^-(D^2w)&\leq f~~\mbox{in}~B(e_n/4,1/4)\backslash B(e_n/4,1/8);\\ w&\geq 0~~\mbox{on}~\partial\big(B(e_n/4,1/4)\backslash B(e_n/4,1/8)\big). \end{aligned}\right. 
\end{equation*} By the Alexandrov-Bakel'man-Pucci maximum principle \cite[Theorem 3.2]{MR1351007}, \begin{equation*} w\geq -C\|f\|_{L^n(B_1^+)}~~~~\mbox{in}~B(e_n/4,1/4)\backslash B(e_n/4,1/8). \end{equation*} Therefore, \begin{equation*} u=v+w\geq cx_n-C\|f\|_{L^n(B_1^+)}~~~~\mbox{on}~\left\{x: |x'|=0, 0\leq x_n\leq 1/8\right\}. \end{equation*} By translating $v$ to proper positions (with $v(x', 0)=0$ for some $x'\in T_{1/2}$) and similar arguments, we have \begin{equation*} u\geq cx_n-C\|f\|_{L^n(B_1^+)}~~~~\mbox{in}~\left\{x: |x'|<1/2, 0\leq x_n\leq 1/8\right\}. \end{equation*} Applying the Harnack inequality again, we derive \begin{equation*} u\geq cx_n-C\|f\|_{L^n(B_1^+)}~~~~\mbox{in}~B_{1/2}^+. \end{equation*} \qed~\\

The following lemma is a simple consequence of the strong maximum principle and \Cref{l.bas}.

\begin{lemma}\label{le22} Let $\Gamma \subset \partial {B_1^+} \backslash T_1$ and $u$ satisfy \begin{equation*} \left\{\begin{aligned} &u\in S(\lambda,\Lambda,0) && ~~\mbox{in}~~B_1^+;\\ &u=x_n && ~~\mbox{on}~~\Gamma;\\ &u=0 && ~~\mbox{on}~~\partial {B_1^+}\backslash\Gamma. \end{aligned}\right. \end{equation*} Then \begin{equation}\label{e.2.1} u\ge cx_n~~\mbox{in}~~B^+_{1/2}, \end{equation} where $0<c<1$ depends only on $n, \lambda$, $\Lambda$ and $\Gamma$. \end{lemma}

\proof By the strong maximum principle and noting $u\geq 0$, we have $u(e_n/2)\geq c_0>0$, where $c_0$ depends only on $n,\lambda,\Lambda$ and $\Gamma$. Then from \Cref{l.bas}, there exists $c>0$ such that \cref{e.2.1} holds.~\qed~\\

\section{Proof of \Cref{th11}}\label{Sth11}

The main result proved in this section is \Cref{th11}. First, we prove a special case of \Cref{th11}, which clearly shows the key idea for the boundary differentiability.

\begin{theorem}\label{th21} Let the assumptions of \Cref{th11} be satisfied. Suppose further that there exists a modulus of continuity $\sigma$ such that \begin{equation}\label{e.ass-2} \begin{aligned} &\|u\|_{L^{\infty}(\Omega_1)}\leq c_0,~\|f\|_{L^n(\Omega_r)}\leq c_0\sigma(r), ~\forall ~0<r<1,\\ &-|x'|\sigma(|x'|)\leq x_n,~\forall~x\in (\partial \Omega)_1,\\ & |g(x)| \leq c_0|x|\sigma(|x|),~\forall~x\in (\partial \Omega)_1,\\ &\sigma(1)\leq c_0~~\mbox{and}~~ r^{\beta}\int_{r}^{1}\frac{\sigma(t)}{t^{1+\beta}}dt\leq c_0, ~\forall ~0<r<1, \end{aligned} \end{equation} where $0<c_0<1/4$ and $0<\beta<1$ are constants (depending only on $n,\lambda,\Lambda$ and $K$) to be specified later. Then there exist a universal constant $\bar{C}$ and a nonnegative sequence $\{a_k\}$ $(k\ge -1)$ such that for all $k\geq 0$, \begin{equation}\label{eq22} \sup_{\Omega _{\eta^{k}}}(u-a_kx_n)\leq \eta ^{k}A_k, \end{equation} \begin{equation}\label{e.1.7} \inf_{\Omega _{\eta^{k}}}(u+a_kx_n)\geq -\eta ^{k}A_k \end{equation} and \begin{equation}\label{eq23} {a_k} \le (1-\mu){a_{k-1}}+\bar C A_{k-1}, \end{equation} where $0<\eta,\mu<1/4$ depend only on $n,\lambda,\Lambda$ and $K$, \begin{equation}\label{eq24} A_{-1}=0,~A_0=c_0 ~~\mbox{and}~~A_k=\max(\sigma (\eta^{k-1}),~\eta^{\alpha_1/2}A_{k-1})~(k\ge 1). \end{equation} \end{theorem}

\begin{remark}\label{r3.1} Note that $0<\alpha_1<1$ is the universal constant originating from \Cref{le21}. \end{remark}

\proof We only give the proof of \Cref{eq22} and \Cref{eq23}. The proof of \cref{e.1.7} is similar and we omit it. We prove \Cref{eq22} and \Cref{eq23} by induction. For $k=0$, by setting $a_0=a_{-1}=0$, they hold clearly. Suppose that they hold for $k$. We need to prove that they hold for $k+1$.
Set $r=\eta ^{k}/2$, $\tilde{B}^{+}_{r}=B^{+}_{r}-r\sigma(r)e_n $, $\tilde{T}_r=T_r-r\sigma(r) e_n$ and $\tilde{\Omega }_{r}=\Omega \cap \tilde{B}^{+}_{r}$. Note that $\sigma(\eta) \leq \sigma(1)\leq c_0\leq 1/4$ and hence $\Omega _{r/2}\subset \tilde{\Omega }_{r}\subset \Omega_{2r}$.

Let $v_1$ solve \begin{equation*} \left\{\begin{aligned} &\mathcal{M}^+({D^2}{v_1})=0 &&\mbox{in}~~\tilde{B}^{+}_{r}; \\ &v_1=0 &&\mbox{on}~~\tilde{T}_{r};\\ &v_1=\eta^kA_k &&\mbox{on}~~\partial \tilde{B}^{+}_{r}\backslash \tilde{T}_{r}. \end{aligned} \right. \end{equation*} By the boundary $C^{1,\alpha}$ estimate for $v_1$ (see \Cref{le21}) and the maximum principle, there exists $\bar a\ge 0$ such that (note that $\eta^{\alpha_1/2}A_k\leq A_{k+1}$) \begin{equation*} \begin{aligned} \|v_1-\bar{a}(x_n+r\sigma(r))\|_{L^{\infty }(\Omega_{2\eta r})}&\leq C\frac{(3\eta r)^{1+\alpha_1}}{r^{1 + \alpha_1 }} \|v_1\|_{L^\infty(\tilde{B}^{+}_{r})}\leq C\eta^{1+\alpha_1} \eta^kA_k \leq C\eta^{\alpha_1/2}\eta^{k+1}A_{k+1} \end{aligned} \end{equation*} and \begin{equation*} \bar a \le \bar{C}A_{k} \end{equation*} provided \begin{equation}\label{e.1.4} c_0\leq \eta, \end{equation} where $\bar{C}$ is universal. Hence, \begin{equation}\label{eq28} \begin{aligned} \|v_1-\bar{a}x_n\|_{L^\infty(\Omega_{\eta^{k+1}})}=&\|v_1-\bar{a}x_n\|_{L^\infty(\Omega_{2\eta r})}\\ \leq & C\eta^{\alpha_1/2}\eta^{k+1}A_{k+1}+|\bar{a}r\sigma(r)|\\ \leq &\left(C\eta^{\alpha_1/2}+\frac{\bar{C}\sigma(\eta^k)}{\eta^{1+\alpha_1/2}}\right) \cdot\eta^{k+1}A_{k+1}\\ \leq &\left(C\eta^{\alpha_1/2}+\frac{c_0C}{\eta^{1+\alpha_1/2}}\right) \cdot\eta^{k+1}A_{k+1}. \end{aligned} \end{equation}

Let $v_2$ solve \begin{equation*} \left\{\begin{aligned} & \mathcal{M}^-(D^2{v_2})=0 &&\mbox{in}~~{{\tilde B_r^ +}}; \\ & v_2=a_kx_n &&\mbox{on}~~{\partial {\tilde B_r^ +}\cap K};\\ & v_2=0 &&\mbox{on}~~{\partial {\tilde B_r^ +}\backslash K}. \end{aligned} \right. \end{equation*} By \Cref{le22}, there exists $0<\mu<1/4$ (depending only on $n,\lambda$, $\Lambda$ and $K$) such that \begin{equation}\label{eq26} v_2\geq \mu a_k(x_n+r\sigma(r))\geq \mu a_kx_n~~~~\mbox{in}~ \Omega\cap B_{\eta^{k+1}}. \end{equation} In addition, it is easy to verify that \begin{equation*} v_2\leq a_k(x_n+r\sigma(r))~~~~\mbox{on}~\partial \tilde B_r^ +. \end{equation*} Then by the comparison principle, \begin{equation*} v_2\le a_k(x_n+r\sigma(r))~~~~\mbox{in}~ \tilde B_r^ +. \end{equation*}

Let $w=u-a_kx_n-v_1+v_2$. It follows that (note that $v_1,v_2\geq 0$) \begin{equation*} \left\{ \begin{aligned} &w\in \underline{S}(\lambda,\Lambda,-|f|) &&\mbox{in}~~ \Omega \cap \tilde{B}^{+}_{r}; \\ &w\leq (c_0+a_k)r\sigma(r) &&\mbox{on}~~\partial \Omega \cap \tilde{B}^{+}_{r};\\ &w\leq 0 &&\mbox{on}~~\partial \tilde{B}^{+}_{r}\cap \bar{\Omega}. \end{aligned} \right. \end{equation*} By the Alexandrov-Bakel'man-Pucci maximum principle, we have \begin{equation*} \begin{aligned} \sup_{\Omega_{\eta^{k+1}}}w\leq \sup_{\tilde{\Omega}_{r}}w & \leq (c_0+a_k)r \sigma(r)+Cr\|f\|_{L^n(\tilde{\Omega}_{r})}\leq (2c_0C+a_k)\eta^k \sigma(\eta^k). \end{aligned} \end{equation*}

Now we estimate $a_k$. From the definition of $a_k$ (see \cref{eq23}), \begin{equation}\label{e.1.6} \begin{aligned} a_k&\leq \bar{C}\sum_{i=0}^{k-1}(1-\mu)^{k-1-i}A_i\leq \bar{C}(1-\mu)^{k-1}\sum_{i=0}^{k-1}(1-\mu)^{-i}A_i.
\end{aligned} \end{equation} From the definition of $A_i$ (see \cref{eq24}), for any $i\geq 1$, \begin{equation*} A_i\leq \sigma(\eta^{i-1})+\eta^{\alpha_1/2}A_{i-1} \end{equation*} and then \begin{equation*} \begin{aligned} \sum_{i=1}^{k-1}(1-\mu)^{-i}A_i\leq \sum_{i=1}^{k-1}(1-\mu)^{-i}\sigma(\eta^{i-1}) +\sum_{i=0}^{k-1}\eta^{\alpha_1/2}(1-\mu)^{-i-1}A_i. \end{aligned} \end{equation*} Hence, if \begin{equation}\label{e.1.5} \eta^{\alpha_1/2}\leq (1-\mu)/2, \end{equation} we have \begin{equation*} \begin{aligned} \sum_{i=1}^{k-1}(1-\mu)^{-i}A_i &\leq 2\sum_{i=1}^{k-1}(1-\mu)^{-i}\sigma(\eta^{i-1})+c_0\\ &= c_0+\frac{2\sigma(1)}{1-\mu}+\frac{2}{(1-\mu)^2(1-\eta)} \sum_{i=1}^{k-2}(1-\mu)^{-i+1}\frac{\sigma(\eta^{i})}{\eta^{i-1}}(\eta^{i-1}-\eta^i)\\ &\leq 4c_0+\frac{2}{(1-\mu)^2(1-\eta)}\int_{\eta^k}^{1}\frac{\sigma(t)}{t^{1+\beta}}dt, \end{aligned} \end{equation*} where $0<\beta<1$ satisfies $1-\mu=\eta^{\beta}$. By combining with \cref{e.1.6}, we have (note that $0<\eta,\mu<1/4$) \begin{equation}\label{e.1.8} \begin{aligned} a_k&\leq \frac{\bar{C}}{1-\mu}\eta^{k\beta} \left(4c_0+\frac{2}{(1-\mu)^2(1-\eta)}\int_{\eta^k}^{1}\frac{\sigma(t)}{t^{1+\beta}}dt\right)\\ &=\frac{4c_0\bar{C}}{1-\mu}\eta^{k\beta}+\frac{2\bar{C}\eta^{k\beta}}{(1-\mu)^3(1-\eta)} \int_{\eta^k}^{1}\frac{\sigma(t)}{t^{1+\beta}}dt\\ &\leq \frac{4c_0\bar{C}}{1-\mu} +\frac{2c_0\bar{C}}{(1-\mu)^3(1-\eta)} \leq c_0C. \end{aligned} \end{equation} Therefore, for $w$, \begin{equation}\label{eq29} \sup_{\Omega_{\eta^{k+1}}}w \leq \frac{c_0C}{\eta}\cdot\eta^{k+1}A_{k+1}. \end{equation}

Take $\eta$ small enough such that \cref{e.1.5} holds and \begin{equation*} C\eta^{\alpha_1/2}\le \frac{1}{3}. \end{equation*} Next, choose $c_0$ small enough such that \cref{e.1.4} holds and \begin{equation*} \frac{c_0C}{\eta^{1+\alpha_1/2}}\leq \frac{1}{3}. \end{equation*} Let $a_{k+1}=(1-\mu)a_k+\bar a$. Then \Cref{eq23} holds for $k+1$. Recalling \Cref{eq28}, \Cref{eq26} and \Cref{eq29}, we have \begin{equation*} \begin{aligned} u-a_{k+1}x_n&=u-a_kx_n-v_1+v_2+v_1-\bar ax_n+\mu a_kx_n-v_2\\ &=w+v_1-\bar ax_n+\mu a_kx_n-v_2\\ &\le w+v_1-\bar ax_n\\ &\le \eta^{k+1}A_{k+1}~~\mbox{in} ~~\Omega_{\eta^{k+1}}. \end{aligned} \end{equation*} By induction, the proof is completed.\qed~\\

Now, we give the~\\ \noindent\textbf{Proof of \Cref{th11}.} In the following, we only need to show two things: one is that, by a proper transformation, the assumptions \cref{e.ass-2} can be guaranteed; the other is that \crefrange{eq22}{eq23} imply the differentiability of $u$ at $0$.

For $\rho >0$, let $y=x/\rho$, $C=\|u\|_{L^{\infty}(\Omega_1)}+\|g\|_{C^{1}(0)}+1$ and \begin{equation*} v(y)=\frac{c_0}{C}\left(u(x)-g(0)-\nabla g(0)\cdot x\right), \end{equation*} where $c_0$ is as in \cref{e.ass-2}. Then $v$ satisfies \begin{equation*} \left\{\begin{aligned} &v\in S^*(\lambda,\Lambda,\tilde{f})&&~~\mbox{in}~~\tilde{\Omega}\cap B_1;\\ &v=\tilde{g}&& ~~\mbox{on}~~\partial \tilde{\Omega}\cap B_1, \end{aligned}\right. \end{equation*} where \begin{equation*} \tilde{f}(y)=\frac{c_0\rho^{2}}{C}f(x), ~~\tilde{g}(y)=\frac{c_0}{C}\left(g(x)-g(0)-\nabla g(0)\cdot x\right)~~\mbox{and} ~~\tilde{\Omega}=\frac{\Omega}{\rho}.
\end{equation*} First, it can be verified easily that \begin{equation*} \begin{aligned} &\|v\|_{L^{\infty}(\tilde{\Omega}_1)}\leq c_0,\\ &\|\tilde{f}\|_{L^{n}(\tilde{\Omega}_r)}=\frac{c_0\rho}{C}\|f\|_{L^{n}(\Omega_{\rho r})} \leq c_0\rho\sigma_f(\rho r):=c_0\sigma_{\tilde{f}}(r),~\forall ~0<r<1,\\ &|\tilde{g}(y)|=|\frac{c_0}{C}(g(x)-g(0)-\nabla g(0)\cdot x)|\leq c_0|x|\sigma_g(|x|)=c_0\rho |y|\sigma_g(\rho |y|):=c_0|y|\sigma_{\tilde{g}}(|y|). \end{aligned} \end{equation*} Next, for $x\in (\partial \Omega)_{1}$, \begin{equation*} -|x'|\sigma_{\Omega}(|x'|)\leq x_n, \end{equation*} where $\sigma_{\Omega}$ is the modulus of continuity corresponding to the exterior $C^{1}$ condition at $0$. Define $\sigma_{\tilde{\Omega}}(r)=\sigma_{\Omega}(\rho r)$ ($0<r<1$) and we have \begin{equation*} -|y'|\sigma_{\tilde{\Omega}}(|y'|)\leq y_n,~\forall~y\in (\partial \tilde\Omega)_1. \end{equation*} Let $\tilde\sigma(r)=\max(\sigma_{\tilde{f}}(r),\sigma_{\tilde{g}}(r),\sigma_{\tilde{\Omega}}(r))$ ($0<r<1$). Then \begin{equation*} \tilde\sigma(1)\leq \sigma_f(\rho)+\sigma_g(\rho)+\sigma_{\Omega}(\rho)\rightarrow 0 ~\mbox{as}~\rho\rightarrow 0 \end{equation*} and \begin{equation*} r^{\beta}\int_{r}^{1}\frac{\tilde\sigma(t)}{t^{1+\beta}}dt \leq (\rho r)^{\beta}\int_{\rho r}^{1} \frac{\sigma_f(t)+\sigma_g(t)+\sigma_{\Omega}(t)}{t^{1+\beta}}dt \rightarrow 0 ~\mbox{as}~\rho\rightarrow 0. \end{equation*} Therefore, by choosing $\rho$ small enough (depending only on $n,\lambda,\Lambda,K$, $\sigma_{f}$, $\sigma_{g}$ and $\sigma_{\Omega}$), the assumptions \cref{e.ass-2} for $v$ can be guaranteed. Hence, we can apply \Cref{th21} to $v$.

Finally, we show that the differentiability of $v$ (and hence $u$) at $0$ can be inferred from \crefrange{eq22}{eq23}. From \cref{eq23}, we know that $a_k\rightarrow 0$ as $k\rightarrow \infty$ (see also \cref{e.1.8}). For any $y\in \tilde\Omega_1$, there exists $k\geq 0$ such that $\eta^{k+1}\leq |y|<\eta^{k}$; we then define $\sigma_v(|y|)=(a_k+A_k)/\eta$. Thus, $\sigma_v(r)\rightarrow 0$ as $r\rightarrow 0$. From \Cref{eq22} and \cref{e.1.7}, we obtain \begin{equation*} |v(y)|\leq \eta^{k}(a_k+A_k)\leq |y|\sigma_v(|y|). \end{equation*} Therefore, $v$ is differentiable at $0$ with $\nabla v(0)=0$. By rescaling back to $u$, we obtain the conclusion of \Cref{th11}.\qed~\\

\section{Proof of \Cref{th12}}\label{Sth12}

In this section, we prove \Cref{th12}. First, we show that the graph of the solution can be controlled by two hyperplanes.

\begin{lemma}\label{le33} Let the assumptions of \Cref{th12} be satisfied. Suppose further that there exists a Dini function $\omega$ such that \begin{equation}\label{e.ass-0} \begin{aligned} &\|u\|_{L^{\infty}(\Omega_1)}\leq c_0,~\|f\|_{L^n(\Omega_r)}\leq c_0\omega(r), ~\forall ~0<r<1,\\ &-|x'|\omega(|x'|)\leq x_n,~\forall~x\in (\partial \Omega)_1,\\ & |g(x)| \leq c_0|x|\omega(|x|),~\forall~x\in (\partial \Omega)_1,\\ &\omega(1)\leq c_0~~\mbox{and}~~\int_{0}^{1}\frac{\omega(\rho)}{\rho}d\rho\leq c_0, \end{aligned} \end{equation} where $0<c_0<1/4$ is a universal constant to be specified later.
Then there exist sequences of constants $\{\tilde{a}_k\},\{\tilde{b}_k\}$ and $\{\tilde{A}_k\}$ $(k\geq -1)$ with $\tilde{a}_k\geq 0\geq \tilde{b}_k$ and $\tilde{A}_k\geq 0$ such that for any $k\geq 0$, \begin{equation}\label{e.eq31} \tilde{b}_kx_n -\eta^k\tilde{A}_k\leq u\leq \tilde{a}_kx_n+\eta^k\tilde{A}_k~~\mbox{in}~~\Omega_{\eta^k} \end{equation} and \begin{equation}\label{e.s1-0} \begin{aligned} \tilde{a}_{k}\leq \tilde{a}_{k-1}+\bar{C}\tilde{A}_{k-1},~ \tilde{b}_{k}\geq\tilde{b}_{k-1}-\bar{C}\tilde{A}_{k-1}, \end{aligned} \end{equation} where $0<\eta<1/4$ and $\bar{C}$ are universal constants, \begin{equation*} \tilde{A}_{-1}=0,~ \tilde{A}_0=c_0,~ \tilde{A}_{k}=\max(\omega(\eta^{k-1}), \eta^{\alpha_1/2}\tilde{A}_{k-1}) (k\geq 1). \end{equation*} \end{lemma} \proof We prove \cref{e.eq31} and \cref{e.s1-0} by induction. For $k=0$, by setting $\tilde{a}_{-1}=\tilde{a}_0=\tilde{b}_{-1}=\tilde{b}_0=0$, the conclusion holds clearly. Suppose that the conclusion holds for $k$. We need to prove that the conclusion holds for $k+1$. Set $r=\eta ^{k}/2$, $\tilde{B}^{+}_{r}=B^{+}_{r}-r\omega(r)e_n $, $\tilde{T}_r=T_r-r\omega(r) e_n$ and $\tilde{\Omega }_{r}=\Omega \cap \tilde{B}^{+}_{r}$. Note that $\omega(\eta) \leq \omega(1)\leq c_0\leq 1/4$ and then $\Omega _{r/2}\subset \tilde{\Omega }_{r}\subset \Omega_{2r}$. Let $v$ solve \begin{equation*} \left\{\begin{aligned} &\mathcal{M}^+(D^2v)=0 &&\mbox{in}~~\tilde{B}^{+}_{r}; \\ &v=0 &&\mbox{on}~~\tilde{T}_{r};\\ &v=\eta ^{k} \tilde{A}_k &&\mbox{on}~~\partial \tilde{B}^{+}_{r}\backslash \tilde{T}_{r} \end{aligned} \right. \end{equation*} and $w=u- \tilde{a}_kx_n-v$. Then $w$ satisfies (note that $v\geq 0$ in $\tilde{B}^{+}_{r}$) \begin{equation*} \left\{ \begin{aligned} &w\in \underline{S}(\lambda,\Lambda , -|f|) &&\mbox{in}~~ \Omega \cap \tilde{B}^{+}_{r}; \\ &w\leq g- \tilde{a}_kx_n &&\mbox{on}~~\partial \Omega \cap \tilde{B}^{+}_{r};\\ &w\leq 0 &&\mbox{on}~~\partial \tilde{B}^{+}_{r}\cap \bar{\Omega}. \end{aligned} \right. \end{equation*} In the following arguments, we estimate $v$ and $w$ respectively. For $v$, similar to the estimate in \Cref{th21}, there exists $\bar{a}\geq 0$ such that \begin{equation}\label{e.19} \bar{a}\leq \bar{C} \tilde{A}_k \end{equation} and \begin{equation}\label{e.20} \begin{aligned} \|v-\bar{a}x_n\|_{L^{\infty }(\Omega _{\eta^{k+1}})} \leq \left( C\eta ^{\alpha_1/2}+\frac{c_0 C}{\eta^{1+\alpha_1/2}}\right)\cdot \eta ^{k+1} \tilde{A}_{k+1} \end{aligned} \end{equation} provided \begin{equation}\label{e.1.4-2} c_0\leq \eta. \end{equation} From \cref{e.s1-0}, \begin{equation*} -\bar{C}\sum_{i=0}^{\infty} \tilde{A}_i \leq \tilde{b}_k\leq \tilde{a}_k\leq \bar{C}\sum_{i=0}^{\infty} \tilde{A}_i, ~\forall ~k\geq 0. 
\end{equation*} Moreover, it is easy to show that \begin{equation*} \sum_{i=0}^{\infty} \tilde{A}_i\leq \sum_{i=1}^{\infty}\omega(\eta^i)+\eta^{\alpha_1/2}\sum_{i=0}^{\infty} \tilde{A}_i+2c_0, \end{equation*} which implies \begin{equation}\label{e.1.1} \begin{aligned} \sum_{i=0}^{\infty} \tilde{A}_i &\leq \frac{1}{1-\eta^{\alpha_1/2}}\sum_{i=1}^{\infty}\omega(\eta^i) +\frac{2c_0}{1-\eta^{\alpha_1/2}}\\ &=\frac{1}{\left(1-\eta^{\alpha_1/2}\right)\left(1-\eta\right)}\sum_{i=1}^{\infty} \frac{\omega(\eta^i)\left(\eta^{i-1}-\eta^i\right)}{\eta^{i-1}}+\frac{2c_0}{1-\eta^{\alpha_1/2}}\\ &\leq \frac{1}{\left(1-\eta^{\alpha_1/2}\right)\left(1-\eta\right)}\int_{0}^{1} \frac{\omega(r)dr}{r}+\frac{2c_0}{1-\eta^{\alpha_1/2}}\\ &\leq \frac{c_0}{\left(1-\eta^{\alpha_1/2}\right)\left(1-\eta\right)} +\frac{2c_0}{1-\eta^{\alpha_1/2}}\leq 6c_0, \end{aligned} \end{equation} provided \begin{equation}\label{e-w} \left(1-\eta^{\alpha_1/2}\right)\left(1-\eta\right)\geq 1/2. \end{equation}

For $w$, by the Alexandrov-Bakel'man-Pucci maximum principle, we have \begin{equation}\label{e1.22} \begin{aligned} \sup_{\Omega_{\eta^{k+1}}}w\leq \sup_{\tilde{\Omega} _{r}}w & \leq\|g\|_{L^{\infty }(\partial \Omega \cap \tilde{B}^{+}_{r})}+\tilde{a}_kr\omega(r) +Cr\|f\|_{L^n(\tilde{\Omega}_{r})}\\ &\leq c_0\eta^k\omega(\eta^k) +6c_0\bar{C}\eta^k\omega(\eta^k) +Cc_0\eta^k\omega(\eta^k)\\ &\leq \frac{c_0C}{\eta}\cdot \eta^{k+1}\tilde{A}_{k+1}. \end{aligned} \end{equation} Take $\eta $ small enough such that \cref{e-w} holds and \begin{equation*} C\eta ^{\alpha_1/2}\leq \frac{1}{3}. \end{equation*} Take $c_0$ small enough such that \cref{e.1.4-2} holds and \begin{equation*} \frac{c_0C}{\eta^{1+\alpha_1/2}}\leq\frac{1}{3}. \end{equation*} Let $\tilde{a}_{k+1}=\tilde{a}_k+\bar{a}$; then \cref{e.s1-0} clearly holds for $k+1$. By combining with \cref{e.20} and \cref{e1.22}, we have \begin{equation*} \begin{aligned} u-\tilde{a}_{k+1}x_n&=u-\tilde{a}_kx_n-v+v-\bar{a}x_n=w+v-\bar{a}x_n\leq \eta ^{k+1}\tilde{A}_{k+1} ~~\mbox{in}~~\Omega_{\eta^{k+1}}. \end{aligned} \end{equation*} Similarly, we can prove that there exists a nonnegative constant $\bar{b}$ with $\bar{b}\leq \bar{C}\tilde{A}_k$ such that, for $\tilde{b}_{k+1}=\tilde{b}_{k}-\bar{b}$, \begin{equation*} \begin{aligned} u-\tilde{b}_{k+1}x_n\geq -\eta ^{k+1}\tilde{A}_{k+1} ~~\mbox{in}~~\Omega_{\eta^{k+1}}. \end{aligned} \end{equation*} Therefore, the proof is completed by induction.\qed~\\

\begin{remark}\label{re1.1} From \cref{e.s1-0} and \cref{e.1.1}, \begin{equation*} -1\leq \tilde{b}_k\leq \tilde{a}_k\leq 1,~\forall ~k\geq 0. \end{equation*} Then one can show that $u$ is Lipschitz continuous at $0$. In fact, this is one of the main results in \cite{MR1}, and a similar proof is used there. For completeness, we give the detailed proof here. \end{remark}

Similarly to the last section, we prove a special case of \Cref{th12} first.

\begin{theorem}\label{th33} Let the assumptions of \Cref{th12} be satisfied.
Suppose further that there exist a Dini function $\omega$ and a modulus of continuity $\sigma$ with $\sigma\geq \omega$ such that \begin{equation}\label{e.ass} \begin{aligned} &\|u\|_{L^{\infty}(\Omega_1)}\leq c_0,~\|f\|_{L^n(\Omega_r)}\leq c_0\omega(r), ~\forall ~0<r<1,\\ &-|x'|\omega(|x'|)\leq x_n\leq |x'|\sigma(|x'|),~\forall~x\in (\partial \Omega)_1,\\ & |g(x)| \leq c_0|x|\omega(|x|),~\forall~x\in (\partial \Omega)_1,\\ &\omega(1)\leq\sigma(1)\leq c_0~~\mbox{and}~~ \int_{0}^{1}\frac{\omega(\rho)}{\rho}d\rho\leq c_0, \end{aligned} \end{equation} where $0<c_0<1/4$ is the universal constant as in \Cref{le33}. Then there exist sequences of constants $\{a_k\},\{b_k\},\{A_k\}$ and $\{B_k\}$ $(k\geq 0)$ with $a_k\geq b_k$ and $A_k,B_k\geq 0$ such that for any $k\geq 0$, \begin{equation}\label{eq31} b_kx_n -\eta^kB_k\leq u\leq a_kx_n+\eta^kA_k~~\mbox{in}~~\Omega_{\eta^k}, \end{equation} where $0<\eta<1/4$ is the universal constant as in \Cref{le33}; $a_0=b_0=0$ and $A_0=B_0=c_0$; for any $k\geq 1$, \begin{equation}\label{e.s1-2} \left\{\begin{aligned} A_{k}&=\max(\sigma(\eta^{k-1}),\eta^{\alpha_1/2}A_{k-1}),\\ B_{k}&=2\sigma(\eta^{k-1})+\eta^{\alpha_1/2}B_{k-1}, \\ a_{k}&\leq \min(a_{k-1}+\bar{C}A_{k-1},\tilde{a}_{k}),\\ b_{k}&\geq \max(b_{k-1}+\mu (a_{k-1}-b_{k-1})-\bar{C}B_{k-1},\tilde{b}_{k}) \end{aligned} \right. \end{equation} or \begin{equation}\label{e.s1-1} \left\{\begin{aligned} A_{k}&=2\sigma(\eta^{k-1})+\eta^{\alpha_1/2}A_{k-1},\\ B_{k}&=\max(\sigma(\eta^{k-1}),\eta^{\alpha_1/2}B_{k-1}),\\ a_{k}&\leq \min(a_{k-1}-\mu (a_{k-1}-b_{k-1})+\bar{C}A_{k-1},\tilde{a}_{k}),\\ b_{k}&\geq \max(b_{k-1}-\bar{C}B_{k-1},\tilde{b}_{k}). \end{aligned} \right. \end{equation} Here, $0<\mu<1/8$ and $\bar{C}$ are universal constants, and the constants $\tilde{a}_k$ and $\tilde{b}_k$ are as in \cref{e.eq31}. \end{theorem}

\proof We prove the above by induction. For $k=0$, the conclusion holds clearly. Suppose that the conclusion holds for $k$. We need to prove that the conclusion holds for $k+1$. By similar arguments to the proof of \Cref{le33}, there exist nonnegative constants $\bar{a}$ and $\bar{b}$ with $\bar{a}\leq \bar{C}A_k$ and $\bar{b}\leq \bar{C}B_k$ such that (note that $\eta<1/4$) \begin{equation}\label{e.1.2} \begin{aligned} u-\bar{a}_{k+1}x_n\leq \eta ^{k+1}\bar{A}_{k+1} ~~\mbox{in}~~\Omega_{3\eta^{k+1}} \end{aligned} \end{equation} and \begin{equation}\label{e.1.2-2} \begin{aligned} u-\bar{b}_{k+1}x_n\geq -\eta ^{k+1}\bar{B}_{k+1} ~~\mbox{in}~~\Omega_{3\eta^{k+1}}, \end{aligned} \end{equation} where \begin{equation*} \begin{aligned} &\bar{a}_{k+1}=a_{k}+\bar{a},&&~\bar{b}_{k+1}=b_{k}-\bar{b},~\\ &\bar{A}_{k+1}=\max(\sigma(\eta^k), \eta^{\alpha_1/2}A_k),~&& \bar{B}_{k+1}=\max(\sigma(\eta^k), \eta^{\alpha_1/2}B_k). \end{aligned} \end{equation*} In the following, we prove the conclusion for $k+1$ according to two cases.

\textbf{Case 1:} $u(re_n)\geq \left(a_{k}+b_{k}\right)r/2$ where $r=\eta^{k+1}$. Let \begin{equation*} v(x)=u(x)-\bar{b}_{k+1}x_n+r\bar{B}_{k+1}. \end{equation*} Then $v\geq 0$ in $\Omega_{2r}$ and $v(re_n)\geq \left(a_k-b_k\right)r/2+r\bar{B}_{k+1}$. Next, we apply \Cref{le-1}. Indeed, note that $ B_{2r}^++2r\sigma(2r)e_n \subset \Omega_{2r}$. For $x\in B_{2r}^++2r\sigma(2r)e_n$, set $y=(x-2r\sigma(2r)e_n)/(2r)$ and \begin{equation*} w(y)=\frac{v(x)}{(a_k-b_k)r/2+r\bar{B}_{k+1}}.
\end{equation*} Then $w\geq 0$ satisfies $w(y^*)\geq 1$ for some $y^*_n\geq 1/4$ and \begin{equation*} \mathcal{M}^-(D^2w)\leq |\tilde{f}|~~\mbox{in}~~B_1^+, \end{equation*} where \begin{equation*} \tilde{f}(y)=\frac{rf(x)}{(a_k-b_k)/2+\bar{B}_{k+1}}. \end{equation*} Note that \begin{equation*} \|\tilde{f}\|_{L^n(B_1^+)} =\frac{\|f\|_{L^n(B_{2r}^++2r\sigma(2r)e_n)}}{(a_k-b_k)/2+\bar{B}_{k+1}} \leq \frac{c_0\omega(\eta^k)}{(a_k-b_k)/2+\bar{B}_{k+1}}\leq c_0. \end{equation*} Take $c_0\leq \delta_0$ (as in \Cref{le-1}). Then, applying \Cref{le-1} to $w$, there exists a universal constant $0<\mu<1/8$ such that \begin{equation*} w\geq 2\mu y_n-C\|\tilde{f}\|_{L^n(B_1^+)} ~~\mbox{in}~~B_{1/2}^+. \end{equation*} By rescaling back to $v$, we have \begin{equation*} v\geq \mu\left(a_k-b_k\right)(x_n-2r\sigma(2r))-c_0Cr\omega(\eta^k) ~~\mbox{in}~~B_{r}^++2r\sigma(2r)e_n. \end{equation*} Note that $v\geq 0$ and thus the above holds in $\Omega_{r}$, i.e., \begin{equation*} v\geq \mu\left(a_k-b_k\right)(x_n-2r\sigma(2r))-c_0Cr\omega(\eta^k) ~~\mbox{in}~~\Omega_{r}. \end{equation*} Hence, \begin{equation}\label{e.1.3} u\geq \left(\bar{b}_{k+1}+\mu\left(a_k-b_k\right)\right)x_n -r\bar{B}_{k+1}-2\mu r(a_k-b_k)\sigma(\eta^k)-c_0Cr\omega(\eta^k) ~~\mbox{in}~~\Omega_{r}. \end{equation} From \Cref{le33} (see also \Cref{re1.1}), $-1\leq\tilde{b}_k\leq\tilde{a}_k\leq 1$. Thus, \begin{equation*} 0\leq a_k-b_k\leq \tilde{a}_k-\tilde{b}_k\leq 2. \end{equation*} Let \begin{equation*} B_{k+1}=2\sigma(\eta^k)+\eta^{\alpha_1/2}B_k. \end{equation*} Then by taking $c_0$ small again (note that $\mu<1/8$), \begin{equation*} B_{k+1}\geq\bar{B}_{k+1}+2\mu(a_k-b_k)\sigma(\eta^k)+c_0C\omega(\eta^k). \end{equation*} Hence, by noting \cref{e.1.2} and \cref{e.1.3}, we have \begin{equation}\label{e.2.2} \left(\bar{b}_{k+1}+\mu(a_k-b_k)\right)x_n -\eta^{k+1}B_{k+1}\leq u \leq \bar{a}_{k+1}x_n+\eta^{k+1}\bar{A}_{k+1}~~\mbox{in}~~\Omega_{\eta^{k+1}}. \end{equation} Take \begin{equation*} \begin{aligned} &a_{k+1}=\min(\bar{a}_{k+1},\tilde{a}_{k+1})=\min(a_k+\bar{a},\tilde{a}_{k+1}),~\\ &b_{k+1}=\max(\bar{b}_{k+1}+\mu(a_k-b_k),\tilde{b}_{k+1}) =\max(b_{k}-\bar{b}+\mu(a_k-b_k),\tilde{b}_{k+1}),~\\ &A_{k+1}=\bar{A}_{k+1}=\max(\sigma(\eta^k), \eta^{\alpha_1/2}A_k). \end{aligned} \end{equation*} Thus, \cref{e.s1-2} holds for $k+1$. Note that $\tilde{a}_{k+1}\geq \tilde{a}_{k}\geq a_k$ and $\tilde{b}_{k+1}\leq \tilde{b}_{k}\leq b_k$. Then it can be checked easily that $a_{k+1}\geq b_{k+1}$. By combining with \cref{e.eq31}, $\tilde{A}_{k+1}\leq A_{k+1}$ and $\tilde{B}_{k+1}\leq B_{k+1}$, \cref{e.2.2} reduces to \cref{eq31} for $k+1$.

\textbf{Case 2:} $u(re_n)\leq \left(a_{k}+b_{k}\right)r/2$. Let $v(x)=\bar{a}_{k+1}x_n+r\bar{A}_{k+1}-u(x)$. Then by a similar argument (which we omit), \cref{eq31} and \cref{e.s1-1} hold for $k+1$. Therefore, the proof is completed by induction.\qed~\\

Now, we give the~\\ \noindent\textbf{Proof of \Cref{th12}.} As before, we only need to show two things: one is that, by a proper transformation, the assumptions \cref{e.ass} can be guaranteed; the other is that \crefrange{eq31}{e.s1-1} imply the differentiability of $u$ at $0$.

For $\rho>0$, let $y=x/\rho$, $C=\|u\|_{L^{\infty}(\Omega\cap B_1)}+\|g\|_{C^{1}(0)}+1$ and \begin{equation*} v(y)=\frac{c_0}{C}\left(u(x)-g(0)-\nabla g(0)\cdot x\right).
\end{equation*} Then $v$ satisfies \begin{equation*} \left\{\begin{aligned} &v\in S^*(\lambda,\Lambda,\tilde{f})&&~~\mbox{in}~~\tilde{\Omega}\cap B_1;\\ &v=\tilde{g}&& ~~\mbox{on}~~\partial \tilde{\Omega}\cap B_1, \end{aligned}\right. \end{equation*} where \begin{equation*} \tilde{f}(y)=\frac{c_0\rho^{2}}{C}f(x), ~~\tilde{g}(y)=\frac{c_0}{C}\left(g(x)-g(0)-\nabla g(0)\cdot x\right)~~\mbox{and} ~~\tilde{\Omega}=\frac{\Omega}{\rho}. \end{equation*} First, it can be verified easily that \begin{equation*} \begin{aligned} &\|v\|_{L^{\infty}(\tilde{\Omega}_1)}\leq c_0,\\ &\|\tilde{f}\|_{L^{n}(\tilde{\Omega}_r)}=\frac{c_0\rho}{C}\|f\|_{L^{n}(\Omega_{\rho r})} \leq c_0\rho\omega_f(\rho r):=c_0\omega_{\tilde{f}}(r),\\ &|\tilde{g}(y)|=|\frac{c_0}{C}(g(x)-g(0)-\nabla g(0)\cdot x)|\leq c_0|x|\omega_g(|x|)=c_0\rho|y|\omega_g(\rho |y|):=c_0|y|\omega_{\tilde{g}}(|y|).\\ \end{aligned} \end{equation*} Next, for $x\in (\partial \Omega)_{1}$, \begin{equation*} -|x'|\omega_{\Omega}(|x'|)\leq x_n\leq |x'|\sigma_{\Omega}(|x'|), \end{equation*} where $\omega_{\Omega}$ is the Dini function corresponding to the exterior $C^{1,\mathrm{Dini}}$ condition at $0$ and $\sigma_{\Omega}$ is the modulus of continuity corresponding to the interior $C^{1}$ condition at $0$. Define $\sigma_{\tilde{\Omega}}(r)=\sigma_{\Omega}(\rho r)$ and $\omega_{\tilde{\Omega}}(r)=\omega_{\Omega}(\rho r)$ ($0<r<1$) and we have \begin{equation*} -|y'|\omega_{\tilde\Omega}(|y'|)\leq y_n\leq |y'|\sigma_{\tilde\Omega}(|y'|),~\forall~y\in (\partial \tilde\Omega)_1. \end{equation*} Let $\tilde\omega(r)=\max(\omega_{\tilde{f}}(r),\omega_{\tilde{g}}(r),\omega_{\tilde{\Omega}}(r))$ and $\tilde{\sigma}(r)=\max(\tilde{\omega}(r),\sigma_{\tilde\Omega}(r))$ ($0<r<1$) and then \begin{equation*} \tilde\sigma(1) \leq \omega_f(\rho)+\omega_g(\rho)+\omega_{\Omega}(\rho)+\sigma_{\Omega}(\rho) \rightarrow 0 ~\mbox{as}~\rho\rightarrow 0 \end{equation*} and \begin{equation*} \int_{0}^{1}\frac{\tilde\omega(t)}{t}dt \leq \int_{0}^{\rho}\frac{ \omega_f(t)+\omega_g(t)+\omega_{\Omega}(t)}{t}dt \rightarrow 0 ~\mbox{as}~\rho\rightarrow 0. \end{equation*} Therefore, by choosing $\rho$ small enough (depending only on $n,\lambda,\Lambda$, $\omega_f$, $\omega_g$, $\omega_{\Omega}$ and $\sigma_{\Omega}$), the assumptions \cref{e.ass} for $v$ can be guaranteed. Hence, we can apply \Cref{th33} to $v$. Next, we show that \crefrange{eq31}{e.s1-1} imply the differentiability of $v$ (and hence $u$) at $0$. Indeed, from \Cref{le33}, $\tilde{a}_k$ and $\tilde{b}_k$ are bounded sequences. By \cref{e.s1-2} and \cref{e.s1-1}, $a_k$ and $b_k$ are also bounded. Moreover, for any $k\geq 0$, \begin{equation*} 0\leq a_{k}-b_{k}\leq (1-\mu)(a_{k-1}-b_{k-1})+\tilde{C}(A_{k-1}+B_{k-1}). \end{equation*} Note that $A_k,B_k\rightarrow 0$ as $k\rightarrow \infty$. Hence, $a_{k}-b_{k}\rightarrow 0$ as $k\rightarrow \infty$. In the following, we prove that the sequence $\{a_k\}$ converges, i.e., $\{a_k\}$ has at most one accumulation point. Suppose not and let $a$ and $\tilde{a}$ be two accumulation points of $\{a_k\}$. We assume that $a,\tilde{a}>0$ and $\varepsilon:=a-\tilde{a}>0$ without loss of generality. Since $A_k,B_k,a_k-b_k\rightarrow 0$ and $\omega$ is a Dini function, there exists $k_0\geq 0$ such that for any $k\geq k_0$, \begin{equation*} A_k,~B_k,~a_k-b_k\leq \varepsilon/8 ~~\mbox{and}~~\int_{0}^{\eta^{k}}\frac{\omega(r)dr}{r}\leq \frac{\varepsilon}{32\bar{C}}. 
\end{equation*} In addition, there exist $k_2\geq k_1\geq k_0$ such that \begin{equation}\label{e.2.3} a_{k_1}\leq \tilde{a}+\varepsilon/32 ~~\mbox{and}~~ a_{k_2}\geq a-\varepsilon/32. \end{equation} Since \begin{equation*} u\leq a_{k_1}x_n+\eta^{k_1}A_{k_1}~~\mbox{in}~~\Omega_{\eta^{k_1}}, \end{equation*} by a similar argument to the proof of \Cref{le33}, there exists a sequence $\left\{\hat{a}_k\right\}_{k\geq k_1}$ such that for any $k\geq k_1$, \begin{equation*} u\leq \hat{a}_kx_n+\eta^{k}\hat{A}_{k}~~\mbox{in}~~\Omega_{\eta^{k}} \end{equation*} and \begin{equation*} \hat{a}_{k+1}\leq \hat{a}_k+\bar{C}\hat{A}_{k}, \end{equation*} where \begin{equation*} \hat{A}_{k_1}=A_{k_1}~~\mbox{and}~~ \hat{A}_{k+1}=\max(\omega(\eta^k), \eta^{\alpha_1/2}\hat{A}_k) ~(k>k_1). \end{equation*} Thus, as before, for any $k\geq k_1$ (note \cref{e-w}), \begin{equation*} \begin{aligned} \hat{a}_k\leq a_{k_1}+\bar{C}\sum_{i=k_1}^{\infty}\hat{A}_{i} &\leq a_{k_1}+\frac{\bar{C}}{1-\eta^{\alpha_1/2}} \sum_{i=k_1}^{\infty}\omega(\eta^i)+\frac{\bar{C}}{1-\eta^{\alpha_1/2}}A_{k_1}\\ &\leq a_{k_1}+\frac{\bar{C}}{\left(1-\eta^{\alpha_1/2}\right)\left(1-\eta\right)} \int_{0}^{\eta^{k_1}}\frac{\omega(r)dr}{r}+\frac{\bar{C}}{1-\eta^{\alpha_1/2}}A_{k_1}\\ &\leq a_{k_1}+\frac{\varepsilon}{4}. \end{aligned} \end{equation*} Then for $x\in \Omega_{\eta^{k_2}}$ with $x_n>0$, \begin{equation*} \begin{aligned} &\left(a_{k_2}-\frac{\varepsilon}{8}\right)x_n-\eta^{k_2}B_{k_2}\leq b_{k_2}x_n-\eta^{k_2}B_{k_2}\leq u(x)\\ &\leq \hat{a}_{k_2}x_n+\eta^{k_2}\hat{A}_{k_2} \leq \left(a_{k_1}+\frac{\varepsilon}{4}\right)x_n+\eta^{k_2}A_{k_2}. \end{aligned} \end{equation*} Choosing $x=\eta^{k_2}e_n/2$, we have \begin{equation*} \begin{aligned} a_{k_2}-a_{k_1}\leq \frac{7\varepsilon}{8}, \end{aligned} \end{equation*} which contradicts \cref{e.2.3}. Therefore, there exists $\bar{a}$ such that $a_k,b_k\rightarrow \bar{a}$.

Then \cref{eq31} implies that $v$ is differentiable at $0$. Indeed, for any $y\in \tilde\Omega_1$, there exists $k\geq 0$ such that $\eta^{k+1}\leq |y|<\eta^{k}$. By applying \cref{eq31} to $v$ in $\tilde\Omega_{\eta^k}$ and noting $a_k,b_k\rightarrow \bar{a}$ and $A_k,B_k\rightarrow 0$, we have \begin{equation*} \begin{aligned} |v(y)-\bar{a}y_n|&\leq \max(|a_k-\bar{a}|,|b_k-\bar{a}|)|y|+\bar{C}(A_k+B_k)\eta^{k}\\ &\leq \max(|a_k-\bar{a}|,|b_k-\bar{a}|)|y|+\bar{C}(A_k+B_k)|y|/\eta. \end{aligned} \end{equation*} Define $\sigma_v(|y|)=\max(|a_k-\bar{a}|,|b_k-\bar{a}|)+\bar{C}(A_k+B_k)/\eta$; then $\sigma_v(r)\rightarrow 0$ as $r\rightarrow 0$. That is, $v$ (and hence $u$) is differentiable at $0$.~\qed~\\

\bibliographystyle{model4-names}
\section{Introduction} An important fact motivating the study of the statistical properties of dynamical systems is that the pointwise long time prediction of a chaotic system is not possible, while the estimation or forecasting of averages and other long time statistical properties is sometimes possible. This often corresponds in mathematical terms to computing invariant measures, or estimating some of their properties. Giving a precise meaning to the computation of a continuous object like a measure is not a completely obvious task and involves the definition of effective versions of several concepts from mathematical analysis. Our approach will be mainly based on the concept of computable metric space. To give a first example, let us consider the set $\mathbb{R}$ of real numbers. Beyond $\mathbb{Q}$ there are many other real numbers that can be handled by algorithms: $\pi $ or $\sqrt{2}$ for instance can be approximated {\em at any given precision} (with rational numbers) by an algorithm. Hence such a number can be identified with the algorithm which is able to calculate it (more precisely, with the string representing the program which approximates it). This set of points is called the set of \emph{computable real numbers} and was introduced in the famous paper \cite{T36}. This kind of construction can then be generalized to many other metric spaces, considering a dense countable set that plays the same role as the rationals in the above example. Then, \emph{computable} or \emph{recursive} counterparts of many mathematical notions can be defined, and rigorous statements about the algorithmic approximation of abstract objects can be made, also obtaining algorithmic versions of many classical theorems (see section 2). In particular, this general approach also gives the possibility to treat measure spaces in a simple way, to define computable measures and computable functions between measure spaces (transfer operators), which will be the main theme of this paper. The paper is devoted to the problem of computation of invariant measures in discrete time dynamical systems. By discrete time dynamical system we mean a system $(X,T)$ where $X$ is a metric space and $T:X\to X$ is a Borel measurable transformation. Here an invariant measure is a Borel measure $\mu$ on $X$ such that $\mu(A)=\mu(T^{-1}(A))$ holds for each measurable set $A$. Such measures contain information on the statistical behavior of the system $(X,T)$ and on the possible behavior of averages of observables along typical trajectories of the system. The map $T$ moreover induces a function $L_T:PM(X)\to PM(X)$, where $PM(X)$ is the set of Borel probability measures over $X$ and will be endowed with a suitable metric (for details see section \ref{meas}). $L_T$ is called the transfer operator associated to $T$ (basic results about this are recalled in section \ref{DS}). Before entering into details about the computation of measures and invariant measures in particular, we remark that whatever we mean by \textquotedblleft approximating a measure by an algorithm\textquotedblright , there are only countably many \textquotedblleft measure approximating algorithms\textquotedblright\ whereas, in general, a dynamical system may have uncountably many invariant measures (usually an infinite dimensional set). So, most of them will not be algorithmically describable. This is not a serious problem because we can focus our attention on the most \textquotedblleft meaningful\textquotedblright\ ones.
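To make the above notion of a computable real concrete, here is a minimal Python sketch (ours, purely illustrative; the function name and the bisection scheme are our choices): a single program which, on input $n$, outputs a rational at distance less than $2^{-n}$ from $\sqrt{2}$.
\begin{verbatim}
from fractions import Fraction

def sqrt2_approx(n):
    # Return a rational q with |q - sqrt(2)| < 2**-n, by bisection.
    # The pair (this program, n) is a finite description of the n-th
    # approximation: sqrt(2) is a computable real in Turing's sense.
    lo, hi = Fraction(1), Fraction(2)   # sqrt(2) lies in [1, 2]
    while hi - lo >= Fraction(1, 2**n):
        mid = (lo + hi) / 2
        if mid * mid < 2:               # exact rational arithmetic
            lo = mid
        else:
            hi = mid
    return lo

print(sqrt2_approx(10))   # a rational within 2**-10 of sqrt(2)
\end{verbatim}
The same pattern, a program producing arbitrarily accurate \textquotedblleft finite\textquotedblright\ approximations, is what generalizes below from $\mathbb{R}$ to arbitrary computable metric spaces and to spaces of measures.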
An important part of the theory of dynamical systems is indeed devoted to the understanding of \textquotedblleft physically\textquotedblright\ relevant invariant measures; among these, SRB measures play an important role\footnote{Informally speaking, these are measures which represent the asymptotic statistical behavior of \textquotedblleft many\textquotedblright\ (positive Lebesgue measure) initial conditions, see section \ref{DS}.}. These measures are good candidates to be computed. The existence and uniqueness of SRB measures is a widely studied problem (see \cite{Y}), which has been solved for some important classes of dynamical systems. Let us make the concept of a computable measure precise. As mentioned before, the framework of computable analysis can be applied to abstract spaces such as the space $PM(X)$. A measure $\mu $ is then \emph{computable} if it is a computable point of that measure space. In this case there is an algorithm which, for each rational $\varepsilon $ given as input, outputs a \textquotedblleft finite\textquotedblright\ measure (a finite rational convex combination of Dirac measures supported on \textquotedblleft rational\textquotedblright\ points) which is $\varepsilon $-close to $\mu $. In the literature, there are several works dealing with the problem of approximating invariant measures, more or less informally from the algorithmic point of view (see e.g. \cite{L}, \cite{H}, \cite{KMY}, \cite{PJ99}, \cite{Din93, Din94}). In these works the main technique consists in an adequate discretization of the problem. More precisely, in several of the above works the transfer operator associated to the dynamics (see subsection \ref{PF}) is approximated by a finite dimensional one and the problem is reduced to the computation of the corresponding relevant eigenvectors (some convergence result then validates the quality of the approximation). Another strategy to face the problem of computation of invariant measures consists in following the way the measure $\mu $ can be constructed and checking that each step can be realized in an effective way. In some interesting examples we can obtain the SRB measure as a limit of iterates of the Lebesgue measure: $\mu =\lim_{n\rightarrow \infty }L_{T}^{n}(m)$, where $m$ is the Lebesgue measure and $L_{T}$ is the transfer operator associated to $T$. To prove computability of $\mu $ the main point is to recursively estimate the speed of convergence to the limit. This can sometimes be done using the decay of correlations (see \cite{GHR07} where computability of SRB measures in uniformly hyperbolic systems is proved in this way, see \cite{GP09} for general relations between convergence of measures and decay of correlations with a point of view similar to the one of the present paper). Let us illustrate the main results of the paper. Informally speaking, a function $T:X\rightarrow X$ is said to be computable if its behavior can be described by some algorithm (for the precise definitions see sections \ref{CMS} and \ref{comp_meas}). In this case the pair $(X,T)$ is called a computable dynamical system. In this context, the general problem we are facing can be stated in the following terms: \begin{problem}\label{probl} \begin{enumerate} \item[{\bf a)}] Given a computable dynamical system $(X,T)$, does the set of invariant measures contain computable points? \item[{\bf b)}] Can they be found in an algorithmic way, starting from the description of the system? \end{enumerate} \end{problem} We will see that even question a) above does not always have a positive answer.
However, in many interesting situations, both of the above problems can be positively solved. We will take a general point of view, finding the interesting invariant measure as a fixed point of the transfer operator, giving general conditions ensuring its computability. The following theorem will be the main tool (see Thm. \ref{comp_inv}).\newline \noindent \textbf{Theorem A}\textit{\ Let $X$ be a computable metric space and $T$ a function which is computable on $X\setminus D$. Let us consider the dynamical system $(X,T)$. Suppose there is a recursively compact set of probability measures $V\subset PM(X)$ such that for every $\mu \in V$, $\mu (D)=0$ holds. Then every invariant measure isolated (in the weak topology) in $V$ is computable.}\newline The precise meaning of computability on $X\setminus D$ will be given in section \ref{CMS}; however, the intuitive meaning of the above statement is the following: if the function $T$ is computable outside some singular set $D$ (the discontinuity set, for example), if we look for invariant measures in a set $V$ of measures giving no weight to the set $D$ (some class of regular measures, e.g.), and if in the set $V$ there is a unique invariant measure, then this measure can be computed. This will give as a consequence that the SRB measure is computable in many examples of computable systems (uniquely ergodic systems, piecewise expanding maps in one dimension, systems having an indifferent fixed point and many other systems having a unique absolutely continuous invariant measure, see Theorem \ref{comp_meas} and Prop. \ref{2}). Observe that any object which is \textquotedblleft computable\textquotedblright\ in some way (as $T,V,\mu $ in the theorem) admits a finite description (a finite program). Theorem A is actually \emph{uniform}: there is a \emph{single} algorithm which takes finite descriptions of $T$ and $V$ and which, as soon as the hypotheses of Theorem A are satisfied and $\mu $ is the unique invariant measure in $V$, outputs a finite description of $\mu $ (see remark \ref{comprem} and the above item b) of Problem \ref{probl}). Observe that the algorithm cannot decide whether the hypotheses are satisfied or not, but computes the measure whenever they are fulfilled. After such general statements, one could conjecture that, in computable dynamical systems, SRB measures are always computable. This is not true, and reveals some subtlety about the general problem of computing an invariant measure. In section \ref{lst} we will see that:\newline \noindent \textbf{Examples}\textit{\ There exists a computable dynamical system having no computable invariant measure at all. Moreover, there exists a computable dynamical system on the unit interval having an SRB measure which is not computable.}\newline The interest of the second example comes from the fact that any computable map of the interval must have some computable invariant measure. The example shows that important invariant measures can still be missed. To further motivate these results, we finally remark that from a technical point of view, computability of the considered measure is a requirement in several results about relations between computation, probability, randomness and pseudo-randomness (see e.g. \cite{LM08}, \cite{GHR08}, \cite{GHR07}, \cite{GHR09c}). \subsection{Plan of the paper} In section 2 we give a compact and self contained introduction to the prerequisites about computable analysis which are necessary to work with dynamical systems on metric spaces.
In this section we also prove some general statements about solutions of equations on metric spaces which will be used to ``find'' the interesting invariant measures as fixed points of the transfer operator (Theorem \ref{computable_fixed}). In section 3 we develop the computable treatment of the space of probability measures on a given (computable) metric space. Some results of these initial sections are new and should be of independent interest. Their usefulness is demonstrated in the next sections. In section 4 we start considering dynamical systems. A direct application of the results of the previous sections allows us to establish general assumptions under which the transfer operator is computable (on a suitable subset, Theorem \ref{L_comp}). We then use the framework and tools introduced before to face Problem \ref{probl}. We prove Theorem A above (which also becomes a simple application of previous results) and show how to apply it in order to prove the computability of many interesting invariant measures. In section 6 we construct the two counter-examples already announced. \section{Preliminaries on algorithmic theory} \subsection{Analysis and computation} A way to approach several problems from mathematical analysis by computational tools is to approximate the ``infinite'' mathematical objects (elements of uncountable sets, such as real numbers or functions) involved in the problem by some algorithm which constructs an approximating sequence of ``finite'' objects (rational numbers, polynomials with rational coefficients) which are ``treatable'' by the computer. Usually, the algorithm has to manipulate and decide questions about the various mathematical objects involved, and convergence results should be provided in order to choose the suitable level of accuracy for the finite approximation. The actual implementation of the algorithm and the various decisions are, in most cases, subject to round-off errors which can produce additional approximation errors, wrong decisions or undecidable situations if the error is not considered rigorously (how to decide $x\geq y$ when $x=y$?). Sometimes, estimates (for these errors) can be obtained under suitable conditions, but this is in general a further and often nontrivial task (see e.g. \cite{Bla94}). In this paper we will work in a framework where the algorithmic abilities of the computer to represent and manipulate infinite mathematical objects are taken into account from the beginning. In this framework (often referred to as Computable Analysis) one can rigorously determine which objects can be algorithmically approximated at any given accuracy (these will be called \emph{computable} objects), and which cannot. Here, the word \emph{computable} is used, but may be adapted to each particular situation: for instance, ``computable'' functions from $\mathbb{N}$ to $\mathbb{N}$ are called \emph{recursive} functions, ``computable'' subsets of $\mathbb{N}$ are called r.e sets, etc. \subsection{Background from recursion theory}\label{background} The starting point of recursion theory was to give a mathematical definition making precise the intuitive notions of \emph{algorithmic} or \emph{effective procedure} on symbolic objects. Every mathematician has a more or less clear intuition of what can be computed by algorithms: the multiplication of natural numbers, the formal differentiation of polynomials are simple examples.
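As a toy illustration of an effective procedure on symbolic objects, here is a short Python sketch (ours): formal differentiation of a polynomial with rational coefficients, acting purely on the finite data of a coefficient list.
\begin{verbatim}
from fractions import Fraction

def derivative(coeffs):
    # coeffs = [a0, a1, ..., an] represents a0 + a1*x + ... + an*x^n;
    # the derivative is obtained by a finite symbolic manipulation.
    return [Fraction(k) * a for k, a in enumerate(coeffs)][1:]

# d/dx (1 + 3x + 2x^2) = 3 + 4x
print(derivative([Fraction(1), Fraction(3), Fraction(2)]))
\end{verbatim}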
Several very different formalizations have been independently proposed (by Post, Church, Kleene, Turing, Markov\dots) in the 30's, and have proved to be equivalent: they compute the same functions from $\mathbb{N}$ to $\mathbb{N}$. This class of functions is now called the class of \emph{recursive functions}. As an algorithm is allowed to run forever on an input, these functions may be \emph{partial}, i.e.~not defined everywhere. The \emph{domain} of a recursive function is the set of inputs on which the algorithm eventually halts. A recursive function whose domain is $\mathbb{N}$ is said to be \emph{total}. For formal definitions see for example~\cite{Rog87}. With this intuitive description it is more or less clear that there exists an effective procedure to enumerate the class of all partial recursive functions, associating to each of them its \emph{\textbf{G\"odel number}}. Hence there exists a \emph{universal} recursive function $\varphi_u:\mathbb{N} \rightarrow \mathbb{N}$ satisfying for all $e,n\in \mathbb{N}$, $\varphi_u({\langle e,n \rangle})=\varphi_e(n)$ where $e$ is the G\"odel number of $\varphi_e$ and ${\langle \cdot,\cdot \rangle}:\mathbb{N}^2 \rightarrow \mathbb{N}$ is some recursive bijection. The notion of recursive function induces directly an important computability notion on the class of subsets of $\mathbb{N}$: a set of natural numbers is said to be \emph{\textbf{recursively enumerable}} (\emph{\textbf{r.e}} for short) if it is the range of some partial recursive function. That is, if there exists an algorithm listing (or enumerating) the set. We denote by $E_e$ the r.e set associated to $\varphi_e$, namely: $E_e=\mathrm{range}(\varphi_e)=\{\varphi_u({\langle e,n \rangle}):n\in \mathbb{N}\}$, where $\varphi_{u}$ is the universal recursive function. Let $(E_i)_{i\in\mathbb{N}}$ be a family of r.e subsets of $\mathbb{N}$. We say that $E_i$ is r.e \emph{\textbf{uniformly in $\boldsymbol{i}$}} if there is a single recursive function $\varphi$ such that $E_i=\{\varphi({\langle i,n \rangle}):n\in\mathbb{N}\}$. Taking $\varphi=\varphi_u$ the universal recursive function yields an enumeration $(E_i)_{i\in\mathbb{N}}$ of all the r.e subsets of $\mathbb{N}$, such that $E_i$ is r.e uniformly in $i$. More generally, once a computability notion has been defined for some class of objects in the following form: \begin{center}\label{computable_object} An object $x$ is \emph{computable} if there is a (partial or total) recursive function $\varphi $ which computes $x$ in some sense. \end{center} A uniform version will be implicitly defined and intensively used: \begin{center} Objects from a family $(x_i)_{i\in\mathbb{N}}$ are \emph{uniformly} computable if there is a single (total or partial) recursive function $\varphi$ such that $\varphi({\langle i,. \rangle}):\mathbb{N}\to\mathbb{N}$ computes $x_i$ for each $i$. \end{center} \subsection{From $\mathbb{N}$ to countable sets} Strictly speaking, recursive functions only work on natural numbers, but this can be extended to the objects (thought of as ``finite'' objects) of any countable set, once a numbering of its elements has been chosen. \begin{definition} A \emph{\textbf{numbered set}} $\O $ is a countable set together with a surjection $\nu_\O:\mathbb{N} \to \O $ called the \emph{\textbf{numbering}}. We write $o_n$ for $\nu_\O(n)$ and call $n$ a \emph{\textbf{name}} of $o_n$.
\end{definition} The set $\mathbb{Q}$ of rational numbers can be injectively numbered $\mathbb{Q}=\{q_0,q_1,\ldots\}$ in an \emph{effective} way: the number $i$ of a rational $a/b$ can be computed from $a$ and $b$, and vice versa. We fix such a numbering. \begin{definition} A subset $A$ of a numbered set $\O $ is \emph{\textbf{recursively enumerable (r.e)}} if there is an r.e set $E\subseteq \mathbb{N}$ such that $A=\{o_n:n\in E\}$. \end{definition} Uniformity for r.e subsets of $\O $ is defined as uniformity for r.e subsets of $\mathbb{N}$. \subsection{Computability of reals} The following notion was already introduced by Turing in \cite{T36}. \begin{definition} Let $x$ be a real number. We say that: $\bullet$ $x$ is \emph{\textbf{lower semi-computable}} if the set $\{q\in\mathbbm{Q}:q<x\}$ is r.e., $\bullet$ $x$ is \emph{\textbf{upper semi-computable}} if the set $\{q\in\mathbbm{Q}:q>x\}$ is r.e., $\bullet$ $x$ is \emph{\textbf{computable}} if it is lower and upper semi-computable. \end{definition} The following classical characterization may be more intuitive: a real number $x$ is computable if and only if there exists a recursive function $\varphi$ computing a sequence of rational numbers converging exponentially fast to $x$, that is $|q_{\varphi(i)}-x|<2^{-i}$, for all $i$. We remark that as there exist subsets of integers which are recursively enumerable but not recursive (see \cite{Rog87}), there also exist semi-computable numbers which are not computable. In the following section we will see how these notions can be generalized to separable metric spaces, which inherit the computable structure of $\mathbbm{R}$ via the metric. \subsection{Computable metric spaces\label{CMS}} In this section we introduce the basic tools of computable analysis on metric spaces. Most of the results of this section and several of the following one have already been obtained by Weihrauch, Brattka, Presser and others in the framework of ``Type-2 theory of Effectivity'', which is based on the notion of ``representation'' (infinite binary codes) of mathematical objects. A standard reference book on this approach to Computable Analysis is \cite{Wei00}, and a specific paper on computability of subsets of metric spaces is \cite{BraPre03}. Our approach to Computable Analysis only uses the notion of recursive function (see subsection \ref{background}). It is intended to emphasize the fact that computability notions are just the ``effective'' versions of classical ones. In this way we obtain a theory syntactically familiar to most mathematicians and computability results can be proved in a transparent and compact way. A computable metric space is a metric space with a dense numbered set such that the distance on this set is algorithmically compatible with the numbering (distances between numbered points can be uniformly computed up to arbitrary precision). From this point of view the real line (with the Euclidean distance) has a natural structure of computable metric space, with the rationals as a numbered set. \begin{definition} A \emph{\textbf{computable metric space}} (CMS) is a triple $\mathcal{X}=(X,d,\S )$, where $\bullet$ $(X,d)$ is a separable complete metric space, $\bullet$ $\S =(s_i)_{i \in \mathbbm{N}}$ is a dense subset of $X$ (the numbered set of \emph{\textbf{ideal points}}), $\bullet$ The real numbers $(d(s_i,s_j))_{i,j}$ are all computable, uniformly in $i,j$.
\end{definition} Symbolic spaces, Euclidean spaces, function spaces and manifolds with suitable metrics can be endowed with the structure of computable metric spaces. See for example \cite{G93, HR07, GHR07}. If $(X,d,\S )$ and $(X^{\prime},d^{\prime},\S ^{\prime})$ are two computable metric spaces, then the product $(X\times X^{\prime},d_\times,\S \times \S ^{\prime})$ with $d_\times((x,x^{\prime}),(y,y^{\prime}))=\max(d(x,y),d^{\prime}(x^{\prime},y^{\prime}))$ is a computable metric space. The numbered set of ideal points $(s_i)_i$ induces the numbered set of \emph{\textbf{ideal balls}} $\mathcal{B}:=\{B(s_i,q_j):s_i \in \S , q_j \in\mathbbm{Q}_{>0}\}$. We denote by $B_\uple{i,j}$ the ideal ball $B(s_i,q_j)$. Let $(X,d,\S )$ be a computable metric space. The computable structure of $X$ assures that the whole space can be ``reached'' using algorithmic means. Since ideal points (the finite objects of $\S $) are dense, they can approximate any $x$ at any finite precision. Then, every point $x$ has a neighborhood basis consisting of ideal balls, denoted $\mathcal{B}(x)=\{B\in \mathcal{B}:x\in B\}$ and called its \emph{\textbf{ideal neighborhood basis}}. \begin{definition}[Computable points] A point $x\in X$ is said to be \emph{\textbf{computable}} if its ideal neighborhood basis $\mathcal{B}(x)$ is r.e. \end{definition} \begin{remark} As in the case of reals we have the following characterization: \label{fasttt} $x$ is computable if and only if there is a (total) recursive function $\varphi $ such that $d(s_{\varphi (i)},x)<2^{-i}$. \end{remark} Ideal balls are also useful to describe open sets. \begin{definition}[Recursively open sets] We say that the set $U\subset X$ is \emph{\textbf{recursively open}} if there is some r.e set $A$ of ideal balls such that $U=\bigcup_{B\in A}B$. That is, if there is some r.e set $E\subseteq \mathbbm{N}$ such that $U=\bigcup_{i\in E}B_i$. \end{definition} We remark that the collection of r.e. open sets can be algorithmically enumerated. \begin{definition} Let $(U_{n})_{n}$ be a sequence of r.e.~open sets. We say that the sequence is \emph{\textbf{uniformly r.e.}}~or that $U_{n}$ is r.e.~open \emph{\textbf{uniformly in $n$}} if there exists an r.e.~set $E\subset \mathbbm{N}^{2}$ such that for all $n$ we have $U_{n}=\bigcup_{i\in E_{n}}B_{i}$, where $E_{n}=\{i:(n,i)\in E\}$. \end{definition} \begin{examples} \item Let $(U_n)_n$ be a sequence of open sets such that $U_n$ is uniformly recursively open. Then the union $\bigcup_n U_n$ is a recursively open set. \item The universal recursive function $\varphi_u$ induces an enumeration of the collection $\mathcal{U}$ of all the recursively open sets. Indeed, define $E:=\{(e,\varphi_u({\langle e,n \rangle})): e,n \in \mathbbm{N}\}$. Then $\mathcal{U}=\{U_e:e\in \mathbbm{N}\}$ where $U_e=\bigcup_{i\in E_e}B_i$. \item The numbered set $\mathcal{U}$ is closed under finite unions and finite intersections. Furthermore, these operations are \emph{effective} in the following sense: there exist recursive functions $\varphi^{\cup}$ and $\varphi^{\cap}$ such that for all $e,e^{\prime}\in \mathbbm{N}$, $U_e\cup U_{e^{\prime}}=U_{\varphi^{\cup}({\langle e,e^{\prime}\rangle})}$ and the same holds for $\varphi^{\cap}$. Equivalently: $U_e\cup U_{e^{\prime}}$ is recursively open uniformly in ${\langle e,e^{\prime}\rangle}$ (see~\cite{HR07} e.g.).
\end{examples} \begin{definition}[Computable functions] \label{functions} A function $T: X \rightarrow Y$ is said to be \emph{\textbf{computable}} if $T^{-1}(U^Y_e)$ is recursively open uniformly in $e$. \end{definition} It follows that computable functions are continuous. Since we will work with functions which are not necessarily continuous everywhere, we shall consider functions which are computable on some subset of $X$. More precisely: \begin{definition} A function $T$ is said to be \emph{\textbf{computable on C}} ($C\subset X$) if there is $U_{n}^{X}$ recursively open uniformly in $n$ such that $$T^{-1}(B_{n}^{Y})\cap C=U_{n}^{X}\cap C.$$ The set $C$ is called the \emph{\textbf{domain of computability}} of $T$. \end{definition} As an example we show that a monotone real function whose values over the rationals are computable is computable everywhere. This Lemma will also be used later. \begin{lemma} \label{upup}If $f:[0,1]\rightarrow \lbrack 0,1]$ is increasing and $f(r)$ can be computed uniformly for each rational $r$, then $f$ is computable. \end{lemma} \begin{proof} Let $p,q\in \mathbbm{Q}$. We remark that $f^{-1}((p,q))=\bigcup _{f(a)>p,\,f(b)<q}(a,b)$, the union ranging over rationals $a<b$; this allows one to find an r.e. cover of the interval $f^{-1}((p,q))$. The case of a general r.e. open set is straightforward. \end{proof} \begin{definition}[Lower semi-computable functions] \label{lsfunctions} A function $f:X\rightarrow \overline{\mathbbm{R}}$ is said to be \emph{\textbf{lower semi-computable}} if $f^{-1}(q_{n},\infty )$ is recursively open uniformly in $n$. \end{definition} It is known that there exists a recursive enumeration of all lower semi-computable functions $(f_i)_{i\geq 0}$. From the definition it follows that lower semi-computable functions are lower semi-continuous. \emph{\textbf{Lower semi-computability on D}} is defined as for computable functions. A function $f$ is \emph{\textbf{upper semi-computable}} if $-f$ is lower semi-computable. It is easy to see that a real function $f$ is computable if and only if it is upper and lower semi-computable. Given a probability measure $\mu$, we say that a function is \emph{\textbf{(lower semi-) computable almost everywhere}} if its domain of computability has $\mu$-measure one. \subsection{Recursively compact sets: approximation from above}\label{s.rec.compact} We will give some general results about solutions of equations concerning functions computable on some subset. As in many other mathematical situations, to prove the existence of certain solutions we are helped by a suitable notion of compactness. In order for the solution to be computable, we will need a recursive version of compactness. Roughly, a compact set is recursively compact if the fact that it is covered by a finite collection of ideal balls can be tested algorithmically (for equivalence with the $\epsilon$-net approach see definition \ref{pree} and proposition \ref{pree2}). This kind of notion and the related basic results are already present in the literature in various forms or particular cases; we give a very compact, self-contained introduction based on the previously introduced notions. \begin{definition} A set $K\subseteq X$ is \defin{recursively compact} if it is compact and there is a recursive function $\varphi :\mathbbm{N}\rightarrow \mathbbm{N}$ such that $\varphi ({\langle i_{1},\ldots ,i_{p}\rangle })$ halts if and only if $(B_{i_{1}},\ldots ,B_{i_{p}})$ is a covering of $K$. \end{definition} \begin{remark}Let $U_{i}$ be the collection of r.e open sets (with its uniform enumeration).
It is easy to see that a set $K$ is recursively compact iff $K\subseteq U_{i}$ is semi-decidable, uniformly in $i$. \end{remark} Here are some basic properties of recursively compact sets: \begin{proposition}\label{p.compact-basic}Let $K$ be a recursively compact subset of $X$. \begin{enumerate} \item A singleton $\{x\}$ is recursively compact if and only if $x$ is a computable point. \item\label{core_compact} If $K'$ is rec. compact then so is $K\cup K'$. \item If $U$ is recursively open, then $K'=K\setminus U$ is rec. compact. \item The diameter of $K$ is upper semi-computable. \item The distance to $K$: $d_{K}(x):=\inf \{d(x,y): y \in K \}$ is lower-computable. \item\label{supinf_compact} If $f:X\to \mathbb{R}$ is lower-computable then so is $\inf_{K}f$. \item If $f:X\to \mathbb{R}$ is upper-computable then so is $\sup_{K}f$. \end{enumerate} \end{proposition} \begin{proof}(1) A point $x$ is computable iff $x\in U_{i}$ is semi-decidable uniformly in $i$. (2) $K\cup K' \subset U$ iff $K \subset U$ and $K' \subset U$. (3) Remark that $K\setminus U\subseteq V\iff K\subseteq U\cup V$ and $U\cup V$ is recursively open uniformly in $U$ and $V$. (4) Apply (7) to the computable function $d$ on the recursively compact set $K\times K$: $\mbox{diam} K=\sup_{K\times K}d$. (5) For $x\in X$ and $q\in \mathbb{Q}$ define $U_{q,x}:=\{y : d(x,y)>q\}$, which is a constructive (in $x$) open set. Then $d_{K}(x)=\sup\{q : K \subset U_{q,x}\}$ is lower-computable. (6) $\inf_K f=\sup\{q:K\subseteq f^{-1}(q,+\infty)\}$. (7) $\sup_K f=\inf\{q:K\subseteq f^{-1}(-\infty,q)\}$. \end{proof} \begin{remarks} \item The arguments are uniform. In point 1) for instance, this means that there is an algorithm which takes a program computing $x$ and outputs a program testifying to the rec. compactness of $\{x\}$, and vice versa. \item When $X$ itself is rec. compact, a subset $K$ is rec. compact iff $d_{K}$ is lower-computable. Indeed, $K=X\setminus \{x : d_{K}(x)>0\}$. \end{remarks} \begin{corollary}\label{c.countable-intersection} If $(K_{i})_{i\in \mathbb{N}}$ are uniformly recursively compact sets, then so is $\bigcap_{i\in \mathbb{N}} K_{i}$. \end{corollary} \begin{proof}The complements of recursively compact sets are r.e open. Then by proposition \ref{p.compact-basic}, part (3), the set $\bigcap_{i\in \mathbb{N}} K_{i}=K_{0}\setminus (\bigcup_{i>0}K_{i}^c)$ is recursively compact. \end{proof} It is important to remark that a recursively compact set need not contain computable points. This will be used in section \ref{lst}. \begin{proposition}\label{p.no-comp} There exists a nonempty recursively compact set $K\subset [0,1]$ containing no computable points. \end{proposition} \begin{proof} Let $I_{n}$ be an enumeration of all the rational intervals and $\epsilon>0$ be a rational number. Put $E=\{i\geq 1:\varphi_i(i)\mbox{ halts and }|I_{\varphi_i(i)}|<\epsilon 2^{-i}\}$. $E$ is an r.e. subset of $\mathbb{N}$. Let $U=\bigcup_{i\in E}I_{\varphi_i(i)}$: $\lambda(U)\leq \sum_{i\in E}\epsilon 2^{-i}\leq \epsilon$ ($\lambda$ denoting the Lebesgue measure). Let $x\in[0,1]$ be a computable real number. There is a total recursive function $\varphi_i$ such that $|I_{\varphi_i(n)}|<\epsilon 2^{-n}$ and $x\in I_{\varphi_i(n)}$ for all $n$. Hence $i\in E$, so $x\in U$. Hence $U$ contains all computable points. As $[0,1]$ is recursively compact, so is $K=[0,1]\setminus U$. \end{proof} Now we start to show that many statements about topology and calculus on metric spaces can be easily translated to the computable setting: the first one says that the image of a recursively compact set is still recursively compact.
\begin{proposition}[Stability by computable functions]\label{p.stability} Let $f:K\subseteq X\to Y$ be a computable function defined on a recursively compact set $K$. Then $f(K)$ is recursively compact. \end{proposition} \begin{proof} Indeed, $f(K)\subseteq U\iff K\subseteq f^{-1}(U)$. As $f^{-1}(U_e)\cap K=U_{\varphi(e)}\cap K$ where $\varphi$ is a total recursive function, $f(K)\subseteq U_e\iff K\subseteq U_{\varphi(e)}$. \end{proof} Remark that the argument is uniform: if $(K_i)_{i\in\mathbbm{N}}$ is a sequence of uniformly recursively compact subsets of $X$ on which $f$ is defined, then $(f(K_i))_{i\in\mathbbm{N}}$ is a sequence of uniformly recursively compact subsets of $Y$. We will say that $f(K)$ is recursively compact \emph{uniformly in $K$}. As a first simple example of application, we observe that in some cases the global attractor of a (computable) dynamical system can be approximated by an algorithm to any given accuracy. \begin{corollary}\label{c.rec.compact.attractor} Let $X$ be a recursively compact computable metric space and $T$ a computable dynamics on it. Then the set: $$ \Lambda:=\bigcap_{n\geq 0}T^n (X) $$ is recursively compact. \end{corollary} \begin{proof}By proposition \ref{p.stability} and corollary \ref{c.countable-intersection}. \end{proof} We remark that these and other frameworks of ``exact computability and rigorous approximation'' have been previously used to study the computability of several similar objects such as Julia or Mandelbrot sets (\cite{H05, BY06, BBY06, BBY07}, \cite{Del97}), or the existence and some basic properties of the Lorenz attractor (\cite{Tuc99}). Here is a computable version of Heine's theorem. \begin{definition} A function $f:X\rightarrow Y$ between metric spaces is \defin{recursively uniformly continuous} if there is a recursive $\delta :\mathbbm{Q}\rightarrow \mathbbm{Q}$ such that for all $\epsilon>0$, $\delta(\epsilon)>0$ and $\forall x\in X$, \begin{equation} f(B(x,\delta (\epsilon )))\subset B(f(x),\epsilon ). \end{equation} \end{definition} \begin{proposition} Let $X$ and $Y$ be two computable metric spaces. Let $K\subseteq X$ be recursively compact and $f:K\rightarrow Y$ be a computable function. Then $f$ is recursively uniformly continuous. \end{proposition} \begin{proof} First, $K\times K$ is a recursively compact subset of $X\times X$. For each rational number $\epsilon >0$, define $U(\epsilon )=\{(x,x^{\prime })\in K^{2}:d(f(x),f(x^{\prime }))<\epsilon \}$ and $K(\epsilon )=K\times K\setminus U(\epsilon )$: they are respectively recursively open and recursively compact, uniformly in $\epsilon $. Hence, the function $\delta (\epsilon ):=\inf \{d(x,y):(x,y)\in K(\epsilon )\}$ is lower semi-computable (proposition \ref{supinf_compact}). Now, $f$ is uniformly continuous if and only if $\delta(\epsilon)>0$ for each $\epsilon>0$. By the classical Heine's theorem, this is the case, so by lower semi-computability of $\delta(\epsilon)$, one can compute from $\epsilon$ some positive $\delta\leq \delta(\epsilon)$. \end{proof} \begin{theorem} \label{computable_zero} Let $K$ be a recursively compact subset of $X$ and $f:K\to \mathbbm{R}$ be a computable function. Then every isolated zero of $f$ is computable. \end{theorem} \begin{proof} Let $x_0$ be an isolated zero of $f$. Let $s,r$ be an ideal point and a positive rational number such that $x_0\in B(s,r)$ and the only zero of $f$ lying in $\overline{B}(s,r)$ is $x_0$.
The set $N=\{x:f(x)\neq 0\}\cup \{x:d(x,s)>r\}$ is recursively open in $K$ (that is, $N\cap K=U\cap K$ with $U$ recursively open), so $\{x_0\}=K\setminus N=K\setminus U$ is recursively compact by proposition \ref{p.compact-basic}. Hence, $x_0$ is a computable point. \end{proof} \begin{remark}\label{rmk3}Observe that the argument is uniform in $f$ and an ideal ball isolating the zero. In particular, there is an algorithm which takes a finite description of $f$ and the ball and outputs its zero if it is unique. \end{remark} \begin{corollary} \label{computable_fixed} Let $K$ be a recursively compact subset of $X$ and $f:K\to X$ be a computable function. Then every isolated fixed point of $f$ is computable. \end{corollary} \begin{proof} Apply the preceding theorem to the function $g:X\to\mathbbm{R}$ defined by $g(x)=d(x,f(x))$. \end{proof} \subsection{Recursive precompactness} In this subsection we prove the equivalence between the notion of recursive compactness given above and another natural approach (which will be used later) to recursive compactness, which supposes the existence of an algorithm constructing $\epsilon $-nets. \begin{definition}\label{pree} A CMS is \defin{recursively precompact} if there is a total recursive function $\varphi:\mathbbm{N}\to\mathbbm{N}$ such that for all $n$, $\varphi(n)$ computes a $2^{-n}$-net: that is, $\varphi(n)={\langle i_1,\ldots,i_p \rangle}$ where $(s_{i_1},\ldots,s_{i_p})$ is a $2^{-n}$-net. \end{definition} Here is a computable version of a classical theorem: \begin{proposition}\label{pree2} Let $X$ be a CMS. $X$ is recursively compact if and only if it is complete and recursively precompact. \end{proposition} \begin{proof} If $X$ is recursively compact then we define the following algorithm: it takes $n$ as input, then enumerates all the ${\langle i_1,\ldots,i_p \rangle}$, and tests if $(B(s_{i_1},2^{-n}),\ldots,B(s_{i_p},2^{-n}))$ is a covering of $X$ (this is possible by recursive compactness). As $X$ is compact, hence precompact, such a covering exists and will be eventually enumerated: output it. This algorithm witnesses that $X$ is recursively precompact. Suppose that $X$ is complete and recursively precompact. Let $(B(s_1,q_1),\ldots,B(s_k,q_k))$ be ideal balls: we claim that $(B(s_1,q_1),\ldots,B(s_k,q_k))$ covers $X$ if and only if there exists $n$ such that each point $s$ of the $2^{-n}$-net given by recursive precompactness lies in a ball $B(s_i,q_i)$ satisfying $d(s,s_i)+2^{-n}<q_i$. The procedure which enumerates all the $n$ and semi-decides this halts if and only if the initial sequence of balls covers $X$. We leave the proof of the claim to the reader (take $n$ such that $2^{-n}$ is less than the Lebesgue number of the finite covering). \end{proof} The following observation is also worth noticing. \begin{proposition} Let $X$ be a computable metric space. If $X$ (as a subset of $X$) is recursively compact, then the set $C(X)$ of continuous functions from $X$ to $\mathbbm{R}$ with the distance induced by the uniform norm is a computable metric space. The function $eval: C(X)\times X\to\mathbbm{R}$ mapping $(f,x)$ to $f(x)$ is computable. Let $Y$ be a computable metric space: for every computable function $f:Y\times X\to \mathbbm{R}$, the function $Y\to C(X)$ mapping $y$ to $f_y:x\mapsto f(y,x)$ is computable. \end{proposition} \subsection{Recursively closed sets: approximation from below} From the computability viewpoint, the properties of recursively closed sets are, in a sense, complementary to those of recursively compact sets.
\begin{definition} A closed set $F$ is \emph{\textbf{recursively closed}} if the set $\{B(s,r): B(s,r)\cap F\neq \emptyset\}$ is r.e. \end{definition} Equivalently, a closed set $F$ is recursively closed if ``$F\cap U\neq \emptyset$'' is semi-decidable, uniformly over r.e open sets $U$. It is easy to see that the union of two recursively closed sets is also recursively closed. The closure of any recursively open set is recursively closed: $B\cap \overline{U}\neq \emptyset\iff \exists s\in B\cap U$. The following proposition will be used later. \begin{proposition}\label{p.comp-closed} Let $F$ be a recursively closed subset of $X$. Then there exists a sequence of uniformly computable points $x_{i}\in F$ which is dense in $F$. \end{proposition} \begin{proof} Since $\{n\in \mathbb{N} : B_n\cap F \neq \emptyset\}$ is r.e (where $B_n=B(s_{n},q_{n})$), given some ideal ball $B=B(s,q)$ intersecting $F$, the set $\{n \in \mathbb{N} : \overline{B_n}\subset B, q_{n}\leq 2^{-n}, B_n\cap F \neq \emptyset \}$ is also r.e. Then we can effectively construct a nested sequence of ideal balls with exponentially decreasing radii, all intersecting $F$. Hence $\{x\}=\cap_k B_k$ is a computable point lying in $F$. \end{proof} We remark that, by this, Proposition \ref{p.no-comp} exhibits a recursively compact set which is not recursively closed. For the sake of completeness, let us state some useful simple properties. \begin{proposition}Let $F$ be a recursively closed subset of $X$. Then: \begin{itemize} \item[(1)] The diameter of $F$ is lower semi-computable, uniformly in $F$. \item[(2)] If $f:F\to \mathbbm{R}$ is lower semi-computable, then so is $\sup_F f$. \item[(3)] If $f:F\to \mathbbm{R}$ is upper semi-computable, then so is $\inf_F f$. \end{itemize} \end{proposition} \begin{proof} (1) Since the set of ideal balls intersecting $F$ is r.e, $\mbox{diam} F=\sup \{d(s,s^{\prime })-q-q^{\prime }:B(s,q)\cap F\neq \emptyset ,\ B(s^{\prime },q^{\prime })\cap F\neq \emptyset \}$ is a supremum over an r.e set of uniformly computable numbers, hence lower semi-computable. (2) $\sup_F f=\sup \{q:f^{-1}(q,+\infty)\cap F\neq \emptyset\}$. (3) Apply (2) to $-f$. \end{proof} \begin{corollary} Let $K$ be a recursively closed and recursively compact subset of $X$. If $f:K\to \mathbbm{R}^+$ is a computable function, then so are $\inf_K f$ and $\sup_K f$. \end{corollary} \section{Computable measures\label{meas}} Let us consider the space $PM(X)$ of Borel probability measures over $X$. We recall that $PM(X)$ can be seen as a subset of the dual of the space $C_{0}(X)$ of continuous functions with compact support over $X$ and recall the notion of weak convergence of measures: \begin{definition} $\mu _{n}$ is said to be \defin{weakly convergent} to $\mu $ if $\integral{f}{\mu_n}\rightarrow \integral{f}{\mu}$ for each $f\in C_{0}(X)$. \end{definition} Let us introduce the Wasserstein-Kantorovich distance between measures. Let $\mu _{1}$ and $\mu _{2}$ be two probability measures on $X$ and consider: \begin{equation*} W_{1}(\mu _{1},\mu _{2})=\underset{f\in 1\text{-Lip}(X)}{\sup }\left|\integral{f}{\mu_1}-\integral{f}{\mu _2}\right| \end{equation*} where $1\mbox{-Lip}(X)$ is the space of 1-Lipschitz functions on $X$. We remark that since adding a constant to the test function $f$ does not change the above difference $\integral{f}{\mu _1}-\integral{f}{\mu _2}$, the supremum can be taken over the set of 1-Lipschitz functions mapping a distinguished ideal point $s_0$ to $0$.
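For finite convex combinations of Dirac measures on the real line (which will serve below as the ideal points of $PM(X)$), the distance $W_{1}$ reduces to a finite computation. A small Python sketch (ours; it relies on \texttt{scipy.stats.wasserstein\_distance}, which computes the $W_1$ distance between one-dimensional distributions; the supports and weights are illustrative choices):
\begin{verbatim}
from scipy.stats import wasserstein_distance

# Two discrete measures on [0,1]: finite rational convex
# combinations of Dirac masses (illustrative choices).
xs, wx = [0.0, 0.5, 1.0], [0.25, 0.5, 0.25]  # mu
ys, wy = [0.25, 0.75],    [0.5, 0.5]         # nu

# For distributions on the line this is exactly the
# Wasserstein-Kantorovich distance W_1 defined above (here 0.25).
print(wasserstein_distance(xs, ys, wx, wy))
\end{verbatim}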
The distance $W_{1}$ has moreover the following useful properties, which will be used below. \begin{proposition}[\cite{AGS} Prop 7.1.5]\label{ambros}\mbox{} \begin{enumerate} \item $W_{1}$ is a distance and if $X$ is bounded, separable and complete, then $PM(X) $ with this distance is a separable and complete metric space. \item If $X$ is bounded, a sequence is convergent for the $W_{1}$ metric if and only if it is convergent for the weak topology. \item If $X$ is compact, then $PM(X)$ is compact with this topology. \end{enumerate} \end{proposition} Item (1) has an effective version: $PM(X)$ inherits the computable metric structure of $X$. Indeed, given the set $S_{X}$ of ideal points of $X$ we can naturally define a set of ideal points $S_{PM(X)}$ in $PM(X)$ by considering finite rational convex combinations of the Dirac measures $\delta _{x}$ supported on ideal points $x\in S_{X}$. This is a dense subset of $PM(X)$. The proof of the following proposition can be found in \cite{HR07}. \begin{proposition} If $X$ is bounded, then $PM(X)$ with the $W_{1}$ distance (and $S_{PM(X)}$ as a set of ideal points) is a computable metric space. \end{proposition} A measure $\mu $ is then computable if there is a fast sequence $(\mu _{n})\subset S_{PM(X)}$ converging to $\mu $ in the $W_{1}$ metric (see remark \ref{fasttt}), and hence for the weak convergence. Now, point (3) of proposition \ref{ambros} also has an effective version: \begin{lemma} \label{recomp}If $X$ is a recursively precompact metric space, then $PM(X)$ with the $W_{1}$ distance is a recursively precompact metric space. \end{lemma} \begin{proof} We will show how to effectively find an $r$-net for each $r$ of the form $r=\frac{1}{n}$, $n\in \mathbbm{N}$. Let us consider the set $S_{r}=\{\frac{k}{n}:0\leq k\leq n\}$ subdividing the unit interval into equal segments. Let us also consider an $r$-net $N_{r}=\{x_{1},...x_{m}\}$ constructed by recursive precompactness of $X.$ Now let us consider the set $\Upsilon _{r}$ of measures with support in $N_{r}$ given by \begin{equation*} \Upsilon _{r}=\{k_{1}\delta _{x_{1}}+...+k_{m}\delta _{x_{m}}~s.t.~k_{i}\in S_{r}~,~k_{1}+...+k_{m}=1\}. \end{equation*} This is a $2r$-net in $PM(X)$. To see this let us consider a measure $\mu $ on $X$ and a ball $B(x_{1},r)$ centered at $x_{1}\in X$. Let us consider the measure $\mu _{1}$ defined by \begin{equation*} \mu _{1}(A)=\mu (A)-\mu (B(x_{1},r)\cap A)+\mu (B(x_{1},r))\,\delta _{x_{1}}(A) \end{equation*} for each measurable set $A\subset X$. The measure $\mu _{1}$ is obtained transporting the mass contained in the ball $B(x_{1},r)$ to its center. Then $W_{1}(\mu _{1},\mu )\leq r\mu (B(x_{1},r))$. Let us now consider the sequence of measures $\mu _{1},...,\mu _{m}$ where $\mu _{1}$ is as before and the other ones are given by \begin{equation*} \mu _{i}(A)=\mu _{i-1}(A)-\mu _{i-1}(D_{i}\cap A)+\mu _{i-1}(D_{i})\,\delta _{x_{i}}(A),~~D_{i}=B(x_{i},r)\setminus \{x_{1},\ldots ,x_{i-1}\}, \end{equation*} so that mass already placed on a center is not moved again. In the end $\mu _{m}$ is a measure with support in $N_{r}$ (since $N_{r}$ is an $r$-net) and, the moved masses being disjoint, by the triangle inequality $W_{1}(\mu _{m},\mu )\leq r.$ Now $\mu _{m}$ has the same support as the measures in $\Upsilon _{r}$ and there is $\nu \in \Upsilon _{r}$ such that $|\integral{f}{\mu_m}-\integral{f}{\nu}|\leq r$ for each $f\in 1\mbox{-Lip}(X)$, hence $W_{1}(\mu _{m},\nu )\leq r$, so $W_{1}(\mu ,\nu )\leq 2r$, which proves the statement.
\end{proof} We now use the recursive enumeration of lower semi-computable functions $(f_{i})_{i\geq 0}$ to characterize computability on $PM(X)$ (see \cite{HR07} corollary 4.3.1): \begin{lemma} \label{comp_meas}Let $X$ be a bounded computable metric space and let $\mathcal{S}$ be any subset of $PM(X)$. Then: \begin{enumerate} \item $\mu \in PM(X)$ is computable iff the numbers $\integral{f_i}{\mu}$ are lower semi-computable, uniformly in $i$, \item $L:PM(X)\rightarrow PM(X)$ is computable on $\mathcal{S}$ iff the function $\mu \mapsto \integral{f_i}{L(\mu)}$ is lower semi-computable on $\mathcal{S}$, uniformly in $i$. \end{enumerate} \end{lemma} This gives: \begin{lemma} \label{comp_D} If $g_{i}:X\rightarrow \mathbbm{R}^{+}$ is a uniform sequence of functions which are lower semi-computable on $X\setminus D$, then $\mu \mapsto \integral{g_i}{\mu}$ is lower semi-computable on \begin{equation} PM_{D}(X):=\{\mu :\mu (D)=0\} \end{equation} uniformly in $i$. \end{lemma} \begin{proof} For each $i$, one can construct a lower semi-computable function $\hat{g}_i$ satisfying $\hat{g}_{i}=g_{i}$ on $X\setminus D$ (see \cite{HR07}, subsection 3.1). The function $\mu \mapsto \integral{\hat{g}_i}{\mu}$ is lower semi-computable, uniformly in $i$; since $\mu (D)=0$ on $PM_{D}(X)$, the function $\mu \mapsto \integral{g_i}{\mu}$ coincides with $\mu \mapsto \integral{\hat{g}_i}{\mu}$ there, and is then lower semi-computable on $PM_{D}(X)$, uniformly in $i$. \end{proof} An interesting remark about computable measures is that they must have computable points in the support. This will be used in section \ref{example2}. \begin{proposition}\label{compoint} If $\mu$ is a computable probability measure, then there exist computable points in the support of $\mu$. \end{proposition} \begin{proof}The functions $f_{i}:=1_{B_{i}}$ (the indicator functions of ideal balls) are uniformly lower semi-computable. By lemma \ref{comp_meas}, the numbers $\integral{f_{i}}{\mu}=\mu(B_{i})$ are uniformly lower semi-computable. Hence, the set $\{B_{i}: \mu(B_{i})>0\}$ is recursively enumerable. In other words, the support of $\mu$ is a recursively closed set. Proposition \ref{p.comp-closed} allows us to conclude. \end{proof} \section{Dynamical systems, statistical behavior, invariant measures}\label{DS} Let $X$ be a metric space, let $T:X\to X$ be a Borel measurable map. Let $\mu $ be an invariant measure. A set $A$ is called $T$-invariant if $T^{-1}(A)=A$ (mod $0$). The system $(T,\mu )$ is said to be ergodic if each $T$-invariant set has full or null measure. In such systems the famous Birkhoff ergodic theorem says that time averages computed along $\mu $-typical orbits coincide with the space average with respect to $\mu .$ More precisely, for any $f\in L^{1}(X)$ it holds \begin{equation} \underset{n\rightarrow \infty }{\lim }\frac{S_{n}^{f}(x)}{n}=\int fd\mu , \label{Birkhoff} \end{equation} for $\mu $-almost every $x$, where $S_{n}^{f}=f+f\circ T+\ldots +f\circ T^{n-1}.$ This shows that in an ergodic system, the statistical behavior of observables under typical realizations of the system is given by the average of the observable made with the invariant measure. In case $X$ is a manifold (possibly with boundary), we say that a point $x$ belongs to the basin of an invariant measure $\mu $ if Equation \ref{Birkhoff} holds at $x$ for each continuous $f$ (the average along the orbit of $x$ represents the average under the measure).
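As a concrete illustration of Equation \ref{Birkhoff}, consider the doubling map $T(x)=2x \bmod 1$ on the unit interval, for which the Lebesgue measure is ergodic and invariant, so the time average of $f(x)=x$ along a typical orbit should approach $\int f\,dm=1/2$. The naive floating-point sketch below (ours, purely illustrative) fails for an instructive reason, anticipating the problems discussed next.
\begin{verbatim}
import random

def doubling(x):            # T(x) = 2x mod 1; Lebesgue measure is
    return (2.0 * x) % 1.0  # ergodic and invariant for T

x = random.random()
n, s = 1000, 0.0
for _ in range(n):          # Birkhoff sum S_n^f with f(x) = x
    s += x
    x = doubling(x)
print(s / n)  # far from 1/2: in float64 the orbit hits the fixed
              # point 0 after at most ~53 steps, since each doubling
              # shifts one bit out of the mantissa
\end{verbatim}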
An SRB measure is an invariant measure having a positive Lebesgue measure basin (for more details and a general survey see \cite{Y}). In the applied literature the most common method to simulate or understand the above statistical behaviors is to compute and study some trajectory. This method has three main theoretical problems which motivate the search for another approach: \begin{itemize} \item numerical error, \item typicality of the sample, \item the number of sample points which are necessary. \end{itemize} The first (and widely known) problem is the amplification of the numerical error (if the system is sensitive to initial conditions as most interesting systems are). Here the shadowing results are often invoked to justify the correctness of simulations, but rigorous results are proved only for a small class of systems (see e.g. \cite{Pal00}) and moreover the mere existence of a shadowing orbit does not say anything about its typicality (see e.g. \cite{Bla89, Bla94} for a further discussion on numerical errors). The second problem is indeed that this method should compute, in order to be useful, a trajectory which shows the ``typical'' behavior of the system: a behavior which takes place with large or full probability in some sense. The main problem here is the fact that the set of initial conditions the computer has access to, being countable, has probability zero. Hence, there is no guarantee that what we see on the screen is typical in some sense. On the contrary, in a chaotic system, typical orbits are far from being describable by a finite program. It is true for example that in an ergodic system having positive entropy $h$ a typical $n$ step orbit segment needs approximately a program which is $hn$ bits long to be described (up to some approximation $\epsilon $, see e.g. \cite{B83} for the original result or \cite{Ga00} and \cite{GHR09c} for a version in the framework of computable analysis). We remark, however, that if one looks for points which behave as typical for Birkhoff averages (hence they behave as typical for some given particular aspect) there are some rigorous results partly supporting this way to proceed: in several classes of systems there are computable initial conditions which behave as typical with respect to Birkhoff averages (see \cite{GHR07} for a precise result). The third problem however remains. Even if you find a program describing a typical orbit of the system: how many iterations should be considered to be near to the limit behavior, so that the orbit represents the invariant measure up to a certain approximation? Although this problem can be approached rigorously in some cases (see e.g. \cite{CCS}), we will not adopt this point of view. We will study the system's statistical behavior by directly computing the invariant measure as a fixed point of a certain transfer operator. \subsection{The transfer operator\label{PF}} A function $T$ between metric spaces naturally induces a function between probability measure spaces. This function $L_T$ is linear and is called the transfer operator (associated to $T$). Measures which are invariant for $T$ are exactly the fixed points of $L_T$. Let us consider a computable metric space $X$ endowed with a Borel probability measure $\mu$ and with a dynamics defined by a measure-preserving function $T:X\rightarrow X$.
Let us also consider the space $PM(X)$ of Borel probability measures on $X.$ Let us define the function $L_{T}:PM(X)\rightarrow PM(X)$ by duality in the following way: if $\mu \in PM(X)$ then $L_{T}(\mu )$ is such that \begin{equation*} \integral{f}{L_{T}(\mu )}=\integral{f\circ T}{\mu} \end{equation*} for each $f\in C_{0}(X)$. In the next sections, invariant measures will be found as solutions of the equation $W_{1}(\mu ,L(\mu ))=0.$ To apply Theorem \ref{computable_zero} and Corollary \ref{computable_fixed} to this equation we need $L$ to be computable. We remark that if $T$ is not continuous then $L$ is not necessarily continuous (this can be realized by applying $L$ to some delta measure placed near a discontinuity point), hence not necessarily computable. Still, we have that $L$ is continuous (and its modulus of continuity is computable) at all measures $\mu $ which are \textquotedblleft far enough\textquotedblright\ from the discontinuity set $D$. This is technically expressed by the condition $\mu (D)=0$. We remark that with the general tools introduced before, the proof is immediate. \begin{theorem} \label{L_comp} Let $X$ be a computable metric space and $T:X\rightarrow X$ be a function which is computable on $X\setminus D$. Then $L_{T}$ is computable on the set of measures \begin{equation} PM_{D}(X):=\{\mu \in PM(X):\mu (D)=0\}. \end{equation} \end{theorem} \begin{proof} Note that if $f$ is lower semi-computable, then $f\circ T$ is lower semi-computable on $X\setminus D$. The result then follows from lemmas \ref{comp_D} and \ref{comp_meas}. \end{proof} In particular, if $T$ is computable on the whole space $X$ then $L$ is computable on all $PM(X)$. \subsection{Computing invariant ``regular'' measures} The above tools allow us to ensure the computability of $L_{T}$ on a large class of measures. This will allow us to apply Corollary \ref{computable_fixed} and see an invariant measure as a fixed point. \begin{theorem} \label{comp_inv} Let $X$ be a computable metric space and $T$ be a function which is computable on $X\setminus D$. Suppose there is a recursively compact set of probability measures $V\subset PM(X)$ such that for every $\mu \in V$, $\mu (D)=0$ holds. Then every invariant measure isolated in $V$ is computable. \end{theorem} \begin{proof} By Theorem \ref{L_comp}, $L_{T}$ is computable on $V$. Since $V$ is recursively compact, theorem \ref{computable_zero} allows us to compute any isolated invariant measure in $V$ as a solution of the equation $L_T(\mu)=\mu $. \end{proof} \begin{remark}\label{comprem} This theorem is uniform: there is an algorithm which takes as inputs finite descriptions of $T, V$ and an ideal ball in $PM(X)$ which isolates an invariant measure $\mu$, and outputs a finite description of $\mu$ (see the above proof and Remark \ref{rmk3}). \end{remark} A trivial consequence of Theorem \ref{comp_inv} is the following: \begin{corollary} If a computable system as above is uniquely ergodic and its invariant measure $\mu$ satisfies $\mu (D)=0$, then it is a computable measure. \end{corollary} The main problem in the application of theorem \ref{comp_inv} is the requirement that the invariant measure we are trying to compute should be isolated in $V$. In general the space of invariant measures in a given dynamical system could be very large (an infinite dimensional convex set in $PM(X)$); to isolate a particular measure we can restrict our attention to a subclass of ``regular'' measures.
Let us consider the following \emph{seminorm}: \begin{equation*} \left\Vert \mu \right\Vert _{\alpha }=\sup_{x\in X,r>0}\frac{\mu (B(x,r))}{r^{\alpha }}. \end{equation*} \begin{proposition} \label{1}If $X$ is recursively compact then \begin{equation} V_{\alpha ,K}=\{\mu \in PM(X):\left\Vert \mu \right\Vert _{\alpha }\leq K\} \end{equation} is recursively compact. \end{proposition} \begin{proof}$U=\{\mu \in PM(X):\left\Vert \mu \right\Vert _{\alpha }>K\}$ is recursively open. Indeed, $\norm{\mu}_\alpha > K$ iff there exists $(s,r)\in \S\times \mathbb{Q}$ for which $\mu(B(s,r))>Kr^{\alpha}$. As $\mu \mapsto \mu(B(s,r))$ is lower semi-computable uniformly in $s,r$, the sets $U_{s,r}:=\{\mu: \mu(B(s,r))>Kr^{\alpha}\}$ are uniformly recursively open subsets of $PM(X)$. Hence, $U=\cup_{s,r}U_{s,r}$ is recursively open. Now, $V_{\alpha ,K}=PM(X)\setminus U$. As $PM(X)$ is recursively compact (see Lemma \ref{recomp} and Proposition \ref{pree2}) and $U$ is recursively open, proposition \ref{p.compact-basic}, part (3), allows us to conclude. \end{proof} In theorem \ref{comp_inv} we require that $\mu (D)=0$ holds. This is automatically true in many examples when the measure is regular and the set $D$ is reasonably small. \begin{proposition} \label{2}Let $X$ be recursively compact and $T$ be computable on $X\setminus D$, with $dim_{H}(D)<\infty $. Then any invariant measure isolated in $V_{\alpha ,K}$ with $\alpha >dim_{H}(D)$ is computable. \end{proposition} \begin{proof} Let us first prove that $\mu (D)=0$ for all $\mu \in V_{\alpha ,K}$. For all $\epsilon >0$, there is a covering $(B(x_{i},r_{i}))_{i}$ of $D$ satisfying $\sum_{i}r_{i}^{\alpha }<\epsilon $. Hence $\mu (D)\leq \sum_{i}\mu (B(x_{i},r_{i}))\leq 2^{\alpha }K\sum_{i}r_{i}^{\alpha }\leq 2^{\alpha }K\epsilon $. As this is true for each $\epsilon >0$, $\mu (D)=0$. The result then follows from the fact that $V_{\alpha ,K}$ is recursively compact and Theorem \ref{comp_inv}. \end{proof} \begin{remark} Once again, this is uniform in $T,\alpha,K$. \end{remark} The above general proposition allows us to obtain as a corollary the computability of many absolutely continuous invariant measures. For the sake of simplicity, let us consider maps on the interval. \begin{proposition} If $X=[0,1]$, $T$ is computable on $X\setminus D$, with $dim_{H}(D)<1$ and $(X,T)$ has a unique a.c.i.m. $\mu $ with bounded density, then $\mu $ is computable. \end{proposition} \begin{proof} The result follows from the above proposition \ref{2} and the fact that if $\mu $ is absolutely continuous and the density of $\mu $ is $f\in L_{1}[0,1]$ then $\left\Vert \mu \right\Vert _{1}\leq 2\,\mathrm{ess}\sup (f)<\infty .$ We have to check that there can be no other measures having a finite $\left\Vert \cdot \right\Vert _{1}$ norm without being absolutely continuous. If we suppose that $\left\Vert \mu \right\Vert_{1}=l$ is finite, then $\mu$ is absolutely continuous, with bounded density $f\leq l$. Indeed, let us consider the conditional expectation $E[\mu|I^{n}]$ of $\mu$ with respect to the dyadic $n$-th grid $I^{n}=\{[k2^{-n},(k+1)2^{-n}),0\leq k\leq2^{n}-1\}$. The bound $\left\Vert \mu\right\Vert_{1}=l$ a fortiori implies $0\leq E[\mu|I^{n}]\leq l$ a.e.. By Doob's first martingale convergence theorem it follows that $E[\mu|I^{n}]$ has an a.e. pointwise limit $f$ and $f\leq l$ a.e.. Since $f$ is bounded, it is a density for $\mu$. \end{proof} $d$-dimensional submanifolds of $\mathbbm{R}^n$ can be endowed with a natural structure of computable metric space (see \cite{GHR07}).
Considering a dyadic grid on $\mathbbm{R}^d$ and chart diffeomorphisms, it is straightforward to prove, in the same way as before: \begin{corollary} Let $X$ be a recursively compact $d$-dimensional $C^{1}$ submanifold of $\mathbbm{R}^{n}$ (with or without boundary). If $T$ is computable on $X\setminus D$, with $\dim_{H}(D)<d$, and $(X,T)$ has a unique a.c.i.m. $\mu $ with bounded density, then $\mu$ is computable. \end{corollary} As is well known, interesting examples of systems having a unique a.c.i.m. (with bounded density, as required) are topologically transitive {\em piecewise expanding maps} on the interval or {\em expanding maps} on manifolds (see \cite{V97} for precise definitions). Provided that the dynamics is computable, we then have by the above propositions that the a.c.i.m. is computable too. \subsection{Unbounded densities} The above results ensure the computability of absolutely continuous invariant measures with bounded density. If we are interested in situations where the density is unbounded, we can consider a new seminorm, ``killing'' the singularities. Let us hence consider a computable function $f:X\rightarrow \mathbbm{R}$ and \begin{equation*} \left\Vert \mu \right\Vert _{f,\alpha }=\sup_{x\in X,r>0}\frac{f(x)\mu (B(x,r))}{r^{\alpha }}. \end{equation*} Propositions \ref{1} and \ref{2} also hold for this seminorm. If $f$ is such that $f(x)=0$ when $\lim_{r\rightarrow 0}\frac{\mu (B(x,r))}{r^{\alpha }}=\infty $, this allows the seminorm to be finite even when the density diverges. As an example where this can be applied, let us consider the Manneville--Pomeau maps on the unit interval. These are maps of the type $x\rightarrow x+x^{z}~(\mathrm{mod}~1) $. When $1<z<2$ the dynamics has a unique a.c.i.m. $\mu _{z}$ with density $e_{z}(x)$, which diverges at the origin as $e_{z}(x)\asymp x^{-z+1}$ and is bounded elsewhere (see e.g. \cite{I03}, Section 10, and \cite{V97}, Section 3). If we consider the seminorm $\left\Vert .\right\Vert _{f,1}$ with $f(x)=x^{2}$, we have that $\left\Vert \mu _{z}\right\Vert _{f,1}$ is finite for each such $z$. From this it follows that the measure $\mu _{z}$ is computable. \section{Computable systems having non-computable invariant measures\label{lst}} We have seen that the technique presented above proves the computability of many a.c.i.m. which are also SRB measures. As we have seen in the introduction, with other techniques it is possible to prove the computability of other SRB measures (axiom A systems e.g., see \cite{GHR07}). This naturally raises the following question: does a computable system necessarily have a computable invariant measure? What about ergodic SRB measures? The following easy example shows that this is not true in general, even in quite regular systems; hence the whole question of computing invariant measures has some subtlety. Let us consider a system on the unit interval given as follows. Let $\tau\in (0,1)$ be a lower semi-computable real number which is not computable. There is a computable sequence of rational numbers $\tau_i$ such that $\sup_i \tau_i=\tau$. For each $i$, define $T_i(x)=\max(x,\tau_i)$ and $T(x)=\sum_{i\geq 1} 2^{-i}T_i(x)$. The functions $T_i$ are uniformly computable, so $T$ is also computable. \begin{figure}[h] \includegraphics[height=5cm]{image-1} \caption{The map $T$.} \end{figure} Now, $T$ is non-decreasing, and $T(x)>x$ if and only if $x<\tau$. The system $([0,1],T)$ is hence a computable dynamical system. This system has an SRB ergodic invariant measure, namely $\delta _{\tau }$, the Dirac measure placed at $\tau $.
The measure is SRB because $\tau$ attracts the whole interval to its left. Since $\tau $ is not computable, $\delta _{\tau }$ is not computable. We remark that, consistently with the previous theorems, $\delta _{\tau }$ is not isolated. We also remark that by a simple dichotomy argument one can prove that a computable function from $[0,1]$ to itself must have a computable fixed point. Hence it is not possible to construct a system over the interval having no computable invariant measure (we always have the $\delta$ measure over a fixed point). With some more work we will see that such an example can be constructed on the circle. \subsection{A computable system having no computable invariant measure}\label{example2} We go further and exhibit a computable dynamical system on a compact space which has \emph{no} computable invariant probability measure. We consider the unit circle $S$, identified with $\mathbb{R}/\mathbb{Z}$. It naturally has a computable metric structure inherited from that of $\mathbb{R}$. On $S$, there is a computable map with no computable invariant probability measure. We construct such a map $T:[0,1]\to\mathbb{R}$ satisfying $T(1)=T(0)+1$, and consider its quotient on the unit circle. From Proposition \ref{p.no-comp} we know that there is a non-empty recursively compact set $K$ containing no computable point. Let $U=(0,1)\setminus K$: this is an r.e. open set, so there are computable sequences $a_i,b_i$ ($i\geq 1$) such that $0<a_i<b_i<1$ and $U=\bigcup_i (a_i,b_i)$. Let us define non-decreasing, uniformly computable functions $f_i:[0,1]\to[0,1]$ such that $f_i(x)>x$ if $x\in (a_i,b_i)$ and $f_i(x)=x$ otherwise. For instance, $f_i(x)=2x-a_i$ on $[a_i,\frac{a_i+b_i}{2}]$ and $f_i(x)=b_i$ on $[\frac{a_i+b_i}{2},b_i]$. \begin{figure}[h] \includegraphics[height=3cm]{image-2} \caption{The map $f_i$.} \end{figure} As neither $0$ nor $1$ belongs to $K$, there is a rational number $\epsilon>0$ such that $K\subseteq [\epsilon,1-\epsilon]$. Let us define $f:[0,1]\to \mathbb{R}$ by $f(x)=x$ on $[\epsilon,1-\epsilon]$, $f(x)=2x-(1-\epsilon)$ on $[1-\epsilon,1]$ and $f(x)=\epsilon$ on $[0,\epsilon]$. We then define the map $T:[0,1]\to \mathbb{R}$ by $T(x)=\frac{f(x)}{2}+\sum_{i\geq 2} 2^{-i}f_i(x)$. $T$ is computable and non-decreasing, and $T(x)>x$ if and only if $x\in [0,1]\setminus K$. As $T(1)=1+\frac{\epsilon}{2}=1+T(0)$, we can take the quotient of $T$ modulo $1$. \begin{figure}[h] \includegraphics[height=3cm]{image} \caption{The map $T$.} \end{figure} \begin{proposition} $W=U\cup [0,\epsilon) \cup (1-\epsilon , 1] $ is a strictly invariant set: $T^{-1}W=W$. \end{proposition} \begin{proof} If $x\notin W$ then $T(x)=x\notin W$. If $x\in W$ then $T(x)\in W$. Indeed, if $T(x)\notin W$, then $T(x)$ is a fixed point, so $T$ is constant on $[x,T(x)]$ ($T$ being non-decreasing). Let $q$ be any rational number in $(x,T(x))$: $T(x)=T(q)$ is then computable, but does not belong to $W$: impossible. \end{proof} \begin{proposition}\label{bomb} The map $T$ is computable but has no computable invariant probability measure. \end{proposition} Let $x\in [0,1]$: the trajectory of $x$ is ``non-decreasing'' and converges to the first point above $x$ which is not in $U$, namely $\inf ([x,1]\setminus U)$, or to $\min(K)$ if $x>\sup(K)$. More precisely, there are two cases: (i) if $x\notin W$ then $x$ is a fixed point (unstable on the right), (ii) if $x\in W$ then the trajectory of $x$ converges to a lower semi-computable fixed point (non-computable, as it belongs to $K$).
\begin{lemma} Let $\mu$ be an invariant probability measure: then $\mu(K^c)=0$. \end{lemma} \begin{proof} Obviously $\mu (\{0\})=0 $ because $0$ is not periodic. Let $(a,b)=(a_i,b_i)$ be an interval from the description of $U$. Since $T^n(a)$ and $T^n(b)$ tend to some non-computable $\alpha$ (and hence are not eventually stationary, as each $T^n(a)$ and $T^n(b)$ is computable), the interval $(a,b)$ is wandering. Hence, by the Poincar\'e recurrence theorem it has null measure. \end{proof} \begin{proof}(of Proposition \ref{bomb}) We can conclude: let $\mu$ be a computable invariant probability measure: by the above lemma its support is then included in the complement of $W$. But the support of a computable probability measure always contains computable points (see Proposition \ref{compoint}): contradiction. \end{proof} Actually, the set of invariant measures is exactly the set of measures which give null weight to $W$. It is easy to see that in the above system the set of invariant measures is a convex, recursively compact set. Indeed, the function $\mu\mapsto\mu(W)$ is lower semi-computable, so $\{\mu:\mu(W)>0\}$ is a recursively open set. Its complement is then a recursively compact set, as the whole space of probability measures is recursively compact. The above construction hence provides an example of a convex, recursively compact set whose extremal points are not computable. We end by remarking that, with a different construction of the various $f_i$, it is also possible to give a smooth system having the same properties as the examples in this section.
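To make the first construction of this section concrete, here is a short numerical sketch (ours; it truncates the series defining $T$ and uses a computable stand-in value for $\tau$, whereas in the construction $\tau$ is of course non-computable):
\begin{verbatim}
# Numerical sketch of T(x) = sum_{i>=1} 2^{-i} max(x, tau_i).
# We take tau = 0.7 and tau_i = tau*(1 - 2^{-i}): an increasing
# computable sequence with sup_i tau_i = tau (a stand-in for the
# non-computable tau of the construction).
N = 40                                  # truncation order of the series
tau = 0.7
taus = [tau * (1.0 - 2.0 ** -i) for i in range(1, N + 1)]

def T(x):
    # truncated series; the tail sum_{i>N} 2^{-i} T_i(x) is
    # approximated by 2^{-N} x so the weights add up to 1
    s = sum(2.0 ** -i * max(x, t) for i, t in enumerate(taus, start=1))
    return s + 2.0 ** -N * x

x = 0.1
for _ in range(500):
    x = T(x)
print(x)   # creeps up towards tau = 0.7; convergence is slow, and for
           # a genuinely non-computable tau there is no computable rate
\end{verbatim}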
\newcommand{\ins}[1]{{\mbox{\tiny #1}}} \newcommand{\ind}[1]{{\mbox{\scriptsize #1}}} \newcommand{\inds}[1]{{\scriptscriptstyle #1}} \title{Biconformal symmetry and static Maxwell fields near higher-dimensional black holes} \author{Valeri P. Frolov\thanks{E-mail: [email protected]}\, and Andrei Zelnikov\thanks{E-mail: [email protected]}\\ Theoretical Physics Institute, Department of Physics, University of Alberta, Edmonton, AB, Canada T6G 2E1} \abstract{ We study the electric field created by a static electric charge near a higher-dimensional Reissner-Nordstr\"{o}m black hole. The relation between the static Green functions on the $D$-dimensional Reissner-Nordstr\"{o}m background and on the $(D+2)$-dimensional homogeneous Bertotti-Robinson spacetime is found. Using the biconformal symmetry we obtain a simple integral representation for the static Maxwell Green functions in arbitrary dimensions. We show that in a four-dimensional spacetime the static Green function obtained by the biconformal method correctly reproduces known results. We also find a closed form for the exact static Green functions and vector potentials in the five-dimensional Reissner-Nordstr\"{o}m spacetime. } \keywords{conformal symmetry, black holes, higher dimensions} \preprint{Alberta Thy 5-15} \begin{document} \section{Introduction} In this paper we continue studying fields created by static charges placed in the vicinity of a higher-dimensional static black hole. For this purpose we use the method of biconformal transformations, which was developed in our previous papers \cite{Frolov:2014kia,Frolov:2014fya} in application to the case of scalar charges in the Schwarzschild-Tangherlini and the Reissner-Nordstr\"{o}m geometries. A vector potential $A_{\mu}$ in a $D$-dimensional spacetime with metric $g_{\mu\nu}$ ($\mu,\nu=0,\ldots, D-1$) obeys the Maxwell equations \begin{equation}\label{Maxwell} F^{\mu\nu}{}\!_{;\nu}=4\pi J^{\mu} \, ,\hspace{1cm} F_{\mu\nu}=\partial_\mu A_\nu-\partial_\nu A_\mu . \end{equation} Here the coefficient $4\pi$ comes from the definition of the Maxwell action in higher dimensions. In four dimensions our choice of units corresponds to the conventional Gaussian units. There is an ambiguity in generalizations of the Maxwell equations to higher dimensions, which depends on the system of units and on the definition of an electric charge. In this paper charges are normalized in such a way that the interaction force between two charges in $D$ dimensions reads \begin{equation}\label{f} f={4\pi\over\Omega_\ins{(D-2)}}\,{e_1 e_2\over r^{D-3}}\, ,\hspace{1cm} \Omega_n={2\pi^{(n+1)/2}\over \Gamma((n+1)/2)}. \end{equation} Different choices of units in higher dimensions are also used in the literature. For example, in the paper \cite{Beach:2014aba} the authors work in a different system of units, such that the force between two charges $\tilde{e}_1$ and $\tilde{e}_2$ in the $D$-dimensional Minkowski spacetime is given by \begin{equation} f={\tilde{e}_1 \tilde{e}_2\over r^{D-3}}\,. \end{equation} Thus, our normalization of the charges and that of the paper \cite{Beach:2014aba} are related as \begin{equation} e^2={\Omega_{(D-2)}\over 4\pi}~ \tilde{e}^2 , \end{equation} where the volume $\Omega_n$ of the $n$-dimensional unit sphere $S^n$ is given in \eq{f}.
In particular one has \begin{equation} \Omega_2=4\pi\, ,\hspace{0.4cm} \Omega_3=2\pi^2\, ,\hspace{0.4cm} \Omega_4={8\pi^2\over 3}\, ,\hspace{0.4cm} \Omega_5=\pi^3 . \end{equation} In the Lorentz gauge $\nabla^{\mu}A_{\mu}=0$ the Maxwell equations become \begin{equation} \Box A_{\mu}-R_{\mu}^{\nu}A_{\nu}=-4\pi J_{\mu}. \end{equation} Let us consider the potential created by a static electric source $J^{\mu}=j(x)\,\delta^{\mu}_0$, where $j(x)$ is the charge density, in a static spacetime described by the metric \begin{equation}\begin{split}\label{met} &ds^2=-\alpha^2 dt^2+g_{ab}\,dx^a dx^b, \\ &X^{\mu}=(t,x^a)\, ,\hspace{0.4cm} \alpha=\alpha(x)\, ,\hspace{0.4cm} g_{ab}=g_{ab}(x)\, ,\hspace{1cm} a=1,\dots, D-1\,. \end{split}\end{equation} In the static case one can choose $A_a=0$ and $A_0=A_0(x)$, that is \begin{equation} A_\mu=A_0\,\delta^0_\mu. \end{equation} Then the Maxwell equations (\ref{Maxwell}) for the potential $A_\mu$ reduce to \begin{equation}\label{eqF} \hat{O}\,A_0=4\pi j\, ,\hspace{1cm} \hat{O}={1\over\alpha\sqrt{g}}\partial_a \left({1\over \alpha}\sqrt{g} g^{ab}\partial_b\right). \end{equation} Here $g=\det(g_{ab})$. The red-shift factor $\alpha$ is related to the norm of the static Killing vector $\bs{\xi}$ as follows: $\alpha=\sqrt{-\bs{\xi}^2}=\sqrt{-g_{tt}}$. We define the static Green function for the operator $\hat{O}$ as the solution of the following equation \begin{equation}\label{G00} \hat{O}\,G_{00}(x,x')={1\over \alpha\sqrt{g}}\delta(x-x'). \end{equation} The equation (\ref{eqF}) is invariant under the following {\em biconformal} transformations \begin{equation}\label{bi-conf} A_0=\bar{A}_0 \, ,\hspace{1cm} g_{ab}=\Omega^2 \,\bar{g}_{ab}\, ,\hspace{1cm} \alpha=\Omega^{n}\bar{\alpha}\, ,\hspace{1cm} j=\Omega^{-2n-2}\,\bar{j}, \end{equation} where $n\equiv D-3$ and $\Omega$ is an arbitrary function of the spatial coordinates $x^a$. These biconformal transformations can be used to relate solutions of the Maxwell equations on a physical metric to solutions on some other `reference' geometry. If the reference spacetime is more symmetric than the original one, then there is a good chance of simplifying the problem of finding the Green function exactly. This approach, for example, enabled us \cite{Frolov:2012jj} to compute the Green functions of static scalar and Maxwell fields on the background of the Majumdar-Papapetrou spacetime, which describes a set of extremally charged black holes. It was possible because the higher-dimensional Majumdar-Papapetrou metric is biconformally related to the flat Minkowski metric. In this paper we use biconformal transformations to compute the static Green functions of the Maxwell field on the background of a generic higher-dimensional Reissner-Nordstr\"{o}m black hole. \section{Point charge near higher-dimensional Reissner-Nordstr\"{o}m black hole} Let us consider a static spherically symmetric $D$-dimensional metric of the form \begin{equation}\label{ds} ds^2=-f(r)\,dt^2+f^{-1}(r)\,dr^2+r^2\,d\omega^2_{n+1}\, , \end{equation} where $n=D-3$ and $d\omega_{n+1}^2$ is the line element on an $(n+1)$-dimensional unit sphere \begin{equation}\label{theta} d\omega_{n+1}^2=d\theta_{n}^2+\sin^2\theta_{n}\,d\omega_{n}^2\, ,\hspace{1cm} d\omega_{0}^2 = d\phi^2\, ,\hspace{1cm} \theta_n=\theta . \end{equation} We denote $\theta_0\equiv\phi\in[0,2\pi]$. The other angular coordinates $\theta_{i>0}\in[0,\pi]$. The metric function $f(r)$ of the Reissner-Nordstr\"{o}m metric is \begin{equation}\label{RN} f=1-{2M\over r^n}+{Q^2\over r^{2n}} .
\end{equation} For real positive $M$ and real $Q$ which satisfy the condition $|Q|\le M$, the metric \eq{ds}--\eq{RN} describes the geometry of a higher-dimensional generalization of a spherically symmetric electrically charged black hole\footnote{Here we consider the electric field of a test charge on the background of an electrically charged black hole. The total electric field is the sum of the electric field of the test charge and that of the black hole. This distinction becomes important when one considers quantities that are nonlinear in the field, for example, the energy of the Maxwell field or the self-force of a charge.}. The parameters $M$ and $Q$ are proportional to the Arnowitt-Deser-Misner mass and charge of the black hole, respectively. The coefficients of proportionality (see, e.g., \cite{Chamblin:1999hg}) depend on the dimensionality of the spacetime and on the choice of units. It is convenient to introduce a new radial variable $\rho$ related to the radial coordinate $r$ as follows \begin{equation}\label{rho_r} \rho={r^n -M\over \mu}\, ,\hspace{1cm} \mu=\sqrt{M^2-Q^2}\, ,\hspace{1cm} r^n=M+\mu\rho\, . \end{equation} Then the Reissner-Nordstr\"{o}m metric \eq{ds}--\eq{RN} takes the form \begin{equation}\begin{split}\label{R-N} ds^2&=-{\mu^2(\rho^2-1)\over(M+\mu\rho)^2}\,dt^2 +(M+\mu\rho)^{2/n}\left[{1\over n^2(\rho^2-1)}\,d\rho^2+d\omega^2_{n+1} \right] . \end{split}\end{equation} The horizon corresponds to $ \rho=1 $, and its (gravitational) radius $r_\ins{g}$ is given by the expression $ r_\ins{g}^n=M+\mu . $ The surface gravity at the horizon is \begin{equation}\label{kappa} \kappa={n\mu \over r_\ins{g}^{n+1}} . \end{equation} Taking into account that \begin{equation}\begin{split} &\alpha={\mu\sqrt{\rho^2-1}\over M+\mu\rho}\, ,\hspace{0.4cm} g^{\rho\rho}=n^2(\rho^2-1)(M+\mu\rho)^{-2/n} \, ,\hspace{0.4cm} g^{\theta\theta}=(M+\mu\rho)^{-2/n},\\ &\sqrt{-g^\ins{D}}=\alpha\sqrt{g}={1\over n}{\mu\,(M+\mu\rho)^{2/n}}\sqrt{g_\omega} \, ,\hspace{1cm} \sqrt{g_\omega}=\prod_{k=1}^{n}(\sin\theta_k)^k, \end{split}\end{equation} the equation for the Green function \eq{G00} takes the form \begin{equation} \left[n^2\,\partial_{\rho}\,(M+\mu\rho)^2\,\partial_{\rho}+{(M+\mu\rho)^2\over\rho^2-1}\bigtriangleup^{n+1}_{\omega} \right]G_{00}(x,x')=n\mu\,\delta(\rho-\rho')\delta(\omega,\omega') . \end{equation} If we use the ansatz \begin{equation} G_{00}(x,x')=-{\mu^2\,(\rho^2-1)(\rho'{}^2-1)\over (M+\mu\rho)(M+\mu\rho')}\,H(x,x') , \end{equation} then the equation for $H(x,x')$ becomes \begin{equation} \left\{n^2[(\rho^2-1)\,\partial^2_{\rho}+4\rho\,\partial_{\rho}+2]+\bigtriangleup^{n+1}_{\omega}\right\}\,H(x,x')=-{n\over\mu (\rho'{}^2-1)}\,\delta(\rho-\rho')\delta(\omega,\omega') . \end{equation} In order to solve this equation we shall use the following trick. We first consider the equation for the static Green function of a massive scalar operator $\Box-m^2$ with the mass \begin{equation}\label{m2} m^2=-2n^2/a^2=-2/b^2 \end{equation} on a $(D+2)$-dimensional homogeneous spacetime, which is a direct product of the $(n+1)$-dimensional sphere of radius $a$ and a four-dimensional AdS spacetime or, in the Euclidean version, the hyperboloid $H^4$ of radius $b=a/n$.
\begin{equation}\label{H4S} d\tilde{s}^2=d\ell^2_\inds{H^4}+a^2\,d\omega^2_{n+1}\, ,\hspace{1cm} b={a\over n}\, ,\hspace{1cm} X_\ins{E}=(\rho,\sigma,\bar{\theta},\bar{\phi},\theta, \theta_{n-1},\dots,\phi), \end{equation} \begin{equation}\label{dell2} d\ell^2_\inds{H^4}=b^2\left[{1\over \rho^2-1}\,d\rho^2+(\rho^2-1)\,d\bar{\omega}^2_{3}\right] , \end{equation} \begin{equation}\label{bar_omega} d\bar{\omega}_{n+1}^2=d\bar{\theta}_{n}^2+\sin^2\bar{\theta}_{n}\,d\bar{\omega}_{n}^2\, ,\hspace{1cm} d\bar{\omega}_{0}^2 = d\bar{\phi}^2\, ,\hspace{1cm} \bar{\theta}_n=\sigma . \end{equation} The Green function of the Euclidean massive operator \begin{equation}\label{hatO} \hat{O}=\Box_\ins{E} -m^2 \end{equation} defined on the $(D+2)$-dimensional Bertotti-Robinson spacetime can be obtained by the heat kernel method. In order to calculate the function $H(x,x')$ one first has to find the $(D+2)$-dimensional Green function of the operator \eq{hatO}, which satisfies the equation \begin{equation} \hat{O}\,{\mathbb G}_\inds{\hat{O}} =-\delta^\inds{D+2}(X_\ins{E},X'_\ins{E}). \end{equation} The hyperboloid $H^4$ is spherically symmetric. We shall show that integration of ${\mathbb G}_\inds{\hat{O}}$ over all angular coordinates $\bar{\theta}_{k}$ on $H^4$ gives \begin{equation} H(x,x')={a^{n+3}\over n^3\mu}\int d^3\bar{\omega}\sqrt{g_\inds{\bar{\omega}}}\,{\mathbb G}_\inds{\hat{O}}(X_\ins{E},X'_\ins{E}) . \end{equation} This approach is similar to that used in the case of a scalar field \cite{Frolov:2014kia,Frolov:2014fya}. The difference is that the required static Green function in the spacetime of a black hole is generated by the Green function of the massive scalar operator, which is defined on the $H^4\times S^{n+1}$ geometry. \section{Green functions and heat kernels} \subsection{General formulas} The Euclidean Green function for the self-adjoint operator $\hat{O}_\ins{E}$ in any dimension can be written in terms of the heat kernel of this operator \begin{equation} {\mathbb G}_\inds{\hat{O}}(X_\ins{E},X'_\ins{E})=\int_0^{\infty}ds\,K_\inds{\hat{O}}(s|X_\ins{E},X'_\ins{E} ). \end{equation} Here the heat kernel $K_\inds{\hat{O}}(s|X_\ins{E},X'_\ins{E})$ is the solution of the problem \begin{equation} (\partial_s-\hat{O})\,K_\inds{\hat{O}}(s|X_\ins{E},X'_\ins{E})=0 \, ,\hspace{1cm} K_\inds{\hat{O}}(0|X_\ins{E},X'_\ins{E})=\delta(X_\ins{E},X'_\ins{E}) \,, \end{equation} which satisfies the same boundary conditions with respect to its arguments $X_\ins{E}$ and $X'_\ins{E}$ as the Green function in question. Because the geometry of the $(D+2)$-dimensional Bertotti-Robinson spacetime has the form of a direct sum of two homogeneous spaces, the heat kernel $K_\inds{\hat{O}}$ has the form of a product of the heat kernels $K_\inds{H^4}$ and $K_\inds{S^{n+1}}$ for the reduced box operators defined on the hyperboloid $H^4$ and on the sphere $S^{n+1}$, respectively \begin{equation}\begin{split}\label{KBR} K_\inds{\hat{O}}(s|\rho,\bar{\theta}_k,\theta_n;\rho',\bar{\theta}'_k,\theta'_n)&=e^{-m^2s}\,K_\inds{H^4}(s|\rho,\bar{\theta}_k;\rho',\bar{\theta}'_k)\,K_\inds{S^{n+1}}(s|\theta_n;\theta'_n) . \end{split}\end{equation} Both spaces $H^4$ and $S^{n+1}$ are homogeneous and isotropic, and the corresponding heat kernels are known explicitly \cite{Camporesi:1990wm}.
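The factorization \eq{KBR} is the standard product-space property of heat kernels. As a simple orientation example (ours), on flat $\mathbb{R}^{m+k}=\mathbb{R}^{m}\times \mathbb{R}^{k}$ with $X=(x,y)$ one has \begin{equation*} K_{\mathbb{R}^{m+k}}(s|X,X')={e^{-|X-X'|^2/4s}\over (4\pi s)^{(m+k)/2}} ={e^{-|x-x'|^2/4s}\over (4\pi s)^{m/2}}\;{e^{-|y-y'|^2/4s}\over (4\pi s)^{k/2}} =K_{\mathbb{R}^{m}}(s|x,x')\,K_{\mathbb{R}^{k}}(s|y,y') , \end{equation*} since $|X-X'|^2=|x-x'|^2+|y-y'|^2$; the mass term contributes only the overall factor $e^{-m^2 s}$ in \eq{KBR}.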
In the case of the operator $\Box_\ins{E}-m^2$ defined on the $(n+5)$-dimensional Bertotti-Robinson metric \eq{H4S} the Euclidean Green function satisfies the equation \begin{equation} \left(\Box_\ins{E} -m^2\right){\mathbb G}_\inds{\hat{O}}(X_\ins{E},X'_\ins{E})=-\delta(X_\ins{E},X'_\ins{E}) . \end{equation} Because of the symmetry of the metric, this Green function is, in fact, a function of only two geodesic distances: $\chi$ between points on the hyperboloid and $\gamma$ between points on the sphere \begin{equation} {\mathbb G}_\inds{\hat{O}}(X_\ins{E},X'_\ins{E})={\mathbb G}_\inds{\hat{O}}(\chi,\gamma) . \end{equation} Explicitly, the equation for the $(D+2)$-dimensional Green function ${\mathbb G}_\inds{\hat{O}}$ reads \begin{equation}\begin{split} \left\{n^2\left[(\rho^2-1)\,\partial^2_{\rho}+4\rho\,\partial_{\rho}+{1\over\rho^2-1}\bigtriangleup^{3}_{\bar{\omega}}\right]-a^2 m^2+\bigtriangleup^{n+1}_{\omega} \right\} \,{\mathbb G}_\inds{\hat{O}}(\chi,\gamma)&\\ =-{n^4\over a^{n+3} (\rho'{}^2-1)}\,\delta(\rho-\rho')\delta(\bar{\omega},\bar{\omega}')\delta(\omega, \omega') .& \end{split}\end{equation} \subsection{Heat kernel on $H^4$} The heat kernel of the four-dimensional Laplace operator defined on the hyperboloid \eq{dell2} of radius $b$ reads \begin{equation}\label{KH4} K_\inds{H^4}(s|\chi)=-e^{-2s/b^2}\left({1\over 2\pi b^2}{\partial\over\partial\cosh\chi}\right)K_\inds{H^2}(s|\chi) . \end{equation} Here $K_\inds{H^2}(s|\chi)$ is the heat kernel of the Laplace operator defined on the two-dimensional hyperboloid $H^2$ of the same radius \begin{equation}\label{H2} d\ell^2_\inds{H^2}=b^2\left[(\rho^2-1)^{-1}\,d\rho^2+(\rho^2-1)\,d\sigma^2\right] . \end{equation} It reads \cite{Camporesi:1990wm} \begin{equation} K_\inds{H^2}(s|\chi)={\sqrt{2} b \over (4\pi s)^{3/2}}\,e^{-s/(4b^2)}\int_{\chi}^{\infty}dy\,{y\,e^{-b^2 y^2/(4s)}\over(\cosh y-\cosh(\chi))^{1/2}} . \end{equation} Here $\chi$ is the geodesic distance between two points on $H^2$ of unit radius ($b=1$). It is given by the relation \begin{equation}\begin{split}\label{chi} \cosh(\chi)&=\rho\rho'-\sqrt{\rho^2-1}\sqrt{\rho'{}^2-1}\cos(\sigma-\sigma') . \end{split}\end{equation} \subsection{Heat kernel on $S^{n+1}$} The heat kernel on a two-dimensional sphere $S^2$ of radius $a$ reads \cite{Camporesi:1990wm} \begin{equation} K_\inds{S^2}(s|\gamma)={\sqrt{2} a \over (4\pi s)^{3/2}}\,e^{s/(4 a^2)}\sum_{k=-\infty}^{\infty}(-1)^k \int_{\gamma}^{\pi} d\phi {(\phi+2\pi k)\,e^{-a^2(\phi+2\pi k)^2/(4s)}\over(\cos\gamma-\cos\phi)^{1/2}} . \end{equation} Another equivalent representation of this kernel is \begin{equation} K_\inds{S^2}(s|\gamma)={1\over 4\pi a^2} \sum_{l=0}^{\infty}(2l+1)P_l(\cos\gamma)\,e^{-{sl(l+1)\over a^2}} . \end{equation} Here $\gamma$ is the geodesic distance between two points on the unit sphere $S^2$ ($a=1$) \begin{equation} \cos\gamma=\cos(\theta_1)\cos(\theta'_1) +\sin(\theta_1)\sin(\theta'_1)\cos(\phi-\phi') . \end{equation} The heat kernel on the three-dimensional sphere $S^3$ of radius $a$ reads \cite{Camporesi:1990wm} \begin{equation} K_\inds{S^3}(s|\gamma)={1\over (4\pi s)^{3/2}}\,e^{s/a^2}\sum_{k=-\infty}^{\infty}{(\gamma+2\pi k) \,e^{-a^2(\gamma+2\pi k)^2/(4s)}\over\sin\gamma} , \end{equation} where $\gamma$ is the geodesic distance between two points on the unit sphere $S^3$ ($a=1$) \begin{equation} \cos\gamma=\cos(\theta_2)\cos(\theta'_2)+\sin(\theta_2)\sin(\theta'_2)[ \cos(\theta_1)\cos(\theta'_1)+\sin(\theta_1)\sin(\theta'_1)\cos(\phi-\phi')] .
\end{equation} The heat kernels on all higher-dimensional spheres $S^{n+1}$ can be derived from $K_\inds{S^2}$ and $K_\inds{S^3}$ using the relations (see Eqs.~(8.12)--(8.13) of \cite{Camporesi:1990wm}) \begin{equation}\label{KS} K_\inds{S^{n+1}}(s|\gamma)=e^{(n^2-1)s\over 4 a^2}\left({1\over 2\pi a^2}{\partial\over \partial \cos\gamma}\right)^{(n-1)\over 2}K_\inds{S^2}(s|\gamma)\, ,\hspace{1cm} n~~\mbox{odd} , \end{equation} \begin{equation} K_\inds{S^{n+1}}(s|\gamma)=e^{(n^2-4) s\over 4 a^2}\left({1\over 2\pi a^2}{\partial\over \partial \cos\gamma}\right)^{(n-2)\over 2}K_\inds{S^3}(s|\gamma)\, ,\hspace{1cm} n~~\mbox{even} . \end{equation} \subsection{Heat kernel and Green functions on $H^4\times S^{n+1}$} For computational reasons, we also use the Green function of the Laplace operator defined on the $D$-dimensional Euclidean Bertotti-Robinson space $H^2\times S^{n+1}$, \begin{equation}\label{H2S} d\bar{s}^2=d\ell^2_\inds{H^2}+a^2\,d\omega^2_{n+1}\, ,\hspace{1cm} b={a\over n}, \end{equation} \begin{equation}\label{dell} d\ell^2_\inds{H^2}=b^2\left[{1\over \rho^2-1}\,d\rho^2+(\rho^2-1)\,d\sigma^2\right]. \end{equation} In the latter case the corresponding Euclidean Green function, which we denote $\bar{\mathbb G}$, satisfies the equation \begin{equation} \bar{\Box}_\ins{E}\,\bar{\mathbb G}(X,X')=-\delta(X,X') \, ,\hspace{1cm} X^\alpha=(\rho,\sigma,\theta_n,\dots,\phi). \end{equation} Because of the symmetries of the Bertotti-Robinson spacetime, the Green function $\bar{\mathbb G}$ and the heat kernel $\bar{K}$ of the operator $\bar{\Box}_\ins{E}$ are functions of only the geodesic distances $\gamma$ and $\chi$ between the points on the sphere and on the hyperboloid, respectively. One can write explicitly \begin{equation}\begin{split} \left\{n^2\left[(\rho^2-1)\,\partial^2_{\rho}+2\rho\,\partial_{\rho}+{1\over\rho^2-1}\, \partial^2_{\sigma}\right]+\bigtriangleup^{n+1}_{\omega} \right\} \,\bar{\mathbb G}(\chi,\gamma)&\\ =-{n^2\over a^{n+1}}\,\delta(\rho-\rho')\delta(\sigma-\sigma')\delta(\omega, \omega') .& \end{split}\end{equation} Using \eq{m2}, \eq{KH4}, \eq{KS}, and \eq{KBR} one can express the heat kernel for the $(D+2)$-dimensional massive scalar operator $\hat{O}$ in terms of that of the $D$-dimensional massless operator $\bar{\Box}_\ins{E}$ \begin{equation} K_\inds{\hat{O}}(s|\chi,\gamma)=-\left({n^2\over 2\pi a^2}{\partial\over\partial\cosh\chi}\right) \bar{K}(s|\chi,\gamma) , \end{equation} where we put $b=a/n$. The heat kernel $\bar{K}(s|\chi,\gamma)$ is that of the massless scalar Euclidean d'Alembert operator. It has been calculated in \cite{Frolov:2014kia,Frolov:2014fya}. The Green function of the scalar operator is the integral of the corresponding heat kernel over the proper time $s$ \begin{equation} {\mathbb G}_\inds{\hat{O}}(\chi,\gamma)=\int_0^{\infty}ds\,K_\inds{\hat{O}}(s|\chi,\gamma) . \end{equation} Therefore, one can write \begin{equation} {\mathbb G}_\inds{\hat{O}}(\chi,\gamma)=-\left({n^2\over 2\pi a^2}{\partial\over\partial\cosh\chi}\right) \bar{\mathbb G}(\chi,\gamma) . \end{equation} Note that in this relation both ${\mathbb G}_\inds{\hat{O}}$ and $\bar{\mathbb G}$ are considered as functions of $\chi$ and $\gamma$. On the other hand, one has to keep in mind that the geodesic distances $\chi$ on $H^4$ and on $H^2$ are quite different functions of the coordinates on these hyperboloids.
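The derivative relation above has a simple flat-space analogue, which we add for orientation (ours): for $K_{\mathbb{R}^{d}}(s|r)=(4\pi s)^{-d/2}\,e^{-r^{2}/4s}$ one checks directly that \begin{equation*} -{1\over 2\pi r}{\partial\over\partial r}\,K_{\mathbb{R}^{d}}(s|r) ={1\over 4\pi s}\,(4\pi s)^{-d/2}\,e^{-r^{2}/4s} =K_{\mathbb{R}^{d+2}}(s|r) , \end{equation*} i.e. differentiating with respect to the squared distance raises the dimension by two, just as the derivative with respect to $\cosh\chi$ does on the hyperboloid and the derivative with respect to $\cos\gamma$ does on the sphere.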
One should also remember that the corresponding static Green functions are defined as integrals of ${\mathbb G}_\inds{\hat{O}}$ and $\bar{\mathbb G}$ over the Euclidean time $\sigma$ with different measures. The scalar Green function $\bar{\mathbb G}(\chi,\gamma)$ has been calculated in our papers \cite{Frolov:2014kia,Frolov:2014fya}. Taking into account that \begin{equation} \int d^3\bar{\omega}\sqrt{g_\inds{\bar{\omega}}}\,\delta(\bar{\omega},\bar{\omega}') =1 \, ,\hspace{1cm} \int d^3\bar{\omega}\sqrt{g_\inds{\bar{\omega}}}\,\bigtriangleup^{3}_{\bar{\omega}} (\dots) =0 , \end{equation} we obtain \begin{equation} H(x,x')={a^{n+3}\over n^3\mu}\int d^3\bar{\omega}\sqrt{g_\inds{\bar{\omega}}}\,{\mathbb G}_\inds{\hat{O}}(\chi,\gamma) . \end{equation} The spherical symmetry of the $\bar{\omega}$-sphere allows one to choose the coordinates on this sphere in such a way that $\chi$ depends only on the angle $\sigma$. In these coordinates, for any function $f(\chi)$, \begin{equation} \int d^3\bar{\omega}\sqrt{g_\inds{\bar{\omega}}}\,f(\chi)=4\pi\int_0^{\pi}d\sigma\,\sin^2\sigma\,f(\chi) =2\pi\int_0^{2\pi}d\sigma\,\sin^2\sigma\,f(\chi) . \end{equation} Thus \begin{equation} H(x,x')=2\pi{a^{n+3}\over n^3\mu}\int_0^{2\pi} d\sigma\,\sin^2\sigma\,{\mathbb G}_\inds{\hat{O}}(\chi,\gamma)= -{a^{n+1}\over n\mu}\int_0^{2\pi} d\sigma\,\sin^2\sigma\,{\partial\over\partial\cosh\chi}\bar{\mathbb G}(\chi,\gamma) . \end{equation} Thus one can formally express the static Green function for the Maxwell field in terms of the static Green function $\bar{\mathbb G}(\chi,\gamma)$ of the scalar field, which has been calculated in \cite{Frolov:2014kia,Frolov:2014fya}: \begin{equation}\label{G_00} G_{00}={\mu^2(\rho^2-1)(\rho'{}^2-1)\over(M+\mu\rho)(M+\mu\rho')}{a^{n+1}\over n\mu}\int_0^{2\pi}d\sigma\,\sin^2\sigma\,{\partial\over\partial\cosh\chi}\bar{\mathbb G}(\chi,\gamma) . \end{equation} \subsection{Even dimensions} In even dimensions the exact static Green function can be represented in the form \begin{equation}\label{EvenG} \bar{\mathbb G}(\chi,\gamma)={1\over a^{n+1}}\,{1\over 2\,(2\pi)^{n+3\over 2}} \left({\partial\over \partial \cos\gamma}\right)^{(n+1)/2} \,A_n . \end{equation} When $n\ge 2$, the functions $A_n(\sigma,\rho,\rho';\gamma)$ are given by the integral \begin{equation}\begin{split}\label{A_n} A_n&=\int_{\chi}^{\infty}dy\,{1\over\sqrt{\cosh\left({y}\right)-\cosh\left({\chi}\right)}}\, {\sinh\left({y\over n}\right)\over \sqrt{\cosh\left({y\over n}\right)- \cos\left({\gamma}\right)}} . \end{split}\end{equation} At large $y$ the integrand in \eq{A_n} behaves like $\exp[-y(n-1)/(2n)]$. Therefore, \eq{A_n} is convergent for any $n\ge 2$. In the case of the four-dimensional spacetime $(n=1)$ the integrand has to be modified to guarantee convergence of the integral. For example, one can subtract the asymptotic form of the integrand, which does not depend on $\gamma$. Since \eq{EvenG} contains the derivative of $A_n$ with respect to $\gamma$, the resulting Green function does not depend on the particular form of the subtracted $\gamma$-independent asymptotic. Thus, for $n=1$ one can choose \begin{equation}\label{A_1} A_1=\int_{\chi}^{\infty}dy\,{1\over\sqrt{\cosh\left({y} \right)-\cosh\left({\chi}\right)}} \,\left[{\sinh\left({y}\right)\over\sqrt{\cosh\left({y}\right)-\cos\left({\gamma}\right)}} -{\sinh\left({y}\right)\over\sqrt{\cosh\left({y}\right)+1}}\right] .
\end{equation} Taking into account \eq{G_00} we obtain \begin{equation}\label{G_00even} G_{00}={\mu^2(\rho^2-1)(\rho'{}^2-1)\over(M+\mu\rho)(M+\mu\rho')}{1\over 2\,(2\pi)^{n+3\over 2}}{1\over n\mu}\left({\partial\over \partial \cos\gamma}\right)^{(n+1)/2}\int_0^{2\pi}d\sigma \,\sin^2\sigma\,\tilde{A}_n \,, \end{equation} where \begin{equation} \tilde{A}_n={\partial\over\partial\cosh\chi}A_n \,. \end{equation} Thus we obtain \begin{equation}\begin{split}\label{tildeA_n} \tilde{A}_n&=\int_{\chi}^{\infty}dy\,{1\over\sqrt{\cosh\left({y}\right)-\cosh\left({\chi}\right)}}\,{\partial\over\partial y} \left[{1\over\sinh(y)}{\sinh\left({y\over n}\right)\over\sqrt{\cosh\left({y\over n}\right)- \cos\left({\gamma}\right)}}\right] . \end{split}\end{equation} Note that \eq{tildeA_n} remains valid for all $n$, including the $n=1$ case. In the latter case one has \begin{equation}\label{tildeA_1} \tilde{A}_1=-{\partial\over \partial \cos\gamma}\,A_1\,. \end{equation} \subsection{Odd dimensions} In odd-dimensional spacetimes we have \begin{equation}\label{OddG} \bar{\mathbb G}(\chi,\gamma)={1\over a^{n+1}}\,{1\over\sqrt{2}\,(2\pi)^{{n+4\over2}}} \left({\partial\over \partial \cos\gamma}\right)^{n/2}\,\int_0^{2\pi} d\sigma\,B_n \,, \end{equation} where \begin{equation} B_n=\int_{\chi}^{\infty}dy\,{1\over\sqrt{\cosh y-\cosh\chi}}\,{\sinh\left({y\over n}\right)\over \cosh\left({y\over n}\right)-\cos\gamma} . \end{equation} Taking into account \eq{G_00} one can write \begin{equation}\label{G_00odd} G_{00}={\mu^2(\rho^2-1)(\rho'{}^2-1)\over(M+\mu\rho)(M+\mu\rho')}{1\over\sqrt{2}\, (2\pi)^{{n+4\over2}}}{1\over n\mu}\left({\partial\over \partial \cos\gamma}\right)^{n/2}\int_0^{2\pi}d\sigma\,\sin^2\sigma\,\tilde{B}_n \,, \end{equation} where \begin{equation} \tilde{B}_n={\partial\over\partial\cosh\chi}B_n \,, \end{equation} \begin{equation} \tilde{B}_n=\int_{\chi}^{\infty}dy\,{1\over\sqrt{\cosh y-\cosh\chi}}\,{\partial\over\partial y} \left[{1\over\sinh(y)}\,{\sinh\left({y\over n}\right)\over \cosh\left({y\over n}\right)-\cos\gamma}\right] \,. \end{equation} \section{Closed form of the Green function: Examples}\label{section4} \subsection{Four dimensions} In four dimensions ($n=1$) the integral \eq{A_1} reads \cite{Frolov:2014kia,Frolov:2014fya} \begin{equation} A_1=\ln\left({\cosh\left(\chi\right)+1\over \cosh\left(\chi\right)-\cos\left(\gamma\right)} \right) . \end{equation} Hence, according to \eq{tildeA_1}, \begin{equation} \tilde{A}_1=-{1\over \cosh\chi-\cos\gamma} . \end{equation} The integral over $\sigma$ in \eq{G_00even} can be taken explicitly, and we obtain the closed form of the static Green function \begin{equation}\label{G00_4a} G_{00}(x,x')=-{1\over 4\pi (M+\mu\rho)(M+\mu\rho')}\left[\mu\,{\rho\rho'-\cos\gamma\over \sqrt{ \rho^2+\rho'{}^2-2\rho\rho'\cos\gamma-\sin^2\gamma}}-\mu\right] . \end{equation} The last term, $-\mu/[4\pi (M+\mu\rho)(M+\mu\rho')]$, in this formula describes a zero-mode contribution, which satisfies the homogeneous equation. One should add an extra zero-mode contribution \begin{equation} {C\over 4\pi (M+\mu\rho)(M+\mu\rho')} \end{equation} with a coefficient $C$ chosen such that the flux of the electric field across any surface surrounding the charge and the black hole does not depend on the position of the charge.
This leads to the final result for the static Green function in the four-dimensional Reissner-Nordstr\"{o}m geometry \begin{equation}\label{G00_4} G_{00}(x,x')=-{1\over 4\pi (M+\mu\rho)(M+\mu\rho')}\left[\mu\,{\rho\rho'-\cos\gamma\over \sqrt{ \rho^2+\rho'{}^2-2\rho\rho'\cos\gamma-\sin^2\gamma}}+ M\right] , \end{equation} which satisfies the correct fall-off conditions at infinity. This four-dimensional Green function was obtained earlier in \cite{Linet:1977vv}. The vector potential created by a point charge $e$ placed at the point $x'$, \begin{equation} J^\mu=e\,\delta(x-x')\delta_0^\mu , \end{equation} reads \begin{equation} A_0(x)=4\pi e\, G_{00}(x,x') . \end{equation} \subsection{Five dimensions} In five dimensions ($n=2$) the function $B_2$ entering the Green function \eq{G_00odd} takes the form \begin{equation} B_2=\int_{\chi}^{\infty}dy\,{1\over\sqrt{\cosh y-\cosh\chi}}\,{\sinh\left({y\over 2}\right)\over \cosh\left({y\over 2}\right)-\cos\gamma} . \end{equation} This integral can be taken explicitly: \begin{equation} B_2={\sqrt{2}\over 2}{1\over (\cosh^2(\chi/2)-\cos^2\gamma)^{1/2}}\left[\arctan\left({ \cos\gamma\over\sqrt{\cosh^2(\chi/2)-\cos^2\gamma}}\right)+{\pi\over 2}\right] . \end{equation} The Green function is \begin{equation}\label{G_00_5} G_{00}={\mu(\rho^2-1)(\rho'{}^2-1)\over(M+\mu\rho)(M+\mu\rho')}{1\over 2\sqrt{2}\,(2\pi)^3}\left({\partial\over \partial \cos\gamma}\right)\int_0^{2\pi}d\sigma\,\sin^2\sigma\,\tilde{B}_2 \,, \end{equation} where \begin{equation} \tilde{B}_2=\int_{\chi}^{\infty}dy\,{1\over\sqrt{\cosh y-\cosh\chi}}\,{\partial\over\partial y} \left[{1\over\sinh(y)}\,{\sinh\left({y\over 2}\right)\over \cosh\left({y\over 2}\right)-\cos\gamma}\right] \,, \end{equation} that is, \begin{equation}\begin{split} \tilde{B}_2&=-{\sqrt{2}\over 4}{1\over (\cosh^2(\chi/2)-\cos^2\gamma)^{3/2}}\left[\arctan\left({ \cos\gamma\over\sqrt{\cosh^2(\chi/2)-\cos^2\gamma}}\right)+{\pi\over 2}\right]\\ &-{\sqrt{2}\over 4}{\cos\gamma\over\cosh^2(\chi/2)(\cosh^2(\chi/2)-\cos^2\gamma)} . \end{split}\end{equation} The result of the integration over $\sigma$ in \eq{G_00_5}, \begin{equation} \int_0^{2\pi}d\sigma\,\sin^2\sigma\,\tilde{B}_2={4\sqrt{2}\,\pi\over(\rho^2-1)(\rho'{}^2-1)}\,Q , \end{equation} can be expressed in terms of elliptic integrals. The result reads \begin{equation}\label{G_5pre} G_{00}={\mu\over 4\,\pi^2\,(M+\mu\rho)(M+\mu\rho')}{\partial\over \partial\cos\gamma}\,Q , \end{equation} where \begin{equation}\begin{split}\label{G_5a} Q&=-q\left[\mathbf{E}(\eta,\varkappa)-2\vartheta(\cos\gamma)\mathbf{E}(\varkappa)\right]\\ &+{q^2+p^2\over 2\,q}\left[\mathbf{F}(\eta,\varkappa)-2\vartheta(\cos\gamma)\mathbf{K}(\varkappa)\right] +\,{q_0-p_0\over q_0}\cos\gamma . \end{split}\end{equation} Here $\vartheta(x)$ is the Heaviside step function, the function $Q$ is not to be confused with the charge of the black hole, and \begin{equation}\begin{split}\label{G_5b} p&=\sqrt{\rho\rho'-\sqrt{\rho^2-1}\sqrt{\rho'{}^2-1}+1-2\cos^2\gamma}\,\big/\sqrt{2} ,\\ q&=\sqrt{\rho\rho'+\sqrt{\rho^2-1}\sqrt{\rho'{}^2-1}+1-2\cos^2\gamma}\,\big/\sqrt{2} ,\\ p_0&=\sqrt{\rho\rho'-\sqrt{\rho^2-1}\sqrt{\rho'{}^2-1}+1}\,\big/\sqrt{2} ,\\ q_0&=\sqrt{\rho\rho'+\sqrt{\rho^2-1}\sqrt{\rho'{}^2-1}+1}\,\big/\sqrt{2} ,\\ \varkappa&={\sqrt{q^2-p^2}\over q}\, ,\hspace{1cm} \sin\eta={q\over q_0}\,\mbox{sign}(\cos\gamma) . \end{split}\end{equation} Note that in spite of the appearance of the Heaviside step function $\vartheta(\cos\gamma)$ in the expression \eq{G_5a}, the function $Q$ is continuous and smooth at $\gamma=\pi/2$, so that the Green function \eq{G_5pre} is also continuous and smooth everywhere.
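Closed forms of this kind are easy to validate numerically. As an illustration, the following sketch (ours; any standard quadrature routine would do) checks the four-dimensional building block \eq{A_1} against its closed form quoted in the previous subsection:
\begin{verbatim}
# Numerical check (ours) of A_1 against the closed form
# A_1 = ln[(cosh(chi)+1)/(cosh(chi)-cos(gamma))].
import numpy as np
from scipy.integrate import quad

def integrand(y, chi, gamma):
    return (np.sinh(y) / np.sqrt(np.cosh(y) - np.cosh(chi))
            * (1.0 / np.sqrt(np.cosh(y) - np.cos(gamma))
               - 1.0 / np.sqrt(np.cosh(y) + 1.0)))

chi, gamma = 1.3, 0.8
numeric, _ = quad(integrand, chi, 50.0, args=(chi, gamma))
closed = np.log((np.cosh(chi) + 1.0) / (np.cosh(chi) - np.cos(gamma)))
print(numeric, closed)   # the two values agree to quadrature accuracy
\end{verbatim}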
One has to add to $G_{00}$ a zero-mode contribution of the form \begin{equation}\label{C} {C(\rho')\over 4\pi^2 (M+\mu\rho)} , \end{equation} in order to satisfy the boundary condition at infinity, namely that the total flux of the electric field through a surface surrounding the electric charge should not depend on the position of the charge. This condition uniquely fixes the function $C(\rho')$. When $\rho \rightarrow\infty$ we get \begin{equation} {\partial\over \partial\cos\gamma}\,Q \big|_{\rho \rightarrow\infty}= 1-\rho' \,. \end{equation} Therefore, one has to add the zero mode \eq{C} with \begin{equation} C=-{M+\mu\over M+\mu\rho'} \end{equation} to get the proper asymptotics of the Green function at infinity \begin{equation} G_{00}\big|_{\rho \rightarrow\infty}=-{1\over 4\pi^2 r^2}=-{1\over 4\pi^2 (M+\mu\rho)} . \end{equation} Finally, we obtain the closed form of the static Green function satisfying the correct fall-off conditions at infinity \begin{equation}\label{G_5} G_{00}=-{1\over 4\,\pi^2\,(M+\mu\rho)(M+\mu\rho')}\left[M+\mu\,{p_0\over q_0}-\mu {\partial\over \partial\cos\gamma}\,W\right] , \end{equation} where \begin{equation}\label{P} W=-q\left[\mathbf{E}(\eta,\varkappa)-2\vartheta(\cos\gamma)\mathbf{E}(\varkappa)\right] +{q^2+p^2\over 2\,q}\left[\mathbf{F}(\eta,\varkappa)-2\vartheta(\cos\gamma)\mathbf{K}(\varkappa)\right] , \end{equation} and the other parameters are defined by \eq{G_5b}. This closed form of the static Green function of the Maxwell field is new. The vector potential created by a point charge $e$ placed at the point $x'$, \begin{equation} J^\mu=e\,\delta(x-x')\delta_0^\mu , \end{equation} is equal to \begin{equation} A_0(x)=4\pi e\, G_{00}(x,x') . \end{equation} Here the coefficient $4\pi$ comes from the normalization of the charge in the higher-dimensional Maxwell equations \eq{Maxwell}. \section{Near-horizon limit} In the vicinity of the horizon the gravitational field becomes approximately homogeneous. In the limit of an infinite gravitational radius the static Green function should reproduce the result of \cite{Frolov:2014gla} for the Green function in the Rindler spacetime, up to a zero-mode contribution, because the topology of the Rindler horizon differs from that of the black hole horizon. One should also take into account that for the black hole the Killing vector is normalized to unity at infinity, while in the Rindler spacetime it is usually normalized to unity at the position of an accelerated observer located close to the horizon. The near-horizon limit can be derived by using the expressions \begin{equation}\begin{split} \rho&=1+{n^2\over 2 r_\ins{g}^2}\,z^2+O(r_\ins{g}^{-4})\, ,\hspace{0.4cm} t={a\over \kappa}\tilde{t}\, ,\hspace{0.4cm} \gamma={1\over r_\ins{g}}|\boldsymbol{x}_{\perp}-\boldsymbol{x}'_{\perp}|+O(r_\ins{g}^{-3}) \, ,\hspace{0.4cm} \kappa={n\mu \over r_\ins{g}^{n+1}}, \end{split}\end{equation} and then taking the limit $r_\ins{g}\rightarrow\infty$, while the parameter $a$ is kept finite. The parameter $a$ has the meaning of the proper acceleration of an observer. In the limit when the size of the black hole goes to infinity, the region in the vicinity of the observer is described by a homogeneous gravitational field, i.e., by the Rindler spacetime. The Rindler time coordinate $\tilde{t}$ is chosen in such a way that the timelike Killing vector $\xi^\alpha=\delta^\alpha_{\tilde{t}}$ has unit norm at the position of the observer $z=a^{-1}$.
Then the metric \eq{R-N} near the horizon takes the form \begin{equation} ds^2=-a^2\,z^2\,d\tilde{t}^2+dz^2+d\boldsymbol{x}_{\perp}^2 + O(r_\ins{g}^{-2}). \end{equation} The static Green function corresponding to the rescaled time $\tilde{t}$ is $G_{\tilde{t}\tilde{t}}=\lim_{r_\ins{g}\rightarrow\infty}\,(a/\kappa)\,G_{00}$. Let us introduce the notations \begin{equation} R=\sqrt{(z-z')^2+|\boldsymbol{x}_{\perp}-\boldsymbol{x}'_{\perp}|^2} \, ,\hspace{0.4cm} \bar{R}=\sqrt{(z+z')^2+|\boldsymbol{x}_{\perp}-\boldsymbol{x}'_{\perp}|^2}\, ,\hspace{0.4cm} \zeta={\bar{R}\over 2\sqrt{zz'}}. \end{equation} Then in four dimensions we get \begin{equation}\label{tildeG} G_{\tilde{t}\tilde{t}}=-{a\over 8\pi} {R^2+\bar{R}^2\over R\,\bar{R}}-{M a\over 4\pi\mu} . \end{equation} The last term in \eq{tildeG} is constant. It comes from the zero-mode contribution. In any case this constant is pure gauge. The boundary conditions at infinity of the black hole spacetime and of the Rindler spacetime are different; therefore, it is not surprising that the zero-mode contributions may also differ in the near-horizon limit (see the discussion in \cite{Frolov:2014gla}). Similarly, in five dimensions in the near-horizon limit we obtain \begin{equation}\begin{split} G_{\tilde{t}\tilde{t}}&=-{a\sqrt{zz'}\over 4\pi^2} \left[ {R^2+\bar{R}^2\over R^2\,\bar{R}^2}\mathbf{E}\left(\arcsin{\zeta},{1\over\zeta}\right) -{1\over\bar{R}^2}\mathbf{F}\left(\arcsin{\zeta},{1\over\zeta}\right)\right] -{a\over 4\pi^2\kappa r_\ins{g}^2}\\ &=-{3\,a\, z^2z'{}^2\over 8\pi}{1\over R^5} F\left({5\over 2}, {3\over 2}; 3; -{4zz'\over R^2}\right)-{a\over 4\pi^2\kappa r_\ins{g}^2}. \end{split}\end{equation} When expressed in terms of the hypergeometric function, this formula exactly reproduces Eq.~(3.29) of the paper \cite{Frolov:2014gla}, where the static Green function in a homogeneous gravitational field was derived. The last term here is constant and can be omitted, because it is pure gauge. \section{Discussion}\label{section5} In this paper we found the relation between static solutions of the Maxwell field equations on the background of the $D$-dimensional Reissner-Nordstr\"{o}m black hole and on the background of the $(D+2)$-dimensional homogeneous Bertotti-Robinson spacetime. Using the heat kernel technique we obtained a useful integral representation for the electric potential created by a static point charge in the Bertotti-Robinson spacetime and, hence, in the Reissner-Nordstr\"{o}m spacetime too. The method is very similar to the method of biconformal transformations \cite{Frolov:2014kia,Frolov:2014fya}, by which the Green function and the potential created by static scalar charges near Reissner-Nordstr\"{o}m black holes have been calculated. In the four- and five-dimensional cases we obtained the exact static Green functions in the closed forms \eq{G00_4} and \eq{G_5}--\eq{P}. In four dimensions our result correctly reproduces the well-known one \cite{Linet:1977vv}. To the best of our knowledge, the closed form of the five-dimensional static Green function is new. As a test of the obtained results, we demonstrated that the derived static Green functions in a generic Reissner-Nordstr\"{o}m spacetime have the correct near-horizon limit \cite{Frolov:2014gla}.
The obtained integral representation and the analytical expressions for the exact Green functions can be used to study the problem of the self-energy and the self-force of point electric charges in the background of higher-dimensional static black holes. An interesting observation is that the self-force and the self-energy of charged particles differ qualitatively in odd and even spacetime dimensions \cite{Beach:2014aba,Frolov:2014gla,Frolov:2012xf,Frolov:2012ip,Frolov:2013qia}. In odd dimensions the self-force and the self-energy of point charges contain terms logarithmic in the distance to the horizon. These terms are related to the biconformal anomalies (see the discussion in \cite{Frolov:2012xf,Frolov:2012ip,Frolov:2013qia}). \acknowledgments{ This work was partly supported by the Natural Sciences and Engineering Research Council of Canada. The authors are also grateful to the Killam Trust for its financial support.}
\section{Introduction} \label{intro} It is now generally accepted that Classical Be stars possess gaseous circumstellar disks that are responsible for their excess emission \citep[for reviews, see][]{por03,baa01}. Furthermore, spectroastrometry and optical interferometric measurements indicate that this material is in Keplerian rotation \citep{oud11,kra11}. Consequently, as detailed in the review of \citet{car11}, the steady-state viscous decretion disk model \citep{lee91} is capable of explaining many of the (time-averaged) observational properties of Be star disks. Most likely these disks are fed by mass loss from their rapidly rotating central stars. Both observational \citep{riv98,nei02} and theoretical \citep{and86,cra09} studies suggest that non-radial pulsations may play an important role. However, to create a viscous decretion disk, this material must be placed into orbit. Since the stars most likely are rotating at less than their critical speed \citep[e.g.,][]{cra05}, the mass loss mechanism must add some angular momentum before injecting the material into the disk. From a theoretical point of view, this requirement is problematic, so, as yet, no entirely satisfactory mechanism has been proposed \citep[see][for a detailed assessment]{owo06}. Regardless of how material is injected into the disk, if this material is in orbit, turbulent viscosity will redistribute it on a viscous diffusion timescale, producing an outflowing decretion disk, as long as the disk is fed continually. However, Be stars exhibit a wide range of variability, ranging from emission line profile variations \citep[e.g., $V/R$ variability, indicative of density waves within the disk;][]{ste09,car09} to photometric and polarimetric variability, indicating that frequently the disks are not fed at a constant rate. In some cases, the disk can disappear for decades before being rebuilt \citep[see reviews by][]{und82,wis10}. Oftentimes, this rebuilding phase occurs in a series of outbursts, during which the disk apparently is fed for a short period of time followed by a period of quiescence, during which the nascent disk can partially reaccrete onto the star. Similar outbursts can occur while the disk is disappearing. One such example is the Be star 28\,CMa. Over the past 40 years, 28\,CMa has exhibited quasi-regular outbursts, every 8 years or so, during which the star brightens by about half a magnitude in the $V$-band. This usually occurs via multiple individual injections (e.g., the rapid rises seen in Fig.~\ref{fig:LightCurve}), during which the star brightens at a rate of $0.02\pm0.01\,\mathrm{mag\,d}^{-1}$, owing to free-bound and free-free radiation from the disk. The outbursts last for about two years \citep[see][for a detailed discussion of the long-term photometric variations of 28\,CMa]{ste03}. A particularly well-monitored outburst occurred from 2001 to 2004, and this outburst was followed by a long quiescent phase from 2004 to 2009. Furthermore, it appears that during this quiescent phase very little additional material was injected into the disk (only a few minor events). Thus nature has provided us with the perfect experiment to study how Be star disks dissipate. If we accept as a premise that the viscous decretion disk model is correct (and that no other mechanisms participate in the disk mass budget), then the timescale of the post-outburst photometric decline is controlled solely by the disk viscosity.
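To see why the decline timescale directly probes the viscosity, it helps to have rough numbers in hand. The following back-of-the-envelope sketch (ours; the disk temperature, mean molecular weight, stellar parameters, and evaluation radius are illustrative assumptions rather than fitted values) estimates the viscous timescale $t_\mathrm{visc}\sim r^2/\nu$, with $\nu=\alpha c_s H$, for parameters representative of an early-B star like 28\,CMa:
\begin{verbatim}
# Rough viscous-timescale estimate (ours), cgs units.
import numpy as np

G, k, mH = 6.67e-8, 1.38e-16, 1.67e-24
Msun, Rsun = 1.99e33, 6.96e10

M = 9.0 * Msun        # representative early-B stellar mass (assumption)
R = 7.5 * Rsun        # representative equatorial radius (assumption)
Tdisk = 13000.0       # assumed isothermal disk temperature (~0.6 Teff)
mu = 0.6              # assumed mean molecular weight

cs = np.sqrt(k * Tdisk / (mu * mH))   # isothermal sound speed
r = 2.0 * R           # inner disk, where the V-band excess forms
vorb = np.sqrt(G * M / r)             # Keplerian orbital speed
H = (cs / vorb) * r                   # hydrostatic scale height
alpha = 1.0           # a value of order unity, as found below
tvisc = r**2 / (alpha * cs * H)
print(tvisc / 86400.0)                # ~200 days for these numbers
\end{verbatim}
For $\alpha$ of order unity this gives a decline timescale of months, comparable to the observed post-outburst declines; for $\alpha\sim0.01$ the same disk would take decades to dissipate, so the speed and shape of the decline constrain $\alpha$ directly.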
In this paper, we model the $V$-band light curve with the goals of: 1) directly measuring the disk viscosity parameter, $\alpha$, and 2) further testing the viscous decretion disk model by comparing its time-dependent predictions of the disk dissipation against the detailed shape of the photometric light curve. \begin{figure*} \centerline{\includegraphics[width=0.85\linewidth]{f1.eps}} \caption{$V$-band Light Curve. Visual observations of 28\,CMa (grey triangles) are shown in comparison to our model fits for different values of $\alpha$, as indicated. Phase~I (MJD = 52900--53070) is the initial decline, which was used to determine the value of $\alpha$. Phase~II (MJD = 53070--54670) is the slow disk-draining phase. The inset shows the reduced chi-squared of the Phase~I fit for different values of $\alpha$. The horizontal dotted line indicates the 90\% confidence level. } \label{fig:LightCurve} \end{figure*} \section{Observations} Visual photometric observations of 28\,CMa ($\omega$\,CMa, HD\,56139, HR\,2749; B2--3\,IV--Ve; $m_V = 3.6$ to 4.2, $B-V =-0.1$ to $-0.2$) were made from early 2003 to late 2009 using a modified version of the \citet{arg1843} method, in which the brightness of the variable star is estimated by comparison with two non-variable stars, one brighter and the other fainter than the target. The accuracy depends on the availability of nearby comparison stars of similar brightness and color. Since 28\,CMa is variable, several pairs of comparison stars are required. Based on comparisons with simultaneous photoelectric $V$-band photometry, the error is typically less than 0.05 mag \citep*{ote01,ote06}. However, based on the typical dispersion of the 28\,CMa observations, we adopt a uniform 1-$\sigma$ error of 0.03 mag. The resulting $V$-band light curve for 28\,CMa is shown in Figure~\ref{fig:LightCurve}. \section{Modeling} To model the circumstellar disk of 28\,CMa, we require physical parameters for the central star, including the inclination angle of the system. Fortunately, \citet{mai03} performed an extensive investigation of the non-radial pulsation modes of this star. Using detailed fits of the line profile variability, they conclude that the star is viewed nearly pole-on, at an inclination angle $i = 15^\circ$. Table~\ref{tab:StellarParameters} lists their recommended values for the stellar parameters. \begin{deluxetable}{ll} \tablecaption{Stellar Parameters \label{tab:StellarParameters}} \tablewidth{0pt} \tablehead{ \colhead{Parameter} & \colhead{Value} \\[-10pt] } \startdata $L$ \dotfill & $5224\,L_\sun$ \\ $T_\mathrm{pole}$ \dotfill & $22000\ {\rm K}$ \\ $R_\mathrm{pole}$ \dotfill & $6.0\ R_\sun$ \\ $\log g_\mathrm{pole}$ \ldots & $3.84$ \\ $M$ \dotfill & $9\ M_\sun$ \\ $V_\mathrm{rot}$ \dotfill & $350\ {\rm km\,s}^{-1}$ \\ $V_\mathrm{crit}$ \dotfill & $436\ {\rm km\,s}^{-1}$ \\ $R_\mathrm{eq}$ \dotfill & $7.5\ R_\sun$ \\ $i$ \dotfill & $15^\circ$ \\[-6pt] \enddata \end{deluxetable} Given the stellar parameters, we now model the disk dissipation of 28\,CMa. To do so, we combine a time-dependent calculation of the disk surface density with an NLTE radiative transfer calculation of the emergent flux at selected times. The disk surface density is modeled with a viscous decretion disk \citep*{lee91} using the time-dependent hydrodynamics code {\sc singlebe} \citep{oka02,oka07}, which permits time-variable mass injection at a point placed just outside the stellar surface.
This code solves the isothermal 1-D time-dependent fluid equations \citep{lyn74,pri81} in the thin disk approximation. The output is the disk surface density, $\Sigma(r,t)$, as a function of radius, $r$, at selected times, $t=t_i$, as well as the mass flow rate as a function of position, ${\dot M}(r)$. This surface density is converted to a volume density using the usual vertical hydrostatic equilibrium solution (a Gaussian) with a power-law scale height, $H = h_0(r/R)^{1.5}$, where $R$ is the stellar equatorial radius. The volume density is then used as the input to our 3-D Monte Carlo radiative transfer code, {\sc hdust} \citep{car06}. {\sc hdust} simultaneously solves the NLTE statistical equilibrium rate equations and the radiative equilibrium equation to obtain the hydrogen level populations, ionization fraction, and electron temperature as a function of position. Its output is the emergent SED (and other observables of interest), which is used to calculate the $V$-band excess, $\Delta V$, of the disk as a function of time. We assume that this variable disk emission dominates any intrinsic variability of the star. We determine the $V$-band excess of 28\,CMa at the onset of the steep decline in late 2003 (time $t_0$) by calculating the average pre-decline brightness ($m_V=3.63\pm0.05$) and subtracting the average near the end of the quiescent phase ($m_V=4.18\pm0.02$). This gives an initial excess $\Delta V(t_0) = -0.54 \pm 0.05$, where the error is estimated from the standard deviation of the means. Although the disk building typically is not steady, the growth time required to produce this entire excess is $t_\mathrm{g} = 0.54\,\mathrm{mag}/0.02\,\mathrm{mag\,d}^{-1}=27\,\mathrm{d}\approx0.07\,\mathrm{yr}$. Since the brightness of 28\,CMa was relatively steady for two years prior to the decline, we assume the initial surface density of the disk was nearly the steady-state solution, $\Sigma(r,t_0) = \Sigma_0 (r/R)^{-2.0}$ \citep{bjo05}. To match the initial $\Delta V$, we require $\Sigma_0 = 1.6 \pm 0.5 \,\mathrm{g}\,\mathrm{cm}^{-2}$, whose error is dominated by the uncertainty in the inclination angle ($i=0^\circ$--$30^\circ$). This implies a pre-decline disk mass $M_\mathrm{d}=2\pi\Sigma_0 R^2\ln(r/R) = 3\times 10^{-9}\,M_\sun$ interior to $r=10\,R$. \begin{figure}[!t] \centerline{\includegraphics[width=0.8\linewidth]{f2.eps}} \caption{Disk dissipation. Shown is the $V$-band excess as a function of time after mass injection stops, for different values of $\alpha$ as indicated. } \label{fig:DiskDissipation} \end{figure} To produce the initial steady-state disk, we run {\sc singlebe} for a long time with a constant mass injection rate, $\dot M$. Note that $\Sigma_0$ depends on the ratio ${\dot M}/\alpha$, so whenever we change the disk viscosity parameter, $\alpha$, we adjust $\dot M$ to maintain the desired value of $\Sigma_0$. After {\sc singlebe} achieves steady state, we turn off the mass injection (by setting $\dot M = 0$) and let the disk dissipate. Figure~\ref{fig:DiskDissipation} shows the decline in the $V$-band excess as a function of time. Note that as the viscosity parameter $\alpha$ increases, the disk dissipates more rapidly. Initially there is a rapid decline as the inner disk (where the $V$-band excess is produced) adjusts from outflow to infall and is reaccreted. Subsequently, the remaining mass of the disk is drained via quasi-steady-state accretion, and the $V$-band excess continues to slowly drop to zero.
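As a quick arithmetic check of the quoted disk mass, the following sketch (ours) evaluates $M_\mathrm{d}=2\pi\Sigma_0 R^2\ln(r/R)$ with the numbers above:
\begin{verbatim}
# Check (ours) of the pre-decline disk mass, cgs units.
import numpy as np

Rsun, Msun = 6.96e10, 1.99e33
Sigma0 = 1.6                  # g cm^-2, fitted value from the text
R = 7.5 * Rsun                # stellar equatorial radius (Table 1)
Md = 2.0 * np.pi * Sigma0 * R**2 * np.log(10.0)   # out to r = 10 R
print(Md / Msun)              # ~3e-9, matching the quoted value
\end{verbatim}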
\begin{deluxetable}{cccccc} \tablecaption{Model Fits \label{tab:ModelFits}} \tablewidth{0pt} \tablehead{ \colhead{$\alpha$} & \colhead{${\dot M}$} & \colhead{$t_0$} & \colhead{$\chi^2_\mathrm{I}$} & \colhead{$\chi^2_\mathrm{II}$} & \colhead{$\chi^2$} \\ \colhead{} & \colhead{($M_\sun\ \mathrm{yr}^{-1}$)} & \colhead{(MJD)} & \colhead{} & \colhead{} & \colhead{} \\[-10pt] } \startdata 0.3 & $1.1 \times 10^{-8}$ & 52824 & 7.6 & 2.7 & 3.8 \\ 0.8 & $2.8 \times 10^{-8}$ & 52929 & 1.4 & 2.1 & 2.0\\ 0.9 & $3.2 \times 10^{-8}$ & 52933 & 1.2 & 2.3 & 2.1\\ 1.0 & $3.5 \times 10^{-8}$ & 52935 & 1.2 & 2.4 & 2.2\\ 1.2 & $4.3 \times 10^{-8}$ & 52939 & 1.6 & 2.6 & 2.5\\ 1.5 & $5.3 \times 10^{-8}$ & 52942 & 2.9 & 2.9 & 3.0\\[-6pt] \enddata \end{deluxetable} Given the model curves in Figure~\ref{fig:DiskDissipation}, we fit the Phase~I portion of the observed light curve (see Fig.~\ref{fig:LightCurve}) by adjusting only two parameters: $\alpha$ and $t_0$. Table~\ref{tab:ModelFits} lists the best-fitting value of $t_0$ for each value of $\alpha$ and the reduced chi-squared value for the Phase~I data, $\chi^2_\mathrm{I}$. Additionally, we list the reduced chi-squared values of the extrapolation of the fit through Phase~II, $\chi^2_\mathrm{II}$, as well as the reduced chi-squared for the entire data set, $\chi^2$. Finally, each model curve is shown in comparison to the observations in Figure~\ref{fig:LightCurve}. The overall best fit to the Phase~I data is $\alpha = 1.0\pm0.2$ (90\% confidence interval). Note that when we extrapolate the fit to Phase~II, the fit is still reasonable, again requiring a large value of $\alpha$, albeit with a larger uncertainty. \section{Discussion} As may be seen in Figure~\ref{fig:LightCurve}, our model fits the detailed shape of the $V$-band decline observed in 28\,CMa quite well. Since the $V$-band excess is produced in the innermost parts of the disk \citep[typically $r \lesssim 2\,R$,][]{car11}, we infer that the viscous decretion disk model correctly predicts the time-dependent evolution of the disk density (at least at small radii). Not only does the model reproduce the rapid early decline (Phase~I), it also reproduces the slow disk-draining (Phase~II) with a single physical model. Furthermore, there is no evidence that the value of $\alpha$ changed from Phase~I to Phase~II, despite the rather large change in physical conditions in the disk shown in Figure~\ref{fig:DiskStructure}. \begin{figure}[!t] \centerline{\includegraphics[width=.8\linewidth]{f3.eps}} \caption{Disk Structure. Shown are the disk temperature (top panel), hydrogen ground state population (middle panel), and density (bottom panel) as a function of radius for the initial state of the disk (solid line), at the end of Phase~I (dotted line) and at the end of the simulation (dashed line) for the $\alpha=1.0$ case. The circle marks the position of the stagnation point (the division between inflow and outflow) for the 2004.1 curve. } \label{fig:DiskStructure} \end{figure} When the mass injection ceases, the disk readjusts most quickly at small radii. This is because the viscous diffusion timescale, $t_\mathrm{diff} \propto r^{1/2} \alpha^{-1} $, increases with radius. Reaccretion of material from the inner disk occurs because the turbulent viscosity transports angular momentum outward, transferring angular momentum from the inner disk to the outer. Hence, the outer parts of the disk continue to expand while the inner parts move inward.
This produces simultaneous inflow in the inner disk and outflow in the outer disk. As a function of time, the stagnation point (the division between inflow and outflow) moves outward, controlled by the diffusion time. Interior to the stagnation point, the disk adopts a quasi-steady accretion solution, while exterior to the stagnation point the disk remains in quasi-steady decretion (see Fig.~\ref{fig:DiskStructure}). Hence the outer disk acts as a mass reservoir that feeds reaccreting material through the inner disk where the $V$-band excess is produced. We conclude that the decline rate in Phase~I is controlled by the viscous diffusion time; this is what determines the value of $\alpha$ in our model fit. Since $\alpha$ is the only free physical parameter that we can vary, the fact that we can independently reproduce the rate of decline in Phase~II (when material is being transferred from the outer through the inner disk) is a non-trivial test of the time-dependent viscous decretion disk model. In our analysis we assume, during the dissipation, that the disk is not being fed and no other important mass loss occurs. Two such mass loss mechanisms are disk ablation and entrainment by the stellar wind, but general estimates of their mass loss rates \citep{krt11,gay99} indicate they are much smaller than that of the stellar wind. The disk reaccretion rate during Phase~I is much larger than the wind mass loss rate, owing to the large value of $\alpha$; consequently, neither mechanism plays an important role. Perhaps the most surprising aspect of our results is the extremely large viscosity parameter ($\alpha \approx 1.0$) that is required. In the $\alpha$-disk model \citep{sha73}, $\alpha$ parameterizes the turbulent (eddy) viscosity, $\nu=v_{\rm turb} \ell = \alpha c_s H$, where $v_{\rm turb}$ is the flow speed of the eddies and $\ell$ is their size scale. Generally, one assumes that supersonic turbulence will not occur because of shock dissipation, so Shakura \& Sunyaev suppose the maximum flow speed is the sound speed, $c_s$, and that the characteristic size of the circulation cells is the disk pressure scale height, $H$. Hence one expects $0 < \alpha < 1$. To produce $\alpha = 1.0$ requires roughly sonic flow, somewhat larger circulation cells, or both. If in fact $v_{\rm turb} \approx c_s$, this would strongly suggest that the origin of the turbulent mixing is an instability within the disk that grows until shock dissipation halts the growth. Such an instability would also explain why $\alpha \approx 1$ regardless of the physical conditions within the disk. Having now directly measured the value of $\alpha$ from the $V$-band decline rate, we can determine the mass decretion rate of 28\,CMa. Normally one cannot use the disk density to determine the mass loss rate because the radial flow speeds are too small to be measured from the line profiles \citep{han00}. Similarly, from a theoretical point of view, the disk surface density depends on the ratio ${\dot M} / \alpha$, so one must know $\alpha$ to find ${\dot M}$. Assuming the pre-decline value of $\alpha$ is the same as the value we measure during disk dissipation ($\alpha = 1.0$), we find that the mass injection rate of 28\,CMa was ${\dot M} = (3.5 \pm 1.3) \times 10^{-8}\ M_\sun\ \mathrm{yr}^{-1}$.
An important check of this rate can be made using the growth time, $t_\mathrm{g}$, in combination with the pre-decline disk mass, $M_\mathrm{d}$, which implies ${\dot M} \sim M_\mathrm{d}/t_\mathrm{g} = 4 \times 10^{-8}\ M_\sun\ \mathrm{yr}^{-1}$ during the growth phase. Consequently, we infer that there is no evidence for a significant change in $\alpha$ between the disk growth and the disk dissipation phases. We find, therefore, that the mass injection rate is much larger than previously thought, and is at least an order of magnitude larger than the mass loss rate of the stellar wind, as inferred from UV observations \citep{sno81}. This implies that the stellar wind is not the mechanism responsible for mass injection into the disk, unless the wind can be prevented from producing strong, easily observable UV wind lines. \section{Conclusions} Fitting the light curve of a dissipating disk can be a powerful tool for measuring $\alpha$. Further, it provides additional quantitative tests of the viscous decretion disk model. Here we have shown that the viscous decretion disk model can reproduce the detailed time dependence of the dissipation of a Be star disk. Using the $V$-band excess to measure how the disk density decreases with time, we have been able for the first time to directly measure the value of the disk viscosity parameter. We find $\alpha = 1.0\pm0.2$, which provides an important clue about the origin of the turbulent viscosity, suggesting that it likely is produced by an instability in the disk whose growth is limited by shock dissipation. Finally, knowing the value of $\alpha$, we calculate that the mass decretion rate of 28\,CMa was ${\dot M} = (3.5\pm1.3) \times 10^{-8}\ M_\sun\ \mathrm{yr}^{-1}$. Such a large required mass injection rate places strong constraints on any proposed stellar mass loss mechanism. \acknowledgments We thank the referee, Stanley P. Owocki, for his useful comments. We acknowledge support from CNPq grant 308985/2009-5 (ACC) and Fapesp grants 2010/19029-0 (ACC), 2010/16037-2 (JEB), and 2009/07477-1 (XH).
\section{Introduction}\label{sec:introduction} \setcounter{equation}{0} The Standard Model (SM) of particle physics is now supported more than ever by the recent discovery of the Higgs boson at both the ATLAS \cite{HiggsATLAS} and CMS \cite{HiggsCMS} detectors. However, there are still two missing pieces in this elegant picture: the nature of dark matter (DM) and the origin of neutrino mass. Despite the fact that a window for low-mass dark matter candidates (below 100 TeV) seems favored by an upper bound coming from perturbative unitarity \cite{Griest:1989wd}, no evidence has been found after many years of experimental searches \cite{DDexp}. On the other hand, the presence of new physics at an intermediate scale seems motivated by the stability of the Higgs potential \cite{EliasMiro:2011aa, OlegHiggs}, the see-saw mechanism \cite{seesaw,seesaw2}, leptogenesis \cite{FY,leptogenesis} or reheating processes. Added to the fact that a super--heavy DM candidate, or WIMPZILLA \cite{WIMPZILLA}, could be produced by non--thermal processes and explain the DM density in the universe, it seems natural to link the mechanism for generating a neutrino mass with the dark matter question in a coherent framework at an intermediate scale. An intermediate scale (of order $10^{10}$ GeV) will never be reached by an accelerator on Earth. The 100 TeV collider is still only a conceptual project, whereas the ILC can probe, at most, a beyond-the-SM (BSM) scale of 100 TeV through precision measurements. However, we know that energies as large as $10^{10}$ GeV are measured in ultra-high-energy cosmic-ray experiments like the Auger observatory \cite{Auger}. Recently, the IceCube collaboration claimed the detection of multi-PeV ($10^{6}$ GeV) events \cite{Icecube}, giving the community some hope that an intermediate scale may be testable in the near future with these types of experiments. In this letter we show that a high-energy neutrino signal can be associated with a long-lived scalar dark matter candidate. We show that this scalar can account for the dark matter in the Universe and, moreover, that its specific decay mode into two monochromatic neutrino states gives a clear signature detectable in present high-energy detectors like IceCube \cite{Icecube}. We then attempt to relate this candidate to the pseudo-Goldstone mode of a complex scalar field responsible for generating a Majorana mass in the right-handed sector through dynamical symmetry breaking at an intermediate scale. The letter is organized as follows. After describing the single-scalar model in Section II, we compute its phenomenological consequences and detection prospects in Section III. In Section IV, we identify this scalar with the Goldstone mode associated with generating the right-handed neutrino mass, which is necessary for the see-saw mechanism. We draw our conclusions in Section V. \section{Dark matter and a standard see--saw mechanism} \label{sec:model} \subsection{The model} \noindent As a simple extension to the SM with a neutrino see-saw mechanism, we add a single real scalar field, $A$, coupled to the right-handed (sterile) sector. The Lagrangian can then be written as \begin{equation} {\cal L} = {\cal L}_{\mathrm{SM}} + {\cal L}_\nu + {\cal L}_A \end{equation} with \begin{equation} {\cal L}_\nu = -(\frac{1}{2} M^R +\frac{i h}{\sqrt{2}} A) \bar \nu_R^c \nu_R - \frac{y_{LR}}{\sqrt{2}} \bar \nu_L H \nu_R + h.c.
\label{Eq:lagrangian1} \end{equation} and \begin{eqnarray} && {\cal L}_A = - \frac{\mu_A^2}{2} A^2 - \frac{\lambda_A}{4} A^4 \nonumber \\ && -\frac{\lambda_{H A}}{4} A^2 H^2 + \frac{1}{2} \partial_\mu A \partial^\mu A \label{Eq:lagrangian2} \end{eqnarray} where $H$ represents the real part of the SM Higgs field. Here, we have simply assumed that the right-handed neutrino has a Majorana mass, $M^R$. We will explore a dynamical version of this extension in Section IV. The scalar $A$ is massive and couples to the SM Higgs, but does not itself get a vacuum expectation value ($vev$). While there is no natural value for the mass scale $M^R$, demanding gauge coupling unification in different schemes of SO(10) breaking naturally leads to intermediate scales in the range $10^6$--$10^{14}$ GeV \cite{Fukugita:1993fr,moqz}. It then seems reasonable to expect that $M^R$ will lie in this energy range if one embeds our model in a framework where one imposes unification of the gauge couplings. However, we will attempt to stay as general as possible\footnote{{We note that a similar framework has been used in \cite{Higgsstability} to stabilize the Higgs potential up to the GUT scale.}}. In the context of a very light scalar $A$, with mass of order a keV (a case not considered in the present work), some authors have looked at the effect of a decaying $A$ on the CMB \cite{valleold} and, more recently, the subleading effect of decays to photons \cite{vallenew}. \subsection{The see--saw mechanism} Once symmetry breaking is realized, the mass states in the neutrino sector are mixed in the current eigenstate basis. Diagonalization of the mass matrix leads to the well-known see--saw mechanism. We can write the mass term \begin{eqnarray} {\cal L}_\nu = -\frac{1}{2} \bar n~ {\cal M}~ n, ~~ \mathrm{with} ~~ n = \left( \begin{array}{c} \nu_L + \nu_L^c \\ \nu_R + \nu_R^c \end{array} \right)=\left( \begin{array}{c} n_1 \\ n_2 \end{array} \right) \nonumber \end{eqnarray} and \begin{equation} {\cal M} = \left( \begin{array}{cc} 0 & m_D \\ m_D & M^R \end{array} \right), \label{Eq:m} \end{equation} with $m_D =y_{LR} v_H / \sqrt{2} $ ($v_H=246$ GeV being the Higgs $vev$). ${\cal M}$, being a complex symmetric matrix, can be diagonalized with the help of {\it one unitary matrix} $U$, ${\cal M}=U m U^T$ with \begin{equation} m = \left( \begin{array}{cc} m_1 & 0 \\ 0 & m_2 \end{array} \right) . \end{equation} From the diagonalization of ${\cal M}$, one obtains \begin{eqnarray} && m_1= \frac{1}{2} \biggl[M^R - \sqrt{(M^R)^2 + 4 m_D^2 } \biggr] \simeq - \frac{m_D^2}{M^R} \simeq - \frac{y_{LR}^2 v_H^2}{2 M^R} \nonumber \\ && m_2=\frac{1}{2} \biggl[M^R + \sqrt{(M^R)^2 + 4 m_D^2} \biggr] \simeq M^R \label{Eq:seesaw} \end{eqnarray} and the eigenvectors $N_1$ and $N_2$ \begin{equation} \left( \begin{array}{c} N_1 \\ N_2 \end{array} \right) \simeq \left(\begin{array}{c} n_1 - \theta ~ n_2 \\ n_2 + \theta ~ n_1 \end{array} \right) = \left( \begin{array}{c} \nu_L + \nu_L^c - \theta ~ (\nu_R + \nu_R^c) \\ \nu_R + \nu_R^c + \theta ~(\nu_L + \nu_L^c) \end{array} \right) \end{equation} where\footnote{Notice that $N_1$ and $N_2$ are Majorana-like particles.} $\tan 2 \theta = - \frac{2 m_D}{M^R} $ implying $\theta \simeq \sin \theta \simeq - \frac{m_D}{M^R} = - \frac{y_{LR} v_H}{\sqrt{2} M^R}$. Once the Lagrangian is expressed in terms of the physical mass eigenstates, one can compute the couplings generated by the symmetry breaking and their phenomenological consequences. $m_1$ corresponds to the mass of the Standard Model neutrino.
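For a rough sense of scale (an illustrative estimate with parameter values assumed purely for definiteness, not a fit), an order-one Yukawa coupling combined with an intermediate-scale Majorana mass in Eq.~(\ref{Eq:seesaw}) gives \begin{equation} |m_1| \simeq \frac{y_{LR}^2 v_H^2}{2 M^R} \simeq 3~\mathrm{eV} \left( \frac{y_{LR}}{1} \right)^2 \left( \frac{10^{13}~\mathrm{GeV}}{M^R} \right) , \nonumber \end{equation} so that sub-eV light-neutrino masses indeed point to $M^R$ at or above the intermediate scales discussed in the introduction.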
We will consider $m_1 \lesssim 1$ eV from cosmological constraints throughout the rest of the paper\footnote{We neglect the flavor structure of the SM neutrino sector as it does not affect our main conclusions.}. \section{Phenomenology} \subsection{Generalities} To study the consequences of the model, we first rewrite the Lagrangian (\ref{Eq:lagrangian1}) in terms of the mass eigenstates, $N_{1,2}$, \begin{eqnarray} && {\cal L}_\nu = -\frac{y_{LR}}{2 \sqrt{2}} H \left( \bar N_1 N_2 + \bar N_2 N_1 \right) - \frac{y_{LR} \theta}{\sqrt{2}} H\left(\bar N_2 N_2 - \bar N_1 N_1 \right) \nonumber \\ && - \frac{m_1}{2} \bar N_1 N_1 - \frac{m_2}{2} \bar N_2 N_2 \label{Eq:lagrangian3} \\ && -i \frac{h}{\sqrt{2}} A \left( \bar N_2 \gamma^5 N_2 - \theta \bar N_1 \gamma^5 N_2 - \theta \bar N_2 \gamma^5 N_1 + \theta^2 \bar N_1 \gamma^5 N_1 \right) \nonumber \end{eqnarray} A look at the Lagrangian (\ref{Eq:lagrangian3}) reveals some immediate phenomenological consequences of the coupling of the scalar to the neutrino sector. First of all, the field $N_2$ is not stable, owing to the decay $N_2 \rightarrow H N_1$, and thus cannot be a dark matter candidate, just as in the standard see-saw mechanism. Secondly, the scalar $A$ is not stable, and its dominant decay mode for $M_A\lesssim 8$ TeV is $A \rightarrow N_1 ~N_1$, as $M_{N_2} =m_2$ is of the order of $M^R$ and is for now assumed to be heavier than $A$. When we include $A$ as part of a dynamical mechanism for generating the mass $M^R$, we will see that the mass of $A$ may be highly suppressed relative to $M^R$, justifying a posteriori our assumption that $M_A < M_{N_2}$. Moreover, because the coupling of $A$ to $N_1$ is of order $h \theta^2$, $A$ is naturally long-lived, and can be a good dark matter candidate as we will see below. Its decay signature should be two ultra-energetic monochromatic neutrinos, a clear observable that could be accessible to present neutrino experiments like IceCube, ANTARES or SuperK. Above 8 TeV, the 3-body decay mode dominates and above 11 TeV, the 4-body decay mode dominates. In this case, the decay signature is no longer a monochromatic signal but a broader neutrino spectrum, as we will see in the next section. \subsection{Neutrino flux} \noindent Before computing the flux of neutrinos expected on Earth from the decay of the scalar $A$, let us first check whether it can be a reliable dark matter candidate, fulfilling $\tau_{A} > \tau_{\mathrm{Universe}} = 10^{17}$ seconds\footnote{A recent study \cite{Audren:2014bca} showed that, taking into account the recent BICEP2 results, the real lifetime to consider should be $\gtrsim 10^{18}$ s. However, the constraints from IceCube are much stronger ($\tau_A \gtrsim 10^{28}$ seconds for $M_A$ at the PeV scale) as we will see below.}. The 2-body decay width for $A \rightarrow N_1 N_1$ is given by \begin{equation} \Gamma^2_A = \frac{10^{-38} h^2}{8 \pi} \left( \frac{m_1}{\mathrm{1~eV}} \right)^2 \left( \frac{10^{10}~\mathrm{GeV}}{M^R} \right)^2 M_A~ \nonumber \end{equation} implying \begin{equation} \tau_A \sim 1.6 \times 10^{12} h^{-2} \left( \frac{1 ~\mathrm{eV}}{m_1} \right)^2 \left( \frac{M^R}{10^{10} ~\mathrm{GeV}} \right)^2 \left( \frac{1~\mathrm{TeV}}{M_A} \right)~~\mathrm{[s]} , \label{Eq:taur21} \end{equation} where we have taken for reference $m_1 \lesssim 1$ eV as implied from the solar and atmospheric constraints on neutrino masses.
As one can see, for a scalar mass of order 1 TeV, one can obtain lifetimes in excess of the age of the Universe for $M^R \gtrsim 10^{13} h $ GeV, making the scalar a potentially interesting dark matter candidate. However, it is important to check multi-body processes when $M_A > v_H$. Indeed\footnote{The authors want to thank the referee for having pointed out this possibility.} the 3-body process $A \rightarrow N_1 N_1 H $ or the 4-body decay $A \rightarrow N_1 N_1 H H$, through the exchange of a virtual $N_2$, becomes dominant. Under the approximation of massless final states (largely valid when $M_A \gg m_h$) one obtains for the 3- and 4-body decay widths \begin{equation} \Gamma^3_A = \frac{10^{-38} h^2}{3 \times 2^{8} \pi^3} \left(\frac{m_1}{1~\mathrm{eV}} \right)^2 \left( \frac{10^{10}~\mathrm{GeV}}{M^R} \right)^2 \left( \frac{M_A}{v_H} \right)^2 M_A \end{equation} and \begin{equation} \Gamma^4_A = \frac{10^{-38} h^2}{9 \times 2^{14} \pi^5} \left( \frac{m_1}{1~\mathrm{eV}} \right)^2 \left( \frac{10^{10}~\mathrm{GeV}}{M^R} \right)^2 \left( \frac{M_A}{v_H} \right)^4 M_A \end{equation} which gives for the total width \begin{eqnarray} && \Gamma_A= \frac{10^{-38} h^2}{8 \pi} \left( \frac{m_1}{1~\mathrm{eV}} \right)^2 \left( \frac{10^{10}~\mathrm{GeV} }{M^R} \right)^2 M_A \times \nonumber \\ && \left[ 1 + \frac{1}{96 \pi^2} \left( \frac{M_A}{v_H} \right)^2 + \frac{1}{18432 \pi^4} \left( \frac{M_A}{v_H} \right)^4\right] \end{eqnarray} From the expression above, we can easily compute that the 3-body mode dominates the decay process for $M_A \gtrsim 10 \pi v_H \simeq 8$ TeV (the second term in the bracket reaches unity at $M_A = \sqrt{96}\,\pi v_H \simeq 7.6$ TeV), and the 4-body mode dominates for $M_A \gtrsim 11$ TeV. It will then be interesting to see what kind of constraints IceCube or ANTARES can impose on the parameter space of the model\footnote{An earlier analysis in another context was proposed in \cite{PalomaresRuiz:2007ry}.}. The IceCube collaboration recently released its latest analysis \cite{Icecube} concerning the (non)observation of ultra-high-energy neutrinos above 3 PeV. The PeV event rate expected at a neutrino telescope of fiducial volume $\eta_E V$ and nucleon number density $n_N$, from a decaying particle of mass $M_{DM}$, mass density $\rho_{DM}$, and width $\Gamma_{DM}$, is \cite{Feldstein:2013kka} \begin{eqnarray} && \Gamma_{\mathrm{events}}= \eta_E V \times n_N \times \sigma_N \times L_{\mathrm{MW}} \times \frac{\rho_{DM}}{M_{DM}} \times \Gamma_{DM} \nonumber \\ && \simeq 3 \times 10^{59}~ \eta_E \frac{\Gamma_{DM}}{M_{DM}} ~\mathrm{years^{-1}} , \label{Eq:flux} \end{eqnarray} where $\sigma_N$ ($\simeq 9 \times 10^{-34} \mathrm{cm^2}$ at 1 PeV) is the neutrino--nucleon scattering cross section, $n_N$ is the nucleon number density in the ice ($n_N \simeq n_{\mathrm{Ice}} \simeq 5 \times 10^{23}/ \mathrm{cm^3}$), $L_{\mathrm{MW}}$ is the rough linear dimension of our galaxy (10 kpc) and $\rho_{DM} \simeq 0.39 ~\mathrm{GeV}~\mathrm{cm^{-3}}$ is the Milky Way dark matter density (taken near the Earth for the purpose of our estimate). The volume $V$ is set to be 1 $\mathrm{km^3}$, which is roughly the size of the IceCube detector, whereas the efficiency coefficient $\eta_E$ depends on the energy of the incoming neutrino and lies in the range $10^{-2}$--$0.4$ \cite{Veff}.
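As a back-of-the-envelope check of the numerical prefactor in Eq.~(\ref{Eq:flux}), one can simply multiply the factors quoted above (a rough Python sketch; the unit conversions are standard values and the result is only indicative):
\begin{verbatim}
# Rough check of the ~3e59 prefactor in Eq. (flux), numbers as in the text.
V      = 1e15            # detector volume: 1 km^3 in cm^3
n_N    = 5e23            # nucleon number density of ice [cm^-3]
sigma  = 9e-34           # nu-N cross section near 1 PeV [cm^2]
L_MW   = 3.086e22        # 10 kpc in cm
rho_DM = 0.39            # local dark matter density [GeV cm^-3]
hbar   = 6.582e-25       # hbar [GeV s]: converts Gamma_DM [GeV] to [1/s]
yr     = 3.156e7         # seconds per year

# Gamma_events = prefactor * eta_E * (Gamma_DM / M_DM), both in GeV
prefactor = V * n_N * sigma * L_MW * rho_DM / hbar * yr
print(f"{prefactor:.1e} per year")   # ~3e59, matching Eq. (flux)
\end{verbatim}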
The neutrino--nucleon cross section is, however, highly dependent on the scattering energy, and the authors of \cite{Gandhi:1998ri} obtained \begin{eqnarray} \sigma_N &&= 8 \times 10^{-36} \mathrm{cm^2} \left( \frac{E_\nu}{\mathrm{1 ~ GeV}} \right)^{0.363} \nonumber \\ && = 6 \times 10^{-36} \mathrm{cm^2} \left( \frac{M_{DM}}{1 ~\mathrm{GeV}} \right)^{0.363} . \label{Eq:sigman} \end{eqnarray} Implementing Eq.(\ref{Eq:sigman}) in the expression (\ref{Eq:flux}), and adding an astrophysical factor $f_{\mathrm{astro}}\simeq 1$ corresponding to the uncertainty in the distribution of dark matter in the galactic halo, one can write \begin{equation} \Gamma_{\mathrm{events}} = 1.5 \times 10^{57}\eta_E f_{\mathrm{astro}} \frac{\Gamma_{DM}}{M_{DM}^{0.637}} ~~\mathrm{years^{-1}} , \label{Eq:gamma} \end{equation} where $\Gamma_{DM}$ and $M_{DM}$ are expressed in GeV. Noticing that there is no background from cosmological neutrinos at energies above 100 TeV, one can deduce the limit set by IceCube from the non-observation of events above 3 PeV. IceCube took data for 3 years, so requiring $3 \times \Gamma_{\mathrm{events}} \lesssim 1$, one obtains for $f_{\mathrm{astro}}=1$ \begin{equation} h^2 \left( \frac{M_A}{1~\mathrm{GeV}} \right)^{4.363} \eta_E \lesssim 3.7 \times 10^{-3} \left( \frac{1~\mathrm{eV}}{m_1}\right)^2 \left( \frac{M^R}{10^{10}~\mathrm{GeV}} \right)^2 \end{equation} If we take $M_A=1$ PeV, one obtains $h \lesssim 8 \times 10^{-11}$ for $M^R \sim 10^{14}$ GeV, $\eta_E=0.4$ and $m_1=1$ eV. One can generalize our study to lower masses, down to the GeV scale, taking into account the combined constraints \cite{Covi:2009xn,Ibarra:2013cra,Rott:2014kfa} from SuperK \cite{superk}, ANTARES \cite{antares} and IceCube \cite{Abbasi:2011eq}. The limit on the lifetime of $A$ as a function of $M_A$ is depicted in Fig. \ref{Fig:taulimit}. The resulting constraint in the ($M_A, h$) parameter plane is shown in Fig. \ref{Fig:h} for different values of $M^R$. We see that natural values of $M^R$ ($\gtrsim 10^{12}$ GeV) lead to an upper limit $h \lesssim 10^{-5}$ for $M_A > 1$ TeV. We note that, despite the fact that a dark matter source for the PeV events of IceCube is less motivated since the discovery of the third event ``Big Bird'', one can also compute the relation between $h$ and $M^R$ required to observe a rate of 1 event per year for a 1 PeV dark matter candidate. We obtain from Eq.(\ref{Eq:gamma}) $h \simeq 1.3 \times 10^{-10}$ for $M^R=10^{14}$ GeV. \begin{figure} \begin{center} \includegraphics[width=3.in]{taulimit.pdf} \caption{{\footnotesize Limit on the lifetime of $A$ in seconds as a function of its mass $M_A$, extracted from the combined constraints \cite{Covi:2009xn,Ibarra:2013cra,Rott:2014kfa} from SuperK \cite{superk}, ANTARES \cite{antares} and IceCube \cite{Abbasi:2011eq}}}. \label{Fig:taulimit} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[width=3.in]{Mh.pdf} \caption{{\footnotesize Parameter space allowed in the plane ($M_A, h$), taking into account a combined analysis of IceCube and SuperK for different values of $M^R$ and $m_1=1$ eV. The regions above the lines are excluded. }} \label{Fig:h} \end{center} \end{figure} We also made a more detailed analysis, taking into account a simulated NFW galactic profile $\rho_{NFW}$ for the Milky Way. Our result differs from the constraint with $f_{astro}=1$ only by a factor of a few (2--3).
Indeed, compared to an annihilating scenario, the signal lacks the quadratic density enhancement, so the role of the (better determined) local density is more prominent. We can thus anticipate little dependence of our conclusions on the specific galactic halo used for the analysis. \section{Dark Matter and a Dynamical See-Saw} \subsection{The model} We would now like to ask whether or not the scalar $A$ can be incorporated into a dynamical mechanism for generating neutrino see-saw masses. Instead of the coupling of $A$ to $\nu_R$ in Eq. (\ref{Eq:lagrangian1}), let us couple the right-handed (sterile) sector to a complex scalar field $\Phi = Se^{ia/v_S}$. We assume that $\Phi$ is responsible for the breaking of some global symmetry so that $S$ acquires a $vev$. Here, we would like to stay as general as possible, and show that our framework can in fact be an illustration of any extension to the SM with dynamical breaking occurring at an intermediate scale. We now rewrite the Lagrangian as \begin{equation} {\cal L} = {\cal L}_{\mathrm{SM}} + {\cal L}_\nu + {\cal L}_\Phi \end{equation} with \begin{equation} {\cal L}_\nu = - \frac{h}{\sqrt{2}} \bar \nu_R^c \Phi \nu_R - \frac{y_{LR}}{\sqrt{2}} \bar \nu_L H \nu_R + h.c. \label{Eq:lagrangian4} \end{equation} and \begin{eqnarray} && {\cal L}_{\Phi} = \frac{\mu_\phi^2}{2} |\Phi|^2 - \frac{\lambda_\Phi}{4} |\Phi|^4 \nonumber \\ && -\frac{\lambda_{H \Phi}}{4} |\Phi|^2 |H|^2 + \frac{1}{2} \partial_\mu \Phi \partial^\mu \Phi^* . \end{eqnarray} The Lagrangian above has a global $U(1)$ symmetry, under which the charges of $\nu_R$, $\nu_L$ and $\Phi$ are $1$, $1$ and $-2$, respectively. After symmetry breaking generated by the fields $H$ and $\Phi$, one obtains in the heavy sector, \begin{equation} \langle S \rangle = v_S = \frac{\mu_\Phi}{\sqrt{\lambda_{\Phi}}}~; ~~ S = v_S + s~; ~~M_S = \sqrt{2} \mu_\Phi, \end{equation} and we denote by $A$ the argument of $\Phi$ after its magnitude is shifted by $v_S$. Note that our scalar field has been promoted to a Goldstone mode and is massless at tree level. The right-handed mass, $M^R$, is now given by $h v_S/\sqrt{2}$. \subsection{Breaking to a discrete symmetry} If our $U(1)$ symmetry were exact (prior to $\Phi$ picking up a $vev$), $A$ would remain massless to all orders in perturbation theory. In what follows, we will assume that the $U(1)$ symmetry is broken by nonperturbative effects down to a discrete $Z_N$ symmetry. It is actually standard in string theory that all symmetries are gauged symmetries in the UV. Some of them are nonlinearly realized, in the sense that under gauge transformations an axion $\tilde \theta$ has a nonlinear transformation \begin{equation} A_\mu \rightarrow A_\mu + \partial_\mu \alpha \ , \ \Phi_i \rightarrow e^{i q_i \alpha} \Phi_i \ , \ \tilde \theta \rightarrow \tilde \theta + \alpha \ , \label{zn1} \end{equation} and the Lagrangian contains the Stueckelberg mass term of a massive vector boson \begin{equation} \frac{M^2}{2} (A_{\mu} - \partial_\mu \tilde \theta)^2 \ . \label{zn2} \end{equation} Nonperturbative effects can generate operators of the form \cite{nonpert} \begin{equation} \frac{c_n}{M_P^{n-4}}e^{- 2 \pi N (t+i \frac{{\tilde \theta}}{2 \pi})} \prod_i \Phi_i \ , \label{zn3} \end{equation} where $t$ is a field which gets a vev and where $S_{\rm inst.} = 2 \pi N (t+i \frac{\tilde \theta}{2 \pi})$ can be interpreted as an instanton action. Nonperturbatively generated operators (\ref{zn3}) are gauge invariant, provided that $\sum_i q_i = N$.
The gauge field $A_\mu$, which eats the axion $\tilde \theta$, and the field $t$ can be very heavy and decouple at low energy. At low energies, therefore, one gets an effective operator \begin{equation} e^{- \langle S_{\rm inst.} \rangle } \frac{c_n}{M_P^{n-4}} \prod_{i=1}^n \Phi_i \ \ , \label{zn4} \end{equation} with $c_n$ a numerical coefficient. At low energy, even though the $U(1)$ gauge symmetry is broken, one obtains a remnant $Z_N$ symmetry. Due to its gauge origin, it satisfies anomaly cancellation conditions \cite{Ibanez:1991hv}. If the original gauge symmetry was anomaly-free (which is realized if an axionic coupling to gauge fields of the form ${\tilde \theta} F {\tilde F}$ is absent), then the anomalies have to cancel. In particular, the mixed $Z_N SU(3)_c^2$, $Z_N SU(2)_L^2$ and $Z_N U(1)_Y^2$ anomalies have to vanish modulo $N$. For three generations of neutrinos, a simple candidate anomaly-free symmetry is $Z_3$. Then the lowest order nonperturbative operator breaking $U(1)$ is \begin{equation} e^{- 12 \pi \langle t \rangle } c M_P (\Phi^3 + {\bar \Phi}^3) \ = e^{- 12 \pi \langle t \rangle } c M_P (2 S^3 - 6 S A^2) \ . \label{zn5} \end{equation} This will generate a nonperturbative mass for the field $A$ \begin{equation} M_A^2 = 12 ~ c~ v_S M_P e^{- 12 \pi \langle t \rangle } \ . \label{zn6} \end{equation} For moderate values of $\langle t \rangle$ this generates a large hierarchy \begin{equation} \frac{M_A}{M_S} \sim e^{- 6 \pi \langle t \rangle } \sqrt{\frac{M_P}{v_S}} \ . \label{zn7} \end{equation} \subsection{The Signal} After the symmetry breaking, the Lagrangian (\ref{Eq:lagrangian4}) becomes \begin{eqnarray} && {\cal L}_\nu= - \frac{h}{\sqrt{2}} s \bar N_2 N_2 + \frac{h \theta}{\sqrt{2}} s \left( \bar N_1 N_2 + \bar N_2 N_1 \right) \nonumber \\ && -\frac{y_{LR}}{2 \sqrt{2}} H \left( \bar N_1 N_2 + \bar N_2 N_1 \right) - \frac{y_{LR} \theta}{ \sqrt{2}} H\left(\bar N_2 N_2 - \bar N_1 N_1 \right) \nonumber \\ && - \frac{m_1}{2} \bar N_1 N_1 - \frac{m_2}{2} \bar N_2 N_2 \label{Eq:lagrangian6} \\ && -i \frac{h}{\sqrt{2}} A \left( \bar N_2 \gamma^5 N_2 - \theta \bar N_1 \gamma^5 N_2 - \theta \bar N_2 \gamma^5 N_1 + \theta^2 \bar N_1 \gamma^5 N_1 \right) . \nonumber \end{eqnarray} For $M_A \lesssim 8$ TeV, the dominant decay mode is the 2-body process $A \rightarrow N_1 N_1$ and the width of the $A$ boson is now given by \begin{equation} \Gamma_A = \frac{10^{-38}}{4 \pi} \left( \frac{m_1}{\mathrm{1~eV}} \right)^2 \left( \frac{10^{10}~\mathrm{GeV}}{v_S} \right)^2 M_A~ \nonumber \end{equation} implying \begin{equation} \tau_A \sim 8 \times 10^{14} \left( \frac{1 ~\mathrm{eV}}{m_1} \right)^2 \left( \frac{v_S}{10^{10} ~\mathrm{GeV}} \right)^2 \left( \frac{1~\mathrm{GeV}}{M_A} \right)~~\mathrm{[s]} . \label{Eq:taur212} \end{equation} Since $M^R$ is now proportional to $h v_S$, the decay rate becomes independent of $h$ if we express it in terms of $v_S$, and we are forced to consider sub-PeV masses for our dark matter candidate. As one can see, for relatively light Goldstone masses of order 1 GeV, one can obtain lifetimes in excess of the age of the Universe for $v_S > 10^{11}$ GeV, making this Goldstone mode an interesting dark matter candidate. We show in Fig. \ref{Fig:vs} the parameter space allowed by the (non)--observation of signals in neutrino telescopes. As surprising as it seems, we obtain quite reasonable values for $v_S$, compatible with GUT--like constructions.
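Such GeV-scale Goldstone masses are indeed natural here: as a rough numerical illustration of Eq.~(\ref{zn6}) (with values assumed purely for definiteness, taking $c = {\cal O}(1)$ and $M_P \simeq 2.4\times 10^{18}$ GeV), a moderate instanton action is enough to suppress $M_A$ far below $v_S$, \begin{equation} M_A \simeq \sqrt{12\, c\, v_S M_P}\ e^{- 6 \pi \langle t \rangle} \sim 1~\mathrm{GeV} ~~ \mathrm{for} ~~ \langle t \rangle \simeq 1.8, ~ v_S = 10^{10}~\mathrm{GeV} . \nonumber \end{equation}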
\begin{figure} \begin{center} \includegraphics[width=3.in]{vs.pdf} \caption{{\footnotesize {Parameter space allowed by SuperK and IceCube, in the plane ($M_A$, $v_S$) for $m_1=1$ eV. } }} \label{Fig:vs} \end{center} \end{figure} \section*{Conclusion} In this work, we have shown that a dynamical model generating Majorana neutrino masses naturally leads to the presence of a heavy quasi--stable pseudo--scalar particle that can account for the dark matter component of the Universe and whose main decay mode, into two ultra-energetic neutrinos, provides a clear signature observable by the IceCube detector. Our work is very general and can be embedded in many grand-unified scenarios where the breaking of hidden symmetries appears at an intermediate scale. \noindent {\bf Acknowledgements. } The authors would like to thank S. Pukhov and E. Bragina for very useful discussions. This work was supported by the Spanish MICINN's Consolider-Ingenio 2010 Programme under grant Multi-Dark {\bf CSD2009-00064}, the contract {\bf FPA2010-17747} and the France-US PICS no. 06482. E.D. and Y.M. are grateful to the Mainz Institute for Theoretical Physics (MITP) for its hospitality and its partial support during the completion of this work. Y.M. acknowledges partial support from the European Union FP7 ITN INVISIBLES (Marie Curie Actions, PITN- GA-2011- 289442) and the ERC advanced grant Higgs@LHC. E.D. acknowledges the ERC advanced grant MassTeV. The work of K.A.O. was supported in part by DOE grant DE--SC0011842 at the University of Minnesota.
\section{Introduction} Since Fagan's work in the 1970s \cite{b:fagan1976,b:fagan1986}, dozens of studies have shown that code review is the most effective way to find bugs \cite{b:cohen2010,b:bacchelli2013}. Ironically, given that Fagan and others were inspired by academic peer review, code review is still rare in scientific software development. This is partly because most authors don't publish their code, and hence have little incentive to improve its quality, but it is also partly a case of the blind (not) leading the blind. In 2013--14, we ran two pilot studies to explore the benefits of code review for typical scientist-coders, and how to transfer the skill to them. Our focus was not on the small minority of scientists who already use good software engineering practices \cite{b:hannay2009}, but on the large majority who have never encountered them. This paper describes our methodology and findings, and makes recommendations for other groups who wish to help scientists adopt code review. \subsubsection*{Acknowledgments} Our thanks to the scientists and programmers who took part in this study, and to PLOS, the Mozilla Science Lab, and the Sloan Foundation for their support. \section{Post-Hoc Reviews} Our first study, done in conjunction with the Public Library of Science (PLOS), ran from August to October 2013. Professional developers working at Mozilla who routinely use asynchronous code review in their work reviewed samples of code taken from papers published in \emph{PLOS Computational Biology} in the preceding 18 months. Their reviews were shared with the scientists; we then conducted semi-structured interviews \cite{b:rosenthal2007,b:bryman2008} with 11 developers in late August and early September, and with 4 authors whose code had been reviewed in late September and early October, to determine whether non-scientists could usefully review typical scientific software, whether those reviews were intelligible and useful to the scientists, and whether the participants felt the reviews were valuable. Four significant findings emerged. \noindent \textbf{1) Scientists' starting points varied widely.} Scientists' prior knowledge and practices varied widely. Some used engineering practices and tools such as version control some of the time, but documentation of the form developers expected was rare, and practices like continuous integration were unknown or considered ``very unusual''. In particular, most scientists had never taken part in code review. Some of the code authors we interviewed work in (or lead) teams in which they read each other's code, but most participants had never, or only rarely, discussed their code with others. Instead, they discussed the results the code produced. \noindent \textbf{2) Developers felt limited.} Most developers repeatedly noted the lack of documentation, commenting, tests, and example data sets, e.g., ``Not having the project build is a big problem; I can't verify that the code is correct.'' Equally, most reviewers were frustrated at not being able to run the code as the first step in review, e.g., ``That's the easiest way to see if it works at all,'' and ``It's a way to validate the intentions of the author.'' Several commented that, as a result, they had to make some assumptions in the review. Few reviewers read in detail the paper with which the code was associated, usually because they lacked the domain expertise to do so.
They could deduce what the code did and assess it in terms of its ability to perform that operation efficiently, but they could not tell if the code fulfilled its intended scientific role. That, combined with their isolation from the scientists, left developers feeling that they were doing ``drive-by'' reviews. \noindent \textbf{3) Standards and expectations were very different.} Many reviewers were struck by how scientists' code differed from theirs, particularly in lacking commenting and explanation in the code. Several suggested that scientists appear to have ``less concern for maintainability and readability'' and that the code was ``not written for others to use''. Some also pointed out a naive lack of complexity or abstraction, redundancy in the code, inconsistencies in or ignorance of standards in formatting, and unhelpful naming. On their side, the scientists all aspired to create readable and re-usable code, but many noted that their code doesn't respect the common etiquette of open source: for example, they often don't make an effort to package their code for re-use by others. One scientist commented: ``[It's] not important to have something that's exact---only when we publish'' and reiterated the low status of code in their science: ``In the business of science, all that matters is the figures. The quality of the code is just not on the critical path.'' This echoes findings from other empirical research \cite{b:segal2005,b:segal2007}. \noindent \textbf{4) Both sides wanted more contact.} All but one of the reviewers remarked that they missed the social context they normally associate with code reviews: working relationships, understanding goals and priorities, and trust. They would have preferred some form of dialog, not least to provide them with the code context, and to establish appropriate expectations for both reviewers and authors (e.g., ``knowing what kind of feedback the author wants''). A typical remark was, ``It would have been easier if we were allowed to contact the scientist just to get a feel for his mindset.'' Most referred to dialogs that are normally part of their code review experience, whether spoken or on-line: ``Discussion catches details that get lost otherwise'' such as ``tiny changes in numerical interpretation that are important''. The reviewers also pointed out that the dialog has value for the reviewer, as well as for the author: ``When there is a dialog, you end up learning a lot yourself.'' \section{Transferring the Skill} Despite these frustrations, participants in this study were positive about it. We therefore set up a second study that paired experienced scientific programmers with small groups of less-experienced ones to explore ways of transferring the practice itself. This second study also aimed to address the fact that scientists and developers alike wanted reviews during development, when they could act on the feedback, rather than a ``drive-by'' review after submission. Ten groups ranging in size from a couple of people to half a dozen initially signed up to take part in this study. In interviews and early discussions, they identified four main reasons why they think they ought to do code review: \noindent \textbf{1) Rigor:} Scientists want to get the right answer and be able to reproduce their work later. (Most recognized that correctness and reproducibility aren't the same thing.) \noindent \textbf{2) Reusability:} Scientists are very aware that their understanding of code dissipates over time and that this is a large hidden cost.
Equally, they suspect that they spend a lot of time reinventing wheels. They may not know how code review will help with that, but they hope that it will. \noindent \textbf{3) Collaboration:} Many scientists hoped that they could use code review as an excuse for conversations about code with their colleagues---conversations that simply don't happen right now. Some believed that review would foster better testing, encourage scientists to produce code that is easier to understand and share, and make team members more aware of each other's research. \noindent \textbf{4) Knowledge transfer:} All participants felt that there must be a better way to build programs than what they're doing right now. Taking part in this study was, for them, a way to get mentoring about programming in general from someone who could answer questions they didn't know to ask. One thing we \emph{didn't} hear was people saying that they wanted to learn about the science embodied in the code being reviewed. Rightly or wrongly, scientists seem to feel that ``what it does'' can be learned in other ways. Each team was paired with a mentor: another scientist-developer with experience of code review, from a different institution but usually in the same or a cognate discipline, who would introduce the team to code review processes and good practices. Teams and mentors collaborated during March--June 2014. Each mentor decided how to introduce and conduct reviews, informed by their own experience and by discussion with the team. The scientists identified which of their code to review, in consultation with the mentor. We followed teams' interactions on email (via shared mailing lists) and within their repositories, attended meetings when possible (using video or telephone conferencing), and interviewed participants periodically throughout the study about their goals, experiences, and impressions. Our biggest finding is how much \emph{perspective matters to collaboration}. Over time, professional software developers (and scientists whose primary role has become programming) integrate skills and knowledge, building up a standard vocabulary of terms, tools, and conventions, as well as a practical repertoire that it's easy to take for granted. Most scientists don't (yet) have such an overview: they are focused almost entirely on the purpose the code serves rather than the code itself. They often lack the integrated understanding and skill that comes with experience, which meant it was often very useful for mentors to ``state the obvious''. How to make the transition from the first perspective to the second is usually not clear to working scientists. ``Stuff that seemed like overkill makes sense now... A lightbulb finally went off: I understand how other people look at my code and how to make it work.'' And what basics to articulate---where to look and how to look---is often not clear to experienced developers. If the two are going to collaborate, then at some point the different perspectives must be acknowledged and bridged. Social mechanisms like courtesy, deference, and avoiding embarrassment tend to obscure this, so it's crucial to articulate collaborators' perspectives for both sides, and equally to help establish appropriate expectations explicitly.
This observation isn't new: as Segal discussed in her study of industrial software engineers collaborating with scientists \cite{b:segal2005}, ``[Programmers] demand an up-front articulation of requirements, whereas the scientists had experience, and hence expectations, of emergent requirements. The project documentation does not suffice to construct a shared understanding.'' For every mentor who says, ``I wish I knew more clearly what the scientists are expecting,'' there's a scientist who says, ``I wish I knew what it's reasonable to expect.'' One concrete step we could take is to show scientists what it actually looks like to do a code review: not just the end result, but the process itself. The most successful groups---those that completed the pilot study, made code review part of their daily practice, and considered it beneficial---both saw code review done and did it themselves. Their mentors walked through an initial code review; articulated what they look for in the code, what they see, and how they interpret it; and made concrete suggestions about first steps that clarified scientists' expectations. But participants (both scientists and mentors) from all of the successful teams perceived the real benefits only after they undertook the process themselves, engaging in code review for each other, and discussing those reviews with others. Another observation was that \emph{there are several models of peer review}, each with its own emphasis. In \emph{asynchronous review}, which is the predominant model in open-source development, comments and revisions are mediated by the version control system. \emph{Offline review with synchronous presentation} uses the tools available in the version control system to present the code and supporting material for review, and to capture the reviewer's comments, but the reviewer and development team meet (usually virtually) to talk through the code and the review in order to facilitate a freer interaction, especially in terms of questions and clarifications. And finally, \emph{synchronous review} involves walking and talking through the code together, whether one-to-one, as a group, or with one person leading. There are other axes of variation: is review one-to-one, one-to-many, or done in a group? Does it use advance preparation or is it done on the day? Is there a leader? These choices should depend on whether the goal is fixing code, learning from code, or learning from other developers: while offline preparation helps developers form independent opinions, synchronous activity helps develop rapport and trust. The groups that were most successful in the second study took the time for a synchronous conversation to introduce themselves and to discuss their goals and expectations (often in addition to introductory emails). Their mentors set out a review process and timetable at the outset, based on their experience. This usually involved a synchronous conversation of fixed length at regular intervals (3--4 weeks) with interim goals and tasks agreed during each of those conversations. Finally, \emph{do not underestimate returns from small investments}. None of the mentors expected scientists to overhaul complete code bases. The advice from one mentor was cogent: if you check the docstring and write a test every time you touch a method, the code improvements will accumulate over time with minimal effort. This insight was echoed in different ways throughout the study.
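To make that advice concrete, here is a minimal sketch of what ``touching a method'' might add: a docstring plus a matching test (Python with pytest; the function, its behavior, and the test are invented purely for illustration and are not drawn from any participant's code).
\begin{verbatim}
def moving_average(values, window):
    """Return the simple moving average of `values`.

    Parameters
    ----------
    values : sequence of float
    window : int, number of points per average (must be >= 1)
    """
    if window < 1:
        raise ValueError("window must be >= 1")
    return [sum(values[i:i + window]) / window
            for i in range(len(values) - window + 1)]

def test_moving_average():
    # Run with: pytest this_file.py
    assert moving_average([1.0, 2.0, 3.0, 4.0], window=2) == [1.5, 2.5, 3.5]
\end{verbatim}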
Adopting good practices and applying them incrementally, doing one thing at a time as it is needed, will achieve changes faster than you might think. \section{Conclusions} All of the mentors who engaged in the pilot study did so because, in their experience, the benefits of code review to science are profound in terms of improving reproducibility, promoting re-usability, disseminating best practices, and thereby improving efficiency and avoiding error. In other words, they believe code review improves their \emph{science} as well as the code itself. Moreover, they believe that the cost of introducing code review is quickly recovered, and that good practices can be introduced incrementally in ways that are not disruptive or expensive. The scientists engaged in the pilot study because they wanted to write more accurate, efficient, and re-usable code, and wanted support in learning how to do so. They too perceived that improvements to code development pay off in terms of their science, and code review provides an otherwise scarce opportunity to ``talk code'' and learn better practices. Many of the scientists reported benefits even during the study: code improvements, better use of facilities provided in development tools, and better documentation practices spreading in the lab. Nevertheless, many teams found it difficult to reserve the time to work with the mentor, given lab priorities such as bidding for further funding and meeting publication deadlines. And priorities were not the only obstacles: identifying specific code for review (from a larger code base) was not always straightforward, the code was not always well-modularized and was rarely well-documented, and teams often had to put code into a repository for the very first time to make it available for review. Some groups also needed to negotiate issues of confidentiality (for datasets as well as code). Based on our study, we believe the following should be priorities when scientists start doing code review: \begin{enumerate} \item Have a goal, a benefit in mind. \item Start with a conversation: articulate your goals and expectations, build rapport. \item Choose the right pieces of code for the first reviews. A good starting point is 3--4 pages long, fairly self-contained, and under active development, so that small patches are coming in regularly. \item Make your code available in a repository with a typical data set and an overview of how the code works. \item Set up a schedule and commit a little time on a regular basis. \item Understand that it's the code that's being reviewed (not you), and offer your comments to others on the same terms. \item Make the process reciprocal: be prepared to make as well as receive comments. \end{enumerate} We have learned a lot from these two studies, but they are only the beginning. As our work progresses, we hope to: \begin{enumerate} \item articulate a curriculum for scientists who would like to adopt code review, \item provide examples of people doing review and talking about their reasoning (most likely in the form of short screencasts of reviews being done), \item develop heuristics to help people decide which scientific software needs specialist scientific knowledge and which doesn't, and \item track what happens to code in the months \emph{after} review. \end{enumerate} If you would like to help with this work, we would enjoy hearing from you. \bibliographystyle{unsrt}
\section{Introduction} KTaO$_3$ (KTO) is a member of a class of materials with the perovskite crystal structure that are being explored in the emerging area of oxide electronics. These materials display an array of interesting physical phenomena that includes ferroelectricity, magnetism, metal-insulator transitions and superconductivity~\cite{hwang-emergent}. Such remarkable properties are largely related to the strongly localized $d$ orbitals that dominate the electronic structure of these materials. An immediate drawback for electronic applications is the limitation in electron mobility, which is often attributed to the heavy band masses resulting from $d$-orbital derived conduction bands and enhanced electron-phonon interactions. Low-temperature electron mobilities exceeding 20,000 cm$^2$/Vs in KTO have been reported, while the observed room-temperature mobilities are limited to 30 cm$^2$/Vs~\cite{kto-mobility}. In the related and more widely studied material SrTiO$_3$ (STO), low-temperature mobilities exceeding 50,000 cm$^2$/Vs have been reported, while mobilities at room temperature are less than 10 cm$^2$/Vs~\cite{stemmer-new,stemmer-sto-mobility}. The realization of a rather high-density two-dimensional electron gas at the interface of perovskite materials~\cite{ohtomo-2deg,moetakef-2deg} has led to an increased interest in these systems for device applications. However, limits in room-temperature electron mobility constitute a major hurdle. On the other hand, these materials display notably large Seebeck coefficients~\cite{sto-seebeck-kuomoto,sto-seebeck,kto-S,kto-mobility,stemmer-seebck}, which makes them promising for thermoelectric applications. In this context, a first-principles approach to the transport properties in these materials is crucial to uncover the fundamental microscopic mechanisms behind the experimental observations, and pave the way to improved device performance. At very low temperatures (below 10 K), the electron mobility is limited by ionized impurity scattering~\cite{Jena-STO,Spinelli-STO}. At intermediate to low temperatures, recent studies have indicated that the mobility is determined by a combination of several mechanisms, such as transverse optical (TO) phonon scattering~\cite{Jena-STO} and electron-electron scattering~\cite{stemmer-sto-T2}. However, the rather low mobility at room temperature is generally believed to originate from strong scattering of electrons by polar longitudinal optical (LO) phonons~\cite{frederikse-sto,tufte-sto}. In general, studies of electron mobility are based on experimental analysis of transport data using phenomenological models, whereas investigations using first-principles methods have been scarce. Recent theoretical studies indicate that LO phonons are the main source of scattering which leads to the observed low mobilities at room temperature~\cite{sto-us,polaron-devreese}. Investigations of other mechanisms, such as TO phonon scattering, have been limited due to difficulties arising from strong anharmonic effects in the temperature range (around 100 K) where they become effective~\cite{sto-ph3,siemons-kto}. There are some limited studies on the effect of electron-electron scattering in STO~\cite{Mazin-T2,sto-ee}; however, a study based on a fully first-principles band structure is not yet available. In the case of thermal transport, a few groups have investigated the Seebeck coefficient of STO and KTO~\cite{seebeck-sto,seebeck-stokto}, using calculated effective masses and density of states.
Although the results are in reasonable agreement with experiments, the effect of electron-phonon scattering on the thermal transport properties has not been explicitly included in these studies. In this work, we investigate the transport properties of KTO and STO, and analyze the underlying mechanisms that determine the room-temperature transport coefficients. Our calculations are based on a fully first-principles description of the electronic band structure of both materials, and a solution of the Boltzmann transport equation (BTE) within the relaxation time approximation. The relaxation time, i.e., the inverse scattering rate, is calculated with an analytical model for electron--LO-phonon scattering, providing an efficient computational approach compared to calculations of electron-phonon scattering matrix elements using density functional perturbation theory~\cite{restrepo,kaasbjerg-mos2,mbn-1,mbn-2,wuli}. The comparison of the results for STO and KTO shows that the band-widths of the conduction bands as well as the spin-orbit splitting play an important role in determining the room-temperature transport properties. Our results show that the superior room-temperature mobility in KTO, compared to STO, arises not only from the larger conduction band-width, but also from the stronger spin-orbit coupling which lifts the degeneracy of the lowest-energy conduction bands, thus reducing the effective number of conduction bands to which electrons can scatter. This lower effective number of conduction bands mitigates the strength of electron-phonon scattering, leading to a significant enhancement of carrier mobility at room temperature. On the other hand, it also yields a smaller Seebeck coefficient. Such insights, combined with the efficiency of the computational approach described here, open up the possibility of screening a large number of perovskite oxides with the aim of discovering new materials with targeted transport properties. The paper is organized as follows: In Section~\ref{sec:methods}, we describe the details of the computational approach. In Section~\ref{sec:bandsph}, we report the calculated electronic band structure and optical phonon frequencies of both KTO and STO. In Section~\ref{sec:eph}, we discuss details on the calculation of electron-phonon scattering rates, and in Section~\ref{sec:tr}, we present the calculations of transport integrals and discuss the behavior of mobility and Seebeck coefficient as a function of electron concentration in KTO and STO. In Section~\ref{sec:tauk}, we provide a detailed discussion of the differences in transport coefficients arising from using a constant scattering time {\em vs.} the full ${\bf k}$-dependent scattering time obtained from electron-phonon scattering rates. In Section~\ref{sec:disc} we present our concluding remarks. \section{\label{sec:methods} Computational Methods} Structural optimizations and electronic band structure calculations presented in this paper are performed using density functional theory (DFT) and the plane-wave pseudopotential method as implemented in the PWSCF code of the {\it Quantum ESPRESSO} package~\cite{QE}. The exchange-correlation energy is approximated using the local density approximation (LDA) with the Perdew-Zunger parametrization~\cite{pz}. K, Sr, Ta, Ti and O atoms are represented by ultrasoft pseudopotentials~\cite{uspp}. Fully relativistic pseudopotentials are employed with non-collinear spins~\cite{dalcorso} in the calculations including spin-orbit coupling.
The electronic wavefunctions and charge density are expanded in plane-wave basis sets with energy cutoffs of 50 Ry and 600 Ry, respectively. Brillouin-zone (BZ) integrations are performed on an 8$\times$8$\times$8 grid of special $k$-points~\cite{mp}. The phonon frequencies at the zone center ($\Gamma$) are calculated using density functional perturbation theory (DFPT) as implemented in {\it Quantum ESPRESSO}~\cite{dfpt}. The splitting of longitudinal and transverse optical modes at $\Gamma$ is taken into account via the method of Born and Huang~\cite{born-huang}. For the calculation of the electron-phonon scattering rates and transport integrals, a 60$\times$60$\times$60 grid is used to sample the BZ. The Dirac-delta functions appearing in the calculation of scattering rates are approximated by a Gaussian smearing function with a width of 0.2 eV.

\section{\label{sec:bandsph}Electronic and Vibrational Spectrum}
We consider the cubic phase of KTaO$_3$, which is the stable phase at room temperature. The optimized lattice parameter is $a_0 = 3.94$ \AA, which is about $1\%$ smaller than the experimental value~\cite{kto-lattice}, as expected from the LDA functional. LDA calculations yield an indirect band gap of 2.13 eV for the collinear and 2.00 eV for the non-collinear spin calculations. As expected, LDA underestimates the band gap of KTO, which is 3.64 eV (indirect)~\cite{kto-gap}. We note that the band gap does not enter explicitly in the transport calculations, and only the conduction band structure is relevant, as discussed previously in Ref.~\onlinecite{sto-us}. Therefore, we limit our study to the LDA results in this work.

The conduction band structure of KTO is shown in Fig.~\ref{fig:bands}, where results are presented for both collinear and non-collinear calculations. Also shown is the band structure of STO (from Ref.~\onlinecite{sto-us}) for comparison. The conduction bands are derived mainly from the Ta 5$d$ orbitals. Due to cubic symmetry, the 5$d$ states are split into e$_g$ and t$_{2g}$ states, with the lowest lying conduction bands being t$_{2g}$ states. In the collinear spin calculation, spin-orbit interactions are not present, and therefore the three conduction bands are degenerate at $\Gamma$. In the non-collinear spin calculations, spin-orbit coupling is included, the threefold degeneracy at $\Gamma$ is lifted, and one of the bands is split by $\Delta_{\rm SO} \simeq 0.4~{\rm eV}$ to higher energy (see Fig.~\ref{fig:bands}(b)).

The phonon frequencies at $\Gamma$ and the electronic dielectric constant were obtained by DFPT calculations. The calculated electronic dielectric constant is $\epsilon_{\infty} = 5.4$, which is in reasonable agreement with the experimental value of approximately 4.6~\cite{kto-dielectric}. The calculated phonon frequencies at $\Gamma$ are listed in Table~\ref{tab:optph}, and are in good agreement with experimental measurements and with previous calculations~\cite{kto-singh}. For STO, we use the phonon frequencies and the electronic structure reported in Ref.~\onlinecite{sto-us}, which were also calculated using LDA. Unlike KTO, there is negligible difference between collinear and non-collinear calculations in STO, because Ti has a much smaller atomic mass than Ta, which leads to a relatively small spin-orbit splitting of 28 meV~\cite{sto-hse-bs}. Therefore, only collinear calculations for STO are used as a basis of comparison.
\begin{table}[!ht]
\caption{\label{tab:optph} Calculated and experimental longitudinal optical (LO) and transverse optical (TO) phonon frequencies at the $\Gamma$ point, in units of cm$^{-1}$.}
\begin{ruledtabular}
\begin{tabular}{cccc}
\multicolumn{2}{c}{LO} & \multicolumn{2}{c}{TO} \\
LDA & Exp. & LDA & Exp. \\\hline
176 & 188$^a$, 184-185$^{b,c}$ & 105 & 88$^a$, 81-85$^b$, 88$^c$ \\
253 & 290$^a$, 279$^b$, 255$^c$ & 186 & 199$^a$, 198-199$^{b,c}$ \\
403 & 423$^a$, 421-422$^{b,c}$ & 253 & 290$^a$, 279$^b$, 255$^c$ \\
820 & 833$^a$, 826-838$^{b,c}$ & 561 & 549$^a$, 546-556$^b$, 574$^c$
\end{tabular}
\end{ruledtabular}
\begin{flushleft} $^a$ Ref.~\onlinecite{kto-phon-1}, $^b$ Ref.~\onlinecite{kto-phon-2}, $^c$ Ref.~\onlinecite{kto-phon-3}. \end{flushleft}
\end{table}

\begin{figure}[!ht]
\includegraphics[width=0.35\textwidth]{KTO.eps}
\includegraphics[width=0.35\textwidth]{KTOnc.eps} \hspace{0.5cm}
\includegraphics[width=0.35\textwidth]{STO.eps}
\caption{\label{fig:bands} Conduction band structure of KTO, (a) in the collinear calculation, and (b) in the noncollinear calculation. The band structure of STO is shown in (c) for comparison.}
\end{figure}

\section{\label{sec:eph}Electron-phonon coupling}
In this work, we consider electron-phonon interactions only due to polar longitudinal optical (LO) phonons. The interaction of electrons with the macroscopic electric field due to an LO mode can be described analytically by the Fr\"ohlich model~\cite{mahan}, where the electron-phonon coupling depends on the dielectric constants and the LO phonon frequency. In STO and KTO, there are multiple LO modes that contribute to the macroscopic electric field. In this case, the electron-phonon coupling matrix element squared is given by~\cite{polaron-devreese,toyozawa,eagles}
\begin{equation} \left\vert g_{{\bf q}\, \nu} \right\vert^2 = \frac{1}{q^2}\, \left( \frac{e^2\, \hbar\, \omega_{L\nu}}{2 \epsilon_0\, V_{\rm cell}\, \epsilon_{\infty}} \right)\, \frac{\prod_{j=1}^{N} \left(1-\frac{\omega_{T j}^2}{\omega_{L \nu}^2}\right)} {\prod_{j \neq \nu} \left( 1 - \frac{\omega_{L j}^2}{\omega_{L \nu}^2} \right)}, \label{eqn:gnu2} \end{equation}
where $N$ is the number of LO/TO modes, $\omega_{L \nu}$ are the LO mode frequencies, $\omega_{T j}$ are the TO mode frequencies, $V_{\rm cell}$ is the unit cell volume, and $\epsilon_{\infty}$ is the electronic dielectric constant. This equation is obtained using the electron-phonon Hamiltonian of Ref.~\onlinecite{toyozawa} and the Lyddane-Sachs-Teller (LST) relation $\epsilon(\omega) = \epsilon_{\infty}\, \prod_{j=1}^{N}\, (\omega^2-\omega_{L,j}^2)/(\omega^2-\omega_{T,j}^2)$. In this expression, the dispersion of the optical phonon frequencies is ignored, and in our calculations we use their values at the zone center. Alternative formulations of the LO phonon coupling based on Born effective charges, which take their full dispersion into account, have also been discussed in the literature~\cite{calandra-ep,giustino-ep}. The calculated electron-phonon coupling elements are listed in Table~\ref{tab:Cnu}. Here, we define
\begin{equation} C_{\nu} = \left(\frac{a_0}{2\pi}\right)^2\, q^2\, \vert g_{{\bf q} \nu} \vert^2, \label{eqn:Cnu} \end{equation}
where $a_0$ is the cubic lattice parameter.
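As a concrete illustration of Eqns.~(\ref{eqn:gnu2}) and (\ref{eqn:Cnu}), the following minimal Python sketch evaluates the coupling constants $C_\nu$ of KTO from the zone-center frequencies of Table~\ref{tab:optph} and the quantities quoted above (the nonpolar 253 cm$^{-1}$ mode is excluded, as discussed below). This is a sketch rather than our production code, and small deviations from Table~\ref{tab:Cnu} are expected from the rounding of the input parameters.

\begin{verbatim}
import numpy as np
from scipy.constants import e, hbar, epsilon_0, c

# LDA zone-center frequencies for KTO (cm^-1); the nonpolar
# 253 cm^-1 mode cancels between the LO and TO products.
w_LO = np.array([176.0, 403.0, 820.0])
w_TO = np.array([105.0, 186.0, 561.0])
eps_inf = 5.4          # electronic dielectric constant
a0 = 3.94e-10          # cubic lattice parameter (m)
V_cell = a0**3

to_rad = 2.0 * np.pi * c * 100.0   # cm^-1 -> rad/s
wL, wT = w_LO * to_rad, w_TO * to_rad

for nu, w in enumerate(wL):
    num = np.prod(1.0 - wT**2 / w**2)
    den = np.prod([1.0 - wL[j]**2 / w**2
                   for j in range(len(wL)) if j != nu])
    # q^2 |g|^2 in SI units (J^2), i.e. Eq. (gnu2) times q^2:
    g2q2 = e**2 * hbar * w / (2 * epsilon_0 * V_cell * eps_inf) * num / den
    C = (a0 / (2.0 * np.pi))**2 * g2q2 / e**2   # convert J^2 -> eV^2
    print(f"C_{nu + 1} = {C:.2e} eV^2")
\end{verbatim}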
\begin{table}[!ht]
\caption{\label{tab:Cnu} Calculated electron-phonon couplings, in units of eV$^2$, based on Eqns.~(\ref{eqn:gnu2}) and (\ref{eqn:Cnu}).}
\begin{ruledtabular}
\begin{tabular}{ccc}
& STO & KTO \\ \hline
$C_1$ & $1.43 \times 10^{-5}$ & $1.82 \times 10^{-5}$ \\
$C_2$ & $1.33 \times 10^{-3}$ & $1.46 \times 10^{-3}$ \\
$C_3$ & $6.77 \times 10^{-3}$ & $7.50 \times 10^{-3}$
\end{tabular}
\end{ruledtabular}
\end{table}

Note that there are only three coupling constants listed in Table~\ref{tab:Cnu}: one of the LO modes in Table~\ref{tab:optph} is not polar. The mode with frequency 253 cm$^{-1}$ appears both in the LO and TO sectors, and since it is nonpolar, it does not split. The constants $C_{\nu}$ are enumerated in order of increasing LO mode frequency.

The electron-phonon couplings reported here for STO are lower than those reported in Ref.~\onlinecite{sto-us}. This is due to the fact that the electron-phonon coupling constant used in Ref.~\onlinecite{sto-us} was only valid for the case of one LO mode, overestimating the coupling strength when three modes are present. The coupling constants listed in Table~\ref{tab:Cnu} are also in agreement with previous results available in the literature for STO~\cite{polaron-devreese}. For comparison, we computed the values of $C_{\nu}$ resulting from the parameters reported in Ref.~\onlinecite{polaron-devreese}. Using the dimensionless constants $\alpha_{\nu}$, the experimental phonon frequencies, and the band mass of $0.81\, m_e$ from Ref.~\onlinecite{polaron-devreese}, one obtains $C_1 = 1.26 \times 10^{-5}\, eV^2$, $C_2 = 1.25 \times 10^{-3}\, eV^2$ and $C_3 = 9.50 \times 10^{-3}\, eV^2$. The first two coupling constants are in good agreement with our results for STO in Table~\ref{tab:Cnu}, while our constant $C_3$ is slightly lower than that of Ref.~\onlinecite{polaron-devreese}. The effective mass enters into the definition of the dimensionless constants $\alpha_{\nu}$ of Ref.~\onlinecite{polaron-devreese}, but cancels out when the $C_{\nu}$ are calculated, so its value does not have any effect. The differences between our results and those of Ref.~\onlinecite{polaron-devreese} come from the fact that we use calculated phonon frequencies and dielectric constants instead of experimental values.

Using Fermi's golden rule, one can write the scattering rate of electrons in the state $\psi_{n {\bf k}}$ as~\cite{mahan}
\begin{eqnarray} && \tau_{n {\bf k}}^{-1} = \frac{2\pi}{\hbar}\, \sum_{{\bf q} \nu, m}\, \vert g_{{\bf q} \nu} \vert^2\, \times \nonumber\\ && \quad \Big\{ \left( n_{{\bf q} \nu} + f_{m, {\bf k+q}} \right)\, \delta\left( \epsilon_{m, {\bf k+q}} - \epsilon_{n {\bf k}} - \hbar \omega_{L \nu} \right) \nonumber\\ && \quad + \left( 1 + n_{{\bf q} \nu} - f_{m, {\bf k+q}} \right)\, \delta\left( \epsilon_{m, {\bf k+q}} - \epsilon_{n {\bf k}} + \hbar \omega_{L \nu} \right) \Big\}, \label{eqn:tau} \end{eqnarray}
where $\epsilon_{n {\bf k}}$ are electron energies, $\hbar \omega_{L \nu}$ are LO phonon energies, and $n_{{\bf q} \nu}$ and $f_{m, {\bf k+q}}$ are phonon and electron occupation factors described by Bose-Einstein and Fermi-Dirac distributions, respectively. More precisely, the rate in Eqn.~(\ref{eqn:tau}) corresponds to the quasiparticle relaxation rate for electrons interacting with LO phonons; the transport rate contains extra band-velocity factors~\cite{mahan}.
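The numerical evaluation of Eqn.~(\ref{eqn:tau}) on a discrete grid is then straightforward. The following Python sketch (with the $2\pi/\hbar$ prefactor and the Brillouin-zone normalization omitted, and all function and variable names hypothetical) shows how the Gaussian-smeared delta functions and the occupation factors enter for a single initial state:

\begin{verbatim}
import numpy as np

kB = 8.617333e-5   # Boltzmann constant (eV/K)

def gaussian_delta(x, sigma=0.2):
    # Gaussian-smeared Dirac delta; widths in eV, as in Sec. II
    return np.exp(-0.5 * (x / sigma)**2) / (sigma * np.sqrt(2 * np.pi))

def rate(eps_nk, eps_mkq, g2, hw_L, E_F, T):
    """Golden-rule sum of Eq. (tau), up to 2*pi/hbar and grid weights.

    eps_mkq: final-state energies eps_{m,k+q} (eV)
    g2:      |g_{q nu}|^2 at the same (q, nu) points (eV^2)
    hw_L:    LO phonon energies hbar*omega_{L nu} (eV)
    """
    n_q = 1.0 / (np.exp(hw_L / (kB * T)) - 1.0)            # Bose-Einstein
    f = 1.0 / (np.exp((eps_mkq - E_F) / (kB * T)) + 1.0)   # Fermi-Dirac
    absorb = (n_q + f) * gaussian_delta(eps_mkq - eps_nk - hw_L)
    emit = (1.0 + n_q - f) * gaussian_delta(eps_mkq - eps_nk + hw_L)
    return np.sum(g2 * (absorb + emit))
\end{verbatim}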
However, recent studies have shown that these band-velocity factors do not lead to significant changes in the calculated transport properties~\cite{wuli,sto-us}; therefore we use the simpler form in Eqn.~(\ref{eqn:tau}).

In Fig.~\ref{fig:tauBZ} we show the scattering rates $\tau_{n {\bf k}}^{-1}$ along $\Gamma$-X and $\Gamma$-M for the three t$_{2g}$-derived conduction bands in KTO; each of the three depicted conduction bands represents a doubly degenerate state. For all the conduction bands, the rate initially decreases away from $\Gamma$, and then rises again towards the middle of the zone.

\begin{figure}[!ht] \includegraphics[width=0.45\textwidth]{tauBZ-spc.eps} \caption{\label{fig:tauBZ} Electron-phonon scattering rate $\tau^{-1}_{n {\bf k}}$ calculated along $\Gamma$-X and $\Gamma$-M for the three conduction bands at 300 K and $n=10^{20}\, \rm{cm}^{-3}$.} \end{figure}

The dependence of the scattering rates on electron density can be obtained by defining the following quantity
\begin{equation} D_n\, \tau_n^{-1}(E_F) \equiv \sum_{\bf k}\, \delta( E_F - \epsilon_{n {\bf k}} )\, \tau_{n {\bf k}}^{-1} \label{eqn:Dtau} \end{equation}
where $D_n(E_F) = \sum_{\bf k} \delta( E_F - \epsilon_{n {\bf k}} )$ is the density of states at the Fermi level. $D_n \tau^{-1}_n(E_F)$ represents the scattering rate for each band weighted by the density of states (it is dimensionless when $\tau^{-1}$ is expressed in units of energy). Since the scattering rate near the Fermi level determines the transport properties (see Eqns.~(\ref{eqn:int})), $D_n \tau^{-1}_n(E_F)$ represents a measure of the strength of electron-phonon scattering for each band.

\begin{figure}[!ht] \includegraphics[width=0.4\textwidth]{tauEf-KTOc.eps} \includegraphics[width=0.4\textwidth]{tauEf-KTOnc.eps} \caption{\label{fig:tauEf} Electron-phonon scattering rates in KTO weighted by the density of states at the Fermi level for the (a) collinear and (b) noncollinear calculations, as a function of electron density at 300 K.} \end{figure}

The weighted scattering rates at the Fermi level for each of the conduction bands are shown in Fig.~\ref{fig:tauEf}. The lowest lying band (i.e. band 1) always has the highest rate, as a result of the enhancement in the density of states, since this band is the least dispersive of the three. Comparing the collinear and noncollinear calculations, we observe that the inclusion of spin-orbit coupling reduces the effective rate at E$_F$ significantly. This is due to two factors: i) Since the highest lying band is pushed up in energy by 0.4 eV, electrons residing in the two lower lying bands cannot scatter via LO phonons to this band (and vice versa), reducing the available scattering channels. The spin-orbit splitting is much larger than the maximum LO phonon energy (around 0.1 eV), and as a result the energy conserving delta functions in Eq.~(\ref{eqn:tau}) vanish for inter-band transitions involving the split-off band. ii) Collinear spin calculations result in less dispersive bands, and hence a higher density of states, as can be seen in Fig.~\ref{fig:nef}. This effect has also been observed for effective band masses near the zone center~\cite{sto-hse-bs}.

\begin{figure}[!ht] \includegraphics[width=0.45\textwidth]{nef_all-n.eps} \caption{\label{fig:nef} Densities of states at the Fermi level.
For KTO, both collinear and noncollinear calculations are displayed.} \end{figure}

The inset in Fig.~\ref{fig:tauEf}(b) shows that the split-off band (band 3) has a significantly smaller effective scattering rate at E$_F$ than in the collinear case of Fig.~\ref{fig:tauEf}(a). This is due to the fact that the Fermi energies for the considered electron densities are much lower than the split-off band energies, leading to a reduction of its density of states. The overall reduction of scattering rates when spin-orbit coupling is strong has in fact been proposed before~\cite{sto-us}, and KTO serves as an example of this behavior.

\section{\label{sec:tr}Transport Integrals}
For the calculation of transport coefficients, we compute the following two integrals:
\begin{eqnarray} && I^{(1)}_{\alpha\, \beta} = \frac{1}{V_{\rm cell}}\, \sum_{n, {\bf k}}\, \tau_{n {\bf k}}\, \left( - \frac{\partial f_{n {\bf k}}}{\partial \epsilon_{n {\bf k}}} \right)\, v_{n {\bf k}, \alpha}\, v_{n {\bf k}, \beta} \nonumber\\ && I^{(2)}_{\alpha\, \beta} = \frac{1}{V_{\rm cell}}\, \sum_{n, {\bf k}}\, \tau_{n {\bf k}}\, \left( - \frac{\partial f_{n {\bf k}}}{\partial \epsilon_{n {\bf k}}} \right)\, \left( \epsilon_{n {\bf k}} - E_F \right) \, v_{n {\bf k}, \alpha}\, v_{n {\bf k}, \beta}. \nonumber\\ \label{eqn:int} \end{eqnarray}
In terms of these integrals, the conductivity ($\sigma$) and Seebeck ($S$) tensors are given by
\begin{equation} \sigma_{\alpha \beta} = 2\, e^2\, I^{(1)}_{\alpha \beta} \,\,\, , \,\,\, S_{\alpha \beta} = -\frac{1}{e T}\, I^{(1) -1}_{\alpha \nu}\, I^{(2)}_{\nu \beta}, \label{eqn:tr} \end{equation}
where E$_F$ is the Fermi energy, $\alpha, \beta, \nu$ are Cartesian indices (repeated indices are summed over), and the factor 2 accounts for spin (in the noncollinear calculation, the factor 2 is not included and the band indices $n$ also contain the total angular momentum quantum number $j=l \pm 1/2$). The band velocities $v_{n {\bf k}}$ are defined as
\begin{equation} v_{n {\bf k}, \alpha} \equiv \frac{1}{\hbar}\, \frac{\partial \epsilon_{n {\bf k}}}{\partial k_{\alpha}}. \label{eqn:v} \end{equation}
In this study, we compute the band velocities on a very dense grid (60$\times$60$\times$60) using finite differences. For KTO and STO, the conduction band structure is rather simple (i.e. there are no band crossings), so this approach leads to accurate band velocities. In more complex band structures, it would be necessary to use interpolation techniques for accurate calculations~\cite{boltztrap,boltzwann}.

Since KTO and STO are cubic, the conductivity and Seebeck tensors are diagonal with equal entries. Therefore, we will refer to the scalar quantities of conductivity ($\sigma$) and Seebeck coefficient ($S$) from now on. The mobility is defined through the ratio of the conductivity to the electron density as
\begin{equation} \mu = \frac{\sigma}{n e}. \label{eqn:mob} \end{equation}
Using the scattering rates computed from Eqn.~(\ref{eqn:tau}), based on the electron-phonon couplings of Eqn.~(\ref{eqn:gnu2}), we calculate the transport integrals in Eqn.~(\ref{eqn:int}) for KTO and STO. For KTO, we perform the integration both for collinear and noncollinear band structures, while for STO we only use the collinear band structure. We have explicitly checked that the noncollinear calculations in STO lead to only negligible changes in the transport integrals. In the following two subsections, we report the room-temperature mobility and Seebeck coefficients.
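In practice, Eqns.~(\ref{eqn:int})--(\ref{eqn:mob}) reduce to weighted sums over the $k$-grid. The following Python sketch (hypothetical names; energies in eV, other units left symbolic) illustrates the structure of the calculation:

\begin{verbatim}
import numpy as np

kB = 8.617333e-5   # eV/K

def transport_integrals(eps, vel, tau, E_F, T, V_cell):
    """Eq. (int) on a uniform grid; eps, tau: (nbands, nk),
    vel: (nbands, nk, 3) band velocities from finite differences."""
    x = (eps - E_F) / (kB * T)
    mdf = 1.0 / (4.0 * kB * T * np.cosh(0.5 * x)**2)   # -df/d(eps)
    w = tau * mdf / (V_cell * eps.shape[1])            # per-state weight
    I1 = np.einsum('nk,nka,nkb->ab', w, vel, vel)
    I2 = np.einsum('nk,nk,nka,nkb->ab', w, eps - E_F, vel, vel)
    return I1, I2

def sigma_seebeck_mobility(I1, I2, T, n, e=1.0, spin=2.0):
    sigma = spin * e**2 * I1                   # Eq. (tr)
    S = -np.linalg.solve(I1, I2) / (e * T)     # -(1/eT) I1^{-1} I2
    mu = sigma / (n * e)                       # Eq. (mob)
    return sigma, S, mu
\end{verbatim}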
\begin{figure}[!ht] \includegraphics[width=0.45\textwidth]{Mu_all-n.eps} \caption{\label{fig:muall} Calculated mobilities of STO and KTO at 300 K. For KTO, both collinear and noncollinear calculations are displayed.} \end{figure}

\subsection{Electron mobilities}
Fig.~\ref{fig:muall} shows the calculated mobilities as a function of electron concentration at 300 K. KTO with spin-orbit interactions taken into account (i.e. noncollinear) displays the highest mobility across the range of electron densities we have considered, whereas the mobilities in STO are significantly lower~\footnote{In a previous work~\cite{sto-us}, the mobilities calculated at room temperature for STO are much smaller than the ones reported here. There are two reasons for this difference: 1) In Ref.~\onlinecite{sto-us}, scattering rates are approximated by their values at the zone center; 2) The electron-phonon coupling was described by a different formula, which was valid only for the case of a single LO mode. The correct formula is used in this paper.}. As discussed previously, the inclusion of spin-orbit splitting reduces the effective scattering rate, leading to an increase in the mobilities, which is the reason for the higher values in the noncollinear calculation. In the collinear calculations, where the bands are not split by spin-orbit coupling, the mobilities are higher in KTO than in STO because of the higher band velocities. Since Ta 5$d$ orbitals lead to wider conduction bands compared to those of Ti 3$d$, such an increase is expected. These findings are in agreement with the experimental observation of higher room-temperature mobilities in KTO compared to STO~\cite{kto-mobility,kto-S}. We note that our calculations overestimate the observed room-temperature mobilities of both STO, which is less than 10 cm$^2$/Vs~\cite{frederikse-sto,tufte-sto,stemmer-sto-mobility}, and KTO, which is around 30 cm$^2$/Vs~\cite{kto-mobility}, over a range of electron concentrations. This is due to the fact that our calculations include only the LO-phonon scattering mechanism, and ignore others such as ionized-impurity~\cite{Spinelli-STO}, TO-phonon~\cite{Jena-STO} and electron-electron scattering~\cite{stemmer-sto-T2}.

Further insight into the behavior of the mobility shown in Fig.~\ref{fig:muall} can be gained by calculating the average values of the Fermi velocities, which we define as
\begin{equation} v^2_F \equiv \frac{\sum_{n, {\bf k}} \delta (E_F - \epsilon_{n {\bf k}})\, v_{n {\bf k}}^2 } {\sum_{n, {\bf k}} \delta (E_F - \epsilon_{n {\bf k}})}, \label{eqn:v2e} \end{equation}
where the index $n$ runs over the conduction bands. The resulting $v_F$, shown in Fig.~\ref{fig:v2f}, is higher in KTO than in STO. While KTO in the noncollinear calculation has a larger band-width (as can be seen in Figs.~\ref{fig:bands} and~\ref{fig:nef}), the averaged Fermi velocities are slightly lower than those of KTO in the collinear calculation. This is most probably due to the highly non-parabolic Fermi surface of KTO, which has high and low velocity regions, averaging out to a similar numerical value in both the collinear and non-collinear calculations. Instead, it is the lower scattering rates in the noncollinear calculations that lead to the enhancement of the mobilities seen in Fig.~\ref{fig:muall}.

\begin{figure}[!ht] \includegraphics[width=0.45\textwidth]{v2_all-n.eps} \caption{\label{fig:v2f} Averaged Fermi velocities (rescaled with the lattice parameter $a_0$). For KTO, both collinear and noncollinear calculations are displayed.} \end{figure}
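The Fermi-surface average of Eqn.~(\ref{eqn:v2e}) uses the same smeared delta functions as the scattering rates; a minimal sketch (hypothetical names, flat arrays over all $(n,{\bf k})$ states) reads:

\begin{verbatim}
import numpy as np

def gaussian_delta(x, sigma=0.2):
    return np.exp(-0.5 * (x / sigma)**2) / (sigma * np.sqrt(2 * np.pi))

def avg_fermi_velocity_sq(eps, vel2, E_F):
    """Eq. (v2e): delta-weighted average of v_{nk}^2 at E_F."""
    w = gaussian_delta(eps - E_F)
    return np.sum(w * vel2) / np.sum(w)
\end{verbatim}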
An additional feature which deserves attention is the increase in mobility with increasing electron density $n$ seen in Fig.~\ref{fig:muall}. This may seem counterintuitive, since the effective scattering rates at E$_F$ also increase with $n$ (see Fig.~\ref{fig:tauEf}). However, the band velocities $v_{n {\bf k}}$ increase as a function of electron energy in the conduction band, and the increase in $v_{n {\bf k}}^2$ dominates over the increase in $\tau^{-1}_{n {\bf k}}$, leading to an overall increase of the mobility as a function of $n$.

\subsection{Seebeck coefficients}
Fig.~\ref{fig:Sall} shows the calculated Seebeck coefficients as a function of electron concentration at 300 K. The noncollinear calculation for KTO displays the lowest Seebeck coefficients (in absolute value) across the range of electron densities we have considered. STO, on the other hand, displays the highest Seebeck coefficients, with collinear KTO displaying slightly lower values. This behavior can be understood through the dependence of the Seebeck coefficient on E$_F$: $|S|$ decreases with increasing E$_F$~\cite{mahan,seebeck-sto,seebeck-stokto} (in the parabolic band approximation for a metal, it is inversely proportional to E$_F$). For a given electron concentration $n$, E$_F$ is lowest for STO, since its conduction band width is smaller than that of KTO, which in turn leads to a higher Seebeck coefficient $|S|$.

The difference between the collinear and noncollinear calculations in KTO mainly results from the effective number of conduction bands. In the noncollinear calculation, the split-off band is higher in energy, leading to an effective 2-conduction-band system (see Fig.~\ref{fig:bands}(b)). For a given $n$, E$_F$ increases when the number of conduction bands decreases: when there are fewer bands close in energy, electrons must occupy higher lying states. Thus, $|S|$ in the noncollinear case is lower than in the collinear one, due to the reduction of the number of conduction bands~\cite{seebeck-stokto}.

\begin{figure}[!ht] \includegraphics[width=0.45\textwidth]{S_all-n.eps} \caption{\label{fig:Sall} Calculated Seebeck coefficients of STO and KTO at 300 K. For KTO, both collinear and noncollinear calculations are displayed.} \end{figure}

Compared to experimental results, our calculations slightly underestimate the Seebeck coefficients. For STO, experiments~\cite{seebeck-sto} yield $147 < |S| < 380$ $\mu V/K$ in the range $10^{21} > n > 8.8 \times 10^{19}$ cm$^{-3}$, while our calculations yield $80 < |S| < 220$ $\mu V/K$ in the range $10^{21} > n > 10^{20}$ cm$^{-3}$. Similarly for KTO, experiments~\cite{kto-S} yield $200 < |S| < 460$ $\mu V/K$ in the range $1.4 \times 10^{19} > n > 5.4 \times 10^{18}$ cm$^{-3}$, while our calculations yield $138 < |S| < 489$ $\mu V/K$ in the range $10^{20} > n > 10^{18}$ cm$^{-3}$. The underestimation of the Seebeck coefficient is most probably related to the overestimation of the conduction band-widths in LDA, which leads to a higher E$_F$ and hence a reduced $|S|$~\cite{seebeck-stokto}. In addition, there may be second order contributions to the Seebeck coefficient from coupling to phonons (i.e. the phonon drag effect) missing in our calculations, although these are most significant at lower temperatures~\cite{sto-ph-drag,stemmer-new}. Changes in the electron-phonon scattering rates do not lead to a significant change in the calculated Seebeck coefficients, as we will discuss in the next section.
This is in contrast with the calculations of the mobility presented in the previous subsection, which display a strong dependence on the values of the scattering rates. The Seebeck coefficient is a ratio of two transport integrals (see Eqns.~(\ref{eqn:tr})). When the scattering rate is taken as a constant, it cancels exactly in this ratio; for a non-constant scattering rate, this cancellation no longer holds. However, our calculations show that the difference between calculations using the full scattering rate $\tau_{n {\bf k}}^{-1}$ and a constant rate is very small in the case of room-temperature Seebeck coefficients.

\section{\label{sec:tauk}Constant $\tau$ vs. $\tau_{n {\bf k}}$}
One of the most common approximations in transport calculations is to assume that the scattering rate appearing in Eqns.~(\ref{eqn:int}) is constant~\cite{boltztrap,boltzwann}. For transport coefficients which are ratios of two transport integrals (such as the Seebeck coefficient), the dependence on the constant scattering rate disappears. However, for transport coefficients such as the mobility, the scattering rate is usually fitted to an experimental value. Here, we demonstrate that while a constant scattering rate provides reasonable Seebeck coefficients, mobilities are not predicted accurately across a range of electron densities at 300 K.

\begin{figure}[!ht] \includegraphics[width=0.4\textwidth]{Mu_KTO-nc-n.eps} \includegraphics[width=0.4\textwidth]{S_KTO-nc-n.eps} \caption{\label{fig:MuSKTO} Calculated mobilities (a) and Seebeck coefficients (b) of KTO at 300 K as a function of $n$ for a constant scattering time $\tau_{\Gamma}$ and the full scattering time $\tau_{n {\bf k}}$. For comparison, mobilities calculated with $4 \times \tau_{\Gamma}$ are also shown.} \end{figure}

In order to compare the constant scattering rate with the full $k$-dependent scattering rate calculations presented in the previous section, we choose the constant scattering rate to be the band-averaged value at the zone center ($\tau_{\Gamma}^{-1}$). The resulting mobilities using a constant scattering rate, in comparison with the $k$-dependent scattering rate, are shown in Fig.~\ref{fig:MuSKTO}(a) for the noncollinear spin calculation at 300 K (for collinear KTO and STO, similar results are obtained, which are not shown here). Also shown in Fig.~\ref{fig:MuSKTO}(a) are the mobilities calculated with a constant rate of $0.25\, \tau_{\Gamma}^{-1}$, i.e. a relaxation time of $4\,\tau_{\Gamma}$, for comparison. As can be seen, calculations with a constant scattering rate have various problems: 1) The average value of the rate at the zone center underestimates the mobility, and a larger relaxation time should be used. 2) While it is possible to reproduce the mobility that results from the $k$-dependent scattering rate using just a constant rate, this constant has to be chosen separately for each given electron density $n$. 3) The dependence of the mobility on $n$ is incorrectly predicted when a constant rate is used, since it results in a mobility that decreases as a function of $n$, opposite to what the $k$-dependent scattering rate yields. As a result, a constant rate not only leads to a loss of predictability for mobilities (since no single value of the rate can be fixed), it also leads to incorrect qualitative behavior. On the other hand, the Seebeck coefficients computed with a constant rate (which cancels out) are very close to the ones computed with the full $k$-dependent scattering rate, as can be seen in Fig.~\ref{fig:MuSKTO}(b).
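The exact cancellation of a constant rate in $S$ can be verified directly with the transport-integral sketch of the previous section (random dummy band data, for illustration only):

\begin{verbatim}
import numpy as np
rng = np.random.default_rng(0)

# Hypothetical band data, only to exhibit the cancellation:
eps = rng.uniform(0.0, 1.0, size=(3, 1000))   # energies (eV)
vel = rng.normal(size=(3, 1000, 3))           # band velocities
E_F, T, V = 0.1, 300.0, 1.0

# transport_integrals() is the sketch from the previous section:
I1a, I2a = transport_integrals(eps, vel, np.full_like(eps, 1.0), E_F, T, V)
I1b, I2b = transport_integrals(eps, vel, np.full_like(eps, 5.0), E_F, T, V)

# sigma scales linearly with the constant tau, while S is unchanged:
assert np.allclose(5.0 * I1a, I1b)
assert np.allclose(np.linalg.solve(I1a, I2a), np.linalg.solve(I1b, I2b))
\end{verbatim}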
The near-cancellation for the full rate is a rather nontrivial result, since the two transport integrals in Eqns.~(\ref{eqn:int}) are peaked at different regions of the Brillouin zone, where $\tau_{n {\bf k}}$ attains different values. In other words, even if we approximate each integral $I^{(n)}_{\alpha \beta}$ ($n=1,2$) by the value of $\tau_{n {\bf k}}$ where its integrand is peaked, the scattering times appearing in the numerator and denominator of $S$ will be different, and there will be no cancellation. It is therefore interesting that a constant scattering rate works remarkably well for predicting room-temperature Seebeck coefficients (assuming all the scattering is due to LO phonons) in KTO (and STO). One would expect a similar behavior in other perovskite oxides as well, thus justifying the use of constant scattering rates for the study of thermoelectric properties.

\section{\label{sec:disc} Conclusion}
In this work, we have studied the room-temperature transport properties of the cubic perovskites KTO and STO. The calculations of the transport properties were based on Boltzmann transport theory within the relaxation time approximation, with relaxation times obtained from an analytical model for the scattering of electrons by LO phonons. Our results have shown that the superior mobility of KTO with respect to STO has two main origins: 1) the larger band-width of the Ta 5$d$-derived conduction bands in KTO compared to the Ti 3$d$-derived conduction bands in STO; 2) the strong spin-orbit coupling in KTO, leading to an effective 2-conduction-band system, compared to the 3-conduction-band system in STO. Despite its larger mobility, KTO has lower Seebeck coefficients, due to the fact that its effective number of conduction bands is smaller than in STO.

We have also compared calculations using a constant scattering rate with calculations using the full $k$-dependent scattering rate. While a constant scattering rate is a widely used approximation in the literature, our results have shown that it leads to a loss of predictability in calculations of the mobility. For calculations of the Seebeck coefficient, on the other hand, a constant scattering rate gives almost the same results as the full $k$-dependent scattering rate, justifying its use for thermal transport calculations in the literature. The computational approach we have presented is rather efficient (compared to those based on full electron-phonon coupling matrices obtained from DFPT), and results in good agreement with experiments in predicting the relative transport coefficients of KTO and STO. Using this approach to calculate transport coefficients for a large set of perovskite oxides can serve as a basis for the search for materials with targeted properties for device applications.

\section{Acknowledgements}
We acknowledge support from the Center for Scientific Computing from the CNSI, MRL: an NSF MRSEC (DMR-1121053) and NSF CNS-0960316. This work used the Extreme Science and Engineering Discovery Environment (XSEDE), which is supported by National Science Foundation grant number ACI-1053575. We also thank D. M. Eagles for very useful discussions.
\section{Introduction}
There is an English idiom: \textquotedblleft a picture is worth a thousand words". This saying has held in human society for many years, because it reflects the way our brain functions. With the development of neural networks and deep learning, it now applies to machines as well: we are able to train computers to recognize objects in pictures and even describe them in our natural language. We thank Google for organizing the \textquoteleft Google Cloud \& YouTube-8M Video Understanding Challenge', which gave us a wonderful opportunity to test new ideas and implement them on the Google cloud platform. In this paper, we first review the baseline algorithms, and then introduce our ensemble experiments.

\section{Data}
There are two types of data: video-level and frame-level features. The data and detailed information are well explained on the YouTube-8M dataset webpage (\url{https://research.google.com/youtube8m/download.html}). Both the video-level and frame-level data are stored as tensorflow.Example protocol buffers, which have been saved as \textquoteleft tfrecord' files. Each type of data has been split into three sets: train, validate and test. The numbers of observations in each dataset are given in the following table.

\begin{tabular}{|c|c|c|c|} \hline &Train & Validate &Test \\ \hline Number of Obs & 4,906,660 & 1,401,828 & 700,640 \\ \hline \end{tabular}

The video-level data example proto is given in the following format:
\begin{itemize} \item \textquotedblleft video\_id": an id string; \item \textquotedblleft labels": a list of integers. Note: this feature is given for the train and validation datasets, but missing for the test set; \item \textquotedblleft mean\_rgb": a 1024-dim float list; \item \textquotedblleft mean\_audio": a 128-dim float list. \end{itemize}

A few examples of video-level data are given in the following table:
\begin{center} \resizebox{\linewidth}{!}{ \begin{tabular}{cccc} \hline\hline video\_id (ID) &labels ($y$)&mean\_rgb ($X_1$)&mean\_audio ($X_2$)\\\hline \textquoteleft -09K4OPZSSo'&[66]& \begin{tabular}{c} [0.11, -0.87, \\ -0.19, -0.22, \\ 0.23, $\cdots$] \end{tabular} & \begin{tabular}{c} [0.89, 1.31, \\ -0.13, 0.20,\\ -1.76, $\cdots $] \end{tabular} \\\hline \textquoteleft -0MDly\_IiNM' & [37, 101, 29, 23] & \begin{tabular}{c} [-0.99, 1.02, \\ -0.74, 0.09, \\ 0.56, $\cdots $] \end{tabular} & \begin{tabular}{c} [-0.66, -1.12, \\ 0.61, -1.37,\\ -0.01, $\cdots $] \end{tabular}\\\hline \multicolumn{4}{c}{$\vdots $} \\\hline\hline \end{tabular}} \end{center}

Frame-level data has the same \textquoteleft video\_id' and \textquoteleft labels' features, but \textquoteleft rgb' and \textquoteleft audio' are given for each frame:
\begin{itemize} \item \textquotedblleft video\_id": e.g. \textquoteleft -09K4OPZSSo'; \item \textquotedblleft labels": e.g. [66]; \item Feature\_list \textquotedblleft rgb": e.g. [[a 1024-dim float list], [a 1024-dim float list],...]; \item Feature\_list \textquotedblleft audio": e.g. [[a 128-dim float list],[a 128-dim float list],...]. Note: each frame represents one second of the video, up to 300 frames per video. \end{itemize}

The data can be represented as $(X_{train},y_{train})$, $(X_{val},y_{val})$ and $(X_{test})$, where $X$ is the information of each video (features), and $y$ is the corresponding labels. In video-level data, $X=(mean\_rgb, mean\_audio)$.
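For concreteness, the video-level tfrecord files can be parsed as in the following sketch. We write it against the TensorFlow 2.x API (the challenge-era starter code used TensorFlow 1.x), and the file path is hypothetical; the feature names follow the description above.

\begin{verbatim}
import tensorflow as tf

feature_spec = {
    'video_id':   tf.io.FixedLenFeature([], tf.string),
    'labels':     tf.io.VarLenFeature(tf.int64),
    'mean_rgb':   tf.io.FixedLenFeature([1024], tf.float32),
    'mean_audio': tf.io.FixedLenFeature([128], tf.float32),
}

def parse_example(serialized):
    ex = tf.io.parse_single_example(serialized, feature_spec)
    ex['labels'] = tf.sparse.to_dense(ex['labels'])  # variable-length list
    return ex

dataset = tf.data.TFRecordDataset(['train-0000.tfrecord'])  # hypothetical
for ex in dataset.map(parse_example).take(1):
    print(ex['video_id'].numpy(), ex['labels'].numpy())
\end{verbatim}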
\section{Baseline Approaches}
\cite{abu2016youtube} gave a detailed introduction to several baseline approaches, including logistic regression and mixture of experts for video-level data, as well as frame-level logistic regression, deep bag of frames and long short-term memory models for frame-level data.

Given the video-level representations, we train independent binary classifiers for each label using all the data. Exploiting the structural information between the various labels is left for future work. A key challenge is to train these classifiers at the scale of this dataset. Even with a compact video-level representation for the 6M training videos, it is infeasible to train batch optimization classifiers, like SVM. Instead, we use online learning algorithms, and use Adagrad to perform model updates on the weight vectors given a small mini-batch of examples (each example is associated with a binary ground-truth value).

\subsection{Models from Video-level Features}
The average values of the rgb and audio representations are extracted from each video for model training. We focus on the performance of the logistic regression model and the mixture of experts (MoE) model, which can be trained within 2 hours on the Google cloud platform.

\subsubsection{Logistic Regression}
Logistic regression computes a weighted sum of the features, as in linear regression, and outputs a probability through the logistic function \cite{6}. Its cost function is the log-loss:
\begin{equation} \lambda||W_e||^2_2+\sum_{i=1}^{N}\mathcal{L}(y_{i,e},\sigma(W_e^Tx_i)), \end{equation}
where $\sigma(\cdot)$ is the standard logistic function, $\sigma(z)=1/(1+\exp(-z)).$ The optimal weights can be found with a gradient descent algorithm.

\subsubsection{Mixture of Experts (MoE)}
Mixture of experts (MoE) was first proposed by Jacobs and Jordan \cite{jordan1994hierarchical}. Given a set of training examples $(x_i, g_i), i=1...N$ for a binary classifier, where $x_i$ is the feature vector and $g_i \in [0,1]$ is the ground-truth, let $\mathcal{L}(p_i,g_i)$ be the log-loss between the predicted probability and the ground-truth:
\begin{equation} \mathcal{L}(p, g) = - g \log p - (1 - g) \log(1 - p). \end{equation}
The probability of each entity is calculated from a softmax distribution over a set of hidden states, and the cost function over the whole dataset is the log-loss.

Video-level models trained with the RGB feature achieve 70\%--74\% precision accuracy. Adding the audio feature raises the score to 78\%. We also tested including the validation dataset as part of the training data. With this larger training set, the prediction accuracy on the test set is 2.5\% higher than when using the original training set only. In other words, we used 90\% of the YouTube-8M data to obtain better model parameters.

\subsection{Models from Frame-level Features}
Videos are decoded at one frame per second to extract frame-level representations. In our experiments, most frame-level models achieve similar performance as video-level models.

\subsubsection{Frame-level Logistic Model}
The frame-level features in the training dataset are obtained by randomly sampling 20 frames from each video. Frames from the same video are all assigned the ground-truth of the corresponding video. There are about 120 million frames in total. So the frame-level features are: $$(x_i,y_i^\epsilon), \epsilon=1,...,4800, i=1,\cdots,120M$$ where $x_i \in \mathcal{R}^{1024}$ and $y_i^\epsilon\in\{0,1\}.$ The logistic models are trained in a \textquotedblleft one-vs-all" fashion; hence there are 4800 models in total.
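As an illustration of the online training described above, the following sketch implements one-vs-all logistic regression with Adagrad updates in NumPy. All names are ours and single-example updates are used for brevity; the actual training runs on TensorFlow.

\begin{verbatim}
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def adagrad_logistic(X, Y, lam=1e-4, lr=0.1, epochs=1, eps=1e-8):
    """One-vs-all logistic regression with Adagrad.
    X: (N, d) features, Y: (N, E) binary label indicators."""
    N, d = X.shape
    E = Y.shape[1]
    W = np.zeros((d, E))
    G = np.zeros_like(W)                    # accumulated squared gradients
    for _ in range(epochs):
        for i in np.random.permutation(N):
            p = sigmoid(X[i] @ W)           # (E,) predicted probabilities
            grad = np.outer(X[i], p - Y[i]) + 2.0 * lam * W
            G += grad**2
            W -= lr * grad / (np.sqrt(G) + eps)
    return W
\end{verbatim}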
For inference on the test data, we compute the probability of existence of label $e$ in each video $\nu$ as follows:
\begin{equation} p_\nu (e|x_{1:F_\nu }^{\nu })=\frac{1}{F_\nu}\sum_{j=1}^{F_\nu}p_\nu(e|x_j^{\nu}), j=1,\cdots,F_\nu, \end{equation}
where $F_\nu$ is the number of frames in a video. That is, video-level probabilities are obtained by simply averaging frame-level probabilities.

\subsubsection{Deep Bag of Frame (DBoF) Pooling}
The deep bag of frame model is a convolutional neural network. The main idea is to design two layers in the convolutional part. The first layer, the up-projection layer, applies weights to individual frames, with all selected frames sharing the same parameters. The second layer pools the outputs of the previous layer to the video level. The approach enjoys the computational benefits of CNNs, while at the same time the weights of the up-projection layer still provide a strong representation of the input features at the frame level.

In our implementation and tests, more features and more input data slightly improve the results. For example, if we combine the training and validation data, the score is improved, and when we add both features (RGB + audio), the result is boosted by around 0.4\%. The computing cost is quite low compared to other frame-level models: training took 36 hours on a single GPU. However, the benchmark model is not well implemented for parallel computing in the prediction stage, and using more GPUs does not boost the training speed much; we tested 4 GPUs, and the total time was only reduced by 10\%.

\subsubsection{Long Short-Term Memory (LSTM) Model Trained from Frame-Level Features}
Long short-term memory (LSTM) \cite{hochreiter1997long} is a recurrent neural network (RNN) architecture. In contrast to a conventional RNN, an LSTM uses memory cells to store, modify, and access internal state, allowing it to better discover long-range temporal relationships. As shown in Figure \ref{fig:LSTM_plot} (\cite{yue2015beyond}), the LSTM cell stores a single floating point value, and it maintains this value unless it is added to by the input gate or diminished by the forget gate. The emission of the memory value from the LSTM cell is controlled by the output gate.

\begin{figure}[!htbp] \centering \includegraphics[width=0.7\linewidth]{LSTM_plot} \caption{The structure of an LSTM memory cell (figure from \cite{yue2015beyond}).} \label{fig:LSTM_plot} \end{figure}

The hidden layer $H$ of the LSTM is computed as follows \cite{yue2015beyond}:
\begin{eqnarray} i_t &=& \sigma (W_{xi}x_{t} +W_{hi}h_{t-1} +W_{ci}c_{t-1} + b_{i})\\ f_t &=& \sigma (W_{xf}x_{t} +W_{hf}h_{t-1} +W_{cf} c_{t-1} + b_f )\\ c_t &=& f_{t}c_{t-1} + i_t \tanh(W_{xc}x_t +W_{hc}h_{t-1} + b_c)\\ o_t &=& \sigma (W_{xo}x_t +W_{ho}h_{t-1} +W_{co}c_t + b_o) \\ h_t &=& o_t \tanh(c_t) \end{eqnarray}
where $x$ denotes the input, the $W$ denote weight matrices (e.g. the subscript $hi$ indicates the hidden-input weight matrix), the $b$ terms denote bias vectors, $\sigma$ is the logistic sigmoid function, and $i, f, o$, and $c$ are respectively the input gate, forget gate, output gate, and cell activation vectors.

In the current project, the LSTM model was built following an approach similar to \cite{yue2015beyond}. Based on the best performance on the validation set, 2 stacked LSTM layers with 1024 hidden units and 60 unrolling iterations were used \cite{abu2016youtube}.
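A minimal NumPy sketch of the cell update in Eqs.~(4)--(8) is given below. The names are ours; following common practice, we take the peephole weights $W_{ci}$, $W_{cf}$, $W_{co}$ to be diagonal and store them as vectors.

\begin{verbatim}
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W, b):
    """One LSTM step; W is a dict of weight matrices (W['xi'] is the
    input-to-input-gate matrix, etc.), b a dict of bias vectors."""
    i = sigmoid(W['xi'] @ x_t + W['hi'] @ h_prev + W['ci'] * c_prev + b['i'])
    f = sigmoid(W['xf'] @ x_t + W['hf'] @ h_prev + W['cf'] * c_prev + b['f'])
    c = f * c_prev + i * np.tanh(W['xc'] @ x_t + W['hc'] @ h_prev + b['c'])
    o = sigmoid(W['xo'] @ x_t + W['ho'] @ h_prev + W['co'] * c + b['o'])
    h = o * np.tanh(c)
    return h, c
\end{verbatim}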
\section{Ensemble Approaches}
Several predictions at the base level have been generated. Most of the models perform reasonably well, and aggregating the predictions gives even better results. Combining the base-level models and training a second level to improve the prediction is known as ensemble learning. There are several approaches to ensemble learning, such as blending, averaging, bagging and voting. Blending (\cite{bigchaos}) is a powerful method for model ensembling. Averaging, the simple solution, works well and is also applied in this case. Bagging (short for bootstrap aggregating) trains the same models on different random subsets of the training sets and aggregates the results; the improvement from bagging can be limited. Majority voting is not appropriate in this case, since the outputs here are confidence-level probabilities instead of label ids.

\subsection{Blending}
A detailed introduction to popular Kaggle ensembling methods is given at \url{https://mlwave.com/kaggle-ensembling-guide}. Let $f_1, f_2, \cdots$ be different classification models, like logistic and MoE in this case. The general idea of the blending method is as follows:
\begin{itemize} \item Create a small holdout set, like 10\% of the train set. \item Build the prediction model with the rest of the train set. \item Train the stacker model on this holdout set only. \end{itemize}
Training the blend model for the YouTube-8M data requires a large amount of memory; the Google cloud platform provides sufficient memory and computing resources for blending. We tried this blending method on the video-level logistic and MoE base models, with the validation set as the holdout set:
\begin{itemize} \item Build the logistic and MoE models on the video-level train data, $y_{train}\sim X_{train}.$ \item Do the inference and write out the predictions on the validation set ($\hat{y}_{val}^{logistic}, \hat{y}_{val}^{moe}$) and the test set ($\hat{y}_{test}^{logistic}, \hat{y}_{test}^{moe}$). \item The prediction on dataset $p$ ($p=$ val or test) with model $q$ ($q=$ logistic or MoE), $\hat{y}_p^q$, gives the top 20 predicted labels and their probabilities, e.g.,
\begin{center} \resizebox{\linewidth}{!}{ \begin{tabular}{ll} \hline\hline VideoId & LabelConfidencePairs \\\hline 100011194 & \begin{tabular}{l} 1 0.991708 4 0.830637 1833 0.781667 \\ 2292 0.730538 297 0.718730 3547 0.465280 \\ 34 0.396639 1511 0.371649 2 0.351788 \\ 0 0.303522 92 0.169908 933 0.164513 \\ 198 0.145657 202 0.143494 658 0.106776 \\ 74 0.089043 167 0.088266 33 0.052943 \\ 332 0.049101 360 0.045714 \end{tabular} \\\hline 100253546 & \begin{tabular}{l} 77 0.996484 21 0.987201 142 0.971881 \\ 59 0.931193 112 0.817585 0 0.445608 \\ 8 0.112624 11 0.100307 17 0.025623 \\ 262 0.021074 1 0.020778 312 0.020060 \\ 75 0.017796 57 0.011925 60 0.005532 \\ 67 0.004512 69 0.004346 575 0.004044 \\ 3960 0.003965 710 0.003961 \end{tabular}\\\hline \multicolumn{2}{c}{$\vdots $} \\\hline\hline \end{tabular}} \end{center}
In order to apply the second-stage stacker model, we define a new stacking feature $X_{p}^{q*}$, a 4716-dimensional vector. Each dimension represents a label with its corresponding probability; the stacking feature defaults to 0, and the 20 estimated probabilities are filled in (a small code sketch of this conversion is given at the end of this subsection). For simplicity, if there are 3 estimated labels with probabilities, the new feature is as follows: $$(\hat{y}_{p}^{q})_i=[1\quad 0.99\quad 4\quad 0.83 \quad 5\quad 0.78]$$ $\Rightarrow$ $$(X_{p}^{q*})_{i}=(0.99, 0,0,0.83,0.78,0,\cdots,0)$$ where $i=1, \cdots, 1401828$ if $p=$ val, and $i=1, \cdots, 700640$ if $p=$ test.
Write the new stacking features as $X_{val}^{*}=(X_{val}^{logistic *}, X_{val}^{moe *})$ and $X_{test}^{*}=(X_{test}^{logistic *}, X_{test}^{moe *})$. \item The stacker model is hard to run on a local machine with the new feature $X_{val}^{*}$, since it requires too much memory. Hence, we save the new data as tfrecord files, so that the model can be run on the Google cloud. The new data example proto is given as follows: \begin{itemize} \item[a.] \textquotedblleft video\_id" \item[b.] \textquotedblleft labels" \item[c.] \textquotedblleft logistic\_newfeature": float array of length 4716 \item[d.] \textquotedblleft moe\_newfeature": float array of length 4716 \end{itemize} \item Train the stacker model on the validation new feature $X_{val}^*$, $y_{val}\sim X_{val}^*.$ In this case, we use only logistic and MoE as stacker models, since these two are well-defined in the package; other models would need to be implemented if desired. \item Do the inference on the test set $X_{test}^*$ to obtain the final prediction $\hat{y}_{test}^*.$ \end{itemize}

Note that since the new feature has length 4716 instead of 1024, train.py and test.py in the Google cloud scripts need some small corrections for this method: the defined feature\_names and feature\_sizes need to match the new feature names and their lengths. By adding the new features, the score has been improved from 0.760 to 0.775; see Table \ref{tab:predictions}.
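The conversion from a \textquoteleft LabelConfidencePairs' string to the stacking feature described above can be sketched as follows (function names are ours, and the index convention must match the label vocabulary):

\begin{verbatim}
import numpy as np

NUM_CLASSES = 4716

def to_stacking_feature(label_confidence_pairs, num_classes=NUM_CLASSES):
    """Turn a string such as '1 0.99 4 0.83 5 0.78' into the
    num_classes-dimensional stacking feature vector."""
    x = np.zeros(num_classes, dtype=np.float32)
    tokens = label_confidence_pairs.split()
    for label, conf in zip(tokens[0::2], tokens[1::2]):
        x[int(label)] = float(conf)
    return x

x = to_stacking_feature('1 0.99 4 0.83 5 0.78')
print(x[:6])   # [0.   0.99 0.   0.   0.83 0.78]
\end{verbatim}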
\subsection{Averaging}
The idea of averaging is to generate a smooth separation between different predictors and reduce over-fitting. For each video, if a label is predicted several times among these models, the mean value of the confidence levels is used as the final prediction. On the other hand, if a label id is rarely predicted among the base models, the final prediction is calculated as the sum of its confidence levels divided by the number of base models, effectively lowering the confidence level. On a computer with 8 GB of physical memory and 30 GB of virtual memory, the averaging process for the whole test dataset, which includes 700,640 YouTube video records, can be finished within 15 minutes, which is quite efficient considering the size of the files.

Several strategies were tested for the averaging:

Strategy A: 3 frame-level models, 2 video-level models, 2 blending models (video-level logistic model blended with the MoE prediction, video-level MoE model blended with the MoE prediction). The GAP score is 0.80396 after averaging, higher than the score of any individual base prediction.

Strategy B: 3 frame-level models, 2 video-level models, 2 blending models (video-level logistic model blended with both the logistic and MoE predictions, video-level MoE model blended with both the logistic and MoE predictions). The base of the 2 blending predictions is the 2 video-level models, so the 2 blending models are highly correlated with the 2 video-level models. The video-level logistic model is a weaker predictor than the video-level MoE predictor, and the use of logistic-blended models increases the weight of this weaker predictor. Therefore, the GAP score is 0.80380 after averaging, slightly lower than for Strategy A.

Strategy C: the base models are the same as in Strategy A, except that the deep bag of frames pooling model prediction is replaced by the one generated after fine-tuning the hyper-parameters. Including the individual model with better performance, the score after averaging is improved to 0.80424.

Strategy D: the base models are the same as in Strategy C, but the weight of the blending model (video-level MoE model blended with both the logistic and MoE predictions) is increased to 2. Among all the base models, the GAP score of the MoE blending model is the highest. The score rises to 0.80692 with this weighted average strategy.

Strategy E: similar to Strategy D, the weight of the MoE blending model is 2, but the logistic model blended with the logistic plus MoE predictions is replaced by the logistic model blended only with the MoE prediction, which reduces the logistic components compared to Strategy D. This approach yields the best GAP score, 0.80695.

Such ensembling boosts the final result by 7--8\%, which is quite remarkable considering its simplicity. Table \ref{tab:predictions} shows the results of all base models, where the scores are calculated using the GAP metric on the Kaggle platform.

\begin{figure} \centering \includegraphics[width=\linewidth]{stacking_plot} \caption{Stacking Diverse Predictors Flow Chart} \label{fig:stacking_plot} \end{figure}

\begin{table} \caption{Predictions from Individual Base Models} \label{tab:predictions} \centering \resizebox{\linewidth}{!}{ \begin{tabular}{|c|c|c|c|} \hline Models & Features &Datasets for model fitting &Score \\ \hline \begin{tabular}{c} Frame Level\\ LSTM Model \end{tabular} &rgb&validate&0.7457\\\hline \begin{tabular}{c} Frame Level\\ Deep Bag of \\ Frames Pooling Model I \end{tabular} &audio&train+validate&0.77 \\\hline \begin{tabular}{c} Frame Level \\ Deep Bag of \\ Frames Pooling Model II \end{tabular} &audio&train&0.767\\\hline \begin{tabular}{c} Video Level \\ Logistic Model \end{tabular} &audio/rgb&train+validate&0.76036\\\hline \begin{tabular}{c} Video Level \\ Mixture of \\ Experts Model \end{tabular} &audio/rgb&train+validate&0.78453\\\hline \begin{tabular}{c} Video Level \\ Logistic Model \\ Blending I \end{tabular} &audio/rgb/logistic/moe&validate&0.77518\\\hline \begin{tabular}{c} Video Level \\ Logistic Model \\ Blending II \end{tabular} &audio/rgb/moe&validate&0.76873\\ \hline \begin{tabular}{c} Video Level \\ Mixture of \\ Experts Model \\ Blending \end{tabular} &audio/rgb/logistic/moe&validate&0.78617\\ \hline \end{tabular} } \end{table}

\section{Conclusion}
Utilizing the open resources of the YouTube-8M train and evaluation datasets, we trained baseline models in a fast-track fashion. Video-level representations are trained with a) logistic models and b) the MoE model, while frame-level features are trained with a) the LSTM model, b) the DBoF model, and c) the frame-level logistic model. We also demonstrated the efficiency of blending and averaging in improving the accuracy of the predictions. Blending plays a key role in raising the performance over the baselines; we are able to train the blender model on the Google cloud, where the computing resources can digest thousands of features. The averaging solution, in addition, aggregates the wisdom of all predictions at low computational cost.

{\small \bibliographystyle{ieee}
\section{Introduction}
This paper is the companion paper of \cite{ccsprecub}. The preceding paper was devoted to fixing K. Worytkiewicz's combinatorial semantics of CCS (Milner's Calculus of Communicating Systems) \cite{0683.68008} \cite{MR1365754} in terms of labelled precubical sets \cite{exHDA}, in order to stick to the higher dimensional automata paradigm. This paradigm states that the concurrent execution of $n$ actions must be abstracted by exactly \textit{one} full $n$-cube: see \cite{ccsprecub} Theorem~5.2 for a rigorous formalization of this paradigm, and also Proposition~\ref{pourquoi_non_degenere} of this paper. There was a problem in K. Worytkiewicz's approach because a version of the labelled coskeleton construction added too many cubes and therefore did not satisfy Proposition~\ref{pourquoi_non_degenere}. The purpose of the preceding paper was also to build an appropriate geometric realization functor from labelled precubical sets to labelled flows. A somewhat surprising fact arising from this construction is that a satisfactory geometric realization functor does require the use of the model structure of flows introduced in \cite{model3}. A consequence of the preceding paper was a proof of the expressiveness of the category of flows.

The geometric intuition underlying these two semantics, i.e. in terms of precubical sets and in terms of flows, is extensively explained in Section~\ref{introbis}, which must be considered as a part of this introduction. In this work, we push the study of the semantics of CCS in terms of labelled flows a little further. Indeed, we explain the effect of the geometric realization functor on each operator of CCS. Section~\ref{effect} is the section of the paper presenting these new results. In particular, Theorem~\ref{parho} presents an interpretation of the parallel composition with synchronization in terms of flows without any combinatorial construction. This must be considered as the main result of the paper.

The only case treated in this paper is that of CCS without message passing. But all the results can be easily adapted to any other process algebra with any other synchronization algebra. The case of TCSP \cite{0628.68025} was explained in \cite{ccsprecub}. For general synchronization algebras, all proofs of the paper are exactly the same, except the proof of Proposition~\ref{p1eq}, which must be very slightly modified: see the comment in footnote~\ref{moregeneral}.

\subsection*{Outline of the paper}
Section~\ref{introbis} explains, with the example of the concurrent execution of two actions $a$ and $b$, the geometric intuition underlying the two semantics studied in this paper. It must be considered as part of the introduction, and its reading is strongly recommended for anyone unfamiliar with the subject (and for everyone else as well). In particular, the notions of labelled precubical set and of labelled flow are recalled there. Section~\ref{comb} recalls the syntax of CCS and the construction of the combinatorial semantics of \cite{ccsprecub} in terms of labelled precubical sets. The geometric realization functor is then introduced in Section~\ref{catsem}. Since we \textit{do} need to work in the homotopy category of flows, Section~\ref{relevance}, proving that two precubical sets are isomorphic if and only if the associated flows are weakly S-homotopy equivalent, is fundamental.
Finally, Section~\ref{effect} is an exposition of the effect of the geometric realization functor from precubical sets to flows on each operator defining the syntax of CCS. It is the technical core of the paper. Section~\ref{bonus} is a bonus explaining some ideas towards a purely homotopical semantics of CCS: Theorem~\ref{weakweakweak} is a consequence of all the theorems of Section~\ref{effect} and of some known facts about realizations of homotopy commutative diagrams over free Reedy categories and their links with some kinds of weak limits and weak colimits in the homotopy category of a model category.

\subsection*{Prerequisites}
The reading of this work requires some familiarity with model category techniques \cite{MR99h:55031} \cite{ref_model2}, with category theory \cite{MR1712872} \cite{MR96g:18001a}\cite{gz}, and also with locally presentable categories \cite{MR95j:18001}. We use the locally presentable category of $\Delta$-generated topological spaces. Introductions to these spaces are available in \cite{delta} \cite{FR} and \cite{interpretation-glob}.

\subsection*{Notations}
Let $\mathcal{C}$ be a cocomplete category. The class of morphisms of $\mathcal{C}$ that are transfinite compositions of pushouts of elements of a set of morphisms $K$ is denoted by $\cell(K)$. An element of $\cell(K)$ is called a \textit{relative $K$-cell complex}. The category of sets is denoted by ${\brm{Set}}$. The class of maps satisfying the right lifting property with respect to the maps of $K$ is denoted by $\inj(K)$. The class of maps satisfying the left lifting property with respect to the maps of $\inj(K)$ is denoted by $\cof(K)$. The cofibrant replacement functor of a model category is denoted by $(-)^{cof}$. The notation $\simeq$ means \textit{weak equivalence} or \textit{equivalence of categories}, the notation $\cong$ means \textit{isomorphism}. The notation $\id_A$ means the identity of $A$. The initial object (resp. final object) of a category is denoted by $\varnothing$ (resp. $\mathbf{1}$). The category of partially ordered sets, or posets, together with the strictly increasing maps ($x<y$ implies $f(x)<f(y)$) is denoted by ${\brm{PoSet}}$. The set of morphisms from an object $X$ to an object $Y$ of a category $\mathcal{C}$ is denoted by $\mathcal{C}(X,Y)$.

\subsection*{Acknowledgments}
I thank Andrei R\u{a}dulescu-Banu very much for bringing Theorem~\ref{weak} to my attention.

\section{Example of two concurrent executions} \label{introbis}
We want to explain in this section what we mean by \textit{combinatorial semantics} and \textit{categorical semantics} with the example of the concurrent execution of two actions $a$ and $b$. This section also recalls the definitions of \textit{labelled precubical set} and of \textit{labelled flow}. For other references about topological models of concurrency, see the survey \cite{survol}.

Consider two actions $a$ and $b$ whose concurrent execution is topologically represented by the square $[0,1]^2$ of Figure~\ref{carreplein}. The topological space $[0,1]^2$ itself represents the underlying state space of the process. Four distinguished states are depicted in Figure~\ref{carreplein}. The state $0 = (0,0)$ is the initial state. The state $2 = (1,1)$ is the final state. At the state $1 = (1,0)$, the action $a$ is finished and the action $b$ is not yet started. At the state $3 = (0,1)$, the action $b$ is finished and the action $a$ is not yet started.
So the boundary $[0,1]\p\{0,1\} \cup \{0,1\}\p [0,1]$ of the square $[0,1]^2$ models the sequential execution of the actions $a$ and $b$, whereas their concurrent execution is modeled by including the $2$-dimensional square $]0,1[ \p ]0,1[$. In fact, the possible execution paths from the initial state $0 = (0,0)$ to the final state $2 = (1,1)$ are all continuous paths from $0 = (0,0)$ to $2 = (1,1)$ which are non-decreasing with respect to each axis of coordinates. Non-decreasingness corresponds to the irreversibility of time. \begin{figure} \begin{center} \includegraphics[width=5cm]{carreplein.eps} \end{center} \caption{Concurrent execution of two actions $a$ and $b$} \label{carreplein} \end{figure} In the combinatorial semantics, the preceding situation is abstracted by a $2$-cube viewed as a precubical set. Let us now recall the definition of these objects. A good reference for presheaves is \cite{MR1300636}. \begin{nota} Let $[0] = \{0\}$ and $[n] = \{0,1\}^n$ for $n \geq 1$. By convention, $\{0,1\}^0=\{0\}$. \end{nota} Let $\delta_i^\alpha : [n-1] \rightarrow [n]$ be the set map defined for $1\leq i\leq n$ and $\alpha \in \{0,1\}$ by $\delta_i^\alpha(\epsilon_1, \dots, \epsilon_{n-1}) = (\epsilon_1, \dots, \epsilon_{i-1}, \alpha, \epsilon_i, \dots, \epsilon_{n-1})$. The small category $\square$ is by definition the subcategory of the category of sets with set of objects $\{[n],n\geq 0\}$ and generated by the morphisms $\delta_i^\alpha$. \begin{defn} \cite{Brown_cube} The category of presheaves over $\square$, denoted by $\square^{op}{\brm{Set}}$, is called the category of {\rm precubical sets}. A precubical set $K$ consists of a family of sets $(K_n)_{n \geq 0}$ and of set maps $\partial_i^\alpha:K_n \rightarrow K_{n-1}$ with $1\leq i \leq n$ and $\alpha\in\{0,1\}$ satisfying the cubical relations $\partial_i^\alpha\partial_j^\beta = \partial_{j-1}^\beta \partial_i^\alpha$ for any $\alpha,\beta\in \{0,1\}$ and for $i<j$. An element of $K_n$ is called an {\rm $n$-cube}. \end{defn} Let $\square[n]:=\square(-,[n])$. By the Yoneda lemma, one has the natural bijection of sets \[\square^{op}{\brm{Set}}(\square[n],K)\cong K_n\] for every precubical set $K$. The boundary of $\square[n]$ is the precubical set denoted by $\partial \square[n]$ defined by removing the interior of $\square[n]$: \begin{itemize} \item $(\partial \square[n])_k := (\square[n])_k$ for $k<n$ \item $(\partial \square[n])_k = \varnothing$ for $k\geq n$. \end{itemize} In particular, one has $\partial \square[0] = \varnothing$. So the $2$-cube $\square[2]$ models the underlying time flow of the concurrent execution of $a$ and $b$. However, the same $2$-cube models the underlying time flow of the concurrent execution of any pair of actions. So we need a notion of labelling. Let $\Sigma$ be a set of labels, containing among other things the two actions $a$ and $b$. \begin{prop} \cite{labelled} \label{cubeetiquette} Put a total ordering $\leq$ on $\Sigma$. Let \begin{itemize} \item $(!\Sigma)_0=\{()\}$ (the empty word) \item for $n\geq 1$, $(!\Sigma)_n=\{(a_1,\dots,a_n)\in \Sigma \p \dots \p \Sigma,a_1 \leq \dots \leq a_n \}$ \item $\partial_i^0(a_1,\dots,a_n) = \partial_i^1(a_1,\dots,a_n) = (a_1,\dots,\widehat{a_i},\dots,a_n)$ where the notation $\widehat{a_i}$ means that $a_i$ is removed. \end{itemize} Then these data generate a precubical set. \end{prop} \begin{rem} The isomorphism class of $!\Sigma$ does not depend on the choice of the total ordering on $\Sigma$. 
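For instance (a direct check from Proposition~\ref{cubeetiquette}), take $\Sigma=\{a,b\}$. With the ordering $a\leq b$, the $2$-cubes of $!\Sigma$ are $(a,a)$, $(a,b)$ and $(b,b)$; with the opposite ordering $b\leq a$, they are $(a,a)$, $(b,a)$ and $(b,b)$. Re-sorting each tuple with respect to the new ordering yields an isomorphism between the two versions of $!\Sigma$, all face maps being given by the removal of one letter. 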
\end{rem} \begin{defn} (Goubault) A {\rm labelled precubical set} is an object of the comma category $\square^{op}{\brm{Set}} {\downarrow} !\Sigma$. \end{defn} In the combinatorial semantics, the concurrent execution of the two actions $a$ and $b$ is then modeled by the labelled $2$-cube $\ell:\square[2]\rightarrow !\Sigma$ sending the identity of $[2]$ (the interior of the square) to $(a,b)$ if $a\leq b$ or to $(b,a)$ if $b\leq a$, as depicted in Figure~\ref{exab}. \begin{figure} \[ \xymatrix{ & () \ar@{->}[rd]^{(b)}&\\ ()\ar@{->}[ru]^{(a)}\ar@{->}[rd]_{(b)} & (a,b) & ()\\ &()\ar@{->}[ru]_{(a)}&} \] \caption{Concurrent execution of $a$ and $b$ with $a\leq b$ as labelled precubical set} \label{exab} \end{figure} The categorical semantics is much simpler to explain. Each of the four distinguished states $0$, $1$, $2$ and $3$ of Figure~\ref{carreplein} is represented by an object of a small category. Each execution path of the boundary of the square is represented by a morphism, with the composition of execution paths corresponding to the composition of morphisms. So one has $6$ execution paths $\vec{01}$, $\vec{12}$, $\vec{012}$, $\vec{03}$, $\vec{32}$ and $\vec{032}$ with the algebraic rules $\vec{01} * \vec{12} = \vec{012}$ and $\vec{03} * \vec{32} = \vec{032}$ where $*$ is of course the composition law. The interior of the square is then modeled by the algebraic relation $\vec{012} = \vec{032}$. This small category is nothing else but the small category corresponding to the poset $\{\widehat{0} < \widehat{1}\}^2$. And this poset is nothing else but the poset of vertices of the $2$-cube. The partial ordering models observable time ordering. In fact, one needs to work with categories \textit{enriched over topological spaces} in the sense of \cite{MR2177301}, i.e. with topologized homsets, in order to be able to model more complicated situations of concurrency. In the situation above, the space of morphisms is of course discrete. For various mathematical reasons, e.g. \cite{model3} Section~20 and \cite{diCW} Section~6, one also needs to work with small categories \textit{without identity maps}. Note that the category of small categories without identity maps and the usual category of small categories are certainly not equivalent, since a round-trip using the adjunction between them adds a loop to each object. Moreover, many theorems proved in the framework of flows (i.e. small categories without identity maps enriched over topological spaces) simply become false when identity maps are added. In this paper, we also work with the locally presentable category of \textit{$\Delta$-generated topological spaces}, denoted by $\top$, i.e. of spaces which are colimits of simplices. Several introductions about these topological spaces are available in \cite{delta}, \cite{FR} and \cite{interpretation-glob}. Let us only mention one striking property of $\Delta$-generated topological spaces: like simplicial sets, they are isomorphic to the disjoint sum of their [path-]connected components by \cite{interpretation-glob} Proposition~2.8. This property has many very nice consequences. \begin{defn} \cite{model3} A {\rm flow} $X$ is a small category without identity maps enriched over $\Delta$-generated topological spaces. The composition law of a flow is denoted by $*$. The set of objects is denoted by $X^0$. The space of morphisms from $\alpha$ to $\beta$ is denoted by $\P_{\alpha,\beta} X$~\footnote{Sometimes, an object of a flow is called a state and a morphism a (non-constant) execution path.}. 
Let $\P X$ be the disjoint sum of the spaces $\P_{\alpha,\beta} X$. A morphism of flows $f:X \rightarrow Y$ is a set map $f^0:X^0 \rightarrow Y^0$ together with a continuous map $\P f:\P X \rightarrow \P Y$ preserving the structure. The corresponding category is denoted by ${\brm{Flow}}$. \end{defn} Each poset $P$ can be associated with a flow denoted in the same way. Its set of objects $P^0$ is the underlying set of $P$ and there is one and only one morphism from $\alpha$ to $\beta$ if and only if $\alpha < \beta$. The composition law is then defined by $(\alpha,\beta)*(\beta,\gamma) = (\alpha,\gamma)$ for any $\alpha<\beta<\gamma\in P$. Note that the flow associated with a poset is \textit{loopless}, i.e. for every $\alpha\in P^0$, one has $\P_{\alpha,\alpha}P=\varnothing$. This construction induces a functor ${\brm{PoSet}} \rightarrow {\brm{Flow}}$ from the category of posets together with the strictly increasing maps to the category of flows. In the categorical semantics, the underlying time flow of the concurrent execution of two actions $a$ and $b$ is then modeled by the flow associated with the poset $\{\widehat{0} < \widehat{1}\}^2$. As in the combinatorial semantics, one needs a notion of labelling. \begin{defn} \label{defflowlabel} The {\rm flow of labels} $?\Sigma$ is defined as follows: $(?\Sigma)^0 = \{0\}$ and $\P ?\Sigma$ is the discrete free associative monoid without unit generated by the elements of $\Sigma$ and by the algebraic relations $a*b=b*a$ for all $a,b\in \Sigma$. \end{defn} \begin{defn} A {\rm labelled flow} is an object of the comma category ${\brm{Flow}} {\downarrow} ?\Sigma$. \end{defn} So the concurrent execution of the two actions $a$ and $b$ will be modeled by the labelled flow $\ell:\{\widehat{0} < \widehat{1}\}^2 \rightarrow ?\Sigma$ with $\ell(\vec{012}) = \ell(\vec{032}) = a*b$, $\ell(\vec{01}) = a$, $\ell(\vec{12}) = b$, $\ell(\vec{03}) = b$ and $\ell(\vec{32}) = a$ as in Figure~\ref{exab2}. \begin{figure} \[ \xymatrix{ & 0 \ar@{->}[rd]^{b}&\\ 0\ar@{->}[ru]^{a}\ar@{->}[rd]_{b} & = &0 \\ &0\ar@{->}[ru]_{a}&} \] \caption{Concurrent execution of $a$ and $b$ as labelled flow} \label{exab2} \end{figure} \section{Combinatorial semantics of CCS} \label{comb} \subsection*{Syntax of the process algebra CCS} A short introduction about process algebra can be found in \cite{MR1365754}. An introduction about CCS for mathematicians is available in the companion paper \cite{ccsprecub}. Let $\Sigma$ be a non-empty set. Its elements are called \textit{labels} or \textit{actions} or \textit{events}. From now on, the set $\Sigma$ is supposed to be equipped with an involution $a\mapsto \overline{a}$. Moreover, the set $\Sigma$ contains a distinguished action $\tau$ with $\tau=\overline{\tau}$. The \textit{process names} are generated by the following syntax: \[ P::=nil \ |\ a.P \ |\ (\nu a)P \ |\ P + P \ |\ P|| P \ |\ \rec(x)P(x)\] where $P(x)$ means a process name with one free variable $x$. The set of process names is denoted by ${\brm{Proc}}_\Sigma$. The variable $x$ must be \textit{guarded}, that is, it must lie in a prefix term $a.x$ for some $a\in\Sigma$: for instance, $x$ is guarded in $\rec(x)(a.x)$ but not in $\rec(x)(x + a.nil)$. \subsection*{Parallel composition with synchronization of labelled precubical sets} \begin{defn} \label{def_preshell} Let $\ell:K\rightarrow !\Sigma$ be a labelled precubical set. Let $n\geq 1$. A {\rm labelled $n$-shell} of $K$ is a commutative diagram of precubical sets \[ \xymatrix{ \partial\square[n+1] \ar@{->}[rr]^-{x} \fd{} && K \fd{\ell} \\ && \\ \square[n+1] \fr{} && !\Sigma.} \] Suppose moreover that $K_0=[p]$ for some $p\geq 2$. 
The labelled $n$-shell above is {\rm non-twisted} if the set map $x_0:[n+1] = \partial\square[n+1]_0 \rightarrow [p] = K_0$ is a composite \[x_0:[n+1] \stackrel{\phi}\longrightarrow [q] \stackrel{\psi}\longrightarrow [p]\] where $\psi$ is a morphism of the small category $\square$ and where $\phi$ is of the form $(\epsilon_1,\dots,\epsilon_{n+1}) \mapsto (\epsilon_{i_1},\dots,\epsilon_{i_q})$ such that $1=i_1 \leq \dots \leq i_q = n+1$ and $\{1,\dots,n+1\}\subset \{i_1,\dots,i_q\}$. \end{defn} The map $\phi$ is not necessarily a morphism of the small category $\square$. For example, $\phi:[3] \rightarrow [5]$ defined by $\phi(\epsilon_1,\epsilon_2,\epsilon_3)=(\epsilon_1, \epsilon_1, \epsilon_2, \epsilon_3, \epsilon_3)$ is not a morphism of $\square$. Note that the set map $x_0$ is always one-to-one. All labelled shells in this paper will be supposed to be non-twisted. \begin{defn} Let $\square_n\subset \square$ be the full subcategory of $\square$ whose set of objects is $\{[k],k\leq n\}$. The category of presheaves over $\square_n$ is denoted by $\square_n^{op}{\brm{Set}}$. Its objects are called the {\rm $n$-dimensional precubical sets}. \end{defn} Let $K$ be an object of $\square_{1}^{op}{\brm{Set}} {\downarrow} !\Sigma$ such that $K_0=[p]$ for some $p\geq 2$. Let $K^{(n)}$ be the object of $\square_{n}^{op}{\brm{Set}} {\downarrow} !\Sigma$ inductively defined for $n\geq 1$ by $K^{(1)} = K$ and by the following pushout diagram of labelled precubical sets (the shells are always non-twisted by hypothesis): \[ \xymatrix{ \bigsqcup\limits_{\hbox{labelled $n$-shells}}\partial\square[n+1] \fr{} \fd{}&& K^{(n)} \fd{}\\ && \\ \bigsqcup\limits_{\hbox{labelled $n$-shells}}\square[n+1]\fr{}&& K^{(n+1)}. \cocartesien } \] Since $(\partial\square[n+1])_p = (\square[n+1])_p$ for $p\leq n$, one has $(K^{(n+1)})_p = (K^{(n)})_p$ for $p\leq n$. And by construction, $(K^{(n+1)})_{n+1}$ is the set of non-twisted labelled $n$-shells of $K^{(n)}$. There is an inclusion map $K^{(n)} \rightarrow K^{(n+1)}$. \begin{nota} Let $K$ be a $1$-dimensional labelled precubical set with $K_0=[p]$ for some $p\geq 2$. Then let \[\COSK(K):= \varinjlim_{n\geq 1} K^{(n)}.\] \end{nota} The important property of the $\COSK$ operator, also used in the proof of Proposition~\ref{p1eq}, is the following one, ensuring that the higher dimensional automata paradigm is indeed satisfied: \begin{prop} \label{pourquoi_non_degenere} (\cite{ccsprecub} Proposition~3.16) Let $\square[n]$ be a labelled precubical set with $n\geq 2$. Then one has the isomorphism of labelled precubical sets $\COSK(\square[n]_{\leq 1}) \cong \square[n]$. \end{prop} Roughly speaking, Proposition~\ref{pourquoi_non_degenere} states that for all $1\leq p\leq n-1$, there is a bijection between the non-twisted labelled $p$-shells of $\square[n]$ and the $(p+1)$-cubes of $\square[n]$. If the non-twistedness condition is removed, i.e. if we work with too naive a notion of labelled coskeleton construction as in \cite{exHDA}, then Proposition~\ref{pourquoi_non_degenere} is no longer true. Indeed, a naive labelled coskeleton construction adds too many cubes whereas the higher dimensional automata paradigm states that one must recover exactly $\square[n]$ from $\square[n]_{\leq 1}$, which corresponds to the concurrent execution of $n$ actions. That was the problem in K. Worytkiewicz's coskeleton construction, which was corrected in the companion paper \cite{ccsprecub}. \begin{defn} Let $K$ and $L$ be two labelled precubical sets. 
The {\rm synchronized tensor product} is by definition \[K \otimes_\sigma L := \varinjlim_{\square[m]\rightarrow K}\varinjlim_{\square[n]\rightarrow L} \COSK(Z)\] where $Z$ is the $1$-dimensional precubical set defined by: \begin{itemize} \item $Z_0:= \square[m]_0 \p \square[n]_0$ \item $Z_1:=(\square[m]_1 \p \square[n]_0) \oplus (\square[m]_0 \p \square[n]_1) \oplus \{(x,y)\in \square[m]_1 \p \square[n]_1, \overline{\ell(x)} = \ell(y)\}$ with the obvious definition of the face maps, and with the labelling $\ell(x,y) = \tau$ whenever $\overline{\ell(x)} = \ell(y)$. \end{itemize} \end{defn} A $1$-cube of the form $(x,y)$ models the synchronization of the action $\ell(x)$ with the complementary action $\ell(y) = \overline{\ell(x)}$; it is therefore labelled by the silent action $\tau$. \subsection*{Construction of the labelled precubical set of paths} \begin{defn} A labelled precubical set $\ell:K\rightarrow !\Sigma$ {\rm decorated by process names} is a labelled precubical set together with a set map $d:K_0 \rightarrow {\brm{Proc}}_\Sigma$ called the {\rm decoration}. \end{defn} Table~\ref{combsem} recalls the construction of the labelled precubical set $\square\llbracket P\rrbracket$ of \cite{ccsprecub}, by induction on the syntax of the process name. The labelled precubical set $\square\llbracket P\rrbracket$ has a unique initial state, canonically decorated by the process name $P$, and its other states are decorated as well in an inductive way. Therefore for every process name $P$, $\square\llbracket P\rrbracket$ is an object of $\{i\}{\downarrow} \square^{op}{\brm{Set}} {\downarrow} !\Sigma$. \begin{table} \begin{center} \begin{tabular}{|l|} \hline $\square\llbracket nil\rrbracket:=\square[0]$ \\ \hline $\square\llbracket \mu.nil\rrbracket:=\mu.nil \stackrel{(\mu)}\longrightarrow nil$\\ \hline $\xymatrix{ \square[0]=\{0\} \ar@{->}[r]^-{0\mapsto nil} \ar@{->}[d]^-{0\mapsto P} & \square\llbracket \mu.nil\rrbracket \ar@{->}[d] \\ \square\llbracket P\rrbracket \ar@{->}[r] & \cocartesien {\square\llbracket \mu.P\rrbracket}}$\\ \hline $\square\llbracket P+Q\rrbracket := \square\llbracket P\rrbracket \oplus \square\llbracket Q\rrbracket$\\ with the binary coproduct taken in $\{i\}{\downarrow} \square^{op}{\brm{Set}} {\downarrow} !\Sigma$ \\ \hline $\xymatrix{ \square\llbracket (\nu a) P\rrbracket \ar@{->}[r] \ar@{->}[d] \cartesien & \square\llbracket P\rrbracket \ar@{->}[d] \\ !(\Sigma\backslash (\{a,\overline{a}\})) \ar@{->}[r] & !\Sigma}$\\ \hline $\square\llbracket P||Q\rrbracket := \square\llbracket P\rrbracket \otimes_\sigma \square\llbracket Q\rrbracket$ \\ \hline $\square\llbracket \rec(x)P(x)\rrbracket:=\varinjlim\limits_n \square\llbracket P^n(nil)\rrbracket$\\ \hline \end{tabular} \caption{Combinatorial semantics of CCS} \label{combsem} \end{center} \end{table} \section{Categorical semantics of CCS} \label{catsem} The categorical semantics of CCS is obtained from the combinatorial semantics by applying the geometric realization functor $| - | : \square^{op}{\brm{Set}} \rightarrow {\brm{Flow}}$ introduced in \cite{ccsprecub}~\footnote{Of course, all theorems proved in the case of compactly generated topological spaces in \cite{ccsprecub} remain valid in the case of $\Delta$-generated topological spaces since they only depend on the model structure on ${\brm{Flow}}$.}. As already explained in \cite{ccsprecub}, it is necessary to use the model structure introduced in \cite{model3}, and adapted in \cite{interpretation-glob} to the framework of $\Delta$-generated topological spaces. Equivalent geometric realization functors are defined in \cite{realization}. We will use the construction of \cite{ccsprecub} in this paper. Let $Z$ be a topological space. 
The flow ${\rm{Glob}}(Z)$ is defined by \begin{itemize} \item ${\rm{Glob}}(Z)^0=\{\widehat{0},\widehat{1}\}$, \item $\P {\rm{Glob}}(Z)= \P_{\widehat{0},\widehat{1}} {\rm{Glob}}(Z) = Z$, \item $s=\widehat{0}$, $t=\widehat{1}$ and a trivial composition law. \end{itemize} It is called the \textit{globe} of the space $Z$. The model structure of \cite{interpretation-glob} is characterized as follows: \begin{itemize} \item The weak equivalences are the \textit{weak S-homotopy equivalences}, i.e. the morphisms of flows $f:X\longrightarrow Y$ such that $f^0:X^0\longrightarrow Y^0$ is a bijection of sets and such that $\P f:\P X\longrightarrow \P Y$ is a weak homotopy equivalence. \item The fibrations are the morphisms of flows $f:X\longrightarrow Y$ such that $\P f:\P X\longrightarrow \P Y$ is a Serre fibration. \end{itemize} This model structure is cofibrantly generated. The set of generating cofibrations is the set $I^{gl}_{+}=I^{gl}\cup \{R:\{0,1\}\longrightarrow \{0\},C:\varnothing\longrightarrow \{0\}\}$ with \[\boxed{I^{gl}=\{{\rm{Glob}}(\mathbf{S}^{n-1})\subset {\rm{Glob}}(\mathbf{D}^{n}), n\geq 0\}}\] where $\mathbf{D}^{n}$ is the $n$-dimensional disk and $\mathbf{S}^{n-1}$ the $(n-1)$-dimensional sphere. By convention, the $(-1)$-dimensional sphere is the empty space. The set of generating trivial cofibrations is \[\boxed{J^{gl}=\{{\rm{Glob}}(\mathbf{D}^{n}\p\{0\})\subset {\rm{Glob}}(\mathbf{D}^{n}\p [0,1]), n\geq 0\}}.\] The mapping from $\Obj(\square)$ (the set of objects of $\square$) to $\Obj({\brm{Flow}})$ (the class of flows) defined by $[0] \mapsto \{0\}$ and $[n] \mapsto \{\widehat{0}<\widehat{1}\}^n$ for $n\geq 1$ induces, by composition, a functor from the category $\square$ to the category ${\brm{Flow}}$: \[\square \subset {\brm{PoSet}} \longrightarrow {\brm{Flow}}.\] \begin{nota} A state of the flow $\{\widehat{0}<\widehat{1}\}^n$ is denoted by an $n$-tuple of elements of $\{\widehat{0},\widehat{1}\}$. The unique morphism/execution path from $(x_1,\dots,x_n)$ to $(y_1,\dots,y_n)$ is denoted by an $n$-tuple $(z_1,\dots,z_n)$ with $z_i=x_i$ if $x_i=y_i$ and $z_i=*$ if $x_i<y_i$. For example in the flow $\{\widehat{0}<\widehat{1}\}^2$ depicted in Figure~\ref{cube2}, one has the algebraic relation $(*,*) = (\widehat{0},*)*(*,\widehat{1}) = (*,\widehat{0}) * (\widehat{1},*)$. \end{nota} \begin{defn} \cite{ccsprecub} Let $K$ be a precubical set. By definition, the {\rm geometric realization} of $K$ is the flow \[\boxed{|K| := \varinjlim_{\square[n]\rightarrow K} (\{\widehat{0}<\widehat{1}\}^n)^{cof}}. \] \end{defn} The following proposition is helpful for understanding what this geometric realization functor is. The principle of its proof will be reused later in the paper. \begin{prop} \label{rea-hocolim} Let $K$ be a precubical set. One has a natural weak S-homotopy equivalence \[|K| \simeq \holiminj_{\square[n]\rightarrow K} \{\widehat{0}<\widehat{1}\}^n. \] \end{prop} \begin{proof} Consider the category of cubes $\square{\downarrow} K$ of $K$. 
It is defined by the pullback diagram of small categories \[ \xymatrix{ \square {\downarrow} K \fr{}\fd{}\cartesien && \square^{op}{\brm{Set}}{\downarrow} K \fd{}\\ &&\\ \square \fr{} && \square^{op}{\brm{Set}}.} \] In other words, an object of $\square {\downarrow} K$ is a morphism $\square[m]\rightarrow K$ and a morphism of $\square{\downarrow} K$ is a commutative diagram \[ \xymatrix{ \square[m] \ar@{->}[rd] \ar@{->}[rr] && \square[n] \ar@{->}[ld]\\ & K.&} \] The category $\square{\downarrow} K$ is a direct Reedy category with the degree function $d(\square[n]\rightarrow K)=n$. Since this Reedy category is direct, the matching category is always empty. So the Reedy category $\square{\downarrow} K$ has fibrant constants by \cite{ref_model2} Proposition~15.10.2, and the colimit functor $\varinjlim:{\brm{Flow}}^{\square{\downarrow} K} \rightarrow {\brm{Flow}}$ is a left Quillen functor by \cite{ref_model2} Theorem~15.10.8 when ${\brm{Flow}}^{\square{\downarrow} K}$ is equipped with the Reedy model structure. Consider the functor $D:\square{\downarrow} K \rightarrow {\brm{Flow}}$ defined by $D(\square[n]\rightarrow K) := (\{\widehat{0}<\widehat{1}\}^n)^{cof}$. It remains to check that the diagram $D$ is Reedy cofibrant, and the proof will be complete. By definition of the Reedy model structure, it suffices to show that for all $n\geq 0$ and every object $\alpha = (\square[n]\rightarrow K)$, the map $L_{\alpha}D \rightarrow D(\alpha)$ is a cofibration, where $L_{\alpha}D$ is the latching object at $\alpha$. It is easy to see that the latter map is the morphism of flows $|\partial\square[n] \subset \square[n]|$, which is a cofibration of flows by Theorem~\ref{pasdepb} below. \end{proof} \begin{figure} \[ \xymatrix{ (\widehat{0},\widehat{0}) \fr{(\widehat{0},*)}\fd{(*,\widehat{0})}\ar@{->}[ddrr]^-{(*,*)} && (\widehat{0},\widehat{1})\fd{(*,\widehat{1})}\\ && \\ (\widehat{1},\widehat{0}) \fr{(\widehat{1},*)}&& (\widehat{1},\widehat{1})} \] \caption{The flow $|\square[2]|_{bad} = \{\widehat{0}<\widehat{1}\}^2$ ($(*,*) = (\widehat{0},*)*(*,\widehat{1}) = (*,\widehat{0}) * (\widehat{1},*)$)} \label{cube2} \end{figure} The functor $[n] \mapsto \{\widehat{0}<\widehat{1}\}^n$ from $\square$ to ${\brm{Flow}}$ also induces a \textit{bad} realization functor from $\square^{op}{\brm{Set}}$ to ${\brm{Flow}}$ defined by \[\boxed{|K|_{bad} := \varinjlim_{\square[n]\rightarrow K} \{\widehat{0}<\widehat{1}\}^n}.\] This functor is a bad realization because of the following bad behaviour: \begin{thm} \label{pbpb} (\cite{ccsprecub} Theorem~7.2) Let $n\geq 3$. The inclusion of precubical sets $\partial\square[n] \subset \square[n]$ induces an isomorphism $|\partial\square[n]|_{bad} \cong |\square[n]|_{bad}$. \eth On the contrary, the geometric realization functor is well-behaved: \begin{thm} \label{pasdepb} (\cite{ccsprecub} Proposition~7.6 and \cite{ccsprecub} Theorem~7.8) For any $n\geq 0$, the map of flows $|\partial\square[n] \subset \square[n]|$ is a non-trivial cofibration of flows. Moreover, the path space $\P_{\widehat{0}\dots\widehat{0}, \widehat{1}\dots\widehat{1}}|\partial\square[n]|$ is homotopy equivalent to $\mathbf{S}^{n-2}$ and the path space $\P_{\widehat{0}\dots\widehat{0}, \widehat{1}\dots\widehat{1}}|\square[n]|$ is contractible. \eth Let $K\rightarrow !\Sigma$ be a labelled precubical set. Then the composition $|K| \rightarrow |!\Sigma| \rightarrow |!\Sigma|_{bad} \cong ?\Sigma$ gives rise to a labelled flow by \cite{ccsprecub} Proposition~8.1. 
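As an illustration, consider again the labelled $2$-cube $\ell:\square[2]\rightarrow !\Sigma$ of Figure~\ref{exab}; the following sanity check is computed directly from the definitions above. The space $\P_{(\widehat{0},\widehat{0}),(\widehat{1},\widehat{1})}(|\square[2]|)$ is path-connected by Theorem~\ref{pasdepb} and the path space of $?\Sigma$ is discrete. So the composite labelling $|\square[2]| \rightarrow ?\Sigma$ takes every execution path from the initial state to the final state to the single element \[a*b = b*a \in \P ?\Sigma,\] in accordance with the labelled flow of Figure~\ref{exab2}. 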
\begin{nota} For every process name $P$, let $\llbracket P\rrbracket := | \square\llbracket P\rrbracket |$. The flow $\llbracket P\rrbracket$ is always cofibrant by \cite{ccsprecub} Proposition~7.7. \end{nota} Note that a decorated labelled precubical set gives rise to a decorated labelled flow in the following sense: \begin{defn} A labelled flow $\ell:X \rightarrow ?\Sigma$ {\rm decorated by process names} is a labelled flow together with a set map $d:X^0 \rightarrow {\brm{Proc}}_\Sigma$ called the {\rm decoration}. \end{defn} \section{Relevance of weak S-homotopy for concurrency theory} \label{relevance} The translation of the combinatorial semantics of CCS into a categorical semantics in terms of flows requires the use of \textit{non-canonical} constructions, more precisely a non-canonical choice of a cofibrant replacement functor and, later, non-canonical choices of homotopy limits and homotopy colimits. The following theorem is therefore very important: \begin{thm} \label{noncan} For any flow $X$, there exists at most one precubical set $K$ up to isomorphism such that $|K| \simeq X$. In other words, the functor \[K\mapsto \hbox{weak S-homotopy type of }|K|\] from $\square^{op}{\brm{Set}}$ to the homotopy category of flows ${\mathbf{Ho}}({\brm{Flow}})$ reflects isomorphisms. \eth The precubical set $K$ does not necessarily exist. For example, ${\rm{Glob}}(\mathbf{S}^1)$ is not weakly S-homotopy equivalent to any geometric realization of any precubical set. Indeed, if there existed a precubical set $K$ with $|K| \simeq {\rm{Glob}}(\mathbf{S}^1)$, then $K$ would have a unique initial state $\widehat{0}$ and a unique final state $\widehat{1}$, so $K_0 = \{\widehat{0},\widehat{1}\}$. Hence the only possibility is a set of $1$-cubes from $\widehat{0}$ to $\widehat{1}$. Thus the space $\P(|K|)$ would be homotopy equivalent to a discrete space. Before proving Theorem~\ref{noncan}, we need to establish several preliminary results involving among other things the simplicial structure of the category of flows. In any flow $X$, if two execution paths $x$ and $y$ are in the same path-connected component of some $\P_{\alpha,\beta}X$, then there exists a continuous map $\phi:[0,1] \rightarrow \P_{\alpha,\beta}X$ with $\phi(0) = x$ and $\phi(1) = y$. So for any execution path $z$ such that $x*z$ and $y*z$ exist, the continuous map $\psi$ from $[0,1]$ to $\P X$ defined by $\psi(t)=\phi(t)*z$ is a continuous path from $x*z$ to $y*z$. Hence: \begin{nota} Any flow $X$ induces a flow enriched over sets, denoted by $\widehat{\pi}_0(X)$, defined by $\widehat{\pi}_0(X)^0=X^0$, $\P\widehat{\pi}_0(X)=\pi_0(\P X)$ where $\pi_0$ is the path-connected component functor and with the composition law induced by the one of $X$. \end{nota} \begin{nota} Let ${\brm{Flow}}({\brm{Set}})$ be the category of flows enriched over sets, i.e. of small categories without identity maps. \end{nota} \begin{prop} \label{left-adjoint} The functor $\widehat{\pi}_0:{\brm{Flow}} \rightarrow {\brm{Flow}}({\brm{Set}})$ is a left adjoint. In particular, it is colimit-preserving. \end{prop} \begin{proof} It suffices to prove that the path-connected component functor $\pi_0:\top \rightarrow {\brm{Set}}$ is a left adjoint (let us repeat that we are working with $\Delta$-generated topological spaces). Here are two possible arguments: \begin{enumerate} \item Every space is homeomorphic to the disjoint sum of its path-connected components by \cite{interpretation-glob} Proposition~2.8. 
In fact, a space is even connected if and only if it is path-connected. So it is easy to see that the right adjoint is the functor from ${\brm{Set}}$ to $\top$ taking a set $S$ to the discrete space $S$. \item The functor from ${\brm{Set}}$ to $\top$ taking a set $S$ to the discrete space $S$ commutes with limits because there are no non-discrete totally disconnected $\Delta$-generated spaces, and with colimits as in the category of general topological spaces. In particular it is accessible. So by \cite{MR95j:18001} Theorem~1.66, it has a left adjoint and it is easy to see that the left adjoint is the path-connected component functor. \end{enumerate} \end{proof} \begin{prop} \label{P-accessible} (Compare with \cite{interpretation-glob} Proposition~4.9) The path space functor $\P:{\brm{Flow}} \rightarrow \top$ is a right adjoint. In particular, it is accessible. \end{prop} In fact, the functor $\P:{\brm{Flow}} \rightarrow \top$ is of course finitely accessible. \begin{proof} Let $Z$ be a topological space. By \cite{interpretation-glob} Proposition~2.8, $Z$ is homeomorphic to the disjoint union of its path-connected components. Let us write this as \[Z\cong \bigsqcup_{Z_i\in\pi_0(Z)} Z_i.\] Then one has for any flow $X$ \begin{eqnarray*} \top(Z,\P X) &\cong& \prod_{Z_i\in\pi_0(Z)} \top(Z_i,\P X) \\ &\cong& \prod_{Z_i\in\pi_0(Z)} {\brm{Flow}}({\rm{Glob}}(Z_i),X) \\ &\cong& {\brm{Flow}}(\bigsqcup_{Z_i\in\pi_0(Z)}{\rm{Glob}}(Z_i),X). \end{eqnarray*} So the path space functor $\P:{\brm{Flow}} \rightarrow \top$ is accessible by \cite{MR95j:18001} Theorem~1.66. \end{proof} \begin{prop} \label{inc-path} Let $i:A\rightarrow X$ be a cofibration of flows between cofibrant flows. Then the continuous map $\P i:\P A \rightarrow \P X$ is a cofibration between cofibrant spaces. \end{prop} Note that Proposition~\ref{inc-path} remains true if we only suppose that the space $\P A$ is cofibrant. Proposition~\ref{inc-path} is a generalization of \cite{interpretation-glob} Proposition~7.5. \begin{proof} Let us first suppose that there is a pushout diagram of flows \[ \xymatrix{ {\rm{Glob}}(\mathbf{S}^{n-1}) \fr{} \fd{} && A \fd{i} \\ &&\\ {\rm{Glob}}(\mathbf{D}^{n}) \fr{} && X.\cocartesien} \] By \cite{model3} Proposition~15.1, the continuous map $\P i:\P A \rightarrow \P X$ is a transfinite composition of pushouts of maps of the form $\id_{X_1}\p\dots\p i_n\p \dots \p \id_{X_p}$ where the spaces $X_i$ are spaces of the form $\P_{\alpha,\beta}A$ and where $i_n:\mathbf{S}^{n-1}\subset \mathbf{D}^n$ is the inclusion with $n\geq 0$. Any space of the form $\P_{\alpha,\beta}A$ is cofibrant by \cite{interpretation-glob} Proposition~7.5 since $A$ is cofibrant. So the map $\P i:\P A \rightarrow \P X$ is a cofibration because the model category $(\top,\p)$ is monoidal. Let us now treat the general case. The cofibration $i$ is a retract of a map $j:A\rightarrow Y$ of $\cell(I^{gl}_{+})$ by a map which fixes $A$ by \cite{MR99h:55031} Corollary~2.1.15. So the continuous map $\P i : \P A \rightarrow \P X$ is a retract of the continuous map $\P j : \P A \rightarrow \P Y$. The map of flows $j : A \rightarrow Y$ is the composition of a transfinite sequence $Z : \lambda \rightarrow {\brm{Flow}}$ for some ordinal $\lambda$ with $Z_0 = A$. By Proposition~\ref{P-accessible}, one has the homeomorphism $\varinjlim \P Z_\alpha \cong \P Y$. The first part of this proof implies that $\P j : \P A \rightarrow \P Y$ is then a cofibration of spaces, and therefore that $\P i:\P A \rightarrow \P X$ is a cofibration as well. 
\end{proof} \begin{nota} The associative monoid without unit $(\mathbb{N}^{*},+)$ of strictly positive integers together with the addition can be viewed as a flow with one object, with the discrete path space $\mathbb{N}^{*}$, and with the composition law $+$. \end{nota} \begin{nota} Let $K$ be a precubical set. Let $K_{\leq n}$ be the precubical set obtained from $K$ by keeping the $p$-dimensional cubes of $K$ only for $p\leq n$. In particular, $K_{\leq 0}=K_0$. \end{nota} \begin{prop} \label{length} Let $K$ be a precubical set. There exists a unique morphism of flows $L_K:\widehat{\pi}_0(|K|) \rightarrow \mathbb{N}^{*}$, natural with respect to $K$, such that for any $x \in K_1$, for any $z \in \P(|\square[1]|)$, one has $L_K(|x|(z)) = 1$. \end{prop} \begin{proof} We construct $L_K:\widehat{\pi}_0(|K_{\leq n}|) \rightarrow \mathbb{N}^{*}$ for any precubical set $K$ by induction on $n\geq 0$. There is nothing to do for $n=0$. The passage from $|K_{\leq n}|$ to $|K_{\leq n+1}|$ is done as usual by the following pushout diagram of flows: \[ \xymatrix{ \bigsqcup_{x:\partial\square[n+1]\rightarrow K} |\partial\square[n+1]| \fd{}\fr{} && |K_{\leq n}|\fd{}\\ &&\\ \bigsqcup_{x:\partial\square[n+1]\rightarrow K} |\square[n+1]| \fr{}&& \cocartesien {|K_{\leq n+1}|}} \] where the sum is over all $n$-shells $x:\partial\square[n+1]\subset \square[n+1]\rightarrow K$. Let $n\geq 0$. By the induction hypothesis, the morphisms of flows $\widehat{\pi}_0(|\partial\square[n+1]|) \rightarrow \mathbb{N}^{*}$ and $\widehat{\pi}_0(|K_{\leq n}|) \rightarrow \mathbb{N}^{*}$ are already defined. We know that the map of flows $|\partial\square[n+1]| \rightarrow |\square[n+1]|$ is a cofibration by Theorem~\ref{pasdepb}. In fact, this map of flows induces the identity maps $\P_{\alpha,\beta} (|\partial\square[n+1]|) = \P_{\alpha,\beta} (|\square[n+1]|)$ for $(\alpha,\beta) \neq (\widehat{0}\dots\widehat{0}, \widehat{1}\dots\widehat{1})$ and a non-trivial cofibration~\footnote{Let us recall that the space $\P_{\widehat{0}\dots\widehat{0},\widehat{1}\dots\widehat{1}}(|\square[n+1]|)$ is contractible and that by Theorem~\ref{pasdepb}, there is a homotopy equivalence $\P_{\widehat{0}\dots\widehat{0},\widehat{1}\dots\widehat{1}}(|\partial\square[n+1]|) \simeq \mathbf{S}^{n-1}$.} between cofibrant spaces $\P_{\widehat{0}\dots\widehat{0},\widehat{1}\dots\widehat{1}}(|\partial\square[n+1]|) \rightarrow \P_{\widehat{0}\dots\widehat{0},\widehat{1}\dots\widehat{1}}(|\square[n+1]|)$ by Proposition~\ref{inc-path}. Then let $L_{\square[n+1]}(x)=n+1$ for any $x \in \P_{\widehat{0}\dots\widehat{0},\widehat{1}\dots\widehat{1}}(|\square[n+1]|)$. One obtains the commutative square in ${\brm{Flow}}({\brm{Set}})$: \[ \xymatrix{ \bigsqcup_{x:\partial\square[n+1]\rightarrow K} \widehat{\pi}_0(|\partial\square[n+1]|) \fd{}\fr{} && \widehat{\pi}_0(|K_{\leq n}|)\fd{}\\ &&\\ \bigsqcup_{x:\partial\square[n+1]\rightarrow K} \widehat{\pi}_0(|\square[n+1]|) \fr{}&& {\mathbb{N}^{*}}.} \] By Proposition~\ref{left-adjoint} and by the universal property of the pushout, one obtains the natural map $\widehat{\pi}_0(|K_{\leq n+1}|) \rightarrow \mathbb{N}^{*}$. Since the functor $K\mapsto \widehat{\pi}_0(|K|)$ is a left adjoint, one obtains a natural map $\widehat{\pi}_0(|K|) \cong \varinjlim \widehat{\pi}_0(|K_{\leq n}|) \rightarrow \mathbb{N}^{*}$. \end{proof} \begin{defn} The integer $L_K(x)$ for $x\in \P(|K|)$ is called the {\rm length} of $x$. 
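For instance, by the construction in the proof above, every execution path of $|\square[n]|$ from the initial state $\widehat{0}\dots\widehat{0}$ to the final state $\widehat{1}\dots\widehat{1}$ has length $n$. 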
\end{defn} Proposition~\ref{length} means that the length of $x\in \P(|K|)$ satisfies the following (intuitive) algebraic rules: \begin{itemize} \item $L_K(x*y) = L_K(x) + L_K(y)$ if $x$ and $y$ are composable \item $L_K(x) = L_K(y)$ if $x$ and $y$ are in the same path-connected component of the space $\P(|K|)$ \item $L_K(x)=1$ if $x$ corresponds to an edge, i.e. a $1$-cube, of the precubical set $K$ \item the naturality of the morphism of flows $L_K : \widehat{\pi}_0(|K|) \rightarrow \mathbb{N}^{*}$ means that length is preserved by maps of precubical sets. \end{itemize} The model category ${\brm{Flow}}$ is simplicial by \cite{realization} Section~3 and \cite{interpretation-glob} Appendix~B. Let $\map(X,Y)$ be the function complex from $X$ to $Y$. It is equal to the simplicial nerve of the space ${\brm{FLOW}}(X,Y)$ of morphisms of flows from $X$ to $Y$ equipped with the Kelleyfication of the relative topology. \begin{prop} \label{inj} Let $K$ be a precubical set. Let $n\geq 0$. The natural set map $K_n \rightarrow {\brm{Flow}}(|\square[n]|,|K|)$ defined by taking $x\in K_n$ to $|x|:|\square[n]| \rightarrow |K|$ is one-to-one. \end{prop} \begin{proof} One has $|K| = \varinjlim_{\square[n]\rightarrow K} |\square[n]|$ by definition. If $x$ and $y$ are two different $n$-cubes of $K$, then they correspond to two different copies of $|\square[n]|$ in the colimit calculating $|K|$. Let \[\gamma\in \P_{\widehat{0}\dots\widehat{0},\widehat{1}\dots\widehat{1}}|\square[n]| \backslash \P_{\widehat{0}\dots\widehat{0},\widehat{1}\dots\widehat{1}}|\partial\square[n]|.\] Then $|x|(\gamma) \neq |y|(\gamma)$. Therefore $|x| \neq |y|$. \end{proof} \begin{nota} Let $K$ be a precubical set. The precubical set $\widehat{K}$ is defined by \[\boxed{\widehat{K} = \pi_0 \map(|\square[*]|,|K|) = \pi_0 {\brm{FLOW}}(|\square[*]|,|K|)}.\] \end{nota} Since $|\square[n]|$ is cofibrant and since all flows are fibrant, the function complex $\map(|\square[n]|,|K|)$ is weakly equivalent to the homotopy function complex from $|\square[n]|$ to $|K|$. Thus $\widehat{K}_n = {\mathbf{Ho}}({\brm{Flow}})(|\square[n]|,|K|)$ for all $n\geq 0$ where ${\mathbf{Ho}}({\brm{Flow}})$ is the homotopy category of ${\brm{Flow}}$. The natural map of precubical sets $K \rightarrow {\brm{Flow}}(|\square[*]|,|K|)$ induces a natural map of precubical sets $K\rightarrow \widehat{K}$. \begin{prop} \label{inj2} Let $K$ be a precubical set. Let $n\geq 0$. The continuous map $j_n:{\brm{FLOW}}(|\square[n]|,|K_{\leq n}|) \rightarrow {\brm{FLOW}}(|\square[n]|,|K|)$ induced by the inclusion of precubical sets $K_{\leq n} \subset K$ is an inclusion of $\Delta$-generated spaces in the sense that one has a homeomorphism \[{\brm{FLOW}}(|\square[n]|,|K_{\leq n}|) \cong j_n({\brm{FLOW}}(|\square[n]|,|K_{\leq n}|))\] with the right-hand topological space equipped with the Kelleyfication of the relative topology. \end{prop} \begin{proof}[Sketch of proof] The map $j_n:{\brm{FLOW}}(|\square[n]|,|K_{\leq n}|) \rightarrow {\brm{FLOW}}(|\square[n]|,|K|)$ is clearly one-to-one. It suffices to prove that for any continuous map $\phi : Z \rightarrow {\brm{FLOW}}(|\square[n]|,|K|)$ such that $\phi(Z)\subset j_n({\brm{FLOW}}(|\square[n]|,|K_{\leq n}|))$, the unique set map $Z\rightarrow {\brm{FLOW}}(|\square[n]|,|K_{\leq n}|)$ induced by $\phi$ is continuous. By Theorem~\ref{pasdepb}, the map $|K_{\leq n}| \rightarrow |K|$ is a cofibration of flows. 
One has $|K_{\leq n}|^0 = |K|^0 = K_0$ and the continuous map $\P(|K_{\leq n}|) \rightarrow \P(|K|)$ is a cofibration of spaces by Proposition~\ref{inc-path}. So the latter continuous map is a closed $T_1$-inclusion of general topological spaces by \cite{MR99h:55031} Lemma~2.4.5, and also an inclusion of $\Delta$-generated spaces. By \cite{interpretation-glob} Appendix~B, the category of flows enriched over $\Delta$-generated topological spaces is tensored and cotensored over the $\Delta$-generated spaces in the sense of \cite{strom2}. So the continuous map $\phi : Z \rightarrow {\brm{FLOW}}(|\square[n]|,|K|)$ corresponds by adjunction to a morphism of flows $|\square[n]| \otimes Z \rightarrow |K|$. By hypothesis, the map $\P \phi$ factors uniquely, as a set map, as a composite \[\P(|\square[n]| \otimes Z) \rightarrow \P(|K_{\leq n}|) \rightarrow \P(|K|).\] Since the right-hand map is a closed $T_1$-inclusion of general topological spaces, the left-hand map $\P(|\square[n]| \otimes Z) \rightarrow \P(|K_{\leq n}|)$ is continuous. Hence the factorization $|\square[n]| \otimes Z \rightarrow |K_{\leq n}| \rightarrow |K|$. By adjunction, one obtains the continuous map $Z\rightarrow {\brm{FLOW}}(|\square[n]|,|K_{\leq n}|)$. \end{proof} \begin{prop} \label{reflete} The functor $K\mapsto \widehat{K}$ reflects isomorphisms, i.e. a map of precubical sets $f:K \rightarrow L$ is an isomorphism if and only if the map of precubical sets $\widehat{f}:\widehat{K} \rightarrow \widehat{L}$ is an isomorphism. \end{prop} \begin{proof} It turns out that the natural map of precubical sets $K\rightarrow \widehat{K}$ is a monomorphism. Indeed, take two elements $x$ and $y$ of $K_n$ such that $|x|$ and $|y|$ are in the same path-connected component of ${\brm{FLOW}}(|\square[n]|,|K|)$. By definition, there exists a continuous map $\phi : [0,1] \rightarrow {\brm{FLOW}}(|\square[n]|,|K|)$ such that $\phi(0) = |x|$ and $\phi(1) = |y|$. For any $z\in \P (|\square[n]|)$, one has the inequality $L_K(\phi(t)(z)) \leq n$ for all $t\in [0,1]$: indeed, $L_K(\phi(0)(z)) = L_K(|x|(z)) = L_{\square[n]}(z) \leq n$ since length is preserved by maps of precubical sets, and the map $t \mapsto L_K(\phi(t)(z))$ is constant because the path $t\mapsto \phi(t)(z)$ stays in a single path-connected component of $\P(|K|)$. But any execution path of $\P(|K|)\backslash \P(|K_{\leq n}|)$ is of length strictly greater than $n$. So the map $\phi$ factors uniquely as a composite $[0,1] \rightarrow {\brm{FLOW}}(|\square[n]|,|K_{\leq n}|) \rightarrow {\brm{FLOW}}(|\square[n]|,|K|)$ by Proposition~\ref{inj2}. Since a non-trivial homotopy $\phi$ would necessarily use higher dimensional cubes of $K\backslash K_{\leq n}$, the homotopy $\phi$ is trivial. Therefore $|x| = |y|$, and by Proposition~\ref{inj} one obtains $x = y$. So the precubical set $K$ is naturally isomorphic to a precubical subset of $\widehat{K}$. Take a map $f:K\rightarrow L$. Then, by naturality, there is a commutative square of precubical sets \[ \xymatrix{ K\fd{f} \fr{} && \widehat{K} \fd{\widehat{f}}\\ &&\\ L \fr{}&& \widehat{L}.} \] If $f$ is not an isomorphism, then two situations may happen: \begin{itemize} \item There exist $n\geq 0$ and two distinct $n$-cubes $x$ and $y$ of $K$, and therefore of $\widehat{K}$, with $f(x) = f(y)$. Then $\widehat{f}(x) = \widehat{f}(y)$ and therefore $\widehat{f}$ is not an isomorphism. \item There exist $n\geq 0$ and an $n$-cube $x$ of $L$ which does not belong to the image of $f$. Since the map $\widehat{f}$ factors as a composite $\widehat{K} \rightarrow \widehat{f(K)} \rightarrow \widehat{L}$, the $n$-cube $x$ has no preimage under $\widehat{f}$. So $\widehat{f}$ is not an isomorphism. 
\end{itemize} \end{proof} \begin{proof}[Proof of Theorem~\ref{noncan}] Let $K$ and $L$ be two precubical sets with $|K|\simeq |L|$. For all $n\geq 0$, the functor $\map(|\square[n]|,-):{\brm{Flow}} \rightarrow \Delta^{op}\set$ preserves weak equivalences between fibrant objects by \cite{ref_model2} Corollary~9.3.3 since this functor is a right Quillen functor. So there is an isomorphism $\widehat{K}\cong \widehat{L}$ since both $|K|$ and $|L|$ are fibrant~\footnote{All flows are actually fibrant.}. And by Proposition~\ref{reflete}, one obtains an isomorphism $K\cong L$. \end{proof} In conclusion, we can safely work up to weak S-homotopy without losing any kind of computer-scientific information already present in the structure of the precubical set. \section{Effect of the geometric realization functor when it is a left adjoint} \label{effect} One has the isomorphism $\llbracket P+Q\rrbracket \cong \llbracket P\rrbracket \oplus \llbracket Q\rrbracket$ in $\{i\}{\downarrow} {\brm{Flow}} {\downarrow} ?\Sigma$ since the geometric realization functor is a left adjoint. \begin{prop} \label{L1} One has the pushout diagram of labelled flows \[ \xymatrix{ \{0\} \fr{0\mapsto nil} \fd{0\mapsto i} && \llbracket \mu.nil\rrbracket \fd{} \\ &&\\ \llbracket P\rrbracket \fr{} && \cocartesien {\llbracket \mu.P\rrbracket}}\] and this diagram is also a homotopy pushout diagram. \end{prop} \begin{proof} The diagram above is a pushout diagram since the geometric realization functor is a left adjoint. This diagram is also a homotopy pushout diagram by \cite{MR99h:55031} Lemma~5.2.6 since the three flows $\{0\}$, $\llbracket \mu.nil\rrbracket$ and $\llbracket P\rrbracket$ are cofibrant and since the map $\{0\} \rightarrow \llbracket P\rrbracket$ is a cofibration. \end{proof} \begin{prop} \label{L2} Let $P(x)$ be a process name with one free guarded variable $x$. Then one has the isomorphism \[ \llbracket \rec(x)P(x)\rrbracket \cong \varinjlim_n \llbracket P^n(nil)\rrbracket\] and the colimit is also a homotopy colimit. \end{prop} \begin{proof} The isomorphism comes again from the fact that the geometric realization functor is a left adjoint. The tower of flows $n\mapsto \llbracket P^n(nil)\rrbracket$ is a tower of cofibrant flows and each map $\llbracket P^n(nil)\rrbracket \rightarrow \llbracket P^{n+1}(nil)\rrbracket$ is a cofibration by Theorem~\ref{pasdepb}. So the colimit is also a homotopy colimit by \cite{ref_model2} Proposition~15.10.12. \end{proof} \begin{prop} \label{unpullback} Let $K \rightarrow !\Sigma$ be a labelled precubical set. Let $\Sigma' \subset \Sigma$. Consider the pullback diagram of precubical sets \[ \xymatrix{ L \cartesien \fr{} \fd{} && K \fd{} \\ && \\ !\Sigma' \fr{} && !\Sigma.} \] Then the commutative diagram of flows \[ \xymatrix{ |L| \fr{} \fd{} && |K| \fd{} \\ && \\ ?\Sigma' \fr{} && ?\Sigma} \] obtained by taking the realization of the first diagram and by composing with the commutative square \[ \xymatrix{ |!\Sigma'| \fr{} \fd{} && |!\Sigma| \fd{} \\ && \\ ?\Sigma' \fr{} && ?\Sigma} \] is a pullback and a homotopy pullback diagram of flows. 
\end{prop} \begin{proof} It is well-known that every precubical set $K$ is a $\{\partial\square[n] \subset \square[n],n\geq 0\}$-cell complex since the passage from $K_{\leq n-1}$ to $K_{\leq n}$ for $n\geq 1$ is done by the following pushout diagram: \[ \xymatrix{ \bigsqcup_{x\in K_n} \partial\square[n] \fr{} \fd{} && K_{\leq n-1} \fd{} \\ && \\ \bigsqcup_{x\in K_n} \square[n] \fr{} && K_{\leq n}\cocartesien} \] where the map $\partial\square[n] \rightarrow K_{\leq n-1}$ indexed by $x\in K_n$ is induced by the $(n-1)$-shell $\partial\square[n] \subset \square[n] \stackrel{x}\rightarrow K$. One also has the pullback diagram of sets \[ \xymatrix{ L_n \cong \square^{op}{\brm{Set}}(\square[n],L) \cartesien \fr{} \fd{} && K_n \cong \square^{op}{\brm{Set}}(\square[n],K) \fd{} \\ && \\ (!\Sigma')_n \cong \square^{op}{\brm{Set}}(\square[n],!\Sigma') \fr{} && (!\Sigma)_n \cong \square^{op}{\brm{Set}}(\square[n],!\Sigma)} \] by the Yoneda lemma and because pullbacks are calculated pointwise in the category of precubical sets. So the precubical set $L$ is the $\{\partial\square[n] \subset \square[n],n\geq 0\}$-cell subcomplex obtained by keeping the cells $\partial\square[n] \subset \square[n]$ induced by the $n$-dimensional cubes $\square[n] \rightarrow K$ such that the composite $\square[n] \rightarrow K \rightarrow !\Sigma$ factors as a composite $\square[n] \rightarrow !\Sigma' \rightarrow !\Sigma$. Thus, the map $L \rightarrow K$ is a relative $\{\partial\square[n] \subset \square[n],n\geq 0\}$-cell complex. One has the bijection $( !\Sigma')_0 \cong ( !\Sigma)_0$. Therefore $L_0\cong K_0$ and the map $L \rightarrow K$ is a relative $\{\partial\square[n] \subset \square[n],n\geq 1\}$-cell complex. Since the realization functor $K \mapsto |K|$ is a left adjoint, the map $|L| \rightarrow |K|$ is then a relative $\{|\partial\square[n]| \subset |\square[n]|,n\geq 1\}$-cell complex. By Theorem~\ref{pasdepb}, we deduce that the map $|L| \rightarrow |K|$ is a cofibration of flows with $|L|^0=|K|^0$. By Proposition~\ref{inc-path}, the continuous map $\P(|L|) \rightarrow \P(|K|)$ is a [closed $T_1$-]inclusion of general topological spaces in the sense that for any continuous map $f:Z\rightarrow \P(|K|)$ such that $f(Z)$ is in the image of $\P(|L|)$, there exists a unique continuous map $\overline{f}:Z \rightarrow \P(|L|)$ such that the composition $Z \rightarrow \P(|L|) \rightarrow \P(|K|)$ is equal to $f$. Consider a commutative diagram of flows \[ \xymatrix{ W \ar@{-->}[rd]^-{k}\ar@/^10pt/[rrrd]^-{u}\ar@/_10pt/[dddr]_-{v}& && \\ &|L| \fr{} \fd{} && |K| \fd{\ell} \\ &&& \\ &?\Sigma' \fr{} && ?\Sigma } \] Let $\gamma\in\P W$. By definition, one has \[|K| =\varinjlim\limits_{\square[n] \rightarrow K} (\{\widehat{0}<\widehat{1}\}^n)^{cof}\] with one copy of $(\{\widehat{0}<\widehat{1}\}^n)^{cof}$ corresponding to one element $x \in K_n$. Thus, $u(\gamma) = \gamma_1 *\dots * \gamma_r$ with $\gamma_i\in \P (\{\widehat{0}<\widehat{1}\}^{n_i})^{cof}$ corresponding to an $n_i$-dimensional cube $x_i$ of $K$. And $\ell(\gamma_1 *\dots * \gamma_r) = a_1*\dots*a_s$ with $a_i\in \Sigma'$ for all $i=1,\dots,s$ (note that $r$ is not necessarily equal to $s$). By construction of $L$, the $n_i$-dimensional cube $x_i$ of $K$ then belongs to $L$. By definition, one has \[|L| =\varinjlim\limits_{\square[n] \rightarrow L} (\{\widehat{0}<\widehat{1}\}^n)^{cof}\] with one copy of $(\{\widehat{0}<\widehat{1}\}^n)^{cof}$ corresponding to one element $x \in L_n$. 
So $u(\gamma)$ belongs to the image of the inclusion of spaces $\P (|L|) \rightarrow \P (|K|)$. Hence the existence and the uniqueness of $k$. So the commutative square \[ \xymatrix{ |L| \fr{} \fd{} && |K| \fd{} \\ && \\ ?\Sigma' \fr{} && ?\Sigma} \] is a pullback diagram of flows. A map of flows $f:X \rightarrow Y$ is a fibration if and only if the continuous map $\P f:\P X \rightarrow \P Y$ is a Serre fibration. Therefore all objects of ${\brm{Flow}}$ are fibrant. And the map $|K| \rightarrow ?\Sigma$ is a fibration of flows since the path space $\P(?\Sigma)$ is discrete. Thus, the pullback diagram \[ \xymatrix{ |L| \cartesien \fr{} \fd{} && |K| \fd{} \\ && \\ ?\Sigma' \fr{} && ?\Sigma} \] is also a homotopy pullback diagram of flows by e.g. \cite{MR99h:55031} Lemma~5.2.6. \end{proof} \begin{cor} \label{L3} Let $P$ be a process name. Then the commutative diagram \[ \xymatrix{ \llbracket (\nu a)P\rrbracket \fr{} \fd{} && \llbracket P\rrbracket \fd{} \\ && \\ ?(\Sigma\backslash (\{a,\overline{a}\})) \fr{} && ?\Sigma} \] is both a pullback diagram and a homotopy pullback diagram of flows. \end{cor} The following proposition is crucial for getting rid of the coskeleton construction in the interpretation of the parallel composition with synchronization. \begin{prop} \label{p1eq} Let $\square[m]$ be a labelled $m$-cube with $m\geq 0$. Let $\square[n]$ be a labelled $n$-cube with $n\geq 0$. Then the map $|\square[m]\otimes_\sigma \square[n]| \rightarrow |\square[m]\otimes_\sigma \square[n]|_{bad}$ is a trivial fibration of flows. \end{prop} \begin{proof} By Theorem~\ref{pbpb}, which says that $|\partial\square[n]|_{bad}\cong |\square[n]|_{bad}$ for $n\geq 3$, and since the bad geometric realization is a left adjoint, one has the pushout diagram of flows: \[ \xymatrix{ \bigsqcup\limits_{\hbox{labelled $1$-shells}} |\partial\square[2]|_{bad}\fd{} \fr{} && |(\square[m]\otimes_\sigma \square[n])_{\leq 1}|_{bad}\fd{} \\ && \\ \bigsqcup\limits_{\hbox{labelled $1$-shells}} |\square[2]|_{bad} \fr{} && \cocartesien {|\square[m]\otimes_\sigma \square[n]|_{bad}}} \] The path space $\P(|(\square[m]\otimes_\sigma \square[n])_{\leq 1}|_{bad})$ consists of the free compositions of (composable) $1$-cubes of $\square[m]\otimes_\sigma \square[n]$. The effect of the map $\P(|(\square[m]\otimes_\sigma \square[n])_{\leq 1}|_{bad}) \rightarrow \P(|\square[m]\otimes_\sigma \square[n]|_{bad})$ is to add algebraic relations $v*w=x*y$ whenever $\ell(v)=\ell(y)$, $\ell(w)=\ell(x)$ and $\ell(v) * \ell(w) = \ell(w) * \ell(v)$. The map $|\square[m]\otimes_\sigma \square[n]| \rightarrow |\square[m]\otimes_\sigma \square[n]|_{bad}$ induces a bijection $|\square[m]\otimes_\sigma \square[n]|^0 \cong |\square[m]\otimes_\sigma \square[n]|^0_{bad}$. The continuous map $\P(|\square[m]\otimes_\sigma \square[n]|) \rightarrow \P(|\square[m]\otimes_\sigma \square[n]|_{bad})$ is a Serre fibration since the space $\P(|\square[m]\otimes_\sigma \square[n]|_{bad})$ is discrete. Therefore, it remains to prove that the fibre of the fibration $\P (|\square[m]\otimes_\sigma \square[n]|) \rightarrow \P(|\square[m]\otimes_\sigma \square[n]|_{bad})$ over $x_1*\dots*x_r \in \P(|\square[m]\otimes_\sigma \square[n]|_{bad})$ where $x_1, \dots, x_r \in (\square[m]\otimes_\sigma \square[n])_1$ is contractible. Since the labels of $x_1,\dots,x_r$ commute with one another~\footnote{\label{moregeneral}For more general synchronization algebras, it is not true that all the labels necessarily commute with one another. 
One first has to write $x_1*\dots *x_r = y_1 * \dots * y_s$, where the labels contained in each $y_i$ commute with one another, and then to observe that the fibre over $x_1*\dots *x_r$ is the product of the contractible fibres over the $y_i$.}, this fibre is equal to the path space $\P_{\widehat{0}\dots\widehat{0},\widehat{1}\dots\widehat{1}}(|\COSK(\square[r]_{\leq 1})|)$ of execution paths from the initial state to the final state of the $r$-cube filled out by the $\COSK$ operator. So the fibre is contractible by Proposition~\ref{pourquoi_non_degenere} and Theorem~\ref{pasdepb}. \end{proof} \begin{thm} \label{parho} Let $P$ and $Q$ be two process names of ${\brm{Proc}}_\Sigma$. Then the flow associated with the process $P||Q$ is weakly S-homotopy equivalent to the flow \[\holiminj_{\square[m]\rightarrow \square\llbracket P\rrbracket}\holiminj_{\square[n]\rightarrow \square\llbracket Q\rrbracket} |(\square[m]\otimes_\sigma \square[n])_{\leq 2}|_{bad}.\] \eth Note that by the Fubini theorem for homotopy colimits (e.g., \cite{monographie_hocolim} Theorem~24.9) the order of the homotopy colimits does not matter. \begin{proof}[Sketch of proof] By Proposition~\ref{p1eq} and Theorem~\ref{pbpb}, one has a weak S-homotopy equivalence \[\holiminj_{\square[m]\rightarrow \square\llbracket P\rrbracket}\holiminj_{\square[n]\rightarrow \square\llbracket Q\rrbracket} |\square[m]\otimes_\sigma \square[n]| \stackrel{\simeq}\longrightarrow \holiminj_{\square[m]\rightarrow \square\llbracket P\rrbracket}\holiminj_{\square[n]\rightarrow \square\llbracket Q\rrbracket} |(\square[m]\otimes_\sigma \square[n])_{\leq 2}|_{bad}.\] For reasons similar to those in the proof of Proposition~\ref{rea-hocolim}, the double colimit \[\varinjlim_{\square[m]\rightarrow \square\llbracket P\rrbracket}\varinjlim_{\square[n]\rightarrow \square\llbracket Q\rrbracket} |\square[m]\otimes_\sigma \square[n]|\] is a homotopy colimit because the diagram is Reedy cofibrant over a direct Reedy category, which has fibrant constants. So the canonical map \[\holiminj_{\square[m]\rightarrow \square\llbracket P\rrbracket}\holiminj_{\square[n]\rightarrow \square\llbracket Q\rrbracket} |\square[m]\otimes_\sigma \square[n]| \stackrel{\simeq}\longrightarrow\varinjlim_{\square[m]\rightarrow \square\llbracket P\rrbracket}\varinjlim_{\square[n]\rightarrow \square\llbracket Q\rrbracket} |\square[m]\otimes_\sigma \square[n]|\] is a weak S-homotopy equivalence. Since the geometric realization functor is a left adjoint, the right-hand double colimit is isomorphic to \[\left|\varinjlim_{\square[m]\rightarrow \square\llbracket P\rrbracket}\varinjlim_{\square[n]\rightarrow \square\llbracket Q\rrbracket} \square[m]\otimes_\sigma \square[n]\right|,\] hence the result by \cite{ccsprecub} Proposition~4.6, which says that the operator $\otimes_\sigma$ preserves colimits. \end{proof} The flow $|(\square[m]\otimes_\sigma \square[n])_{\leq 2}|_{bad}$ is obtained from the flow $|(\square[m]\otimes_\sigma \square[n])_{\leq 1}|_{bad}$ by adding an algebraic rule $x*y=z*t$ for each $4$-tuple $(x,y,z,t)$ such that $\ell(x) = \ell(t)$, $\ell(y) = \ell(z)$ and $\ell(x)*\ell(y) = \ell(z)* \ell(t)$. So the coskeletal approach has completely disappeared from the statement of Theorem~\ref{parho}. \begin{cor} Let $P$ and $Q$ be two process names of ${\brm{Proc}}_\Sigma$. 
Then the flow associated with the process $P||Q$ is weakly S-homotopy equivalent to the flow \[\varinjlim_{\square[m]\rightarrow \square\llbracket P\rrbracket}\varinjlim_{\square[n]\rightarrow \square\llbracket Q\rrbracket} (|(\square[m]\otimes_\sigma \square[n])_{\leq 2}|_{bad})^{cof}.\] \end{cor} \begin{proof} In the model category of flows, the class of cofibrations which are monomorphisms is closed under pushout and transfinite composition. Therefore the cofibrant replacement of a monomorphism is a cofibration, and even an inclusion of subcomplexes (\cite{ref_model2} Definition~10.6.7) because the cofibrant replacement functor is obtained by the small object argument, starting from the identity of the initial object, i.e. the empty flow. So the diagram calculating \[\varinjlim_{\square[m]\rightarrow \square\llbracket P\rrbracket}\varinjlim_{\square[n]\rightarrow \square\llbracket Q\rrbracket} (|(\square[m]\otimes_\sigma \square[n])_{\leq 2}|_{bad})^{cof}\] is Reedy cofibrant. Thus the double colimit above has the correct weak S-homotopy type. \end{proof} \section{Towards a pure homotopical semantics of CCS} \label{bonus} \begin{table} \begin{center} \begin{tabular}{|l|} \hline $[P]:=$ the weak S-homotopy type of $\llbracket P\rrbracket$ for $P\in{\brm{Proc}}_\Sigma$\\ \hline $\xymatrix{ [\{0\}] \ar@{->}[r]^-{0\mapsto nil} \ar@{->}[d]^-{0\mapsto P} & [\mu.nil] \ar@{->}[d] \\ [P] \ar@{->}[r] & \wcocartesien {[\mu.P]}}$\\ \hline $[P+Q] := [P] \oplus [Q]$\\ with the binary coproduct taken in ${\mathbf{Ho}}(\{i\}{\downarrow} {\brm{Flow}} {\downarrow} ?\Sigma)$ \\ \hline $\xymatrix{ [(\nu a) P] \ar@{->}[r] \ar@{->}[d] \wcartesien & [P] \ar@{->}[d] \\ [?(\Sigma\backslash (\{a,\overline{a}\}))] \ar@{->}[r] & [?\Sigma]}$\\ \hline $[\rec(x)P(x)]:=\wliminj\limits_n [P^n(nil)]$\\ \hline \end{tabular} \caption{Pure homotopical semantics of a restriction of CCS, w meaning Heller's privileged weak (co)limits of ${\mathbf{Ho}}({\brm{Flow}})$} \label{wsem} \end{center} \end{table} Let us restrict our attention to CCS without parallel composition with synchronization. So the new syntax of the language \textit{for this section only} is: \[ P::=P\in{\brm{Proc}}_\Sigma \ |\ a.P \ |\ (\nu a)P \ |\ P + P \ |\ \rec(x)P(x).\] Denote by ${\mathbf{Ho}}({\brm{Flow}})$ the homotopy category of flows, i.e. the localization of the category of flows with respect to the weak S-homotopy equivalences. We want to explain in this section how it is possible to construct a semantics of this restriction of CCS in terms of objects of ${\mathbf{Ho}}({\brm{Flow}})$. The following theorem is about the realization of homotopy commutative diagrams, in the particular case of diagrams over a Reedy category. It gives a sufficient condition for a homotopy commutative diagram to be coherently homotopy commutative. \begin{thm} (Cisinski) \label{weak} (\cite{CiSi} for the finite case and \cite{cofibrationcat} Theorem~8.8.5 for the generalization) Let $\mathcal{M}$ be a model category. Let $\mathcal{B}$ be a small Reedy category which is free, i.e. freely generated by a graph. Moreover, let us suppose that $\mathcal{B}$ is either direct or inverse, i.e. there exists a degree function from the set of objects of $\mathcal{B}$ to some ordinal such that every non-identity map of $\mathcal{B}$ always raises or always lowers the degree. 
Then the canonical functor \[\dgm_\mathcal{B} : {\mathbf{Ho}}(\mathcal{M}^\mathcal{B}) \longrightarrow {\mathbf{Ho}}(\mathcal{M})^\mathcal{B}\] from the homotopy category of diagrams of objects of $\mathcal{M}$ over $\mathcal{B}$ to the category of diagrams of objects of ${\mathbf{Ho}}(\mathcal{M})$ over $\mathcal{B}$ is full and essentially surjective. \eth The homotopy category of flows ${\mathbf{Ho}}({\brm{Flow}})$ is weakly complete and weakly cocomplete, as is the homotopy category of any model category \cite{MR99h:55031}. Weak limits and weak colimits satisfy the same universal property as limits and colimits except for the uniqueness of the factorization. Weak small (co)products coincide with small (co)products. Weak (co)limits can be constructed using small (co)products and weak (co)equalizers in the same way as (co)limits are constructed from small (co)products and (co)equalizers (\cite{MR1712872} Theorem~1 p.~109). A weak coequalizer \[A\stackrel{f,g}\rightrightarrows B \stackrel{h}\longrightarrow D\] is given by a weak pushout \[ \xymatrix{ B \fr{h} && D \\ && \\ A\sqcup B \fu{(f,\id_B)}\fr{(g,\id_B)} && B. \fu{h}} \] Finally, weak pushouts (resp. weak pullbacks) are given by homotopy pushouts (resp. homotopy pullbacks) (e.g., \cite{rep} Remark~4.1 and \cite{MR920963} Chapter~III). As explained in \cite{cofibrationcat}, Theorem~\ref{weak} can also be used for the construction of certain kinds of weak limits and weak colimits: \begin{cor} \label{weakweak} (\cite{cofibrationcat} Theorem~8.8.6) Let $\mathcal{M}$ be a model category. Let $\mathcal{B}$ be a small Reedy category which is free, i.e. freely generated by a graph. Moreover, let us suppose that $\mathcal{B}$ is either direct or inverse. Let $X\in {\mathbf{Ho}}(\mathcal{M})^\mathcal{B}$. Let $X'\in \mathcal{M}^\mathcal{B}$ with $\dgm_\mathcal{B}(X')=X$. \begin{enumerate} \item If $\mathcal{B}$ is direct, then a weak colimit $\wliminj X$ of $X$ is given by \[\boxed{\wliminj X := \dgm_\mathcal{B}(\holiminj X') \simeq \dgm_\mathcal{B}(\varinjlim X'^{cof})}\] where the cofibrant replacement $X'^{cof}$ is taken in the Reedy model structure of $\mathcal{M}^\mathcal{B}$. This weak colimit, called the privileged weak colimit in Heller's terminology, is unique up to a non-canonical isomorphism. \item If $\mathcal{B}$ is inverse, then a weak limit $\wlimproj X$ of $X$ is given by \[\boxed{\wlimproj X := \dgm_\mathcal{B}(\holimproj X') \simeq \dgm_\mathcal{B}(\varprojlim X'^{fib})}\] where the fibrant replacement $X'^{fib}$ is taken in the Reedy model structure of $\mathcal{M}^\mathcal{B}$. This weak limit, called the privileged weak limit in Heller's terminology, is unique up to a non-canonical isomorphism. \end{enumerate} \end{cor} We now have the necessary tools to state the theorem: \begin{thm} \label{weakweakweak} For each process name $P$ of our restriction of CCS, consider the object $[P]$ of ${\mathbf{Ho}}({\brm{Flow}})$ defined by induction on the syntax of $P$ as in Table~\ref{wsem}. Then one has $\llbracket P\rrbracket \in [P]$, i.e. the weak S-homotopy type of $\llbracket P\rrbracket$ is $[P]$. \eth \begin{proof} One observes that the small categories involved in the construction of pushouts and colimits of towers are free direct Reedy categories and that the small category involved in the construction of pullbacks is a free inverse Reedy category. One then proves $\llbracket P\rrbracket \in [P]$ by induction on the syntax of $P$ with Corollary~\ref{weakweak}, Proposition~\ref{L1}, Proposition~\ref{L2} and Corollary~\ref{L3}.
\end{proof} We do not know how to construct a pure homotopical semantics of the parallel composition with synchronization.
\section{Introduction} \label{sec:intro} We consider the problem of fairly allocating indivisible goods to interested agents, a task that occurs frequently in society and has been a subject of study for decades in economics and more recently in computer science. Several notions of fairness have been proposed to this end; the most popular ones include \emph{envy-freeness} \cite{Foley67,Varian74} and \emph{proportionality} \cite{Steinhaus48}. Envy-freeness stipulates that each agent likes her bundle at least as much as that of any other agent, while proportionality requires that each agent receive her proportional share in the allocation. While an allocation satisfying both notions can be obtained when we deal with divisible goods such as cake or land, this is not the case for indivisible goods like houses or cars. Indeed, if there is one indivisible good and several agents, some agent is necessarily left empty-handed and neither of the notions can be satisfied. In fact, the same example shows that even a multiplicative approximation of these notions cannot be guaranteed.\footnote{As a result of the lack of existence guarantees for these fairness criteria, results on asymptotic existence when utilities are drawn from probability distributions have been established \cite{AmanatidisMaNi15,DickersonGoKa14,KurokawaPrWa16,ManurangsiSu17,Suksompong16-2}.} A notion that was designed to fix this problem and has been a subject of much interest in the last few years is called the \emph{maximin share}, first introduced in this context by Budish \cite{Budish11} based on earlier concepts by Moulin \cite{Moulin90}. The maximin share of an agent can be determined as follows: If there are $n$ agents in total, let the agent of interest partition the goods into $n$ bundles, knowing that she will get the bundle that she likes least. The maximum utility that she can obtain via this procedure is her maximin share. The intuition behind the maximin share is that the agent could feel entitled to this share, since she is already taking the least preferable bundle (according to her own preferences) from the partition. In other words, the indivisibility of the goods cannot be an excuse not to give her this share. An allocation is said to satisfy the \emph{maximin share criterion} if every agent receives a utility no less than her maximin share. While an allocation satisfying the maximin share criterion exists in several restricted settings \cite{BouveretLe16} as well as asymptotically \cite{AmanatidisMaNi15,KurokawaPrWa16}, Procaccia and Wang \cite{ProcacciaWa14} showed through rather intricate examples that, somewhat surprisingly, such an allocation does not always exist in general. As a result, the same authors turned the focus to approximation and showed that an allocation that yields a $2/3$-approximation of the maximin share to every agent always exists, and Amanatidis et al. \cite{AmanatidisMaNi15} exhibited an efficient algorithm that computes such an allocation. Amanatidis et al. also presented a simpler algorithm with an approximation ratio of 1/2. Barman and Krishna Murthy \cite{BarmanKr17} gave a simpler efficient $2/3$-approximation algorithm and moreover initiated the study of the maximin share for agents with submodular (as opposed to additive) valuation functions. Gourv\`{e}s and Monnot \cite{GourvesMo17} extended the problem to the case where the goods satisfy a matroidal constraint. Farhadi et al. \cite{FarhadiGhHa17} studied the maximin share criterion when agents have unequal entitlements.
Recently, Ghodsi et al. \cite{GhodsiHaSe17} improved the approximation ratio to $3/4$. The question of truthfulness for maximin share approximation algorithms has also been explored \cite{AmanatidisBiCh17,AmanatidisBiMa16}. Besides its theoretical appeal, the maximin share has been used in applications including the popular fair division website Spliddit \cite{GoldmanPr14}. In this paper, we apply the concept of maximin share to a more general setting of fair division in which goods are allocated not to individual agents, but rather to groups of agents who can have varying preferences on the goods. Several practical situations involving fair division fit into this model. For instance, an outcome of a negotiation between countries may have to be approved by members of the cabinets of each country. It could well be the case that one member of a cabinet of a country thinks that the division is fair while another does not. Similarly, in a divorce case, it is not hard to imagine that different members of the family on the husband side and the wife side have varying opinions on a proposed settlement. Another example is a large company or university that needs to divide its resources among competing groups of agents (e.g., departments in a university). The agents in each group have different and possibly misaligned interests; the professors who perform theoretical research may prefer more whiteboards and open space in the department building, while those who engage in experimental work are more likely to prefer laboratories. These situations cannot be modeled by the traditional fair division setting where each recipient of a bundle of goods is represented by a single preference. The added element of having several agents in the same group receiving the same bundle of goods corresponds to the social choice framework of aggregating individual preferences to reach a collective decision \cite{ArrowSeSu02}. Manurangsi and Suksompong \cite{ManurangsiSu17} recently investigated the asymptotic existence of fair divisions under this setting, while Segal-Halevi and Nitzan \cite{SegalhaleviNi15,SegalhaleviNi16} considered fairness in the allocation of \emph{divisible} goods to groups. A related setting, where a subset of indivisible goods is allocated to a (single) group of agents, has also been studied \cite{ManurangsiSu17-2,Suksompong16}. \subsection{Our Results} We extend the maximin share notion to groups in a natural way by calculating the maximin share for each agent using the number of groups instead of the number of agents. When there are two groups, we completely determine the cardinalities of the groups for which it is possible to approximate the maximin share within a positive factor that depends only on the number of agents and not on the number of goods. In particular, an approximation is possible when one of the groups contains a single agent, when both groups contain two agents, or when the groups contain three and two agents, respectively. In all other cases, no approximation is possible in a strong sense: There exists an instance with only four goods in which some agent with positive maximin share necessarily gets zero utility. The results for the setting with two groups are presented in Section \ref{sec:twogroups} and summarized in Table \ref{table:twogroups}.
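To make the quantity we approximate computationally concrete, the following brute-force sketch evaluates an agent's maximin share by enumerating all $k^{m}$ assignments of the $m$ goods to the $k$ groups. The snippet is purely illustrative; the function name \texttt{maximin\_share} and the enumeration strategy are our own choices rather than an algorithm proposed in this paper, and exhaustive enumeration is feasible only for small instances such as the four-good examples used in our proofs.

\begin{verbatim}
from itertools import product

def maximin_share(utilities, k):
    # Brute-force maximin share of a single agent with additive
    # utilities over m indivisible goods divided among k groups.
    # Enumerates all k^m assignments, so this is exponential in m.
    m = len(utilities)
    best = 0.0
    for assignment in product(range(k), repeat=m):
        bundles = [0.0] * k
        for good, group in enumerate(assignment):
            bundles[group] += utilities[good]
        # the agent receives the bundle that she values least
        best = max(best, min(bundles))
    return best

# Example from the preliminaries: utilities (6, 3, 2, 2), two groups.
print(maximin_share([6, 3, 2, 2], k=2))  # prints 6
\end{verbatim}

For the four-good example given in the preliminaries below, the call shown returns 6, matching the maximin partition $(\{g_1\},\{g_2,g_3,g_4\})$.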
Even though we leave a gap between the lower and upper bounds of the approximation ratio, the reader should bear in mind that this gap remains even for the previously studied setting where goods are allocated to individual agents. Indeed, the gap between $3/4$ and $1-o(1)$ for maximin share approximation has not been closed despite several works in this direction, while in the simplest case with three agents, the gap remains between 8/9 and $1-o(1)$ \cite{AmanatidisMaNi15,GhodsiHaSe17,GourvesMo17,KurokawaPrWa16,ProcacciaWa14}.\footnote{When there are two agents, it is not hard to see that a cut-and-choose protocol guarantees the full maximin share for both agents.} Thus, despite its relatively simple definition, determining the best approximation ratio for the maximin share is perhaps a harder problem than it might seem at first glance. In addition, although the case of two groups might seem like a rather restricted case, recall that fair division between two \emph{agents}, which is even more restricted, has enjoyed significant attention in the literature (e.g., \cite{BramsFi00,BramsKiKl12,BramsKiKl14}). In Section \ref{sec:manygroups}, we generalize to the setting with several groups of agents. On the positive side, we show that a positive approximation is possible if only one group contains more than a single agent. On the other hand, we show on the negative side that when all groups contain at least two agents and one group contains at least five agents, it is possible that some agent with positive maximin share will be forced to obtain zero utility, which means that there is no hope of obtaining an approximation in this case. \section{Preliminaries} \label{sec:prelim} Let $G=\{g_1,g_2,\dots,g_m\}$ denote the set of goods, and $A$ the set of agents. The agents are partitioned into $k$ groups $A_1,\dots,A_k$. Group $A_i$ contains $n_i$ agents; denote by $a_{ij}$ the $j$th agent in group $A_i$. The agents in each group will be collectively allocated a subset of goods $G$; suppose that the agents in group $A_k$ receive the subset $G_k$. Each agent $a_{ij}$ has some nonnegative utility $u_{ij}(g)$ for every good $g\in G$. We assume that the agents are endowed with \emph{additive} utility functions, i.e., $u_{ij}(G')=\sum_{g\in G'}u_{ij}(g)$ for any agent $a_{ij}\in A$ and any subset of goods $G'\subseteq G$. This assumption is commonly made in the fair division literature, especially the literature that deals with the maximin share; it provides a reasonable tradeoff between simplicity and expressiveness \cite{AmanatidisMaNi15,BouveretLe16,KurokawaPrWa16,ProcacciaWa14}. Let $\textbf{u}_{ij}=(u_{ij}(g_1),u_{ij}(g_2),\dots,u_{ij}(g_m))$ be the utility vector of agent $a_{ij}$. We refer to a setting with groups of agents, goods, and utility functions as an \emph{instance}. We now define the maximin share. Let $K=\{1,2,\dots,k\}$. Suppose that in a partition $\mathcal{G}'$ of the goods, the agents in group $A_k$ receive the subset $G'_k$. \begin{definition} \label{def:maximin} The \emph{maximin share} of agent $a_{ij}$ is defined as \[\max_{\mathcal{G}'}\min_{k\in K}u_{ij}(G'_k),\] where the maximum ranges over all (complete) partitions $\mathcal{G}'$ of the goods in $G$. Any partition for which this maximum is attained is called a \emph{maximin partition} of agent $a_{ij}$. An allocation $\mathcal{G}$ of goods to the groups is said to satisfy the \emph{maximin share criterion} if every agent obtains at least her maximin share in $\mathcal{G}$. 
\end{definition} As an example, suppose that there are two groups and four goods, and an agent's utilities for the goods are given by $u(g_1)=6, u(g_2)=3$, and $u(g_3)=u(g_4)=2$. The maximin share of the agent is 6, as can be seen from the partition $(\{g_1\},\{g_2,g_3,g_4\})$. This partition is the unique maximin partition for the agent. It follows directly from Definition \ref{def:maximin} that the maximin share of an agent $a_{ij}$ is at most $u_{ij}(G)/k$. In addition, any envy-free or proportional allocation also satisfies the maximin share criterion.\footnote{Assuming we extend the notions of envy-freeness and proportionality to groups in a similar manner as we do for the maximin share.} Since an allocation satisfying the maximin share criterion does not always exist even for three groups with one agent each \cite{ProcacciaWa14}, we will be interested in one that \emph{approximates} the maximin share criterion, i.e., gives every agent at least a multiplicative factor $\alpha$ of her maximin share, for some $\alpha\in(0,1)$. \section{Two Groups of Agents} \label{sec:twogroups} In this section, we consider the setting where there are two groups of agents and characterize the cardinality of the groups for which a positive approximation of the maximin share is possible regardless of the number of goods. In particular, suppose that the two groups contain $n_1$ and $n_2$ agents, where we assume without loss of generality that $n_1\geq n_2$. Then a positive approximation is possible when $n_2=1$ as well as when $(n_1,n_2)=(2,2)$ or $(3,2)$. The results are summarized in Table \ref{table:twogroups}. \begin{table} \centering \begin{tabular}{| c | c |} \hline \textbf{Number of agents} & \textbf{Approximation ratio} \\ \hline $(n_1,n_2)=(1,1)$ & $\alpha=1$ (Cut-and-choose protocol; see, e.g., \cite{BouveretLe16}) \\ \hline $(n_1,n_2)=(2,1)$ & $2/3\leq\alpha\leq 3/4$ (Theorem \ref{thm:tworoundrobin}) \\ \hline $n_2=1$ & $2/(n_1+1)\leq \alpha\leq 1/\left\lfloor \left\lfloor\sqrt{2n_1}\right\rfloor/2\right\rfloor$ (Corollary \ref{cor:manyonefloor}) \\ \hline $(n_1,n_2)=(2,2)$ & $1/8\leq\alpha\leq 1/2$ (Theorem \ref{thm:twotwo}) \\ \hline $(n_1,n_2)=(3,2)$ & $1/16\leq\alpha\leq 1/2$ (Theorem \ref{thm:threetwo}) \\ \hline $n_1\geq 4, n_2\geq 2$ & $\alpha=0$ (Proposition \ref{prop:fourtwo}) \\ \hline $n_1,n_2\geq 3$ & $\alpha=0$ (Proposition \ref{prop:threethree}) \\ \hline \end{tabular} \vspace{5mm} \caption{Values of the best possible approximation ratio, denoted by $\alpha$, for the maximin share when there are two groups with $n_1\geq n_2$ agents. The approximation ratios hold regardless of the number of goods.} \label{table:twogroups} \end{table} \subsection{Large Number of Agents: No Possible Approximation} We start by showing that when the numbers of agents in the groups are large enough, no approximation of the maximin share is possible. Observe that if we prove that a maximin share approximation is not possible for groups with $n_1$ and $n_2$ agents, then it is also not possible for groups with $n_1'\geq n_1$ and $n_2'\geq n_2$ agents, since we would still need to fulfill the approximation for the first $n_1$ and $n_2$ agents in the respective groups. \begin{proposition} \label{prop:fourtwo} If $n_1\geq 4$ and $n_2\geq 2$, then there exists an instance in which some agent with nonzero maximin share necessarily receives zero utility. \end{proposition} \begin{proof} Assume that $n_1=4$ and $n_2=2$, and suppose that there are four goods. 
The utilities of the agents in the first group are $\textbf{u}_{11}=(0,1,0,1)$, $\textbf{u}_{12}=(0,1,1,0)$, $\textbf{u}_{13}=(1,0,0,1)$, and $\textbf{u}_{14}=(1,0,1,0)$, while the utilities of those in the second group are $\textbf{u}_{21}=(1,1,0,0)$ and $\textbf{u}_{22}=(0,0,1,1)$. In this example, every agent has a maximin share of 1. To guarantee nonzero utility for the agents in the second group, we must allocate at least one of the first two goods and at least one of the last two goods to the group. But this implies that some agent in the first group receives zero utility. \qed \end{proof} \begin{proposition} \label{prop:threethree} If $n_1, n_2\geq 3$, then there exists an instance in which some agent with nonzero maximin share necessarily receives zero utility. \end{proposition} \begin{proof} Assume that $n_1=n_2=3$, and suppose that there are four goods. The utilities of the agents in the first group are $\textbf{u}_{11}=(0,1,0,1)$, $\textbf{u}_{12}=(1,0,0,1)$, and $\textbf{u}_{13}=(1,0,1,0)$, while the utilities of those in the second group are $\textbf{u}_{21}=(1,1,0,0)$, $\textbf{u}_{22}=(0,0,1,1)$, and $\textbf{u}_{23}=(1,0,0,1)$. In this example, every agent has a maximin share of 1. To guarantee nonzero utility for the agents in the second group, we must allocate at least one of the first two goods and at least one of the last two goods to the group. If we allocate at least three goods to the second group, some agent in the first group is left with zero utility. Otherwise, if we allocate exactly two goods to the second group, we may allocate goods $\{g_1,g_3\}$, $\{g_1,g_4\}$, or $\{g_2,g_4\}$ (the pair $\{g_2,g_3\}$ is excluded because it would leave agent $a_{23}$ with zero utility), which leaves goods $\{g_2,g_4\}$, $\{g_2,g_3\}$, or $\{g_1,g_3\}$ to the first group, respectively. But in each of these cases, some agent in the first group receives zero utility. \qed \end{proof} \subsection{Approximation via Modified Round-Robin Algorithm} When both groups contain a single agent, it is known that a simple ``cut-and-choose'' protocol similar to a famous cake-cutting protocol yields the full maximin share for both agents (see, e.g., \cite{BouveretLe16}). It turns out that as soon as at least one group contains more than one agent, the full maximin share can no longer be guaranteed. We next consider the simplest such case, where the groups contain one and two agents, respectively. The maximin share approximation algorithm for this case is similar to the modified round-robin algorithm that yields a $1/2$-approximation for an arbitrary number of agents \cite{AmanatidisMaNi15}, but we will need to make some adjustments to handle more than one agent being in the same group. For the algorithm, we will need the following two lemmas, which admit rather straightforward proofs. \begin{lemma}[\cite{AmanatidisMaNi15}] \label{lem:roundrobin} Suppose that each group contains one agent. Consider a round-robin algorithm in which the agents take turns taking their favorite good from the remaining goods. In the resulting allocation, the envy that an agent has toward any other agent is at most the maximum utility of the former agent for any single good. Moreover, if an agent is ahead of another agent in the round-robin ordering, then the former agent has no envy toward the latter agent. \end{lemma} \begin{lemma}[\cite{AmanatidisMaNi15,BouveretLe16}] \label{lem:addonemfs} Consider an arbitrary instance in which each group contains one agent.
If we allocate an arbitrary good to an agent as her only good, then the maximin share of any remaining agent with respect to the remaining goods does not decrease. \end{lemma} We are now ready to handle the case with one and two agents in the groups. \begin{theorem} \label{thm:tworoundrobin} Let $(n_1,n_2)=(2,1)$, and suppose that $\alpha$ is the best possible approximation ratio for the maximin share. Then $2/3\leq \alpha\leq 3/4$. \end{theorem} \begin{proof} We first show the upper bound. Suppose that there are four goods. The utilities of the agents in the first group for the goods are $\textbf{u}_{11}=(3,1,2,2)$ and $\textbf{u}_{12}=(2,3,2,1)$, while the utilities of the agent in the second group are $\textbf{u}_{21}=(3,2,2,1)$. In this example, every agent has a maximin share of 4. We will show that any allocation gives some agent a utility of at most 3. Note that an allocation that would give every agent a utility of at least 4 must allocate two goods to both groups. If the fourth and one of the second and third goods are allocated to the second group, the agent gets a utility of 3. Otherwise, one can check that one of the agents in the first group gets a utility of 3. Next, we exhibit an algorithm that guarantees each agent a $2/3$ fraction of her maximin share. Since we do not engage in interpersonal comparisons of utilities, we may assume without loss of generality that every agent has utility 1 for the whole bundle of goods. Since the maximin share of an agent is always at most $1/2$, it suffices to allocate a bundle worth at least $1/3$ to her. If some good is worth at least $1/3$ to $a_{21}$, let her take that good. By Lemma \ref{lem:addonemfs}, the maximin shares of $a_{11}$ and $a_{12}$ do not decrease. Since they receive all of the remaining goods, they obtain their maximin share. Hence we may now assume that no good is worth at least $1/3$ to $a_{21}$. We let each of $a_{11}$ and $a_{12}$, in arbitrary order, take a good worth at least $1/3$ if there is any. There are three cases. \begin{itemize} \item Both of them take a good. Then each of them gets a utility of at least $1/3$. Since every good is worth less than $1/3$ to $a_{21}$, she also gets a utility of at least $1-1/3-1/3=1/3$ from the remaining goods. \item One of them takes a good and the other does not. Assume without loss of generality that $a_{11}$ is the agent who takes a good. We run the round-robin algorithm on $a_{12}$ and $a_{21}$ using the remaining goods, starting with $a_{21}$. Since every good is worth less than $1/3$ to $a_{21}$, the value of the whole bundle of goods minus the good that $a_{11}$ takes is at least $2/3$. By Lemma \ref{lem:roundrobin}, $a_{21}$ gets a utility of at least $1/2\times 2/3 = 1/3$. Similarly, the envy of $a_{12}$ toward $a_{21}$ is at most the maximum utility of $a_{12}$ for a good allocated during the round-robin algorithm, which is at most $1/3$. This implies that $a_{21}$'s bundle is worth at most $2/3$ to $a_{12}$, and hence $a_{12}$'s bundle in the final allocation (i.e., her bundle from the round-robin algorithm combined with the good that $a_{11}$ takes) is worth at least $1/3$ to her. \item Neither of them takes a good. We run the round-robin algorithm on all three agents, starting with $a_{21}$. By Lemma \ref{lem:roundrobin}, $a_{21}$ gets a utility of at least $1/3$. The envy of $a_{11}$ toward $a_{21}$ is at most the maximum utility of $a_{11}$ for a good, which is at most $1/3$. 
Hence $a_{21}$'s bundle is worth at most $2/3$ to $a_{11}$, which means that $a_{11}$'s bundle in the final allocation (i.e., her bundle combined with $a_{12}$'s bundle) is worth at least $1/3$ to her. An analogous argument holds for $a_{12}$. \end{itemize} This covers all three possible cases. \qed \end{proof} Next, we generalize to the setting where the first group contains an arbitrary number of agents while the second group contains a single agent. In this case, an algorithm similar to that in Theorem \ref{thm:tworoundrobin} can be used to obtain a constant factor approximation when the number of agents is constant. In addition, we show that the approximation ratio necessarily degrades as the number of agents grows. \begin{algorithm} \caption{Algorithm for approximate maximin share when the groups contain $n_1\geq 2$ and $n_2=1$ agents (Theorem \ref{thm:manyone}).} \label{alg:algomanyone} \begin{algorithmic}[1] \Procedure{Approximate\textendash Maximin\textendash Share\textendash 1}{} \If{Agent $a_{21}$ values some good $g$ at least $\frac{1}{n_1+1}$} \State Allocate $g$ to $a_{21}$ and the remaining goods to the first group. \Else \State Let each agent in the first group, in arbitrary order, take a good worth at least $\frac{1}{n_1+1}$ to her if there is any. \State Allocate the remaining goods to the agents who have not taken a good using the round-robin algorithm, starting with $a_{21}$. \EndIf \EndProcedure \end{algorithmic} \end{algorithm} \begin{theorem} \label{thm:manyone} Let $n_1\geq 2$ and $n_2=1$, and suppose that $\alpha$ is the best approximation ratio for the maximin share. Then $\frac{2}{n_1+1}\leq \alpha\leq \frac{1}{\lfloor f(n_1)/2\rfloor}$, where $f(n_1)$ is the largest integer such that $\binom{f(n_1)}{2}\leq n_1$. \end{theorem} \begin{proof} We first show the upper bound. Let $l=f(n_1)$, and suppose that there are $l$ goods. Let $\binom{l}{2}$ of the agents in the first group positively value a distinct set of two goods. In particular, each of them has utility 1 for both goods in their set and 0 for the remaining goods. Let the agent in the second group have utility 1 for all goods. In this example, each of these agents in the first group has a maximin share of 1, while the agent $a_{21}$ has a maximin share of $\lfloor l/2\rfloor$. To guarantee nonzero utility for the $\binom{l}{2}$ agents in the first group, we must allocate all but at most one good to the group, leaving at most one good for the second group. So the agent in the second group obtains at most a $1/\lfloor l/2\rfloor$ fraction of her maximin share. An algorithm that guarantees a $\frac{2}{n_1+1}$-approximation of the maximin share (Algorithm \ref{alg:algomanyone}) is similar to that for the case $n_1=2$ (Theorem \ref{thm:tworoundrobin}). Again, we normalize the utility of each agent for the whole set of goods to 1. First, let $a_{21}$ take a good worth at least $\frac{1}{n_1+1}$ to her if there is any. If she takes a good, we allocate the remaining goods to the first group and are done by Lemma \ref{lem:addonemfs}. Else, we let each of the agents in the first group, in arbitrary order, take a good worth at least $\frac{1}{n_1+1}$ to her if there is any. After that, we run the round-robin algorithm on the agents who have not taken a good, starting with $a_{21}$. Suppose that $r$ agents in the first group take a good. Each of them obtains a utility of at least $\frac{1}{n_1+1}$. The remaining goods, which are allocated by the round-robin algorithm, are worth a total of at least $\frac{n_1+1-r}{n_1+1}$ to $a_{21}$.
Since there are $n_1+1-r$ agents who participate in the round-robin algorithm, and $a_{21}$ is the first to choose, she obtains utility at least $\frac{1}{n_1+1-r}\cdot\frac{n_1+1-r}{n_1+1}=\frac{1}{n_1+1}$. Finally, by Lemma \ref{lem:roundrobin}, each agent in the first group who does not take a good in the first stage has envy at most $\frac{1}{n_1+1}\leq \frac13$ toward $a_{21}$. Hence for such an agent, $a_{21}$'s bundle is worth at most $2/3$, and so the bundle allocated to the first group is worth at least $\frac13\geq \frac{1}{n_1+1}$. \qed \end{proof} Algorithm \ref{alg:algomanyone} can be implemented in time polynomial in the number of agents and goods. Also, since $\binom{\lfloor\sqrt{2n_1}\rfloor}{2}\leq \frac{(\sqrt{2n_1})^2}{2}=n_1$, we have the following corollary. \begin{corollary} \label{cor:manyonefloor} Let $n_1\geq 2$ and $n_2=1$, and suppose that $\alpha$ is the best approximation ratio for the maximin share. Then $\frac{2}{n_1+1}\leq \alpha\leq \frac{1}{\left\lfloor \left\lfloor\sqrt{2n_1}\right\rfloor/2\right\rfloor}.$ \end{corollary} \subsection{Approximation via Maximin Partitions} We now consider the two remaining cases, $(n_1,n_2)=(2,2)$ and $(3,2)$. We show that in both cases, a positive approximation is also possible. However, the algorithms for these two cases will rely on a different idea than the previous algorithms. These positive results provide a clear distinction between the settings where it is possible to approximate the maximin share and those where it is not. For the former settings, the maximin share can be approximated within a positive factor independent of the number of goods. On the other hand, for the latter settings, there exist instances in which some agent with positive maximin share necessarily gets zero utility even when there are only four goods (Propositions \ref{prop:fourtwo} and \ref{prop:threethree}), and therefore no approximation is possible even if we allow dependence on the number of goods. \begin{algorithm} \caption{Algorithm for approximate maximin share when the groups contain $n_1=n_2=2$ agents (Theorem \ref{thm:twotwo}).} \label{alg:algotwotwo} \begin{algorithmic}[1] \Procedure{Approximate\textendash Maximin\textendash Share\textendash 2}{} \For{each agent $a_{ij}$} \State Compute her maximin partition $(G_{ij},G\backslash G_{ij})$. \EndFor \State Partition $G$ into 16 subsets $(H_1,H_2,\dots,H_{16})$ according to whether each good belongs to $G_{ij}$ or $G\backslash G_{ij}$ for each $1\leq i,j\leq 2$. \If{Some subset $H_p$ is \emph{important} (i.e., of value at least $1/8$ of the agent's maximin share) to both $a_{11}$ and $a_{12}$} \State Allocate $H_p$ to the first group and the remaining goods to the second group. \Else \State Suppose that $H_p,H_q$ are important to $a_{11}$ and $H_r,H_s$ to $a_{12}$. \State Find a pair from $(H_p,H_r),(H_p,H_s),(H_q,H_r),(H_q,H_s)$ that does not coincide with the important subsets for an agent in the second group. \State Allocate that pair of subsets to the first group and the remaining goods to the second group. \EndIf \EndProcedure \end{algorithmic} \end{algorithm} \begin{theorem} \label{thm:twotwo} Let $(n_1,n_2)=(2,2)$, and suppose that $\alpha$ is the best possible approximation ratio for the maximin share. Then $1/8\leq \alpha\leq 1/2$. \end{theorem} \begin{proof} We first show the upper bound. Suppose that there are four goods. 
The utilities of the agents in the first group for the goods are $\textbf{u}_{11}=(0,2,1,1)$ and $\textbf{u}_{12}=(2,0,1,1)$, while the utilities of those in the second group are $\textbf{u}_{21}=(1,1,0,0)$ and $\textbf{u}_{22}=(0,0,1,1)$. In this example, both agents in the first group have a maximin share of 2, and both agents in the second group have a maximin share of 1. To ensure nonzero utility for the agents in the second group, we must allocate at least one of the first two goods and at least one of the last two goods to the group. However, this implies that some agent in the first group gets a utility of at most 1. Next, we exhibit an algorithm that yields a $1/8$-approximation of the maximin share (Algorithm~\ref{alg:algotwotwo}). For each agent $a_{ij}$, let $(G_{ij},G\backslash G_{ij})$ be one of her maximin partitions. By definition, we have that both $u_{ij}(G_{ij})$ and $u_{ij}(G\backslash G_{ij})$ are at least the maximin share of $a_{ij}$. Let $(H_1,H_2,\dots,H_{16})$ be a partition of $G$ into 16 subsets according to whether each good belongs to $G_{ij}$ or $G\backslash G_{ij}$ for each $1\leq i,j\leq 2$; in other words, for every $i,j$, the goods in each set $H_k$ either all belong to $G_{ij}$ or all belong to $G\backslash G_{ij}$. By the pigeonhole principle, for each agent $a_{ij}$, among the eight sets $H_k$ whose union is $G_{ij}$, the set that she values most gives her a utility of at least $1/8$ of her maximin share; call this set $H_p$. Likewise, we can find a set $H_q\subseteq G\backslash G_{ij}$ that the agent values at least $1/8$ of her maximin share. We call these subsets \emph{important} to $a_{ij}$. It suffices for every agent to obtain a subset that is important to her. If some subset $H_p$ is important to both $a_{11}$ and $a_{12}$, we allocate that subset to the first group and the remaining goods to the second group. Since each agent in the second group has at least two important subsets, and only one is taken away from them, this yields the desired guarantee. Else, two subsets $H_p,H_q$ are important to $a_{11}$ and two other subsets $H_r,H_s$ are important to $a_{12}$. We will assign one of the pairs $(H_p,H_r),(H_p,H_s),(H_q,H_r),(H_q,H_s)$ to the first group. If a pair does not work, that means that some agent in the second group has exactly that pair as her important subsets. But there are four pairs and only two agents in the second group, hence some pair must work. \qed \end{proof} We briefly discuss the running time of Algorithm \ref{alg:algotwotwo}. The algorithm can be implemented efficiently except for one step: computing a maximin partition of each agent. This step is NP-hard even when the two agents have identical utility functions by a straightforward reduction from the partition problem. Nevertheless, Woeginger \cite{Woeginger97} showed that a PTAS for the problem exists.\footnote{Woeginger also showed that an FPTAS for this problem does not exist unless $\text{P}=\text{NP}$.} Using the PTAS, we can compute an approximate maximin partition instead of an exact one and obtain a $\left(1/8-\epsilon\right)$-approximate algorithm for the maximin share in time polynomial in the number of goods for any constant $\epsilon>0$. A similar idea can be used to show that a positive approximation of the maximin share is possible when $(n_1,n_2)=(3,2)$. \begin{theorem} \label{thm:threetwo} Let $(n_1,n_2)=(3,2)$, and suppose that $\alpha$ is the best possible approximation ratio for the maximin share. Then $1/16\leq \alpha\leq 1/2$. 
\end{theorem} \begin{proof} The upper bound follows from Theorem \ref{thm:twotwo} and the observation preceding Proposition \ref{prop:fourtwo}. For the lower bound, compute the maximin partition for each agent, and partition $G$ into 32 subsets according to which part of the partition of each agent a good belongs to. For each agent, at least 2 of the subsets are \emph{important}, i.e., of value at least $1/16$ of her maximin share. If some subset is important to both $a_{21}$ and $a_{22}$, allocate that subset to them and the remaining goods to the first group. Otherwise, we can allocate some pair of important subsets to the second group using an argument similar to that in Theorem \ref{thm:twotwo}. \qed \end{proof} \subsection{Experimental Results} To complement our theoretical results, we ran computer experiments to see the extent to which it is possible to approximate the maximin share in random instances. For $(n_1,n_2)=(2,2)$ and $(3,2)$, we generated 100000 random instances where there are four goods and the utility of each agent for each good is drawn independently and uniformly at random from the interval $[0,1]$. The results are shown in Table \ref{table:experiment1}. An approximation ratio of $0.9$ can be guaranteed in over $90\%$ of the instances when there are two agents in each group, and in over $80\%$ of the instances when there are three agents in one group and two in the other. In other words, an allocation that ``almost'' satisfies the maximin criterion can be found in a large majority of the instances. However, the proportion drops significantly to around $70\%$ and $50\%$ if we demand that the allocation yield the full maximin share to the agents, indicating that this is a much more stringent requirement. We also ran the experiment on instances where the utilities are drawn from an exponential distribution and from a log-normal distribution. As shown in Tables \ref{table:experiment2} and \ref{table:experiment3}, the number of instances for which the (approximate) maximin criterion is satisfied is lower for both distributions than for the uniform distribution. This is to be expected since the utilities are less spread out, meaning that conflicts are more likely to occur. The heavy drop as we increase the requirement from $\alpha\geq 0.9$ to $\alpha\geq 1$ is present for these distributions as well. We also remark here that the case with four goods seems to be the hardest case for maximin approximation. Indeed, with two goods an allocation yielding the full maximin share always exists, with three goods the maximin share is low since any partition leaves at most one good to one group, and with more than four goods there are more allocations and therefore more possibilities to exploit the differences between the utilities of the agents. This intuition aligns with the fact that the instances we use to show the approximation lower bounds in this paper all involve exactly four goods.
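For reproducibility, a sketch of this experimental setup is given below. It is an assumed reconstruction for illustration (in Python), not the code used to produce the tables: for each random instance it computes every agent's maximin share by brute force and then exhausts all $2^{4}$ allocations of the four goods to the two groups to find the best achievable ratio $\alpha$; swapping \texttt{random.random} for exponential or log-normal draws covers the other two settings.

\begin{verbatim}
import random
from itertools import product

def mms(u, k, m):
    # brute-force maximin share of a single agent (Definition 1)
    best = 0.0
    for a in product(range(k), repeat=m):
        vals = [0.0] * k
        for g, grp in enumerate(a):
            vals[grp] += u[g]
        best = max(best, min(vals))
    return best

def best_ratio(groups, m=4, k=2):
    # best approximation ratio achievable by some allocation of the
    # m goods to the k groups, taken over all agents in all groups
    shares = [[mms(u, k, m) for u in grp] for grp in groups]
    best = 0.0
    for a in product(range(k), repeat=m):
        alpha = min(sum(u[g] for g in range(m) if a[g] == i) / shares[i][j]
                    for i, grp in enumerate(groups)
                    for j, u in enumerate(grp))
        best = max(best, alpha)
    return best

random.seed(0)
trials = 10000  # the paper uses 100000 instances per setting
hits = 0
for _ in range(trials):
    groups = [[[random.random() for _ in range(4)] for _ in range(2)],
              [[random.random() for _ in range(4)] for _ in range(2)]]
    hits += best_ratio(groups) >= 0.9
print(hits / trials)  # fraction of (2,2) instances with alpha >= 0.9
\end{verbatim}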
\begin{table} \centering \begin{subtable}{\textwidth} \centering \begin{tabular}{| c | c | c | c | c | c | c |} \hline & $\alpha \geq 0.5$ & $\alpha \geq 0.6$ & $\alpha \geq 0.7$ & $\alpha \geq 0.8$ & $\alpha \geq 0.9$ & $\alpha \geq 1$ \\ \hline $(n_1,n_2)=(2,2)$ & 100000 & 100000 & 99937 & 98803 & 92015 & 69248 \\ \hline $(n_1,n_2)=(3,2)$ & 100000 & 99997 & 99672 & 96174 & 81709 & 49386 \\ \hline \end{tabular} \caption{The uniform distribution over the interval $[0,1]$.} \vspace{2mm} \label{table:experiment1} \end{subtable} \begin{subtable}{\textwidth} \centering \begin{tabular}{| c | c | c | c | c | c | c |} \hline & $\alpha \geq 0.5$ & $\alpha \geq 0.6$ & $\alpha \geq 0.7$ & $\alpha \geq 0.8$ & $\alpha \geq 0.9$ & $\alpha \geq 1$ \\ \hline $(n_1,n_2)=(2,2)$ & 100000 & 99982 & 99280 & 94464 & 80683 & 55833 \\ \hline $(n_1,n_2)=(3,2)$ & 100000 & 99827 & 97295 & 86293 & 63914 & 36626 \\ \hline \end{tabular} \caption{The exponential distribution with mean 1.} \vspace{2mm} \label{table:experiment2} \end{subtable} \begin{subtable}{\textwidth} \centering \begin{tabular}{| c | c | c | c | c | c | c |} \hline & $\alpha \geq 0.5$ & $\alpha \geq 0.6$ & $\alpha \geq 0.7$ & $\alpha \geq 0.8$ & $\alpha \geq 0.9$ & $\alpha \geq 1$ \\ \hline $(n_1,n_2)=(2,2)$ & 100000 & 99990 & 99220 & 92658 & 74966 & 55768 \\ \hline $(n_1,n_2)=(3,2)$ & 100000 & 99895 & 97159 & 82918 & 57068 & 36802 \\ \hline \end{tabular} \caption{The log-normal distribution with parameters $\mu=0$ and $\sigma=1$ for the associated normal distribution.} \label{table:experiment3} \end{subtable} \caption{Experimental results showing the number of instances, out of 100000, for which the respective maximin approximation ratio is achievable by some allocation of goods when utilities are drawn independently from the specified probability distribution.} \end{table} \section{Several Groups of Agents} \label{sec:manygroups} In this section, we consider a more general setting where there are several groups of agents. We show that when only one group contains more than a single agent, a positive approximation is possible independent of the number of goods. On the other hand, when all groups contain at least two agents and one group contains at least five agents, no approximation is possible in a strong sense. \subsection{Positive Result} We first show a positive result when all groups but one contain a single agent. This is a generalization of the corresponding result for two groups (Theorem \ref{thm:manyone}). The algorithm also uses the round-robin algorithm as a subroutine, but more care must be taken to account for the extra groups. Recall that when all groups contain a single agent, the best known approximation ratio is $3/4$, obtained by Ghodsi et al.'s algorithm \cite{GhodsiHaSe17}. \begin{theorem} \label{thm:manyones} Let $n_1\geq 2$ and $n_2=n_3=\dots=n_k=1$. Then it is possible to give every agent at least $\frac{2}{n_1+2k-3}$ of her maximin share. \end{theorem} \begin{proof} Let $\alpha:=\frac{1}{n_1+2k-3}$. If some agent in a singleton group values a good at least $\alpha$ times her value for the whole set of goods, put that good as the only good in her allocation. Since her maximin share is at most $1/k\leq 1/2$ times her value for the whole set of goods, this agent obtains the desired guarantee. We will give the remaining agents their guarantees with respect to the reduced set of goods and agents.
By Lemma \ref{lem:addonemfs}, the maximin share of an agent does not decrease as we remove an agent and a good, and the approximation ratio $\frac{2}{n_1+2k-3}$ also increases as $k$ decreases. This implies that guarantees for the reduced instance also translate to ones for the original instance. We recompute each agent's value for the whole set of goods as well as the number of groups and repeat this step until no agent in a singleton group values a good at least $\alpha$ times her value for the whole set of goods. Next, we normalize the utility of each agent for the whole set of goods to 1 as in Theorem \ref{thm:tworoundrobin}. We let each of the agents in the first group, in arbitrary order, take a good worth at least $\alpha$ to her if there is any. The approximation guarantee is satisfied for any agent who takes a good. Suppose that after this step, there are $n_0\leq n_1$ agents in the first group and $k_0\leq k-1$ agents in the remaining groups who have not taken a good. We run the round-robin algorithm on these $n_0+k_0$ agents, starting with the $k_0$ agents who do not belong to the first group. Consider one of the $n_0$ agents in the first group. Since no good allocated by the round-robin algorithm is worth at least $\alpha$ to her, she has envy at most $\alpha$ toward each of the $k_0$ agents. Assume for contradiction that the bundle allocated to the first group is worth less than $\alpha$ to her. Then she values the bundle of each of the $k_0$ agents at most $2\alpha$. Hence her utility for the whole set of goods is less than $\alpha+2k_0\alpha=\frac{2k_0+1}{n_1+2k-3}\leq\frac{2k-1}{2k-1}=1$, a contradiction. Consider now one of the $k_0$ agents in the remaining groups. She has utility at least $1-(n_1-n_0)\alpha$ for the set of goods allocated by the round-robin algorithm. With respect to the bundles allocated by the round-robin algorithm, she has no envy toward herself or any of the $n_0$ agents in the first group, and she has envy at most $\alpha$ toward the remaining $k_0-1$ agents. Summing up the corresponding inequalities, averaging, and using the fact that her utility for all bundles combined is at least $1-(n_1-n_0)\alpha$, we find that her utility for her own bundle is at least $\frac{1-(n_1-n_0)\alpha-(k_0-1)\alpha}{n_0+k_0}$. It suffices to show that this is at least $\alpha$. The inequality is equivalent to $\alpha(2k_0+n_1-1)\leq 1$, which holds since $k_0\leq k-1$. \qed \end{proof} \subsection{Negative Result} We next show that when all groups contain at least two agents and one group contains at least five agents, no approximation is possible. \begin{theorem} \label{thm:manytwos} Let $n_1\geq 4$ if $k$ is even and $n_1\geq 5$ if $k$ is odd, and $n_2=n_3=\dots=n_k=2$. Then there exists an instance in which some agent with nonzero maximin share necessarily receives zero utility. \end{theorem} \begin{proof} Let $n_1=4$ if $k$ is even and $5$ if $k$ is odd, and suppose that there are $2k$ goods. In each of the groups $2,3,\dots,k$, one agent has utility 1 for the first $k$ goods and 0 for the last $k$, while the other agent has utility 0 for the first $k$ goods and 1 for the last $k$. Hence all of these agents have a maximin share of 1. To ensure that they all get nonzero utility, each group must receive one of the first $k$ and one of the last $k$ goods. This leaves at most one good from the first $k$ and at most one from the last $k$ for the first group. First, consider the case $k$ even.
Let the utilities of the agents in the first group be given by \begin{itemize} \item $\textbf{u}_{11}=(\overbrace{1,1,\dots,1}^{k/2},\overbrace{0,0,\dots,0}^{k/2},\overbrace{1,1,\dots,1}^{k/2},\overbrace{0,0,\dots,0}^{k/2})$; \item $\textbf{u}_{12}=(\overbrace{1,1,\dots,1}^{k/2},\overbrace{0,0,\dots,0}^{k/2},\overbrace{0,0,\dots,0}^{k/2},\overbrace{1,1,\dots,1}^{k/2})$; \item $\textbf{u}_{13}=(\overbrace{0,0,\dots,0}^{k/2},\overbrace{1,1,\dots,1}^{k/2},\overbrace{1,1,\dots,1}^{k/2},\overbrace{0,0,\dots,0}^{k/2})$; \item $\textbf{u}_{14}=(\overbrace{0,0,\dots,0}^{k/2},\overbrace{1,1,\dots,1}^{k/2},\overbrace{0,0,\dots,0}^{k/2},\overbrace{1,1,\dots,1}^{k/2})$. \end{itemize} All four agents have a maximin share of 1, but for any combination of a good from the first $k$ goods and one from the last $k$, some agent obtains a utility of 0. Next, consider the case $k$ odd. Let the utilities of the agents in the first group be given by \begin{itemize} \item $\textbf{u}_{11}=(\overbrace{1,1,\dots,1}^{(k-1)/2},\overbrace{0,0,\dots,0}^{(k+1)/2},\overbrace{1,1,\dots,1}^{(k+1)/2},\overbrace{0,0,\dots,0}^{(k-1)/2})$; \item $\textbf{u}_{12}=(\overbrace{1,1,\dots,1}^{(k+1)/2},\overbrace{0,0,\dots,0}^{(k-1)/2},\overbrace{0,0,\dots,0}^{(k+1)/2},\overbrace{1,1,\dots,1}^{(k-1)/2})$; \item $\textbf{u}_{13}=(\overbrace{0,0,\dots,0}^{(k+1)/2},\overbrace{1,1,\dots,1}^{(k-1)/2},\overbrace{0,0,\dots,0}^{(k-1)/2},\overbrace{1,1,\dots,1}^{(k+1)/2})$; \item $\textbf{u}_{14}=(\overbrace{0,0,\dots,0}^{(k-1)/2},\overbrace{1,1,\dots,1}^{(k+1)/2},\overbrace{1,1,\dots,1}^{(k-1)/2},\overbrace{0,0,\dots,0}^{(k+1)/2})$; \item $\textbf{u}_{15}=(\overbrace{1,1,\dots,1}^{(k-1)/2},0,\overbrace{1,1,\dots,1}^{k-1},0,\overbrace{1,1,\dots,1}^{(k-1)/2})$. \end{itemize} All five agents have a maximin share of 1, but as in the previous case, any combination of a good from the first $k$ goods and one from the last $k$ yields no utility to some agent. \qed \end{proof} \section{Conclusion and Future Work} In this paper, we study the problem of approximating the maximin share when we allocate goods to groups of agents. When there are two groups, we characterize the cardinality of the groups for which we can obtain a positive approximation of the maximin share. We also show positive and negative results for approximation when there are several groups. We conclude the paper by listing some future directions. For two groups, closing the gap between the lower and upper bounds of the approximation ratios (Table \ref{table:twogroups}) is a significant problem from a theoretical point of view but perhaps even more so from a practical one. In particular, it would be especially interesting to determine the asymptotic behavior of the best approximation ratio when one group contains a single agent and the number of agents in the other group grows. For the case of several groups, one can ask whether it is in general possible to obtain a positive approximation when some groups contain a single agent while others contain two agents; the techniques that we present in this paper do not seem to extend easily to this case. Another question is to determine whether the dependence on the number of groups in the approximation ratio (Theorem \ref{thm:manyones}) is necessary. One could also address the issue of truthfulness or add constraints on the allocation, for example by requiring that the allocation form a contiguous block on a line, as has been done for the traditional fair division setting \cite{AmanatidisBiMa16,BouveretCeEl17,Suksompong17}.
In light of the fact that the positive results in this paper only hold for groups with a small number of agents, a natural question is whether we can relax the fairness notion in order to allow for more positive results. For example, one could consider only requiring that a certain fraction of agents in each group, instead of all of them, think that the allocation is fair. Indeed, for two groups with any number of agents, there exists an allocation that yields at least half of the maximin share to at least half of the agents in each group \cite{SegalhaleviSu17}. Alternatively, if we use envy-freeness as the fairness notion, then a possible relaxation is to require envy-freeness only up to some number of goods, where the number of goods could depend on the number of agents in each group. From a broader point of view, an intriguing future direction is to explore other fairness notions such as envy-freeness and proportionality in our setting where goods are allocated to groups of agents. Even though an allocation satisfying these notions does not necessarily exist, and indeed even an approximation cannot be guaranteed, we can still strive for an algorithm that produces such an allocation whenever one exists. This has been obtained for envy-freeness for groups with single agents, for example by the undercut procedure \cite{BramsKiKl12}. Such an algorithm for our generalized setting would be both interesting and important in our opinion. \subsubsection*{Acknowledgments.} The author thanks the anonymous reviewers for their helpful feedback and acknowledges support from a Stanford Graduate Fellowship.
\section{Introduction} Classification, also known as ``supervised learning'', is a fundamental building block of statistical machine learning. Applications of statistical classification methods include, for example, cancer diagnosis \citep{tibshirani2002diagnosis}, text categorization \citep{joachims1998text}, computer vision \citep{phillips1998support}, protein interaction predictions \citep{chowdhary2009bayesian}, etc. Well-known classification methods include logistic regression, naive Bayes classifier, K-nearest-neighbors, support vector machines \citep{boser1992training}, and random forests \citep{breiman2001random}. As important players in this field, linear and quadratic discriminant analysis (LDA and QDA) \citep{anderson1958introduction} are widely used. Compared with LDA, QDA is able to exploit interaction effects of predictors. With rapid technical advances in data collection, it has become common that the number of predictors is much larger than the number of observations, which is also known as the ``large $p$ small $n$'' problem. For example, in gene expression microarray analysis, usually $n$ is in the hundreds of samples, whereas $p$ is in the thousands of genes \citep{efron2010large}. In a typical genome-wide association study, $n$ is on the order of a few thousand subjects, and $p$ ranges from several thousand to millions of SNP markers \citep{waldmann2013evaluation}. Vanilla LDA or QDA are infeasible when $p>n$ since the sample covariance matrices are consequently singular. Even in low-dimensional scenarios, including many irrelevant predictors can significantly impair the classification accuracy. A number of variable selection methods have been developed for high-dimensional classification problems, of which many focused on imposing regularizations on the LDA model. For example, \cite{witten2011penalized} proposed to use fused Lasso to penalize discriminant vectors in Fisher's discriminant problem. \cite{cai2011direct} proposed to estimate the product of the precision matrix and the difference between the two mean vectors directly through a constrained $L_1$ minimization. \cite{han2013coda} relaxed the normal assumption of LDA to entertain Gaussian Copula models. More developments on high-dimensional LDA can be found in \cite{guo2007regularized}, \cite{fan2008high}, \cite{clemmensen2011sparse}, \cite{shao2011sparse}, \cite{mai2012direct} and \cite{fan2013optimal}. The aforementioned methods work for LDA models with only linear main effects. In many applications, however, interaction effects may be significant and scientifically interesting. On the other hand, in moderate to high dimensional situations, including in the model too many noise variables and their interaction terms can lead to an over-fitting problem more severe than that of linear discriminant models, resulting in a much impaired prediction accuracy. In recent years, there has been a significant surge of interest in detecting interaction effects for regression or classification problems \citep{simon2012permutation,bien2013lasso,jiang2014variable,fan2015innovated}, which both improves the classification accuracy and is of scientific interest. In this article, we use the term ``interaction'' to refer to all second-order effects, including both two-way interactions $X_{i}X_{j}$ with $i\neq j$ and quadratic terms $X_{i}^{2}$. To motivate later developments, we consider a two-class Gaussian classification problem with both linear and interaction effects and 3 true predictors.
The oracle Bayes rule is to classify an observation into class 1 if $Q\left(\mathbf{X}\right)>0$, and into class 0 otherwise, where \begin{equation} Q\left(\mathbf{X}\right)=1.627+X_{1}-0.6X_{1}^{2}-0.6X_{3}^{2}-0.7X_{1}X_{2}-0.7X_{2}X_{3}.\label{eq:toy} \end{equation} We simulated 100 independent datasets, each having $100$ observations in every class. Figure \ref{fig:sim_toy} shows the scatterplot of $\left(X_{1},X_{2}\right)$ for one simulated dataset. For each simulated dataset, we applied LDA, logistic regression, and QDA to train classifiers, and the classification accuracy was estimated by using $1000$ additional testing samples generated from the oracle model. As shown in Table \ref{tab:lda_vs_qda}, both LDA and logistic regression with only linear terms had poor prediction powers, whereas QDA improved the classification accuracy dramatically. We further tested the classification accuracy of QDA when $k$ additional noise predictors were included ($k=1, \ldots, 50$), each being drawn independently from $\mathcal{N}\left(0,1\right)$. Figure \ref{fig:sim_toy} shows that the classification error rate of QDA increased dramatically as the number of noise predictors increased, demonstrating the necessity of developing methods capable of selecting both main effect and interaction terms efficiently. \begin{figure}[h] \noindent \begin{centering} \hspace{-5pt}\includegraphics[viewport=0 15 324 310,scale=0.68]{toy2}$\;\;\;$\includegraphics[viewport=0bp 15bp 324bp 310bp,scale=0.68]{QDA_errs} \par\end{centering} \caption{A two-class Gaussian classification problem, where Class 1 samples were drawn from $N(\boldsymbol{\mu}_{1}, \boldsymbol{\Omega}_1^{-1})$, and Class 2 from $N(\boldsymbol{\mu}_{2}, \boldsymbol{\Omega}_{2}^{-1})$. We set $\boldsymbol{\mu}_{1}=-\boldsymbol{\mu}_{2}=\left(0.5,0,0\right)$, $\boldsymbol{\Omega}_{1}=\mathbf{I}_{3}-\boldsymbol{\Omega}$, and $\boldsymbol{\Omega}_{2}=\mathbf{I}_{3}+\boldsymbol{\Omega}$, where $\boldsymbol{\Omega}$ has entries $\omega_{22}=1$, $\omega_{11}=\omega_{33}=-0.60$, $\omega_{12}=\omega_{23}=-0.35$, and $\omega_{13}=0$. {\bf Left}: Scatterplot of $\left(X_{1},X_{2}\right)$ overlaid with corresponding theoretical contours for one simulated dataset. {\bf Right}: QDA classification error rate versus number of noise predictors. \label{fig:sim_toy}} \end{figure} \begin{table}[h] \label{tab:lda_vs_qda} \vspace{5pt} \begin{centering} \begin{tabular}{ccccc} \hline \noalign{\vskip4bp} Method & LDA & Logistic regression & QDA & QDA with 50 noise predictors\tabularnewline[\doublerulesep] \noalign{\vskip\doublerulesep} Test error \% & $34.81$ $(1.47)$ & $34.88$ $(1.38)$ & $15.65$ $(0.84)$ & $37.33$ $(1.78)$\tabularnewline[4bp] \hline \noalign{\vskip\doublerulesep} \end{tabular} \par\end{centering} \caption{Means (standard deviations) of testing error rates for different classification methods over $100$ replications. }\end{table} However, a direct application of Lasso on logistic regression with all second-order terms is prohibitive for moderately large $p$ (e.g., $p\ge1000$). To cope with this difficulty, \cite{fan2015innovated} proposed innovated interaction screening (IIS) based on transforming the original $p$-dimensional predictor vector by multiplying it by the estimated precision matrix for each class. IIS first reduces the number of predictors to a smaller order of $p$, and then identifies both important main effects and interactions using the elastic net penalty \citep{zou2005regularization}.
The performance of the resulting method, IIS-SQDA, relies heavily on the estimation of the $p\times p$ dimensional precision matrix, which is usually a hard problem under high-dimensional settings. \cite{murphy2010variable}, \cite{zhang2011bic}, and \cite{maugis2011variable} proposed stepwise procedures for QDA variable selection. These methods were shown to be consistent under the multivariate Gaussian assumption on the design matrix. In practice, however, the performances of these methods can be much compromised when the normality assumption is violated, especially when predictors follow heavier-tailed distributions or when they are correlated in non-linear manners (see Section \ref{sec:simstudy}). In order to gain robustness and computational efficiency, we propose the method \underline{S}tepwise c\underline{O}nditional likelihood variable selection for \underline{D}iscriminant \underline{A}nalysis (SODA) under the logistic regression framework, which starts with a forward stepwise selection procedure to add in predictors with main and/or interaction effects so as to reduce the number of candidate predictors to a smaller order of $n$, and finishes with a backward stepwise elimination procedure for further narrowing down individual main and interaction effects. The criterion used for both forward addition and backward elimination is the extended Bayesian information criterion (EBIC) \citep{chen2008extended}. Although stepwise variable selection methods have been widely known and used for regression problems, stepwise selection of interaction terms has been rare. Available methods typically consider adding interaction terms only among those predictors that have been selected for their main effects. In comparison, in each forward addition step, SODA evaluates the overall contribution of a predictor including both its main effects and its interactions with selected predictors. Under some regularity conditions, we establish the screening consistency of the forward step and the individual term selection consistency of the backward step of SODA under high-dimensional settings. An interesting and powerful extension of SODA is for variable selection in multiple index models \citep{li1991sliced,cook2007fisher,jiang2014variable}, which assume that the response $Y$ (which may be either discrete or continuous) depends on a subspace of $\mathbf{X}$ through an unknown (nonlinear) link function. The most popular method for estimating the subspace is the sliced inverse regression (SIR) method \citep{li1991sliced}. We note that after slicing (discretizing) the response variable $y$, we can apply SODA effectively for variable selection and model fitting. We call this extension the \underline{S}liced \underline{SODA} (S-SODA). Compared with variable selection methods based on SIR (see \cite{jiang2014variable} for references), S-SODA does not require the linearity condition and enjoys much improved robustness without much sacrifice in sensitivity. The rest of the article is organized as follows. SODA and S-SODA are presented in full detail in Section 2. Theoretical properties of SODA are studied in Section 3. Simulation results are shown in Section 4 to compare performances of SODA and S-SODA with those of other methods. In Section 5 we further apply SODA to a couple of real examples to evaluate its empirical performances, and in Section 6 we conclude the article with a short discussion. Detailed theoretical proofs and additional empirical results are provided in the Supplemental Materials.
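Before moving on to the model, we note that the deterioration of QDA seen in the right panel of Figure \ref{fig:sim_toy} is straightforward to reproduce in outline. The sketch below uses Python with \texttt{numpy} and \texttt{scikit-learn}; the class means and covariances are simplified stand-ins chosen for illustration, not the exact design of Figure \ref{fig:sim_toy}.

\begin{verbatim}
import numpy as np
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis

rng = np.random.default_rng(0)

def qda_test_error(n_noise, n_train=200, n_test=1000):
    # Two-class Gaussian design with 3 informative predictors;
    # n_noise independent N(0,1) noise predictors are appended.
    # (Covariances are simplified stand-ins, not the Figure 1 design.)
    cov1 = np.array([[1.0, 0.3, 0.0], [0.3, 1.0, 0.3], [0.0, 0.3, 1.0]])
    cov2 = np.array([[1.5, -0.3, 0.0], [-0.3, 0.8, -0.3], [0.0, -0.3, 1.5]])
    mu1, mu2 = np.array([0.5, 0.0, 0.0]), np.array([-0.5, 0.0, 0.0])
    def draw(n):
        half = n // 2
        X = np.vstack([rng.multivariate_normal(mu1, cov1, half),
                       rng.multivariate_normal(mu2, cov2, half)])
        y = np.repeat([0, 1], half)
        noise = rng.standard_normal((n, n_noise))
        return np.hstack([X, noise]), y
    X_train, y_train = draw(n_train)
    X_test, y_test = draw(n_test)
    clf = QuadraticDiscriminantAnalysis().fit(X_train, y_train)
    return 1.0 - clf.score(X_test, y_test)

for n_noise in (0, 10, 30, 50):
    print(n_noise, round(qda_test_error(n_noise), 3))
\end{verbatim}

As the number of appended noise predictors grows, the per-class covariance estimates degrade and the test error climbs, mirroring the trend in the right panel of Figure \ref{fig:sim_toy}.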
\section{Variable and Interaction Selection for Discrete Response Models} \subsection{Quadratic logistic regression model and its extended BIC} We consider the $K$-class classification problem. Let $Y\in\left\{ 1,\dots,K\right\} $ denote the class label, let $\mathbf{X}=\left(X_{1},X_{2},\dots,X_{p}\right)^{T}$ be a vector of $p$ predictors, and let $\left\{ \left(\mathbf{x}_{i},y_{i}\right):\,i=1,\dots,n\right\}$ denote $n$ independent observations on $\left(\mathbf{X},Y\right)$. When $p$ is large, usually only a small proportion of the predictors have predictive power on $Y$. Let $\mathcal{P}$ denote the set of relevant predictors, and let $\mathcal{P}^{c}=\left\{ 1,\dots,p\right\} \backslash\mathcal{P}$ denote the set of noise predictors. That is, \[ P\left(Y\mid\mathbf{X}_{\mathcal{P}},\mathbf{X}_{\mathcal{P}^{c}}\right)=P\left(Y\mid\mathbf{X}_{\mathcal{P}}\right). \] We consider the following logistic model: \begin{eqnarray} p\left(Y=k\mid\mathbf{X},\boldsymbol{\theta}\right) & = & \frac{\exp\left[\delta_{k}\left(\mathbf{X}\mid\boldsymbol{\theta}\right)\right]}{1+\sum_{l=1}^{K-1}\exp\left[\delta_{l}\left(\mathbf{X}\mid\boldsymbol{\theta}\right)\right]},\quad k=1,\dots,K,\label{eq:main_model} \end{eqnarray} where $\delta_{k}\left(\mathbf{X}\mid\boldsymbol{\theta}\right)$ is the discriminant function for class $k$ and $\boldsymbol{\theta}$ denotes the vector of parameters. Choosing class $K$ as the baseline class so that $\delta_{K}\left(\mathbf{X}\mid\boldsymbol{\theta}\right)=0$, we assume that \begin{equation} \delta_{k}\left(\mathbf{X}\mid\boldsymbol{\theta}\right)=\alpha_{k}+\boldsymbol{\beta}_{k}^{T}\mathbf{X}+\mathbf{X}^{T}\mathbf{A}_{k}\mathbf{X},\quad\mbox{for }k=1,\dots,K-1. \label{eq:quadratic} \end{equation} Since $\mathbf{X}$ is conditioned on, we do not need to model the distribution of $\mathbf{X}_{\mathcal{P}}$ or $\mathbf{X}_{\mathcal{P}^{c}}$, which is both convenient and robust for variable selection. Special cases of this model include: \begin{itemize} \item \vspace{1pt} Multinomial logistic regression (with $\mathbf{A}_{k}=\mathbf{0}$ for all $k$) \item \vspace{-2pt}Linear/quadratic discriminant analysis, where $p\left(\mathbf{X}_{\mathcal{P}}\mid Y\right)$ follows a multivariate normal distribution \item \vspace{-2pt}Discriminant analyses where $p\left(\mathbf{X}_{\mathcal{P}}\mid Y\right)$ is in the multivariate exponential family, \[ p\left(\mathbf{X}_{\mathcal{P}}=\mathbf{x}\mid Y=k,\boldsymbol{\eta}\right)=h\left(\mathbf{x}\right)g\left(\boldsymbol{\eta}_{k}\right)\exp\left(\boldsymbol{\eta}_{k}^{T}\mathbf{x}\right). \] \end{itemize} To see the connection between QDA and model (\ref{eq:main_model}), note that for QDA models, \begin{eqnarray*} \alpha_{k} & = & \log\left(\pi_{k}/\pi_{K}\right)-\frac{1}{2}\left(\log\left|\boldsymbol{\Sigma}_{k}\right|-\log\left|\boldsymbol{\Sigma}_{K}\right|+\boldsymbol{\mu}_{k}^{T}\boldsymbol{\Sigma}_{k}^{-1}\boldsymbol{\mu}_{k}-\boldsymbol{\mu}_{K}^{T}\boldsymbol{\Sigma}_{K}^{-1}\boldsymbol{\mu}_{K}\right),\\ \boldsymbol{\beta}_{k}^{T} & = & \boldsymbol{\mu}_{k}^{T}\boldsymbol{\Sigma}_{k}^{-1}-\boldsymbol{\mu}_{K}^{T}\boldsymbol{\Sigma}_{K}^{-1},\\ \mathbf{A}_{k} & = & -\frac{1}{2}\left(\boldsymbol{\Sigma}_{k}^{-1}-\boldsymbol{\Sigma}_{K}^{-1}\right),\quad\mbox{for }k=1,\dots,K-1. \end{eqnarray*} Let $\mathcal{M}$ and $\mathcal{I}$ denote subsets of main effects and interaction pairs, respectively, and let $\mathcal{M}_{0}$ and $\mathcal{I}_{0}$ denote the corresponding true sets defined as \[ \mathcal{M}_{0}=\left\{ j:\,\exists\,k\,\,\mbox{ s.t.
}\,\,\beta_{k,j}\neq0\right\} \quad\mbox{and}\quad\mathcal{I}_{0}=\left\{ \left(i,j\right):\,\exists k\,\,\mbox{ s.t. }\,A_{k,i,j}\neq0\right\} , \] with $k$ indicating the class label. Let $\mathcal{A}=\mathcal{M}_{0}\cup\mathcal{I}_{0}$ denote the true set of all effects, and let $\mathcal{S}=\mathcal{M}\cup\mathcal{I}$. The true set of relevant predictors $\mathcal{P}$ can be derived from $\mathcal{A}$ as \[ \mathcal{P}=\mathcal{M}_{0}\cup\left\{ j:\,\exists\,i\:\:\mbox{s.t.}\,\,\left(i,j\right)\in\mathcal{I}_{0}\right\} . \] Our main objective is to infer $\mathcal{A}$, with a special interest in terms in $\mathcal{I}_{0}$. Let $\boldsymbol{\theta}_{\mathcal{S}}$ denote the collection of all coefficients in model (\ref{eq:quadratic}), whose zero entries correspond to terms not in $\mathcal{S}$, and let $\boldsymbol{\theta}_{k,\mathcal{S}}$ denote the corresponding coefficients for class $k$. For a dataset $\left\{ \left(\mathbf{x}_{i},y_{i}\right):\,i=1,\dots,n\right\} $, the log-likelihood for $\boldsymbol{\theta}_{\mathcal{S}}$ is denoted as $l_{n}\left(\boldsymbol{\theta}_{\mathcal{S}}\right)$. Let $\mathbf{Z} \equiv (1,\mathbf{X}, \mathbf{X}\otimes \mathbf{X})$ be the augmented version of $\mathbf{X}$, containing the intercept $1$, main effects, and all interaction terms of $\mathbf{X}$. Let $\mathbf{z}_{i}$ be the $i$-th observation of $\mathbf{Z}$. Then $l_{n}\left(\boldsymbol{\theta}_{\mathcal{S}}\right)$ takes the form of a logistic regression model in $\mathbf{Z}$: \[ l_{n}\left(\boldsymbol{\theta}_{\mathcal{S}}\right)\:=\:\sum_{i=1}^{n}\left\{ \boldsymbol{\theta}_{y_{i},\mathcal{S}}^{T}\mathbf{z}_{i}-\log\left(1+\sum_{l=1}^{K-1}\exp\left(\boldsymbol{\theta}_{l,\mathcal{S}}^{T}\mathbf{z}_{i}\right)\right)\right\} . \] Let $\tilde{\boldsymbol{\theta}}_{\mathcal{S}}$ denote the MLE of $\boldsymbol{\theta}_{\mathcal{S}}$. By Lemma 2 in the appendix, with high probability $l_{n}\left(\boldsymbol{\theta}_{\mathcal{S}}\right)$ is strictly concave, so $\tilde{\boldsymbol{\theta}}_{\mathcal{S}}$ can be obtained by the Newton-Raphson algorithm. Let $\boldsymbol{\theta}_{0}$ denote the true parameter vector. The following theorem establishes the consistency of $\tilde{\boldsymbol{\theta}}_{\mathcal{S}}$ for any reasonable set $\mathcal{S}$. \begin{thm} \textbf{\emph{\label{theorem:MCLE_asymptotics}}}Under Conditions C1 $\sim$ C4 in Section \ref{sec:theory}, as $n\to\infty$, \begin{equation} \max_{\mathcal{S}\supset\mathcal{A},\,\left|\mathcal{S}\right|\le Q}\left\Vert \tilde{\boldsymbol{\theta}}_{\mathcal{S}}-\boldsymbol{\theta}_{0}\right\Vert _{2}=O_{p}\left(n^{-1/2+\xi}\right), \end{equation} for any constants $0<\xi<1/2$ and $Q\ge\left|\mathcal{A}\right|$ independent of $n$. \end{thm} In high-dimensional settings, the classic Bayesian information criterion (BIC) \citep{schwarz1978estimating} is too liberal and tends to select many false positives \citep{broman2002model}. \cite{chen2008extended} proposed the extended BIC (EBIC) and showed it to be consistent for linear regression models under high-dimensional settings. The EBIC for set $\mathcal{S}$ is specified as \begin{equation} \text{EBIC}_{\gamma}\left(\mathcal{S}\right)\,=\,-2\,l_{n}\left(\tilde{\boldsymbol{\theta}}_{\mathcal{S}}\right)+\left|\mathcal{S}\right|\log n+2\gamma\,\left|\mathcal{S}\right|\log p,\label{eq:EBIC} \end{equation} where $\left|\mathcal{S}\right|$ is the size of set $\mathcal{S}$, and $\gamma$ is a tuning parameter.
The selection of $\gamma$ may depend on the relative sizes of $n$ and $p$, and practical heuristics for determining $\gamma$ are discussed in Section \ref{sub:impl}. Let $\tilde{\mathcal{S}}_{\text{EBIC}}$ be the set of terms minimizing the EBIC, and let $Q$ be any positive constant greater than the constant $p_{0}$ in Condition (C1) in Section \ref{sec:theory}. Then, \begin{equation} \tilde{\mathcal{S}}_{\text{EBIC}}\,=\,\underset{\mathcal{S}:\:\left|\mathcal{S}\right|\,\leq\,Q}{\arg\min}\;\text{EBIC}_{\gamma}\left(\mathcal{S}\right).\label{eq:S_hat} \end{equation} The asymptotic property of $\tilde{\mathcal{S}}_{\text{EBIC}}$ is established in the following theorem. \begin{thm} \textbf{\emph{(EBIC criterion consistency)}}\label{theorem:EBIC} Under Conditions C1 $\sim$ C4 in Section \ref{sec:theory}, $\tilde{\mathcal{S}}_{\text{EBIC}}$ is a consistent estimator of $\mathcal{A}$, i.e., \[ \text{Pr}\left(\tilde{\mathcal{S}}_{\text{EBIC}}=\mathcal{A}\right)\to1,\mbox{ as }n\to\infty, \] for any $\gamma>2-1/\left(2\kappa\right)$. \end{thm} By treating our model as a logistic regression on $\left(\mathbf{Z},Y\right)$, Theorem \ref{theorem:EBIC} follows directly from the asymptotic consistency of EBIC for generalized linear models (GLM), which was proved in \cite{chen2012extended} and \cite{foygel2011bayesian} in both fixed and random design contexts. We thus omit its proof. Different from \cite{chen2012extended} and \cite{foygel2011bayesian}, here we require $\gamma>2-1/\left(2\kappa\right)$ instead of $\gamma>1-1/\left(2\kappa\right)$ to penalize the additional model flexibility caused by the inclusion of interaction terms. \subsection{SODA: a stepwise variable and interaction selection procedure} In practice, it is infeasible to enumerate all possible $\mathcal{S}$ to find the one that minimizes the EBIC. For a closely related generalized linear model variable selection problem, \cite{chen2012extended} and \cite{foygel2011bayesian} used the Lasso \citep{tibshirani1996regression} to obtain a solution path of predictor sets, and chose the set on the path with the lowest EBIC. However, this approach fails under the high-dimensional setting for QDA, in which there are $O\left(p^{2}\right)$ candidate interaction terms. Furthermore, the Lasso's variable selection consistency for logistic regression requires the incoherence condition \citep{ravikumar2010high}, which can be easily violated due to correlations between interaction terms and their corresponding main effect terms. The IIS procedure proposed in \cite{fan2015innovated} requires the estimation of the $p\times p$ precision matrix, which is by itself a challenging problem. If the related and unrelated predictors are moderately correlated, IIS's marginal screening strategy has difficulty filtering out noise predictors. We propose here the stepwise procedure SODA, consisting of three stages: (1) preliminary forward main-effect selection; (2) forward variable selection, considering both main and interaction effects; and (3) backward elimination. \begin{enumerate} \item \textbf{Preliminary main effect selection}: This step is the same as that in the standard stepwise regression method. Let $\mathcal{M}_{t}$ denote the selected set of main effects at step $t$. SODA starts with $\mathcal{M}_{1}=\emptyset$ and iterates the operations below until termination.
\begin{enumerate} \item \vspace{-5pt}For each predictor $j\notin\mathcal{M}_{t}$, create a new candidate set $\mathcal{M}_{t,j}=\mathcal{M}_{t}\cup\left\{ j\right\} $. \item \vspace{2pt}Find the predictor $j$ with the lowest $\mbox{EBIC}_{\gamma}\left(\mathcal{M}_{t,j}\right)$. If $\mbox{EBIC}_{\gamma}\left(\mathcal{M}_{t,j}\right)<\mbox{EBIC}_{\gamma}\left(\mathcal{M}_{t}\right)$, continue with $\mathcal{M}_{t+1}=\mathcal{M}_{t,j}$; otherwise terminate with $\tilde{\mathcal{M}}_{F}=\mathcal{M}_{t}$ and go to Stage 2. \end{enumerate} \item \textbf{Forward variable addition (both main and interaction effects):} Let $\mathcal{C}_{t}$ denote the selected set of predictors at step $t$, and let $\mathcal{S}_{t}=\tilde{\mathcal{M}}_{F}\cup\mathcal{C}_{t}\cup\left(\mathcal{C}_{t}\times\mathcal{C}_{t}\right)$ denote the set of terms induced by $\mathcal{C}_{t}$. SODA starts with $\mathcal{C}_{1}=\emptyset$ and iterates the operations below until termination. \begin{enumerate} \item \vspace{-5pt}For each $j\notin\mathcal{C}_{t}$, create a candidate set $\mathcal{C}_{t,j}=\mathcal{C}_{t}\cup\left\{ j\right\} $ and let $\mathcal{S}_{t,j}=\tilde{\mathcal{M}}_{F}\cup\mathcal{C}_{t,j}\cup\left(\mathcal{C}_{t,j}\times\mathcal{C}_{t,j}\right)$. \item \vspace{2pt}Find the predictor $j$ with the lowest $\mbox{EBIC}_{\gamma}\left(\mathcal{S}_{t,j}\right)$. If $\mbox{EBIC}_{\gamma}\left(\mathcal{S}_{t,j}\right)<\mbox{EBIC}_{\gamma}\left(\mathcal{S}_{t}\right)$, continue with $\mathcal{C}_{t+1}=\mathcal{C}_{t,j}$; otherwise terminate with $\tilde{\mathcal{C}}_{F}=\mathcal{C}_{t}$ and go to Stage 3. \end{enumerate} \item \textbf{Backward elimination}: Let $\mathcal{S}_{t}$ denote the selected set of individual terms at step $t$ of the backward stage. SODA starts with $\mathcal{S}_{1}=\tilde{\mathcal{M}}_{F}\cup\tilde{\mathcal{C}}_{F}\cup\left(\tilde{\mathcal{C}}_{F}\times\tilde{\mathcal{C}}_{F}\right)$ and iterates the operations below until termination. \begin{enumerate} \item \vspace{-5pt}For each main or interaction term $j\in\mathcal{S}_{t}$ (e.g., $j=1$ or $j=\left(1,2\right)$), create a candidate set $\mathcal{S}_{t,j}=\mathcal{S}_{t}\backslash\left\{ j\right\} $. \item \vspace{2pt}Find the term $j$ with the lowest $\mbox{EBIC}_{\gamma}\left(\mathcal{S}_{t,j}\right)$. If $\mbox{EBIC}_{\gamma}\left(\mathcal{S}_{t,j}\right)<\mbox{EBIC}_{\gamma}\left(\mathcal{S}_{t}\right)$, remove term $j$, i.e., continue with $\mathcal{S}_{t+1}=\mathcal{S}_{t,j}$; otherwise terminate and retain the set $\tilde{\mathcal{S}}=\mathcal{S}_{t}$. \end{enumerate} \end{enumerate} Stepwise methods were primary tools for variable selection in regression problems long before the recent development of Lasso-type methods. The forward stepwise procedure has also been considered for variable screening for linear regressions in high-dimensional settings \citep{wasserman2009high,wang2009forward}. When considering interactions, a standard approach typically examines only interactions among the variables that have been deemed significant for their main effects. However, Stage 2 of SODA for forward variable addition is different. After the preliminary selection of Stage 1, in Stage 2 SODA keeps track of a new set of variables ${\cal C}_t$, whose main and quadratic terms are all considered together. In other words, at each step SODA evaluates the EBIC for the overall effect of adding one predictor. Thus, choosing one variable to add in the forward variable selection stage is of order $O(p)$, and the whole stage is of order $O(ps)$, where $s$ is the number of truly relevant predictors.
In contrast, a naive method that searches through all individual terms is of order $O\left(p^{2} s^2 \right)$. Another important feature of SODA is that each backward step eliminates only one individual term instead of all terms related to one predictor. In other words, SODA selects individual main and interaction effect terms without any nesting requirements. Our theory shows that the forward variable addition stage is sufficient for SODA to achieve screening consistency. However, the number of parameters and the EBIC penalization in this forward stage increase quadratically with the cardinality of $\mathcal{C}_{t}$, so it can be hard for this stage to add predictors with only weak main effects. To optimize the empirical performance, we therefore include the preliminary main-effect selection stage, in which adding a predictor introduces only one term, to identify predictors with weak main effects. \subsection{Sliced SODA (S-SODA) for general index models} In his seminal work on nonlinear dimension reduction, \cite{li1991sliced} proposed the semi-parametric index model \begin{equation} Y=f\left(\boldsymbol{\beta}_{1}^{T}\mathbf{X},\boldsymbol{\beta}_{2}^{T}\mathbf{X},\dots,\boldsymbol{\beta}_{d}^{T}\mathbf{X},\varepsilon\right),\label{eq:gim} \end{equation} where $f$ is an unknown link function and $\varepsilon$ is a random error independent of $\mathbf{X}$, together with the sliced inverse regression (SIR) method for estimating the central dimension reduction subspace (CDRS) spanned by the directions $\boldsymbol{\beta}_{1},\dots,\boldsymbol{\beta}_{d}$. Since the estimation of the CDRS does not automatically lead to variable selection, several methods have been developed to perform simultaneous dimension reduction and variable selection for index models. For example, \cite{li2005model} designed a backward subset selection method, and \cite{li2007sparse} developed the sparse SIR (SSIR) algorithm to obtain shrinkage estimates of the CDRS directions under the $L_{1}$ norm. Motivated by stepwise regression for linear models, \cite{zhong2012correlation} proposed a forward stepwise variable selection procedure called correlation pursuit (COP) for index models. \cite{lin2015} showed the necessary and sufficient condition for SIR to be consistent in high-dimensional settings and introduced a diagonal thresholding method, DT-SIR, for variable selection. \cite{lin2016} proposed a new formulation of the SIR estimation and a direct application of the Lasso for variable selection with index models. The aforementioned SIR-based methods consider primarily the information from the first conditional moment, $\mathbb{E}\left(\mathbf{X}\mid Y\right)$, and tend to miss important variables with second-order effects. In order to overcome this problem, \cite{jiang2014variable} proposed SIRI, which utilizes both the first and the second conditional moments to select variables. SIRI derives its procedure from a likelihood-ratio test perspective by assuming the following working model: \begin{equation} \begin{aligned}\mathbf{X}_{\mathcal{P}}\mid s\left(Y\right)=h & \;\:\sim\;\:\mathcal{N}\left(\boldsymbol{\mu}_{h},\boldsymbol{\Sigma}_{h}\right),\\ \mathbf{X}_{\mathcal{P}^{c}}\mid\mathbf{X}_{\mathcal{P}},s\left(Y\right)=h & \;\:\sim\;\:\mathcal{N}\left(\boldsymbol{a}+\boldsymbol{B}^{T}\mathbf{X}_{\mathcal{P}},\boldsymbol{\Sigma}_{0}\right), \end{aligned} \quad h=1,\dots,H,\label{eq:SIRI} \end{equation} where $\mathcal{P}$ denotes the set of true predictors.
\cite{jiang2014variable} showed that SIRI is a consistent variable selection procedure for model (\ref{eq:SIRI}), and also for a more general class of models satisfying the following linearity and constant variance conditions. In fact, all the aforementioned methods require either the linearity condition or the constant variance condition, or both. \vspace{3mm} \noindent \textbf{Linearity condition:} $E\left(\mathbf{X}_{\mathcal{P}^{c}}\mid\mathbf{X}_{\mathcal{P}}\right)$ is linear in $\mathbf{X}_{\mathcal{P}}$. \noindent \textbf{Constant variance condition:} $\text{Cov}\left(\mathbf{X}_{\mathcal{P}^{c}}\mid\mathbf{X}_{\mathcal{P}}\right)$ is a constant. \vspace{3mm} When the linearity and constant variance conditions approximately hold, SIRI and other SIR-related methods usually enjoy excellent empirical performance. However, when either condition is violated, their performance deteriorates rapidly. This issue motivates us to develop the sliced SODA (S-SODA), a modification of SODA. As a working model, S-SODA can be seen as assuming only the first half of model (\ref{eq:SIRI}), without any distributional assumption on $\mathbf{X}_{\mathcal{P}^{c}}$: \begin{equation} \begin{aligned}\mathbf{X}_{\mathcal{P}}\mid s\left(Y\right)=h & \;\:\sim\;\:\mathcal{N}\left(\boldsymbol{\mu}_{h},\boldsymbol{\Sigma}_{h}\right), \end{aligned} \quad h=1,\dots,H.\label{eq:S-SODA_model} \end{equation} Note that model (\ref{eq:S-SODA_model}) is essentially the QDA model, so we can apply SODA as a consistent variable selection procedure. More precisely, S-SODA starts by sorting the samples in ascending order of $y_{i}$ and equally partitioning them into $H$ slices (the discretization step). It then applies SODA to the data $\left\{ \left(s_{i},\mathbf{x}_{i}\right)\right\} _{i=1}^{n}$, where $s_{i}$ denotes the slice index of $y_{i}$. S-SODA finally outputs all the selected main and interaction terms. \subsection{Post-selection prediction for continuous response} S-SODA conducts variable selection for the semi-parametric model (\ref{eq:gim}) without knowing the true functional form of the link function. After variable selection, it is of interest to predict the response $\tilde{y}$ for a new observation $\tilde{\mathbf{x}}$ of the predictors. Suppose our training data consist of $n$ independent observations $\left\{ \left(y_{i},\mathbf{x}_{i}\right)\right\} _{i=1}^{n}$. Let $\tilde{\mathcal{S}}$ denote the set of terms selected by S-SODA, and let $\tilde{\mathcal{P}}$ denote the set of predictors appearing in any term of $\tilde{\mathcal{S}}$, which is the S-SODA estimate of $\mathcal{P}$. Let $\hat{\boldsymbol{\mu}}=\left(\hat{\boldsymbol{\mu}}_{1},\dots,\hat{\boldsymbol{\mu}}_{H}\right)$ and $\hat{\boldsymbol{\Sigma}}=\left(\hat{\boldsymbol{\Sigma}}_{1},\dots,\hat{\boldsymbol{\Sigma}}_{H}\right)$, where $\hat{\boldsymbol{\mu}}_{h}$ and $\hat{\boldsymbol{\Sigma}}_{h}$ are respectively the sample mean vector and covariance matrix of $\mathbf{X}_{\tilde{\mathcal{P}}}$ in slice $h$. Note that $\hat{\boldsymbol{\mu}}$ and $\hat{\boldsymbol{\Sigma}}$ are the MLEs of the parameters in model (\ref{eq:S-SODA_model}). Inverting model (\ref{eq:S-SODA_model}) by the Bayes rule, we have \[ \text{Pr}\left(s\left(Y\right)=h\mid\mathbf{X}_{\tilde{\mathcal{P}}},\boldsymbol{\mu},\boldsymbol{\Sigma}\right)=\frac{N\left(\mathbf{X}_{\tilde{\mathcal{P}}}\mid\boldsymbol{\mu}_{h},\boldsymbol{\Sigma}_{h}\right)}{\sum_{l=1}^{H}N\left(\mathbf{X}_{\tilde{\mathcal{P}}}\mid\boldsymbol{\mu}_{l},\boldsymbol{\Sigma}_{l}\right)},\quad h=1,\dots,H.
\] We consider the conditional expectation $\mathbb{E}\left[Y\mid\mathbf{X}_{\tilde{\mathcal{P}}}\right]$ as the prediction of $Y$ given $\mathbf{X}_{\tilde{\mathcal{P}}}$. Note that \begin{eqnarray*} \mathbb{E}\left[Y\mid\mathbf{X}_{\tilde{\mathcal{P}}}\right] & = & \sum_{h=1}^{H}\mathbb{E}\left[Y\mid s\left(Y\right)=h,\mathbf{X}_{\tilde{\mathcal{P}}}\right]\text{Pr}\left(s\left(Y\right)=h\mid\mathbf{X}_{\tilde{\mathcal{P}}},\boldsymbol{\mu},\boldsymbol{\Sigma}\right)\\ & = & \sum_{h=1}^{H}\frac{\mathbb{E}\left[Y\mid s\left(Y\right)=h,\mathbf{X}_{\tilde{\mathcal{P}}}\right]\cdot N\left(\mathbf{X}_{\tilde{\mathcal{P}}}\mid\boldsymbol{\mu}_{h},\boldsymbol{\Sigma}_{h}\right)}{\sum_{l=1}^{H}N\left(\mathbf{X}_{\tilde{\mathcal{P}}}\mid\boldsymbol{\mu}_{l},\boldsymbol{\Sigma}_{l}\right)}. \end{eqnarray*} We use a plug-in estimator of $\mathbb{E}\left[Y\mid\mathbf{X}_{\tilde{\mathcal{P}}}\right]$, denoted as $\hat{Y}=\hat{\mathbb{E}}\left[Y\mid\mathbf{X}_{\tilde{\mathcal{P}}}\right]$: \begin{equation} \hat{Y}=\hat{\mathbb{E}}\left[Y\mid\mathbf{X}_{\tilde{\mathcal{P}}}\right]=\sum_{h=1}^{H}\frac{\hat{M}_{h}\cdot N\left(\mathbf{X}_{\tilde{\mathcal{P}}}\mid\hat{\boldsymbol{\mu}}_{h},\hat{\boldsymbol{\Sigma}}_{h}\right)}{\sum_{l=1}^{H}N\left(\mathbf{X}_{\tilde{\mathcal{P}}}\mid\hat{\boldsymbol{\mu}}_{l},\hat{\boldsymbol{\Sigma}}_{l}\right)},\label{eq:Y_pred} \end{equation} where $\hat{M}_{h}$ is the sample mean of the response $Y$ in slice $h$. $\hat{M}_{h}$ can be considered as the zeroth-order approximation to $\mathbb{E}\left[Y\mid s\left(Y\right)=h,\mathbf{X}_{\tilde{\mathcal{P}}}\right]$, in the sense that $\hat{M}_{h}$ does not depend on $\mathbf{X}_{\tilde{\mathcal{P}}}$. A more sophisticated alternative is the first-order approximation, which models $\mathbb{E}\left[Y\mid s\left(Y\right)=h,\mathbf{X}_{\tilde{\mathcal{P}}}\right]$ as a linear combination of $\mathbf{X}_{\tilde{\mathcal{P}}}$ in each slice. \subsection{Implementation issues with SODA: tuning parameter and screening depth\label{sub:impl}} Section \ref{sec:theory} characterizes the asymptotic properties of the EBIC and SODA and provides some guidelines for choosing the tuning parameter $\gamma$ of the EBIC. However, these asymptotic results are not directly usable in practice. We therefore propose to use a $10$-fold cross-validation (CV) procedure to select $\gamma$ from $\left\{ 0,0.5,1.0\right\}$. For the simulation studies and real data analyses in Sections \ref{sec:simstudy} and \ref{sec:realdata}, we fixed $\gamma=0.5$ as suggested in \cite{chen2012extended}, which also makes SODA directly comparable with the Lasso-EBIC approach studied there. The forward variable addition stage terminates if the EBICs of all candidate models are larger than the EBIC of the current model; the screening depth of the forward stage is therefore determined by the EBIC. In Theorem \ref{theorem:forward}, we show that this procedure is asymptotically screening consistent; namely, all truly relevant terms will be included by the end of the forward stage. Nevertheless, SODA is not sensitive to adding more terms in the forward stage, since unrelated terms will eventually be eliminated in the backward stage, and missing one relevant term is usually more harmful than including one noise term. Therefore, to optimize the empirical performance, we let SODA continue the forward variable addition for $p_{f}$ more steps after the first step that fails to decrease the EBIC (default $p_{f}=3$).
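The whole procedure is easy to prototype. The sketch below (in Python, for a binary response; a minimal, unoptimized rendering that uses statsmodels' logistic MLE for the likelihood in the EBIC) illustrates the three stages; the helper names \texttt{ebic}, \texttt{induced\_terms}, and \texttt{soda} are our own, and the $p_{f}$ continuation heuristic is omitted for brevity:
\begin{verbatim}
# Minimal sketch of the three SODA stages for a binary response (K = 2).
# A term is an int j (main effect X_j) or a pair (i, j) (interaction).
import itertools
import numpy as np
import statsmodels.api as sm

def ebic(X, y, terms, gamma=0.5):
    """EBIC(S) = -2 l_n + |S| log n + 2 gamma |S| log p; y is 0/1."""
    n, p = X.shape
    cols = [X[:, t] if isinstance(t, int) else X[:, t[0]] * X[:, t[1]]
            for t in terms]
    Z = sm.add_constant(np.column_stack(cols)) if cols else np.ones((n, 1))
    llf = sm.Logit(y, Z).fit(disp=0).llf       # logistic log-likelihood
    return -2 * llf + len(terms) * np.log(n) \
        + 2 * gamma * len(terms) * np.log(p)

def induced_terms(C, M):
    """Terms induced by predictor set C, plus preliminary main effects M."""
    mains = list(dict.fromkeys(list(M) + sorted(C)))
    return mains + list(itertools.combinations_with_replacement(sorted(C), 2))

def soda(X, y, gamma=0.5):
    n, p = X.shape
    M = []                              # Stage 1: main effects only
    while len(M) < p:
        score, j = min((ebic(X, y, M + [j], gamma), j)
                       for j in range(p) if j not in M)
        if score >= ebic(X, y, M, gamma):
            break
        M.append(j)
    C = []                              # Stage 2: add whole predictors
    while len(C) < p:
        score, j = min((ebic(X, y, induced_terms(C + [j], M), gamma), j)
                       for j in range(p) if j not in C)
        if score >= ebic(X, y, induced_terms(C, M), gamma):
            break
        C.append(j)
    S = induced_terms(C, M)             # Stage 3: backward elimination
    while len(S) > 1:
        best = min(([t for t in S if t != drop] for drop in S),
                   key=lambda cand: ebic(X, y, cand, gamma))
        if ebic(X, y, best, gamma) >= ebic(X, y, S, gamma):
            break
        S = best
    return S
\end{verbatim}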
\section{Theoretical Properties of SODA\label{sec:theory}} To study the theoretical properties of the SODA procedure, we assume the following conditions: \begin{enumerate} \item[\textbf{(C1)}] The divergence speed of $p$ is bounded above by $p\le n^{\kappa}$ for some $\kappa>0$, and the size of the true predictor set $\mathcal{P}$ is bounded as $\left|\mathcal{P}\right|\le p_{0}$ for a fixed integer $p_{0}$. \item[\textbf{(C2)}] Magnitudes of the true coefficients in $\boldsymbol{\theta}_{\mathcal{A}}$ are bounded above and below by constants; namely, there exist positive constants $\theta_{\max}>\theta_{\min}>0$ such that \[ \theta_{\min}\le\min\left\{ \left|\theta_{j}\right|:\,j\in\mathcal{A}\right\} \le\max\left\{ \left|\theta_{j}\right|:\,j\in\mathcal{A}\right\} \le\theta_{\max}. \] \item[\textbf{(C3)}] Let $\mathbf{Z}$ be the augmented version of $\mathbf{X}$, containing the intercept $1$ as well as all first-order and second-order terms of $\mathbf{X}$. Each $Z_{j}\in\mathbf{Z}$ is sub-exponential, i.e., there are positive constants $C_{1}$ and $C_{2}$ such that \[ \text{Pr}\left(\left|Z_{j}-\mathbb{E}\left[Z_{j}\right]\right|>t\right)\le C_{1}\exp\left(-C_{2}t\right)\quad\text{ for all }t>0. \] \item[\textbf{(C4)}] Let $\text{Cov}\left(\mathbf{Z}\right)$ denote the covariance matrix of $\mathbf{Z}$. There exist constants $0<\tau_{1}<\tau_{2}<\infty$ such that \[ \tau_{1}\le\lambda_{\min}\left(\text{Cov}\left(\mathbf{Z}\right)\right)<\lambda_{\max}\left(\text{Cov}\left(\mathbf{Z}\right)\right)\le\tau_{2}, \] where $\lambda_{\min}\left(\cdot\right)$ and $\lambda_{\max}\left(\cdot\right)$ denote the smallest and largest eigenvalues of a matrix. \end{enumerate} We show that the forward variable addition stage (Stage 2 of SODA) is already screening consistent. To proceed, we define the following concept to study the stepwise detectability of the true predictors in $\mathcal{P}$. Let $\boldsymbol{\theta}_{\mathcal{S}}^{*}$ denote the population version of the risk minimizer, \[ \boldsymbol{\theta}_{\mathcal{S}}^{*}=\underset{\boldsymbol{\theta}_{\mathcal{S}}}{\arg\min}\,\mathbb{E}\left[-\log p\left(Y\mid\mathbf{X},\boldsymbol{\theta}_{\mathcal{S}}\right)\right], \] where the expectation is over the joint distribution of $\left(Y,\mathbf{X}\right)$, and let $\boldsymbol{\theta}_{\mathcal{S}}^{j*}$ denote the sub-vector of $\boldsymbol{\theta}_{\mathcal{S}}^{*}$ associated with predictor $X_{j}$. The stepwise detectable condition below is necessary for the screening consistency of the forward variable addition stage. \begin{defn} \textbf{(Stepwise detectable condition)} A set of predictors $\mathcal{C}_{1}$ is stepwise detectable given $\mathcal{C}_{2}$ if $\mathcal{C}_{1}\cap\mathcal{C}_{2}=\emptyset$, and for any set $\mathcal{C}$ satisfying $\mathcal{C}\supset\mathcal{C}_{2}$ and $\mathcal{C}\not\supset\mathcal{C}_{1}$, there exist constants $\theta_{\max}>\theta_{\min}>0$ such that \[ \theta_{\min}\le\max_{j\in\mathcal{C}^{c}\cap\mathcal{C}_{1}}\left\Vert \boldsymbol{\theta}_{\mathcal{S}_{\mathcal{C}\cup\left\{ j\right\} }}^{j*}\right\Vert _{\infty}\le\theta_{\max}, \] where $\mathcal{S}_{\mathcal{C}\cup\left\{ j\right\} }=\mathcal{M}^{j}\cup\mathcal{I}^{j}$ with $\mathcal{M}^{j}=\mathcal{C}\cup\left\{ j\right\} $ and $\mathcal{I}^{j}=\mathcal{M}^{j}\times\mathcal{M}^{j}$, and $\left\Vert \cdot\right\Vert _{\infty}$ denotes the $L_{\infty}$ norm. Let $\mathcal{T}_{0}=\emptyset$ and $\mathcal{T}_{m}=\left\{ j:\,\text{predictor }j\mbox{ is stepwise detectable given }\cup_{i=0}^{m-1}\mathcal{T}_{i}\right\} $ for $m\ge1$.
The set of true predictors $\mathcal{P}$ is said to be stepwise detectable if $j\in\cup_{i=1}^{\infty}\mathcal{T}_{i}$ for all $j\in\mathcal{P}$. \end{defn} In other words, if the current selection $\mathcal{C}$ contains $\mathcal{C}_{2}$, then there always exist detectable predictors, conditioning on the currently selected variables, until all the predictors in $\mathcal{C}_{1}$ have been included. A true predictor $j\in\mathcal{P}$ is not stepwise detectable either because it perfectly correlates with some other terms, or because its effects can only be detected conditioning on other stepwise undetectable terms. We give an example to illustrate scenarios in which the stepwise detectable condition may or may not hold. Suppose there are two jointly normal relevant predictors $X_{1}$ and $X_{2}$ with means $\mu_{1}$ and $\mu_{2}$, and there is only one interaction term $X_{1}X_{2}$ in model (\ref{eq:main_model}), i.e., $\mathcal{A}=\left\{ \left(1,2\right)\right\} $. $\mathcal{P}$ is not stepwise detectable if both $\mu_{1}=0$ and $\mu_{2}=0$: starting from the empty set, the forward procedure will not add $X_{1}$ or $X_{2}$ into the model, because neither $X_{1}$ nor $X_{2}$ has a main effect, and the interaction term $X_{1}X_{2}$ is uncorrelated with the marginal terms $X_{1}$ and $X_{2}$ ($\text{Cov}\left(X_{1},X_{1}X_{2}\right)=0$ and $\text{Cov}\left(X_{2},X_{1}X_{2}\right)=0$). However, if either $\mu_{1}\neq0$ or $\mu_{2}\neq0$, then $\mathcal{P}=\left\{ 1,2\right\} $ is stepwise detectable. Let $\tilde{\mathcal{S}}_{F}=\tilde{\mathcal{M}}_{F}\cup\tilde{\mathcal{C}}_{F}\cup\left(\tilde{\mathcal{C}}_{F}\times\tilde{\mathcal{C}}_{F}\right)$ denote the selected set of terms at the end of the forward variable addition stage. It is unrealistic to require $\tilde{\mathcal{S}}_{F}=\mathcal{A}$. However, it should be demanded that $\tilde{\mathcal{S}}_{F}\supseteq\mathcal{A}$, i.e., that $\tilde{\mathcal{S}}_{F}$ contains all relevant terms. We define the forward stage to be screening consistent if $\text{Pr}\left(\tilde{\mathcal{S}}_{F}\supseteq\mathcal{A}\right)\to1$. We also do not want the size of $\tilde{\mathcal{S}}_{F}$ to be too large; otherwise, the forward variable addition loses its purpose. The screening consistency of the forward stage is established by the following theorem. \begin{thm} \emph{\label{theorem:forward}}\textbf{\emph{(Forward stage screening consistency)}}\emph{ }If Conditions C1 $\sim$ C4 hold, and all predictors in $\mathcal{P}$ are stepwise detectable, then the forward variable addition stage finishes in a finite number of steps and is screening consistent. In particular, as $n\to\infty$, \[ \text{Pr}\left(\left|\tilde{\mathcal{C}}_{F}\right|\le Q\right)\to1,\;\,\mbox{ and }\;\,\text{Pr}\left(\tilde{\mathcal{C}}_{F}\supseteq\mathcal{P}\right)\to1, \] where $Q=\left\lceil 8\lambda_{1}^{-1}\theta_{\min}^{-2}\log K\right\rceil $, $\lambda_{1}$ is a positive constant defined in Lemma 2 in the appendix, $K$ is the number of classes, and $\theta_{\min}$ is the positive constant defined in Condition C2. \end{thm} In other words, asymptotically $\tilde{\mathcal{C}}_{F}$ contains all predictors in $\mathcal{P}$, which implies $\tilde{\mathcal{S}}_{F}\supseteq\mathcal{A}$, and the forward stage stops in a finite number of steps. We show in the following theorem two uniform bounds guaranteeing that all unrelated terms will be eliminated and all related terms will be kept in the backward stage. \begin{thm} \label{theorem:backward}\textbf{\emph{(Uniform bounds of EBIC in the backward stage)}}\emph{ }Fix any constant $Q>0$.
Under Conditions C1 $\sim$ C4, as $n\to\infty$,\emph{ } \begin{equation} \text{Pr}\left(\max_{\mathcal{S}\supsetneq\mathcal{A}:\left|\mathcal{S}\right|\le Q}\min_{j\in\mathcal{S}\backslash\mathcal{A}}\left\{ \text{EBIC}_{\gamma}\left(\mathcal{S}\backslash\left\{ j\right\} \right)-\text{EBIC}_{\gamma}\left(\mathcal{S}\right)\right\} <0\right)\to1,\label{eq:backward_1} \end{equation} and \begin{equation} \text{Pr}\left(\min_{\mathcal{S}\supset\mathcal{A}:\left|\mathcal{S}\right|\le Q}\min_{j\in\mathcal{A}}\left\{ \text{EBIC}_{\gamma}\left(\mathcal{S}\backslash\left\{ j\right\} \right)-\text{EBIC}_{\gamma}\left(\mathcal{S}\right)\right\} <0\right)\to0,\label{eq:backward_2} \end{equation} for any constant $\gamma>Q-\left|\mathcal{A}\right|-\left(2\kappa\right)^{-1}$. \end{thm} Equation (\ref{eq:backward_1}) implies that if $\mathcal{S}\supsetneq\mathcal{A}$ and $\left|\mathcal{S}\right|\le Q$, there will be at least one unrelated term $j\in\mathcal{S}\cap\mathcal{A}^{c}$ such that removing $j$ from $\mathcal{S}$ leads to a lower EBIC. Equation (\ref{eq:backward_2}) implies that if $\mathcal{S}\supset\mathcal{A}$ and $\left|\mathcal{S}\right|\le Q$, there is no related term $j\in\mathcal{A}$ such that removing $j$ from $\mathcal{S}$ leads to a lower EBIC. In summary, as $n\to\infty$, with probability tending to one, no related term will be eliminated, and all unrelated terms will be eliminated in the backward stage until $\tilde{\mathcal{S}}=\mathcal{A}$. Theorem \ref{theorem:backward} requires candidate sets to have finite size ($\left|\mathcal{S}\right|\le Q$), which Theorem \ref{theorem:forward} shows to hold asymptotically for the starting set $\tilde{\mathcal{S}}_{F}$ of the backward stage. Hence, combining Theorems \ref{theorem:forward} and \ref{theorem:backward} establishes the model selection consistency of SODA. Proofs of the theorems are given in the online Supplemental Materials. \section{Simulation Results} \label{sec:simstudy} \subsection{Discriminant analysis with interactions} We first evaluate the performance of several methods on main and interaction effect selection under the discriminant analysis framework. Besides SODA, we consider the backward procedure in \cite{zhang2011bic} (denoted as ZW), the forward-backward method in \cite{murphy2010variable} (denoted as MDR), hierNet in \cite{bien2013lasso}, and IIS-SQDA in \cite{fan2015innovated}. Both ZW and MDR require the joint normality of $\mathbf{X}_{\mathcal{P}}$ and $\mathbf{X}_{\mathcal{P}^{c}}$. HierNet is a Lasso-like procedure to detect multiplicative interactions between predictors under hierarchical constraints; for hierNet, we selected the regularization parameter with the lowest CV error. We have also reported in the Supplemental Materials a comparison between SODA and Lasso-logistic for variable selection when the underlying model has only linear main effects, and found that SODA was competitive with Lasso in all cases we tested and outperformed Lasso significantly when the ``incoherence'' \citep{ravikumar2010high} or ``irrepresentable'' \citep{zhao2006model} condition was violated. We first considered four simulation settings in Examples 1.1$\sim$1.4 for the classification example introduced in Section 1 (see Figure \ref{fig:sim_toy} for more details), and then examined two more simulation scenarios (Examples 1.5 and 1.6) in which the interaction effects and main effects come from different predictors.
For Examples 1.1$\sim$1.4, there are two classes ($K=2$) and $p$ predictors, among which $X_{1}$, $X_{2}$, and $X_{3}$ are the relevant ones, i.e., $\mathcal{P}=\left\{ 1,2,3\right\}$; they are simulated as multivariate Gaussian conditional on the class label. The other $p-3$ predictors are irrelevant but correlated with the three relevant ones. The oracle Bayes classification rule for these four examples is to label an observation as class 1 if $Q\left(\mathbf{x}\right)>0$, and as class 0 otherwise, where \[ Q\left(\mathbf{x}\right)=1.627+X_{1}-0.6X_{1}^{2}-0.6X_{3}^{2}-0.7X_{1}X_{2}-0.7X_{2}X_{3}, \] indicating that $\mathcal{A}=\left\{ 1,\left(1,1\right),\left(3,3\right),\left(1,2\right),\left(2,3\right)\right\}$, i.e., one linear effect and four interaction effects without the hierarchy restriction. The setting of Example 1.1 follows the multivariate normal model, while the other three do not. Examples 1.1$\sim$1.3 are of moderate dimension with $p=50$, and Example 1.4 simulates a high-dimensional scenario with $p=1000$. For each simulation setting, we generated 100 datasets with 10 different sample sizes for each class, ranging linearly in log-scale from $100$ to $1000$: $n=100, [100\times10^{1/9}],\dots,1000$. For SODA, hierNet, and IIS-SQDA, the set of selected predictor variables is defined as the union of all predictors appearing in the selected linear and interaction terms. We calculated the average numbers of false negatives and false positives for variable selection (VFN and VFP), main effect term selection (MFN and MFP), and interaction term selection (IFN and IFP). To benchmark the classification accuracy, we also included the full LDA and QDA models with all predictors, as well as the Oracle model that contains exactly the five true terms. The average classification test error rate (TE) of each method is estimated by applying the trained model to 10,000 extra observations simulated from the true model. Results for Examples 1.1$\sim$1.4 are shown in Figure \ref{fig:sim_1.1-1.3}. For SODA, hierNet, and IIS-SQDA, we also counted the numbers of FNs and FPs for the selection of main effect and interaction terms, respectively, and show them in Table \ref{tab:sim_1.1-1.4_MI}. \vspace{3mm} \textbf{Example 1.1. } {\it Multivariate Gaussian}. Irrelevant predictors were simulated as linear combinations of relevant ones as follows: \[ X_{j}=b_{j,0}+b_{j,1}X_{l}+b_{j,2}X_{k}+\varepsilon_{j}, \ j=4,\ldots, 50, \] where $X_{k}$ and $X_{l}$ were randomly selected from $\{ X_{1}, X_{2}, X_{3}\}$, coefficients $b_{j,0}$, $b_{j,1}$, and $b_{j,2}$ were drawn from the uniform distribution $U\left[-1,1\right]$, and $\varepsilon_{j}\sim\mathcal{N}\left(0,2\right)$. As shown in Figure~\ref{fig:sim_1.1-1.3}, for this example ZW, MDR, and SODA were all able to detect all relevant predictors as $n$ increased, with both VFN and VFP being very low, and they achieved nearly the Oracle classification accuracy. In contrast, IIS-SQDA and hierNet selected too many false positives, which resulted in high test error rates, and their VFP+VFN counts increased with $n$. This strange phenomenon has also been observed by other researchers \citep{fan2015innovated,yu2014modified}. The performances of IIS-SQDA, hierNet, and SODA on individual term selection are shown in Table~\ref{tab:sim_1.1-1.4_MI}. SODA selected individual terms nearly perfectly. HierNet is based on the Lasso, and IIS-SQDA uses the elastic net.
The variable selection consistency of the Lasso and the elastic net requires the {\it Irrepresentable Condition} \citep{zhao2006model} and the {\it Elastic Irrepresentable Condition} \citep{jia2010model}, respectively, which may not hold here. Moreover, it has been observed that cross-validation is too liberal for the Lasso, leading to a large number of false positives \citep{yu2014modified}. As expected, LDA and QDA without variable selection performed the worst. \vspace{3mm} \textbf{Example 1.2. } {\it Non-Gaussian irrelevant predictors}. Irrelevant variables were simulated to be quadratically dependent on relevant ones: \begin{equation} X_{j}=b_{j,0}+b_{j,1}X_{k}+b_{j,2}X_{l}+b_{j,3}X_{k}^{2}+b_{j,4}X_{l}^{2}+\varepsilon_{j}, \ j=4,\ldots, 50, \label{eq:sim_4} \end{equation} where $X_{k}$ and $X_{l}$ were randomly selected from $\{ X_{1}, X_{2}, X_{3}\}$, coefficients $b_{j,0}$, ..., $b_{j,4}$ were drawn from $U\left[-1,1\right]$, and $\varepsilon_{j}\sim\mathcal{N}\left(0,5\right)$. As shown in Figure \ref{fig:sim_1.1-1.3}, ZW and MDR selected 4 to 10 FP and FN predictors on average. IIS-SQDA and hierNet selected a large number of FP terms, as shown in Table~\ref{tab:sim_1.1-1.4_MI}, due to the correlation between relevant and irrelevant predictors as well as correlations between main and interaction terms. \vspace{3mm} \textbf{Example 1.3. }{\it Heteroskedastic covariates.} Irrelevant predictors were simulated as follows: \begin{equation} X_{j}=b_{j,1}X_{k}+b_{j,2}X_{l}+\left|X_{k}\right|\varepsilon_{j}, \ j=4,\ldots, 50, \label{eq:sim_5} \end{equation} where $X_{k}$ and $X_{l}$ were randomly selected from $\{ X_{1}, X_{2}, X_{3}\}$, coefficients $b_{j,1}$ and $b_{j,2}$ were drawn from $U\left[-1,1\right]$, and $\varepsilon_{j}\sim\mathcal{N}\left(0,1\right)$. This setting violates the constant variance assumption of ZW and MDR. Thus, ZW, MDR, IIS-SQDA, and hierNet all performed sub-optimally. In contrast, SODA incurred almost no VFPs or VFNs, and achieved near-Oracle prediction accuracy when $n\ge200$. \vspace{3mm} \textbf{Example 1.4. }{\it High-dimensional and non-Gaussian.} Irrelevant predictors were simulated as follows. For $j\in\left\{ 4,\dots,100\right\} $, we drew 60\% of the $X_j$'s at random and simulated them from $\mathcal{N}\left(m_{j},1\right)$, $m_{j}\sim U\left[0,1\right]$. The remaining $40\%$ of the $X_j$'s were simulated as non-linearly related to $\left(X_{k},X_{l}\right)$, similarly to (\ref{eq:sim_4}) or (\ref{eq:sim_5}), where $k$ and $l$ were randomly chosen from $\left\{ 1,2,3\right\} $. For $j\in\left\{ 101,\dots,1000\right\} $, we first drew all predictors from $\mathcal{N}\left(m_{j},1\right)$, and then randomly selected $40\%$ of them and re-simulated each of the selected $X_j$ as in (\ref{eq:sim_4}) or (\ref{eq:sim_5}), where $k$ and $l$ were indexes uniformly drawn from $\{101,\ldots, 1000\}$. We changed ZW to a forward procedure since the backward procedure is not feasible when $p>n$. Results are shown in Figure \ref{fig:sim_1.1-1.3} and Table \ref{tab:sim_1.1-1.4_MI}. MDR results are not shown because it is unstable for highly correlated $\mathbf{X}$ matrices and usually keeps adding new predictors until the estimated covariance matrices become singular. Overall, SODA performed much better than ZW and IIS-SQDA, and achieved near-Oracle classification accuracy for $n>100$.
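For concreteness, the two noise-predictor constructions (\ref{eq:sim_4}) and (\ref{eq:sim_5}) can be sketched as follows (in Python; a minimal illustration under the stated settings, with helper names of our own choosing):
\begin{verbatim}
# Sketch of the irrelevant-predictor constructions (sim_4) and (sim_5).
# Xrel holds the relevant predictors (columns X_1, X_2, X_3).
import numpy as np
rng = np.random.default_rng(1)

def quadratic_noise(Xrel, m):
    """Eq. (sim_4): noise columns quadratically dependent on two distinct
    randomly chosen relevant predictors; b ~ U[-1,1], eps ~ N(0, 5)."""
    n = Xrel.shape[0]
    cols = []
    for _ in range(m):
        k, l = rng.choice(Xrel.shape[1], size=2, replace=False)
        b = rng.uniform(-1, 1, size=5)
        cols.append(b[0] + b[1] * Xrel[:, k] + b[2] * Xrel[:, l]
                    + b[3] * Xrel[:, k] ** 2 + b[4] * Xrel[:, l] ** 2
                    + rng.normal(0.0, np.sqrt(5.0), size=n))
    return np.column_stack(cols)

def heteroskedastic_noise(Xrel, m):
    """Eq. (sim_5): noise whose scale is |X_k|, violating the constant
    variance condition required by ZW and MDR."""
    n = Xrel.shape[0]
    cols = []
    for _ in range(m):
        k, l = rng.choice(Xrel.shape[1], size=2, replace=False)
        b = rng.uniform(-1, 1, size=2)
        cols.append(b[0] * Xrel[:, k] + b[1] * Xrel[:, l]
                    + np.abs(Xrel[:, k]) * rng.normal(size=n))
    return np.column_stack(cols)
\end{verbatim}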
Figure~\ref{fig:sim_1.4_time} shows the running times in log-scale versus $n$ for IIS-SQDA, ZW, and SODA. On average, IIS-SQDA took 800 minutes, ZW took 22 minutes, and SODA took 4 minutes to analyze one simulated dataset with $p=1000$ and $n=1000$. In contrast, hierNet did not finish the simulation experiments in 24 hours and is thus not included in the comparison. \begin{figure}[h] \noindent \begin{centering} \includegraphics[scale=0.775]{sim_legend} \par\end{centering} \noindent \begin{centering} \includegraphics[scale=1,height=1.7in,width=2.8in]{AFPNs_1} \includegraphics[scale=1,height=1.7in,width=2.8in]{ACEs_1} \par\end{centering} \noindent \begin{centering} \includegraphics[scale=1,height=1.7in,width=2.8in]{AFPNs_3} \includegraphics[scale=1,height=1.7in,width=2.8in]{ACEs_3} \par\end{centering} \noindent \begin{centering} \includegraphics[scale=1,height=1.7in,width=2.8in]{AFPNs_4} \includegraphics[scale=1,height=1.7in,width=2.8in]{ACEs_4} \par\end{centering} \noindent \begin{centering} \includegraphics[scale=1,height=1.7in,width=2.8in]{AFPNs_6} \includegraphics[scale=1,height=1.7in,width=2.8in]{ACEs_6} \par\end{centering} \caption{Results for Examples 1.1$\sim$1.4. VFP: average number of variable selection false positives. VFN: average number of variable selection false negatives. LDA and QDA used all the variables without any selection, so they do not appear in the left panels, and their TEs were high. For Example 1.4, MDR and hierNet both broke down, and LDA and QDA also failed due to the large $p$. \label{fig:sim_1.1-1.3}} \end{figure} \begin{table}[h] \begin{centering} {\footnotesize{}}% \begin{tabular}{|cc|cccc|cccc|cccc|} \hline \noalign{\vskip4bp} & & \multicolumn{4}{c|}{\textbf{\footnotesize{}SODA}} & \multicolumn{4}{c|}{\textbf{\footnotesize{}IIS-SQDA}} & \multicolumn{4}{c|}{\textbf{\footnotesize{}hierNet}}\tabularnewline[\doublerulesep] \cline{3-14} {\footnotesize{}Example} & \textbf{\footnotesize{}$n$} & \textbf{\footnotesize{}MFN} & \textbf{\footnotesize{}MFP} & \textbf{\footnotesize{}IFN} & \textbf{\footnotesize{}IFP} & \textbf{\footnotesize{}MFN} & \textbf{\footnotesize{}MFP} & \textbf{\footnotesize{}IFN} & \textbf{\footnotesize{}IFP} & \textbf{\footnotesize{}MFN} & \textbf{\footnotesize{}MFP} & \textbf{\footnotesize{}IFN} & \textbf{\footnotesize{}IFP}\tabularnewline[\doublerulesep] \hline & {\footnotesize{}$100$} & \textbf{\footnotesize{}$0.05$} & {\footnotesize{}$0.16$} & {\footnotesize{}$1.01$} & {\footnotesize{}$0.30$} & {\footnotesize{}$0.27$} & {\footnotesize{}$2.39$} & {\footnotesize{}$0.90$} & {\footnotesize{}$48.5$} & {\footnotesize{}$0$} & {\footnotesize{}$12.6$} & {\footnotesize{}$1.58$} & {\footnotesize{}$2.26$}\tabularnewline[\doublerulesep] {\footnotesize{}1.1} & {\footnotesize{}$215$} & {\footnotesize{}$0$} & {\footnotesize{}$0.01$} & {\footnotesize{}$0.04$} & {\footnotesize{}$0.02$} & {\footnotesize{}$0.08$} & {\footnotesize{}$2.90$} & {\footnotesize{}$0.25$} & {\footnotesize{}$63.2$} & {\footnotesize{}$0$} & {\footnotesize{}$19.2$} & {\footnotesize{}$1.10$} & {\footnotesize{}$14.0$}\tabularnewline[\doublerulesep] & {\footnotesize{}$1000$} & {\footnotesize{}$0$} & {\footnotesize{}$0$} & {\footnotesize{}$0$} & {\footnotesize{}$0$} & {\footnotesize{}$0$} & {\footnotesize{}$6.39$} & {\footnotesize{}$0$} & {\footnotesize{}$112$} & {\footnotesize{}$0$} & {\footnotesize{}$44.6$} & {\footnotesize{}$0$} & {\footnotesize{}$46.2$}\tabularnewline[\doublerulesep] \hline & {\footnotesize{}$100$} & {\footnotesize{}$0.26$} & {\footnotesize{}$0.58$} & {\footnotesize{}$1.74$} &
{\footnotesize{}$0.28$} & {\footnotesize{}$0.26$} & {\footnotesize{}$12.9$} & {\footnotesize{}$0.40$} & {\footnotesize{}$7.42$} & {\footnotesize{}$0$} & {\footnotesize{}$14.9$} & {\footnotesize{}$1.44$} & {\footnotesize{}$7.24$}\tabularnewline[\doublerulesep] {\footnotesize{}1.2} & {\footnotesize{}$215$} & {\footnotesize{}$0$} & {\footnotesize{}$0.13$} & {\footnotesize{}$0.27$} & {\footnotesize{}$0.03$} & {\footnotesize{}$0$} & {\footnotesize{}$19.7$} & {\footnotesize{}$0.02$} & {\footnotesize{}$11.1$} & {\footnotesize{}$0$} & {\footnotesize{}$22.9$} & {\footnotesize{}$0.65$} & {\footnotesize{}$15.3$}\tabularnewline[\doublerulesep] & {\footnotesize{}$1000$} & {\footnotesize{}$0$} & {\footnotesize{}$0$} & {\footnotesize{}$0$} & {\footnotesize{}$0$} & {\footnotesize{}$0$} & {\footnotesize{}$28.5$} & {\footnotesize{}$0$} & {\footnotesize{}$24.3$} & {\footnotesize{}$0$} & {\footnotesize{}$39.1$} & {\footnotesize{}$0$} & {\footnotesize{}$46.9$}\tabularnewline[\doublerulesep] \hline & {\footnotesize{}$100$} & {\footnotesize{}$0.12$} & {\footnotesize{}$0.13$} & {\footnotesize{}$1.50$} & {\footnotesize{}$0.70$} & {\footnotesize{}$0.09$} & {\footnotesize{}$5.59$} & {\footnotesize{}$0.13$} & {\footnotesize{}$44.9$} & {\footnotesize{}$0.04$} & {\footnotesize{}$21.4$} & {\footnotesize{}$2.20$} & {\footnotesize{}$19.3$}\tabularnewline[\doublerulesep] {\footnotesize{}1.3} & {\footnotesize{}$215$} & {\footnotesize{}$0.02$} & {\footnotesize{}$0.03$} & {\footnotesize{}$0.17$} & {\footnotesize{}$0.07$} & {\footnotesize{}$0$} & {\footnotesize{}$8.96$} & {\footnotesize{}$0$} & {\footnotesize{}$61.1$} & {\footnotesize{}$0$} & {\footnotesize{}$29.5$} & {\footnotesize{}$0.93$} & {\footnotesize{}$25.7$}\tabularnewline[\doublerulesep] & {\footnotesize{}$1000$} & {\footnotesize{}$0$} & {\footnotesize{}$0$} & {\footnotesize{}$0$} & {\footnotesize{}$0$} & {\footnotesize{}$0$} & {\footnotesize{}$14.71$} & {\footnotesize{}$0$} & {\footnotesize{}$99.8$} & {\footnotesize{}$0$} & {\footnotesize{}$42.8$} & {\footnotesize{}$0$} & {\footnotesize{}$46.0$}\tabularnewline[\doublerulesep] \hline & {\footnotesize{}$100$} & {\footnotesize{}$0.20$} & {\footnotesize{}$0.22$} & {\footnotesize{}$1.58$} & {\footnotesize{}$0.30$} & {\footnotesize{}$0.68$} & {\footnotesize{}$1.58$} & {\footnotesize{}$0.42$} & {\footnotesize{}$6.08$} & \textbf{\footnotesize{} } & \textbf{\footnotesize{} } & \textbf{\footnotesize{} } & \textbf{\footnotesize{} }\tabularnewline[\doublerulesep] {\footnotesize{}1.4} & {\footnotesize{}$215$} & {\footnotesize{}$0$} & {\footnotesize{}$0$} & {\footnotesize{}$0.14$} & {\footnotesize{}$0$} & {\footnotesize{}$0.20$} & {\footnotesize{}$1.74$} & {\footnotesize{}$0.10$} & {\footnotesize{}$8.84$} & \textbf{\footnotesize{} } & \textbf{\footnotesize{} } & \textbf{\footnotesize{} } & \textbf{\footnotesize{} }\tabularnewline[\doublerulesep] & {\footnotesize{}$1000$} & {\footnotesize{}$0$} & {\footnotesize{}$0$} & {\footnotesize{}$0$} & {\footnotesize{}$0$} & {\footnotesize{}$0$} & {\footnotesize{}$3.68$} & {\footnotesize{}$0$} & {\footnotesize{}$40.7$} & \textbf{\footnotesize{} } & \textbf{\footnotesize{} } & \textbf{\footnotesize{} } & \textbf{\footnotesize{} }\tabularnewline[\doublerulesep] \hline \end{tabular} \par\end{centering}{\footnotesize \par} \caption{Variable Selection Results for Examples 1.1 $\sim$ 1.4. MFP / MFN: Average number of main effect false positives and negatives. IFP / IFN: Average number of interaction effect false positives and negatives. The number of observations for each class is denoted by $n$. 
\label{tab:sim_1.1-1.4_MI}} \end{table} \begin{figure}[h] \noindent \begin{centering} \includegraphics[height=2.5in,width=4in]{TIMEs_6} \par\end{centering} \caption{Mean running time in seconds for ZW, IIS-SQDA, and SODA for Example 1.4; hierNet did not finish within 24 hours.\label{fig:sim_1.4_time}} \end{figure} \vspace{3mm} \textbf{Example 1.5. }{\it Interactions only.} We simulated the scenario in which there are only interaction effects. In particular, we removed the main effect term $X_{1}$ from the previous classification rule so that the new classification function becomes \[ Q\left(\mathbf{x}\right)=1.777-0.6X_{1}^{2}-0.6X_{3}^{2}-0.7X_{1}X_{2}-0.7X_{2}X_{3}. \] \vspace{3mm} \textbf{Example 1.6. }{\it Anti-hierarchical interactions.} We adopt the terminology ``anti-hierarchical'' from \cite{bien2013lasso}, which refers to the scenario in which the main effects and interaction effects come from different sets of predictors. In this example, the classification function $Q\left(\mathbf{x}\right)$ is \[ Q\left(\mathbf{x}\right)=1.777+X_{4}-X_{5}-0.6X_{1}^{2}-0.6X_{3}^{2}-0.7X_{1}X_{2}-0.7X_{2}X_{3}. \] For both Examples 1.5 and 1.6, we let $p=50$ and simulated the irrelevant predictors in the same way as in Example 1.2. The results are shown in Figure \ref{fig:sim_1.5-1.6}. Overall, the results are similar to those of the previous examples: SODA had fewer VFPs and VFNs, as well as lower TE rates, than the other methods. In all cases we found that ZW and MDR performed very similarly when $n$ was large. \begin{figure}[h] \noindent \begin{centering} \includegraphics[scale=0.775]{sim_legend} \par\end{centering} \noindent \begin{centering} \includegraphics[scale=1,height=1.8in,width=2.8in]{AFPNs_7} \includegraphics[scale=1,height=1.8in,width=2.8in]{ACEs_7} \par\end{centering} \noindent \begin{centering} \includegraphics[scale=1,height=1.8in,width=2.8in]{AFPNs_8} \includegraphics[scale=1,height=1.8in,width=2.8in]{ACEs_8} \par\end{centering} \caption{Results for Examples 1.5$\sim$1.6. VFP: average number of variable selection false positives. VFN: average number of variable selection false negatives. \label{fig:sim_1.5-1.6}} \end{figure} \subsection{Continuous-response index models} We examine here variable selection methods for nonlinear models with continuous responses. Besides S-SODA, we considered all five methods studied in \cite{jiang2014variable}: Lasso, DC-SIS, hierNet, COP, and SIRI. DC-SIS \citep{li2012feature} is a sure independence screening procedure based on distance correlation, which has been shown to be capable of detecting relevant variables when interactions are present. HierNet \citep{bien2013lasso} was described in the previous subsection. For SIRI and S-SODA, we equally partitioned $\left\{ y_{i}\right\} _{i=1}^{n}$ into $H=5$ slices. In order to improve SIRI's robustness, we also considered a modified version of SIRI, termed N-SIRI, which pre-processes $\mathbf{X}$ by marginally quantile-normalizing each predictor to the standard normal distribution.
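This marginal quantile-normalization is straightforward to implement; the short sketch below (in Python) maps the ranks of each predictor through the standard normal quantile function, which we assume matches the intended pre-processing:
\begin{verbatim}
# Sketch: marginal quantile-normalization of each predictor to N(0,1),
# the pre-processing that turns SIRI into N-SIRI.
import numpy as np
from scipy.stats import norm, rankdata

def quantile_normalize(X):
    """Map each column's ranks through the standard normal quantile
    function; (rank - 0.5)/n keeps arguments strictly inside (0, 1)."""
    n = X.shape[0]
    ranks = np.apply_along_axis(rankdata, 0, X)  # column-wise ranks 1..n
    return norm.ppf((ranks - 0.5) / n)
\end{verbatim}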
We considered the following five simulation examples: \begin{eqnarray*} \text{Example 2.1}:\qquad Y & = & 3X_{1}+1.5X_{2}+2X_{3}+2X_{4}+2X_{5}+\sigma\epsilon,\\ \text{Example 2.2}:\qquad Y & = & X_{1}+X_{1}X_{2}+X_{1}X_{3}+\sigma\epsilon,\\ \text{Example 2.3}:\qquad Y & = & X_{1}^{2}X_{2}/X_{3}^{2}+\sigma\epsilon,\\ \text{Example 2.4}:\qquad Y & = & X_{1}/\exp\left(X_{2}+X_{3}\right)+\sigma\epsilon,\\ \text{Example 2.5}:\qquad Y & = & X_{1}+X_{2}+\left(1+X_{3}\right)^{2}\epsilon, \end{eqnarray*} where $\sigma=0.2$ and $\epsilon\sim N(0,1)$ is independent of $\mathbf{X}$. In each example, we simulated the predictors $\mathbf{X}$ with dimension $p=1000$. In order to test the robustness of the methods, we simulated $\mathbf{X}$ under three scenarios. \begin{itemize} \item \vspace{-2pt}Scenario (a): $\mathbf{X}$ is simulated from a multivariate Gaussian with correlation $0.5^{\left|i-j\right|}$. In this scenario the linearity and constant variance conditions hold. \item \vspace{-5pt}Scenario (b): Each predictor $X_{j}$, $j=1,\dots,p$, is simulated independently from the $\chi_{1}^{2}$ distribution. In this scenario the linearity and constant variance conditions hold, but the distribution of $\mathbf{X}$ is non-normal. \item \vspace{-5pt}Scenario (c): $X_{1},\dots,X_{125}$ were simulated from a multivariate Gaussian with correlation $0.5^{\left|i-j\right|}$. For $X_{126},\dots,X_{1000}$, we simulated according to the following schemes: \begin{eqnarray*} X_{j} & = & X_{j-125}^{2}+\varepsilon_{j},\;\;j=126,\dots,250,\\ X_{j} & = & \sqrt{\left|X_{j-250}\right|}+\varepsilon_{j},\;\;j=251,\dots,375,\\ X_{j} & = & \sin\left(X_{j-375}\right)+\varepsilon_{j},\;\;j=376,\dots,500,\\ X_{j} & = & \log\left(\left|X_{j-500}\right|\right)+\varepsilon_{j},\;\;j=501,\dots,625,\\ X_{j} & = & \exp\left(X_{j-625}\right)+\varepsilon_{j},\;\;j=626,\dots,750,\\ X_{j} & = & \exp\left(\left|X_{j-750}\right|\right)+\varepsilon_{j},\;\;j=751,\dots,875,\\ X_{j} & = & X_{j-875}^{2}\varepsilon_{j},\;\;j=876,\dots,1000. \end{eqnarray*} \end{itemize} For each simulation setting, we generated 100 datasets with sample size $n=200$, and applied the aforementioned seven methods to each simulated dataset. For each method, the average numbers of false positives (FPs) and false negatives (FNs) were calculated over the 100 datasets. The results for the five examples are shown in Figure~\ref{fig:E2}. As expected, all seven methods worked well for Example 2.1 in scenario (a), with low FPs and FNs, since the underlying structure is indeed linear Gaussian. For scenarios (b) and (c) with non-Gaussian predictors, DC-SIS, hierNet, and SIRI generated more FPs and/or FNs than the other methods. In general, SIRI performed the worst for this example, but with quantile-normalization, N-SIRI performed very competitively. S-SODA worked well for all three scenarios, almost as well as Lasso. In Examples 2.2$\sim$2.5, the relationship between $Y$ and $\mathbf{X}$ is non-linear. Thus, as expected, Lasso and DC-SIS tended to miss important predictors, resulting in a high number of FNs. HierNet can only detect second-order interactions, such as $X_{1}X_{2}$, but fails to identify more complicated relationships such as $Y=X_{1}^{2}X_{2}/X_{3}^{2}$ and $Y=X_{1}/\exp\left(X_{2}+X_{3}\right)$. COP only uses the information from the first conditional moment $\mathbb{E}\left(\mathbf{X}\mid Y\right)$, and misses important variables with interaction or other second-order effects. As expected, SIRI usually worked well for scenario (a).
N-SIRI worked well for both scenarios (a) and (b), since the joint distribution of the predictors becomes multivariate Gaussian after quantile-normalization. For scenario (c), SIRI performed very poorly, while N-SIRI performed very respectably, although it still had more FPs and FNs than S-SODA. S-SODA, in contrast, worked well for all three scenarios. All these examples demonstrate the efficiency and robustness of S-SODA for variable selection in semi-parametric nonlinear regression models. \begin{figure}[h] \begin{centering} \frame{\includegraphics[height=1.5in, width=4in]{fig_s1}} \frame{\includegraphics[height=1.5in, width=4in]{fig_s2}} \frame{\includegraphics[height=1.5in, width=4in]{fig_s3}} \frame{\includegraphics[height=1.5in, width=4in]{fig_s6}} \frame{\includegraphics[height=1.5in, width=4in]{fig_s5}} \par\end{centering} \caption{Simulation study results for Examples 2.1$\sim$2.5. \label{fig:E2}} \end{figure} \subsection{Prediction of continuous surface} We consider three examples to test the performance of using formula (\ref{eq:Y_pred}) to predict $Y$, with $p=1000$ predictors simulated in the same way as scenario (a) in the previous section. In order to visualize the $\mathbb{E}\left[Y\mid\mathbf{X}_{\mathcal{P}}\right]$ and $\hat{\mathbb{E}}\left[Y\mid\mathbf{X}_{\tilde{\mathcal{P}}}\right]$ surfaces in three-dimensional plots, we used only two relevant predictors, i.e., $\mathbf{X}_{\mathcal{P}}=\left(X_{1},X_{2}\right)$: \begin{eqnarray*} \text{Example 3.1:}\qquad Y & = & X_{1}+X_{2}+\sigma\epsilon,\\ \text{Example 3.2:}\qquad Y & = & X_{1}/\exp\left(X_{2}\right)+\sigma\epsilon,\\ \text{Example 3.3:}\qquad Y & = & \left(1+X_{1}^{2}+X_{2}^{2}\right)^{-1}+\sigma\epsilon, \end{eqnarray*} where $\sigma=0.2$ and $\epsilon\sim N\left(0,1\right)$. For each example we simulated $n=500$ samples, and applied S-SODA to the simulated data. S-SODA correctly identified $\tilde{\mathcal{P}}=\left\{ 1,2\right\} $. We further used formula (\ref{eq:Y_pred}) with $\hat{\boldsymbol{\mu}}$ and $\hat{\boldsymbol{\Sigma}}$ being the MLEs to predict $\hat{\mathbb{E}}\left[Y\mid\mathbf{X}_{\tilde{\mathcal{P}}}\right]$ with $H=25$ slices for each example. The results are shown in Figure~\ref{fig:E3}. Encouragingly, even though we do not know the true functional form of $\mathbb{E}\left[Y\mid\mathbf{X}_{\mathcal{P}}\right]$, our prediction $\hat{\mathbb{E}}\left[Y\mid\mathbf{X}_{\tilde{\mathcal{P}}}\right]$ captures the landscape of $\mathbb{E}\left[Y\mid\mathbf{X}_{\mathcal{P}}\right]$ well in these examples.
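These predictions follow formula (\ref{eq:Y_pred}) directly. A minimal sketch (in Python; the function names are ours) fits the per-slice Gaussians on the selected predictors and averages the slice means with the posterior slice-membership weights:
\begin{verbatim}
# Sketch of the S-SODA predictor of formula (Y_pred): slice y, fit a
# Gaussian to the selected predictors within each slice, and average
# the slice means M_h with posterior slice-membership weights.
import numpy as np
from scipy.stats import multivariate_normal

def fit_slices(Xsel, y, H):
    order = np.argsort(y)
    params = []
    for idx in np.array_split(order, H):  # H slices of (nearly) equal size
        params.append((y[idx].mean(),                           # M_h
                       Xsel[idx].mean(axis=0),                  # mu_h
                       np.cov(Xsel[idx], rowvar=False, bias=True)))  # Sigma_h (MLE)
    return params

def predict(Xnew, params):
    dens = np.column_stack([multivariate_normal.pdf(Xnew, mu, Sig)
                            for _, mu, Sig in params])  # N(x | mu_h, Sigma_h)
    weights = dens / dens.sum(axis=1, keepdims=True)    # slice posteriors
    M = np.array([M_h for M_h, _, _ in params])
    return weights @ M                                  # plug-in E[Y | x]
\end{verbatim}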
\begin{figure} \begin{centering} \includegraphics[viewport=0bp 35bp 576bp 250bp,width=5.5in,height=1.8in]{pred_ex_1} \includegraphics[viewport=0bp 35bp 576bp 290bp,width=5.5in,height=2.0in]{pred_ex_2} \includegraphics[viewport=0bp 35bp 576bp 290bp,width=5.5in,height=2.0in]{pred_ex_3} \par\end{centering} \caption{Results for the simulation Examples 3.1-3.3. Left panel: theoretical surface $\mathbb{E}\left[Y\mid\mathbf{X}\right]$; Right panel: surface $\hat{\mathbb{E}}\left[Y\mid\mathbf{X}\right]$ predicted by S-SODA. \label{fig:E3}} \end{figure} \section{Real Data Applications \label{sec:realdata}} We applied SODA, Lasso-Logistic, MDR and IIS-SQDA to real datasets to compare their performances. We did not include the ZW method due to its similarity with MDR. The classification accuracy of the selected models was evaluated by 10-fold cross-validation after the variable selection. For Lasso-Logistic and SODA, we used $\text{EBIC}_{0.5}$ as the model selection criterion. We consider three datasets: (1) a Michigan lung cancer dataset analyzed in \cite{efron2009empirical} with large $p>5000$; (2) the Ionosphere dataset, with $p=32$; and (3) the Pumadyn dataset, with $p=32$ and a continuous response. The Ionosphere dataset was downloaded from the UCI Machine Learning Repository\footnote{\texttt{https://archive.ics.uci.edu/ml/datasets/Ionosphere}}, and the Pumadyn dataset was downloaded from DELVE (Data for Evaluating Learning in Valid Experiments)\footnote{\texttt{http://www.cs.toronto.edu/\textasciitilde{}delve/data/pumadyn/desc.html}}. \subsection{Michigan lung cancer dataset\label{sub:mich_lung}} This dataset was published in \cite{beer2002gene}, in which researchers measured mRNA expression levels of $p=5,217$ genes in tumor tissues of 86 lung cancer patients. Among the 86 patients, 62 are labeled as in ``good status'', and 26 in ``bad status''. The goal is to classify new patients into one of the two statuses. Results on this dataset are summarized in Table \ref{tab:mich_lung}. IIS-SQDA did not finish in 48 hours for this dataset, so we omitted its result. In the solution path of Lasso-Logistic, the lowest $\text{EBIC}_{0.5}$ was achieved at $112.2$ with $1$ gene, and the corresponding CV error rate was $29\%$. SODA selected $2$ main effects and $2$ interaction effects with the $\text{EBIC}_{0.5}$ score at $69.8$ and the CV error rate at $11\%$. Similar to the prostate cancer dataset (see Supplemental Materials), SODA worked much better than Lasso-Logistic at finding the minimum of $\text{EBIC}_{0.5}$ ($69.8$ vs $112.2$). Comparing the results of the Lasso-Logistic and SODA selected models, it is clear that the interaction effects selected by SODA contribute substantially to the classification accuracy. MDR failed to converge on this dataset. 
MDR selected as many genes as possible until the number of selected genes was the same as the number of samples in the smaller class (26) and achieved a CV error rate of $28\%$. The observation that MDR failed to converge for both of these large-$p$ datasets illustrates that QDA variable selection methods relying on the joint normality assumption work poorly for high-dimensional real datasets. \begin{table}[h] \begin{centering} {\footnotesize{}}% \begin{tabular}{|cc|cc|ccc|ccc|ccc|} \hline \multicolumn{2}{|c|}{\textbf{\footnotesize{}Shrunken centroid }} & \multicolumn{2}{c|}{\textbf{\footnotesize{}Empirical Bayes}} & \multicolumn{3}{c|}{\textbf{\footnotesize{}MDR}} & \multicolumn{3}{c|}{\textbf{\footnotesize{}Lasso-Logistic}} & \multicolumn{3}{c|}{\textbf{\footnotesize{}SODA}}\tabularnewline {\footnotesize{}\#P} & {\footnotesize{}CVE} & {\footnotesize{}\#P} & {\footnotesize{}CVE} & {\footnotesize{}$\Delta\text{BIC}$} & {\footnotesize{}\#P} & {\footnotesize{}CVE} & {\footnotesize{}$\text{EBIC}_{0.5}$} & {\footnotesize{}\#P} & {\footnotesize{}CVE} & {\footnotesize{}$\text{EBIC}_{0.5}$} & {\footnotesize{}\#M/\#I} & {\footnotesize{}CVE}\tabularnewline \hline \hline {\footnotesize{}0} & {\footnotesize{}0.28} & {\footnotesize{}5} & {\footnotesize{}0.41} & {\footnotesize{}-353} & {\footnotesize{}1} & {\footnotesize{}0.27} & \textbf{\footnotesize{}112.2} & \textbf{\footnotesize{}1} & \textbf{\footnotesize{}0.29} & {\footnotesize{}111.2} & {\footnotesize{}1 / 0} & {\footnotesize{}0.33}\tabularnewline {\footnotesize{}5} & {\footnotesize{}0.28} & {\footnotesize{}20} & {\footnotesize{}0.43} & {\footnotesize{}-177} & {\footnotesize{}2} & {\footnotesize{}0.26} & {\footnotesize{}113.9} & {\footnotesize{}2} & {\footnotesize{}0.25} & {\footnotesize{}104.1} & {\footnotesize{}2 / 0} & {\footnotesize{}0.25}\tabularnewline {\footnotesize{}11} & {\footnotesize{}0.29} & {\footnotesize{}40} & {\footnotesize{}0.39} & {\footnotesize{}-178} & {\footnotesize{}3} & {\footnotesize{}0.25} & {\footnotesize{}122.2} & {\footnotesize{}3} & {\footnotesize{}0.25} & {\footnotesize{}98.2} & {\footnotesize{}3 / 0} & {\footnotesize{}0.21}\tabularnewline {\footnotesize{}21} & {\footnotesize{}0.28} & {\footnotesize{}60} & {\footnotesize{}0.41} & {\footnotesize{}-165} & {\footnotesize{}4} & {\footnotesize{}0.24} & {\footnotesize{}121.0} & {\footnotesize{}4} & {\footnotesize{}0.20} & {\footnotesize{}89.6} & {\footnotesize{}4 / 0} & {\footnotesize{}0.17}\tabularnewline {\footnotesize{}55} & {\footnotesize{}0.35} & {\footnotesize{}80} & {\footnotesize{}0.40} & {\footnotesize{}-156} & {\footnotesize{}5} & {\footnotesize{}0.25} & {\footnotesize{}131.5} & {\footnotesize{}5} & {\footnotesize{}0.22} & {\footnotesize{}79.2} & {\footnotesize{}5 / 0} & {\footnotesize{}0.12}\tabularnewline {\footnotesize{}109} & {\footnotesize{}0.35} & {\footnotesize{}100} & {\footnotesize{}0.39} & {\footnotesize{}-134} & {\footnotesize{}8} & {\footnotesize{}0.24} & {\footnotesize{}144.5} & {\footnotesize{}6} & {\footnotesize{}0.23} & {\footnotesize{}94.4} & {\footnotesize{}5 / 1} & {\footnotesize{}0.12}\tabularnewline {\footnotesize{}260} & {\footnotesize{}0.37} & {\footnotesize{}120} & {\footnotesize{}0.40} & {\footnotesize{}-132} & {\footnotesize{}11} & {\footnotesize{}0.29} & {\footnotesize{}150.5} & {\footnotesize{}7} & {\footnotesize{}0.25} & {\footnotesize{}$\vdots$} & {\footnotesize{}$\vdots$} & {\footnotesize{}$\vdots$}\tabularnewline {\footnotesize{}567} & {\footnotesize{}0.38} & {\footnotesize{}140} & {\footnotesize{}0.40} & 
{\footnotesize{}-131} & {\footnotesize{}14} & {\footnotesize{}0.28} & {\footnotesize{}158.4} & {\footnotesize{}8} & {\footnotesize{}0.22} & \textbf{\footnotesize{}69.8} & \textbf{\footnotesize{}2 / 2} & \textbf{\footnotesize{}0.11}\tabularnewline {\footnotesize{}1,173} & {\footnotesize{}0.40} & {\footnotesize{}160} & {\footnotesize{}0.42} & {\footnotesize{}-143} & {\footnotesize{}17} & {\footnotesize{}0.30} & {\footnotesize{}171.1} & {\footnotesize{}9} & {\footnotesize{}0.23} & & & \tabularnewline {\footnotesize{}2,532} & {\footnotesize{}0.38} & {\footnotesize{}180} & {\footnotesize{}0.38} & {\footnotesize{}-146} & {\footnotesize{}20} & {\footnotesize{}0.27} & {\footnotesize{}177.6} & {\footnotesize{}10} & {\footnotesize{}0.23} & & & \tabularnewline {\footnotesize{}5,217} & {\footnotesize{}0.38} & {\footnotesize{}200} & {\footnotesize{}0.40} & \textbf{\footnotesize{}-151} & \textbf{\footnotesize{}25} & \textbf{\footnotesize{}0.28} & {\footnotesize{}188.6} & {\footnotesize{}11} & {\footnotesize{}0.23} & & & \tabularnewline \hline \end{tabular} \par\end{centering}{\footnotesize \par} \caption{Analysis results of the Michigan lung cancer dataset by five methods. For Lasso-Logistic, MDR and SODA, the selected set with the lowest BIC score is highlighted in bold font. $\Delta\text{BIC}$: For the MDR method, the difference of $\text{BIC}_{G}$ between two adjacent steps. CVE: prediction error rate estimated by 10-fold cross-validation. \#P: number of selected predictors. \#M / \#I: number of selected main effect and interaction terms by SODA. \label{tab:mich_lung}} \end{table} \subsection{Ionosphere dataset} This dataset is a two-class classification problem with 351 samples and 32 predictors. Targets are ``Good'' and ``Bad'' radar returns from the ionosphere. ``Good'' radar returns are those showing evidence of some type of structure in the ionosphere, while ``Bad'' returns do not. We applied Lasso-Logistic, MDR, IIS-SQDA and SODA to this dataset. Since the number of candidate predictors is not large, we also ran Lasso-Logistic with all main effect terms and $32\times\left(32+1\right)/2=528$ interaction terms, which is referred to as Lasso-Logistic-2. Results are summarized in Table \ref{tab:Ionosphere}. In the solution path of Lasso-Logistic, the lowest $\text{EBIC}_{0.5}$ was achieved at $302.9$ with $6$ predictors, and the corresponding CV error rate was $14\%$. Lasso-Logistic-2 selected 2 main effect terms and 5 interaction terms with $\text{EBIC}_{0.5}$=$248.7$ and a CV error rate of $8\%$. SODA selected $4$ main effect and $4$ interaction effect terms with $\text{EBIC}_{0.5}$=$204.2$ and a CV error rate of $6\%$. Again, SODA found a smaller $\text{EBIC}_{0.5}$ value than both Lasso methods. The MDR method selected all 32 predictors and achieved a CV error rate of $28\%$. IIS-SQDA selected 10 main effect and 96 interaction terms and achieved a CV error rate of $16\%$. Since MDR selected all 32 predictors, by definition the MDR-selected model is the full QDA model. Comparing this full QDA model with the SODA-selected model, we see that EBIC-based variable selection resulted in a much more interpretable model with a substantially reduced classification error rate. 
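As an aside on reproducibility, the $\text{EBIC}_{0.5}$ scores reported in these tables are extended BIC values with $\gamma=0.5$; in the standard formulation of Chen and Chen (2008), a model with $k$ selected terms out of $p$ candidates is scored as $-2\log\hat{L}+k\log n+2\gamma\log\binom{p}{k}$. Assuming the tables follow this standard definition, a minimal Python helper (ours, not code accompanying the paper) reads:
\begin{verbatim}
from math import lgamma, log

def log_binom(p, k):
    # log C(p, k) via log-gamma; stable even for p in the thousands
    return lgamma(p + 1) - lgamma(k + 1) - lgamma(p - k + 1)

def ebic(loglik, k, n, p, gamma=0.5):
    # Extended BIC: -2*loglik + k*log(n) + 2*gamma*log C(p, k)
    return -2.0 * loglik + k * log(n) + 2.0 * gamma * log_binom(p, k)

# Hypothetical usage in the Ionosphere setting: n = 351 samples and
# p = 32 main-effect + 528 interaction candidate terms; the log-likelihood
# below is made up purely for illustration.
print(ebic(loglik=-95.0, k=8, n=351, p=560))
\end{verbatim}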
\setlength\tabcolsep{4.5pt} \begin{table}[h] \begin{centering} {\footnotesize{}}% \begin{tabular}{|ccc|ccc|ccc|cc|ccc|} \hline \multicolumn{3}{|c|}{\textbf{\footnotesize{}MDR}} & \multicolumn{3}{c|}{\textbf{\footnotesize{}Lasso-Logistic}} & \multicolumn{3}{c|}{\textbf{\footnotesize{}Lasso-Logistic-2}} & \multicolumn{2}{c|}{\textbf{\footnotesize{}IIS-SQDA}} & \multicolumn{3}{c|}{\textbf{\footnotesize{}SODA}}\tabularnewline {\footnotesize{}$\Delta\text{BIC}$} & {\footnotesize{}\#P} & {\footnotesize{}CVE} & {\footnotesize{}$\text{EBIC}_{0.5}$} & {\footnotesize{}\#P} & {\footnotesize{}CVE} & {\footnotesize{}$\text{EBIC}_{0.5}$} & {\footnotesize{}\#M / \#I} & {\footnotesize{}CVE} & {\footnotesize{}\#M / \#I} & {\footnotesize{}CVE} & {\footnotesize{}$\text{EBIC}_{0.5}$} & {\footnotesize{}\#M / \#I} & {\footnotesize{}CVE}\tabularnewline \hline \hline {\footnotesize{}-326} & {\footnotesize{}1} & {\footnotesize{}0.20} & {\footnotesize{}343.5} & {\footnotesize{}2} & {\footnotesize{}0.19} & {\footnotesize{}279.0} & {\footnotesize{}1 / 2} & {\footnotesize{}0.13} & \textbf{\footnotesize{}10 / 96} & \textbf{\footnotesize{}0.16} & {\footnotesize{}371.2} & {\footnotesize{}1 / 0} & {\footnotesize{}0.21}\tabularnewline {\footnotesize{}-221} & {\footnotesize{}3} & {\footnotesize{}0.26} & {\footnotesize{}329.9} & {\footnotesize{}4} & {\footnotesize{}0.16} & {\footnotesize{}253.8} & {\footnotesize{}2 / 3} & {\footnotesize{}0.10} & & & {\footnotesize{}343.5} & {\footnotesize{}2 / 0} & {\footnotesize{}0.19}\tabularnewline {\footnotesize{}-338} & {\footnotesize{}5} & {\footnotesize{}0.25} & \textbf{\footnotesize{}302.9} & \textbf{\footnotesize{}6} & \textbf{\footnotesize{}0.14} & {\footnotesize{}252.7} & {\footnotesize{}2 / 4} & {\footnotesize{}0.09} & & & {\footnotesize{}319.6} & {\footnotesize{}3 / 0} & {\footnotesize{}0.19}\tabularnewline {\footnotesize{}-298} & {\footnotesize{}7} & {\footnotesize{}0.24} & {\footnotesize{}313.5} & {\footnotesize{}8} & {\footnotesize{}0.16} & \textbf{\footnotesize{}248.7} & \textbf{\footnotesize{}2 / 5} & \textbf{\footnotesize{}0.08} & & & {\footnotesize{}298.8} & {\footnotesize{}4 / 0} & {\footnotesize{}0.15}\tabularnewline {\footnotesize{}-242} & {\footnotesize{}9} & {\footnotesize{}0.25} & {\footnotesize{}312.5} & {\footnotesize{}10} & {\footnotesize{}0.15} & {\footnotesize{}254.4} & {\footnotesize{}2 / 6} & {\footnotesize{}0.08} & & & {\footnotesize{}296.1} & {\footnotesize{}5 / 0} & {\footnotesize{}0.14}\tabularnewline {\footnotesize{}-200} & {\footnotesize{}11} & {\footnotesize{}0.24} & {\footnotesize{}321.7} & {\footnotesize{}12} & {\footnotesize{}0.15} & {\footnotesize{}258.6} & {\footnotesize{}2 / 7} & {\footnotesize{}0.08} & & & {\footnotesize{}232.2} & {\footnotesize{}5 / 1} & {\footnotesize{}0.08}\tabularnewline {\footnotesize{}-278} & {\footnotesize{}15} & {\footnotesize{}0.29} & {\footnotesize{}345.3} & {\footnotesize{}15} & {\footnotesize{}0.15} & {\footnotesize{}267.3} & {\footnotesize{}2 / 8} & {\footnotesize{}0.08} & & & {\footnotesize{}224.1} & {\footnotesize{}5 / 3} & {\footnotesize{}0.07}\tabularnewline {\footnotesize{}-361} & {\footnotesize{}20} & {\footnotesize{}0.28} & {\footnotesize{}363.8} & {\footnotesize{}18} & {\footnotesize{}0.15} & {\footnotesize{}286.3} & {\footnotesize{}2 / 10} & {\footnotesize{}0.08} & & & {\scriptsize{}$\vdots$} & {\scriptsize{}$\vdots$} & {\scriptsize{}$\vdots$}\tabularnewline {\footnotesize{}-434} & {\footnotesize{}25} & {\footnotesize{}0.30} & {\footnotesize{}383.1} & {\footnotesize{}22} & {\footnotesize{}0.15} & 
{\footnotesize{}290.3} & {\footnotesize{}2 / 11} & {\footnotesize{}0.08} & & & \textbf{\footnotesize{}204.2} & \textbf{\footnotesize{}4 / 4} & \textbf{\footnotesize{}0.06}\tabularnewline \textbf{\footnotesize{}-130} & \textbf{\footnotesize{}32} & \textbf{\footnotesize{}0.28} & {\footnotesize{}445.4} & {\footnotesize{}30} & {\footnotesize{}0.16} & {\footnotesize{}312.0} & {\footnotesize{}2 / 13} & {\footnotesize{}0.08} & & & & & \tabularnewline \hline \end{tabular} \par\end{centering}{\footnotesize \par} \caption{Summary of results on the Ionosphere dataset by the five methods. $\Delta\text{BIC}$: For the MDR method, the difference of $\text{BIC}_{G}$ between two adjacent steps. CVE: prediction error rate estimated by 10-fold cross-validation. \#P: number of selected predictors. \#M / \#I: number of selected main effect and interaction terms by SODA. \label{tab:Ionosphere}} \end{table} \subsection{Pumadyn dataset} This dataset was synthesized from a realistic simulation of the dynamics of a robotic arm. It has $n=8192$ samples, $p=32$ predictors, and a continuous response. The predictor set includes angular positions, velocities and torques of the robot arm. The goal is to predict the angular acceleration of the robot arm's links. The samples are split into 4500 in-samples for model training, and 3692 out-samples for model evaluation. We trained the S-SODA model with $H=20$ for this dataset, and made the predictions using formula (\ref{eq:Y_pred}). We also applied linear regression with Lasso selection without/with interaction terms, denoted as Lasso-Linear and Lasso-Linear-2, respectively. The results are summarized in Table \ref{tab:Pumadyn}. Lasso's highest out-sample correlation $r=0.477$ was achieved when selecting only 1 predictor (named tau4). S-SODA selected two predictors (tau4 and theta5) and achieved an out-sample correlation $r=0.707$. The predicted surfaces of $\hat{\mathbb{E}}\left[Y\mid\mathbf{X}_{\left(\text{tau4},\text{theta5}\right)}\right]$ from the linear model and S-SODA, respectively, are shown in Figure \ref{fig:pumadyn_pred}. The interaction between predictors tau4 and theta5 is captured by S-SODA but not by the linear model. From Table \ref{tab:Pumadyn} we can also see that the interaction between tau4 and theta5 cannot be simply captured by the multiplication term $X_{\text{tau4}}\cdot X_{\text{theta5}}$. 
\begin{table}[h] \begin{centering} {\footnotesize{}}% \begin{tabular}{|cc|cc|cc|} \hline \multicolumn{2}{|c|}{\textbf{\footnotesize{}Lasso-Linear}} & \multicolumn{2}{c|}{\textbf{\footnotesize{}Lasso-Linear-2}} & \multicolumn{2}{c|}{\textbf{\footnotesize{}S-SODA}}\tabularnewline {\footnotesize{}\# Predictors} & {\footnotesize{}Out-$r$} & {\footnotesize{}\# M / \#I} & {\footnotesize{}Out-$r$} & {\footnotesize{}\# Predictors} & {\footnotesize{}Out-$r$}\tabularnewline \hline \hline {\footnotesize{}1} & {\footnotesize{}0.477} & {\footnotesize{}1 / 0} & {\footnotesize{}0.477} & {\footnotesize{}1} & {\footnotesize{}0.469}\tabularnewline {\footnotesize{}2} & {\footnotesize{}0.477} & {\footnotesize{}1 / 1} & {\footnotesize{}0.476} & {\footnotesize{}2} & {\footnotesize{}0.707}\tabularnewline {\footnotesize{}3} & {\footnotesize{}0.476} & {\footnotesize{}1 / 2} & {\footnotesize{}0.474} & & \tabularnewline {\footnotesize{}4} & {\footnotesize{}0.476} & {\footnotesize{}1 / 3} & {\footnotesize{}0.473} & & \tabularnewline {\footnotesize{}5} & {\footnotesize{}0.476} & {\footnotesize{}1 / 4} & {\footnotesize{}0.473} & & \tabularnewline {\footnotesize{}10} & {\footnotesize{}0.474} & {\footnotesize{}1 / 10} & {\footnotesize{}0.469} & & \tabularnewline {\footnotesize{}20} & {\footnotesize{}0.472} & {\footnotesize{}1 / 20} & {\footnotesize{}0.464} & & \tabularnewline {\footnotesize{}30} & {\footnotesize{}0.472} & {\footnotesize{}1 / 30} & {\footnotesize{}0.459} & & \tabularnewline \hline \end{tabular} \par\end{centering}{\footnotesize \par} \caption{Analysis results of the Pumadyn dataset by the three methods. \#P: the number of selected predictors. \#M / \#I: the number of selected main effect and interaction terms by Lasso on the linear model with interaction terms. Out-$r$: the out-sample correlation $r$. \label{tab:Pumadyn}} \end{table} \begin{figure} \begin{centering} \includegraphics[viewport=0bp 35bp 576bp 290bp,scale=0.75]{pred_pumadyn} \par\end{centering} \caption{The predicted $\hat{\mathbb{E}}\left[Y\mid\mathbf{X}_{\left(\text{tau4},\text{theta5}\right)}\right]$ surface from the linear model (left) and S-SODA (right).\label{fig:pumadyn_pred}} \end{figure} \section{Concluding Remarks} A somewhat striking observation in this article is that the proposed stepwise selection algorithm SODA, which is guided by EBIC and based on the classic stepwise regression idea with a twist for efficiently searching for interaction terms, outperformed all known advanced methods, such as those based on $L_1$ regularizations, in terms of variable selection accuracy, prediction accuracy, and robustness in a variety of settings when the joint distribution of the predictors does not ``behave nicely''. In contrast to \cite{murphy2010variable}, \cite{zhang2011bic}, and \cite{maugis2011variable}, the consistency of SODA does not require the joint normality assumption of relevant and irrelevant predictors. Compared to IIS in \cite{fan2015innovated}, SODA's forward variable addition does not need the normality assumption and does not need to estimate large precision matrices. It is worth noting that even for logistic regression models with only main effects, we consistently observed that SODA performed better than or similarly to Lasso-logistic in terms of both the $\text{EBIC}_{0.5}$ score and the CV error rate under various settings, especially when the predictors are highly correlated or the joint distribution of the predictors is long-tailed. 
In Supplemental Materials, we also observed that Lasso-logistic failed miserably when the ``incoherence condition'' \citep{ravikumar2010high} was violated in linear logistic models, whereas SODA still performed robustly. These findings indicate that EBIC is a good criterion to follow and that SODA is a better optimizer of EBIC than Lasso. Indeed, when one moves away from the $L_1$ regularization realm but adopts the $L_0$ regularization framework (such as AIC, BIC, EBIC), Lasso can no longer guarantee to find the optimal solution and is more {\it ad hoc} than stepwise approaches. LDA and QDA complement each other in terms of the bias-variance trade-off. Given finite observations, LDA is simpler and more robust when the response $Y$ can be explained well by the linear effects of $\mathbf{X}$. QDA has the ability to exploit interaction effects, which may contribute dramatically to the classification accuracy, but it also has many more parameters to estimate and is more vulnerable to including noise predictors. SODA is designed to be adaptive in the sense that it automatically chooses between LDA and QDA models and takes advantage of both sides. Instead of selecting predictors, SODA selects individual main and interaction terms, which enables SODA to simultaneously utilize interaction terms and avoid including a large number of unnecessary terms. An interesting and also somewhat surprising twist of SODA is its extension S-SODA for dealing with the variable selection problem for semi-parametric models with continuous responses. Our simulation results demonstrated that the simple idea of slicing (aka {\it discretizing}) the response variable can bring a lot to the table, especially coupled with stepwise variable selection tools such as N-SIRI \citep{jiang2014variable} and S-SODA. Even for linear models, S-SODA performed competitively with Lasso, and outperformed other linear or near-linear methods such as hierNet and DC-SCAD when the joint distribution of the covariates is long-tailed. Compared with existing SIR-based methods, S-SODA does not require the linearity and constant variance conditions and enjoys much improved robustness. A main limitation of SODA is that the stepwise detectable condition might not hold when main effects are very weak or nonexistent but the interaction effects are strong. This is a generic issue that troubles almost all methods unless we put all interaction terms into the set of candidate variables subject to selection. In empirical studies we found that SODA worked well and performed better than competing methods on both simulated and real-data examples, which suggests that this may not be a serious issue in many real applications. Indeed, even for QDA models it is quite unusual and nearly pathological to construct mean vectors and covariance matrices that result in a discriminant function with no main effects but only interaction terms. The implementation of the SODA and S-SODA procedures is available in the R package \texttt{sodavis} on CRAN (\texttt{http://cran.us.r-project.org}).
\section{Introduction} The goal of objective image quality assessment is to design mathematical models that are able to predict the perceptual quality of digital images. The classification of objective image quality assessment algorithms is based on the accessibility of the reference image. When the reference image is unavailable, image quality assessment is considered no-reference (NR). Reduced-reference (RR) methods have only partial information about the reference image, while full-reference (FR) algorithms have full access to the reference image. Research on objective image quality assessment demands databases that contain images with the corresponding MOS values. To this end, a number of image quality databases have been made publicly available. Roughly speaking, these databases can be categorized into three groups. The first one contains a smaller set of pristine, reference digital images and artificially distorted images derived from the pristine images considering different artificial distortions at different intensity levels. The second group contains only digital images with authentic distortions collected from photographers, so pristine images cannot be found in such databases. Virtanen \textit{et al.} \cite{virtanen2014cid2013} were the first to introduce this type of database for images by releasing CID2013. As a consequence, the development of FR methods is connected to the first group of databases. In contrast, the Waterloo Exploration \cite{228606:5144146} and KADIS-700k \cite{kadid10k} databases are meant to provide an alternative evaluation of objective image quality assessment models by means of paired comparisons. That is why they contain a set of reference (pristine) images, distorted images, and distortion levels. In contrast to other databases, they do not provide MOS values. Information about major publicly available image quality assessment databases is summarized in Table \ref{table:iqadatabase}. In this study, we provide a comprehensive evaluation of 32 full-reference image quality assessment (FR-IQA) algorithms on the MDID database. In contrast to other available image quality databases, the images in MDID contain multiple types of distortions simultaneously. There are a number of publicly available image quality databases, such as IVC \cite{ivcdb}, LIVE IQA \cite{sheikh2006statistical}, A57 \cite{chandler2007vsnr}, Toyoma \cite{tourancheau2008impact}, TID2008 \cite{ponomarenko2009tid2008}, CSIQ \cite{larson2010most}, IVC-LAR \cite{ivclardb}, MMSP 3D \cite{goldmann2010impact}, IRSQ \cite{ma2012image}, \cite{ma2012study}, TID2013 \cite{ponomarenko2013color}, CID2013 \cite{virtanen2014cid2013}, LIVE In the Wild \cite{livewild}, Waterloo Exploration \cite{228606:5144146}, MDID \cite{sun2017mdid}, KonIQ-10k \cite{lin2018koniq}, KADID-10k \cite{lin2019kadid}, and KADIS-700k \cite{lin2019kadid}. The rest of this study is organized as follows. In Section \ref{sec:iqadatabases}, we give a brief introduction to each of them. In Section \ref{sec:exp}, we give a comprehensive evaluation of 32 full-reference image quality assessment (FR-IQA) algorithms on the MDID database. Finally, a conclusion is drawn in Section \ref{sec:conc}. \section{Image quality databases} \label{sec:iqadatabases} The \textbf{IVC}\footnote{http://www2.irccyn.ec-nantes.fr/ivcdb/} \cite{ivcdb} database consists of 10 pristine images and 235 distorted images, including four types of distortions (JPEG, JPEG2000, locally adaptive resolution coding, blurring). 
Quality score ratings (1 to 5) are provided in the form of MOS. The \textbf{LIVE Image Quality Database}\footnote{http://www.live.ece.utexas.edu/research/quality/subjective.htm} (LIVE IQA) \cite{sheikh2006statistical} has two releases, Release 1 and Release 2. The Laboratory for Image and Video Engineering (University of Texas at Austin) conducted an extensive experiment to obtain scores from human subjects for a number of images distorted with different distortion types. Release 2 has more distortion types --- JPEG (169 images), JPEG2000 (175 images), Gaussian blur (145 images), white noise (145 images), and bit errors in the JPEG2000 bit stream (145 images). The subjective quality scores in this database are DMOS (Differential MOS), ranging from 0 to 100. The \textbf{A57 Database}\footnote{http://vision.eng.shizuoka.ac.jp/mod/page/view.php?id=26} \cite{chandler2007vsnr} has 3 pristine images and 54 distorted images, including six types of distortions (JPEG, JPEG2000, JPEG2000 with dynamic contrast-based quantization, quantization of the LH subbands of the DWT, additive Gaussian white noise, Gaussian blurring). Quality score ratings (0 to 1) are provided in the form of DMOS. The \textbf{Toyoma Database} \cite{tourancheau2008impact} consists of 14 pristine images and 168 distorted images, including two types of distortions (JPEG, JPEG2000). Quality score ratings (1 to 5) are provided in the form of MOS. The \textbf{Tampere Image Database 2008}\footnote{http://www.ponomarenko.info/tid2008.htm} (TID2008) \cite{ponomarenko2009tid2008} contains 25 reference images and 1,700 distorted images (25 reference images $\times17$ types of distortions $\times4$ levels of distortions). The MOS was obtained from the results of 838 experiments carried out by observers from three countries. The 838 observers performed 256,428 comparisons of visual quality of distorted images, or 512,856 evaluations of relative visual quality in image pairs. A higher value of MOS (0 - minimal, 9 - maximal; the MSE of each score is 0.019) corresponds to higher visual quality of the image. An enclosed file ``mos.txt'' contains the Mean Opinion Score for each distorted image. The \textbf{Computational and Subjective Image Quality}\footnote{http://vision.eng.shizuoka.ac.jp/mod/page/view.php?id=23} (CSIQ) \cite{larson2010most} database consists of 30 original images, each distorted using one of six types of distortions, each at four to five different levels of distortion. The images were subjectively rated based on a linear displacement of the images across four calibrated monitors placed side-by-side with equal viewing distance to the observer. The database contains 5,000 subjective ratings from 35 different --- both male and female --- observers. Quality score ratings (0 to 1) are provided in the form of DMOS. The \textbf{IVC-LAR}\footnote{http://ivc.univ-nantes.fr/en/databases/LAR/} \cite{ivclardb} database contains 8 pristine images (4 natural images and 4 art images) and 120 distorted images, consisting of three types of distortions (JPEG, JPEG2000, locally adaptive resolution coding). Quality score ratings (1 to 5) are provided in the form of MOS. The \textbf{Wireless Imaging Quality}\footnote{https://computervisiononline.com/dataset/1105138665} (WIQ) Database \cite{engelke2009reduced}, \cite{engelke2010subjective} consists of 7 reference images and 80 distorted images. The subjective quality scores are given in DMOS, ranging from 0 to 100. 
In contrast to other publicly available image quality databases, the \textbf{MMSP 3D Image Quality Assessment Database}\footnote{https://mmspg.epfl.ch/downloads/3diqa/} \cite{goldmann2010impact} consists of stereoscopic images with a resolution of $1,920\times1,080$ pixels. Specifically, 10 indoor and outdoor scenes were captured with a wide variety of colors, textures, and depth structures. Furthermore, 6 different stimuli were considered for each scene, corresponding to different camera distances (10, 20, 30, 40, 50, and 60 cm). The \textbf{Image Retargeting Subjective Quality}\footnote{http://ivp.ee.cuhk.edu.hk/projects/demo/retargeting/index.html} (IRSQ) Database \cite{ma2012image}, \cite{ma2012study} consists of 57 reference images grouped into four attributes, specifically face and people, clear foreground object, natural scenery, and geometric structure. Moreover, ten different retargeting methods (cropping, seam carving, scaling, shift-map editing, scale and stretch, etc.) were applied to generate retargeted images. In total, 171 test images can be found in this database. The \textbf{Tampere Image Database 2013}\footnote{http://www.ponomarenko.info/tid2013.htm} (TID2013) \cite{ponomarenko2013color} contains 25 reference images and 3,000 distorted images (25 reference images $\times24$ types of distortions $\times5$ levels of distortions). MOS (Mean Opinion Score) is provided as the subjective score, ranging from 0 to 9. The \textbf{CID2013}\footnote{http://www.helsinki.fi/psychology/groups/visualcognition/} \cite{virtanen2014cid2013} database contains 474 images with authentic distortions captured by 79 imaging devices, such as mobile phones, digital still cameras, and digital single-lens reflex cameras. The \textbf{LIVE In the Wild Image Quality Challenge Database}\footnote{http://live.ece.utexas.edu/research/ChallengeDB/} \cite{livewild} contains widely diverse authentic image distortions on a large number of images captured using a representative variety of modern mobile devices. It has over 350,000 opinion scores on 1,162 images evaluated by over 8,100 unique human observers. The \textbf{Waterloo Exploration}\footnote{https://ece.uwaterloo.ca/~k29ma/exploration/} \cite{228606:5144146} database consists of 4,744 reference images and 94,880 distorted images created from them. Instead of collecting MOS for each test image, the authors introduced three alternative test criteria to evaluate the performance of IQA models: the discriminability test (D-test), the listwise ranking consistency test (L-test), and the pairwise preference consistency test (P-test). In contrast to other databases considering artificial distortions, \textbf{MDID}\footnote{https://www.sz.tsinghua.edu.cn/labs/vipl/mdid.html} \cite{sun2017mdid} obtains distorted images from reference images with random types and levels of distortions. In this way, each distorted image contains multiple types of distortions simultaneously. Gaussian noise, Gaussian blur, contrast change, JPEG noise, and JPEG2000 noise were considered. The main challenge in applying state-of-the-art deep learning methods to predict image quality in the wild is the relatively small size of existing quality-scored datasets. The reason for the lack of larger datasets is the massive resources required in generating diverse and publishable content. 
In \textbf{KonIQ-10k}\footnote{http://database.mmsp-kn.de/koniq-10k-database.html} \cite{lin2018koniq}, a new systematic and scalable approach is presented to create large-scale, authentic image datasets for image quality assessment. KonIQ-10k \cite{lin2018koniq} consists of 10,073 images, on which large-scale crowdsourcing experiments have been carried out in order to obtain reliable quality ratings from 1,467 crowd workers (1.2 million ratings) \cite{saupe2016crowd}. During the test, users exhibiting unusual scoring behavior were removed. \textbf{KADID-10k}\footnote{http://database.mmsp-kn.de/kadid-10k-database.html} \cite{lin2019kadid} consists of 81 pristine images and $10,125$ distorted images derived from the pristine images considering $25$ different distortion types at 5 intensity levels ($10,125=81\times25\times5$). In contrast, \textbf{KADIS-700k} \cite{lin2019kadid} contains $140,000$ pristine images, with distorted images derived using $25$ different distortion types at 5 intensity levels, but MOS values are not given in this database. \begin{table}[h] \caption{Major publicly available image quality assessment databases. Publicly available image quality databases can be divided into three groups. The first group contains a smaller set of reference images, from which artificially distorted images are derived using different noise types at different intensity levels. The second group contains only images with authentic distortions. The third group contains pristine images, distorted images, and distortion levels, but no MOS values. } \centering \begin{center} \begin{tabular}{ |c|c|c|c|c|c|} \hline \footnotesize{Database} & \footnotesize{Year} & \footnotesize{Reference images} & \footnotesize{Test images} & \footnotesize{Distortion type} & \footnotesize{Subjective score} \\ \hline \footnotesize{IVC \cite{ivcdb}} & \footnotesize{2005} & \footnotesize{10} & \footnotesize{235} & \footnotesize{artificial} & \footnotesize{MOS (1-5)} \\ \footnotesize{LIVE IQA \cite{sheikh2006statistical}} & \footnotesize{2006} & \footnotesize{29} & \footnotesize{779} & \footnotesize{artificial} & \footnotesize{DMOS (0-100)} \\ \footnotesize{A57 \cite{chandler2007vsnr}} & \footnotesize{2007} & \footnotesize{3} & \footnotesize{54} & \footnotesize{artificial} & \footnotesize{DMOS (0-1)} \\ \footnotesize{Toyoma \cite{tourancheau2008impact}} & \footnotesize{2008} & \footnotesize{14} & \footnotesize{168} & \footnotesize{artificial} & \footnotesize{MOS (1-5)} \\ \footnotesize{TID2008 \cite{ponomarenko2009tid2008}} & \footnotesize{2008} & \footnotesize{25} & \footnotesize{1,700}& \footnotesize{artificial} & \footnotesize{MOS (0-9)} \\ \footnotesize{CSIQ \cite{larson2010most}} & \footnotesize{2009} & \footnotesize{30} & \footnotesize{866} & \footnotesize{artificial} & \footnotesize{DMOS (0-1)} \\ \footnotesize{IVC-LAR \cite{ivclardb}} & \footnotesize{2009} & \footnotesize{8} & \footnotesize{120} & \footnotesize{artificial} & \footnotesize{MOS (1-5)} \\ \footnotesize{WIQ \cite{engelke2009reduced}, \cite{engelke2010subjective}} & \footnotesize{2009} & \footnotesize{7} & \footnotesize{80} & \footnotesize{artificial} & \footnotesize{DMOS (0-100)} \\ \footnotesize{MMSP 3D \cite{goldmann2010impact}} & \footnotesize{2009} & \footnotesize{9} & \footnotesize{54} & \footnotesize{artificial} & \footnotesize{MOS (0-100)} \\ \footnotesize{IRSQ \cite{ma2012image}, \cite{ma2012study}} & \footnotesize{2011} & \footnotesize{57} & \footnotesize{171} & \footnotesize{artificial} & \footnotesize{MOS (0-5)} \\ \footnotesize{TID2013 \cite{ponomarenko2013color}} & \footnotesize{2013} & 
\footnotesize{25} & \footnotesize{3,000}& \footnotesize{artificial} & \footnotesize{MOS (0-9)} \\ \footnotesize{CID2013 \cite{virtanen2014cid2013}} & \footnotesize{2013} & \footnotesize{8} & \footnotesize{474}& \footnotesize{authentic} & \footnotesize{MOS (0-9)} \\ \footnotesize{LIVE In the Wild \cite{livewild}} & \footnotesize{2016} & \footnotesize{-} & \footnotesize{1,162}& \footnotesize{authentic} & \footnotesize{MOS (1-5)} \\ \footnotesize{Waterloo Exploration \cite{228606:5144146}} & \footnotesize{2016} & \footnotesize{4,744} & \footnotesize{94,880}&\footnotesize{artificial}& - \\ \footnotesize{MDID \cite{sun2017mdid}} & \footnotesize{2017} & \footnotesize{20} &\footnotesize{1,600} &\footnotesize{artificial}& \footnotesize{MOS (0-8)} \\ \footnotesize{KonIQ-10k \cite{lin2018koniq}} & \footnotesize{2018} & \footnotesize{-} &\footnotesize{10,073}& \footnotesize{authentic} & \footnotesize{MOS (1-5)} \\ \footnotesize{KADID-10k \cite{kadid10k}} & \footnotesize{2019} & \footnotesize{81} &\footnotesize{10,125}& \footnotesize{artificial} & \footnotesize{MOS (1-5)} \\ \footnotesize{KADIS-700k \cite{kadid10k}} & \footnotesize{2019} &\footnotesize{140,000}&\footnotesize{700,000}& \footnotesize{artificial} & - \\ \hline \end{tabular} \end{center} \label{table:iqadatabase} \end{table} \section{Experimental results} \label{sec:exp} \begin{table} \caption{Performance comparison of 32 FR-IQA algorithms on the MDID database.} \centering \begin{tabular}{|l|l|l|l|l|} \toprule Method &Year & PLCC & SROCC & KROCC \\ \midrule BLeSS-SR-SIM \cite{temel2016bless}& 2016 & 0.7535 &0.8148 &0.6258 \\ BLeSS-FSIM \cite{temel2016bless}& 2016 & 0.8193 &0.8467 &0.6576 \\ BLeSS-FSIMc \cite{temel2016bless}& 2016 & 0.8527 &0.8827 &0.7018 \\ CBM \cite{gao2005content}& 2005 &0.7367 &0.7212 &0.5306 \\ CSV \cite{temel2016csv}& 2016 &0.8785& 0.8814& 0.6998 \\ CW-SSIM \cite{sampat2009complex}&2009 &0.5900 &0.6148 &0.4450 \\ DSS \cite{balanov2015image}&2015 &0.8714 &0.8661 &0.6793 \\ ESSIM \cite{zhang2013edge}& 2013 &0.6694& 0.8253& 0.6349\\ FSIM \cite{zhang2011fsim}& 2011 &0.8591 &0.8870 &0.7074 \\ FSIMc \cite{zhang2011fsim}& 2011 &0.8639 &0.8902 &0.7122 \\ GMSD \cite{xue2013gradient}& 2013 &0.8544& 0.8617& 0.6797 \\ HaarPSI \cite{reisenhofer2018haar}& 2018 &\textbf{0.9051}& \textbf{0.9028}& \textbf{0.7340}\\ MAD \cite{larson2010most}& 2010 &0.7439 &0.7243 &0.5327\\ MCSD \cite{wang2016multiscale}& 2016 &0.8386 &0.8457 &0.6622\\ MDSI ('mult') \cite{nafchi2016mean}& 2016 &0.8130 &0.8278 &0.6441\\ MDSI ('sum') \cite{nafchi2016mean}& 2016 &0.8249 &0.8363 &0.6527\\ MS-SSIM \cite{wang2003multiscale}&2003 &0.7884 &0.8292 &0.6360\\ MS-UNIQUE \cite{prabhushankar2017ms}& 2017 &0.8604 &0.8712 &0.6893\\ NQM \cite{damera2000image}& 2000 &0.6177 &0.5869 &0.4143\\ PerSIM \cite{temel2015persim}& 2015 &0.8282 &0.8196 &0.6296 \\ PSNR-HVS \cite{egiazarian2006new} &2006 &0.6790 &0.6637 &0.4845\\ PSNR-HVS-M \cite{ponomarenko2007between} &2007 &0.6875 &0.6739 &0.4944\\ QILV \cite{aja2006image}& 2006 &0.3296 &0.4592 &0.3214\\ QSSIM \cite{kolaman2011quaternion}& 2011 &0.8022 &0.8014 &0.6074\\ RFSIM \cite{zhang2010rfsim}& 2010 &0.7035& 0.6758& 0.4884\\ SCIELAB \cite{zhang1997color}& 1997 &0.2552 &0.1232 &0.0824\\ SR-SIM \cite{zhang2012sr}& 2012 &0.7948 &0.8517 &0.6683\\ SSIM \cite{wang2004image}& 2004 &0.5798 &0.5761 &0.4105 \\ SSIM CNN \cite{amirshahi2018reviving} & 2018 &0.8706 &0.8804 &0.6992\\ SUMMER \cite{temel2019perceptual}& 2019 &0.7427 &0.7343 &0.5434\\ UQI \cite{wang2002universal}& 2002 &0.2175 &0.3608 &0.2476\\ VSI \cite{zhang2014vsi}& 2014 
&0.7883 &0.8570 &0.6710\\ \bottomrule \end{tabular} \label{tab:table} \end{table} The evaluation of objective visual quality assessment is based on the correlation between the predicted and the ground-truth quality scores. Pearson's linear correlation coefficient (PLCC) and Spearman's rank order correlation coefficient (SROCC) are widely applied to this end. Furthermore, some authors also report Kendall's rank order correlation coefficient (KROCC). The PLCC between data sets $A$ and $B$ is defined as \begin{equation} PLCC(A,B) = \frac{\sum_{i=1}^{n} (A_i-\overline{A})(B_i-\overline{B})}{\sqrt{\sum_{i=1}^{n}(A_i-\overline{A})^{2}}\sqrt{\sum_{i=1}^{n}(B_i-\overline{B})^{2}}}, \end{equation} where $\overline{A}$ and $\overline{B}$ stand for the averages of sets $A$ and $B$, and $A_i$ and $B_i$ denote the $i$th elements of sets $A$ and $B$, respectively. For two ranked sets $A$ and $B$, the SROCC is defined as \begin{equation} SROCC(A,B)=\frac{\sum_{i=1}^{n} (A_i-\hat{A})(B_i-\hat{B})}{\sqrt{\sum_{i=1}^{n}(A_i-\hat{A})^{2}}\sqrt{\sum_{i=1}^{n}(B_i-\hat{B})^{2}}}, \end{equation} where $\hat{A} $ and $\hat{B} $ denote the mean ranks of sets $A$ and $B$. The KROCC between datasets $A$ and $B$ can be calculated as \begin{equation} KROCC(A,B) = \frac{n_c-n_d}{\frac{1}{2}n(n-1)}, \end{equation} where $n$ is the length of the input vectors, $n_c$ is the number of concordant pairs between $A$ and $B$, and $n_d$ is the number of discordant pairs between $A$ and $B$. We collected 31 FR-IQA metrics whose source codes are available online. Furthermore, we reimplemented SSIM CNN\footnote{https://github.com/Skythianos/Pretrained-CNNs-for-full-reference-image-quality-assessment} \cite{amirshahi2018reviving} in MATLAB R2019a. In Table \ref{tab:table}, we present the PLCC, SROCC, and KROCC values measured over the MDID database. It can be clearly seen from the results that there is still considerable room for the improvement of FR-IQA algorithms, because only HaarPSI \cite{reisenhofer2018haar} was able to produce PLCC and SROCC values higher than 0.9. Furthermore, only three methods --- FSIM \cite{zhang2011fsim}, FSIMc \cite{zhang2011fsim}, and HaarPSI \cite{reisenhofer2018haar} --- were able to produce KROCC values higher than 0.7. \section{Conclusion} \label{sec:conc} First, we gave information about the most widely used image quality databases. Subsequently, we extensively evaluated 32 state-of-the-art FR-IQA methods on the MDID database, whose images contain multiple types of distortions simultaneously. We demonstrated that there is still considerable room for the improvement of FR-IQA algorithms, because only HaarPSI \cite{reisenhofer2018haar} was able to produce PLCC and SROCC values higher than 0.9. \newpage \bibliographystyle{unsrt}
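As a companion to the three definitions above, the reported correlations can be computed with standard SciPy routines, as in the sketch below; the arrays in the usage example are synthetic stand-ins for a metric's predicted scores and the corresponding MOS values, not actual MDID data. Note that \texttt{kendalltau} returns the tie-corrected $\tau_{b}$, which coincides with the KROCC formula above when there are no ties.
\begin{verbatim}
import numpy as np
from scipy.stats import pearsonr, spearmanr, kendalltau

def evaluate_metric(scores, mos):
    # PLCC, SROCC, and KROCC between objective scores and subjective MOS
    plcc, _ = pearsonr(scores, mos)
    srocc, _ = spearmanr(scores, mos)
    krocc, _ = kendalltau(scores, mos)
    return plcc, srocc, krocc

# Synthetic example; MDID provides 1,600 images with MOS in the range 0-8.
rng = np.random.default_rng(1)
mos = rng.uniform(0.0, 8.0, size=1600)
scores = mos + rng.normal(0.0, 1.0, size=1600)  # noisy objective scores
print(evaluate_metric(scores, mos))
\end{verbatim}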
\section{Introduction} In this paper we study the following semilinear elliptic problem with Robin boundary condition: \begin{equation}\label{eq1} \left\{\begin{array}{ll} -\Delta u(z)+\xi(z)u(z)=f(z,u(z))\ \mbox{in}\ \Omega,\\ \frac{\partial u}{\partial n}+\beta(z)u=0\ \mbox{on}\ \partial\Omega. \end{array}\right\} \end{equation} In this problem $\Omega\subseteq\RR^N$ is a bounded domain with a $C^2$-boundary $\partial\Omega$. The potential function $\xi\in L^s(\Omega)$ with $s>N$ is in general sign-changing. So, the linear part of (\ref{eq1}) is indefinite. The reaction term $f(z,x)$ is a Carath\'eodory function (that is, for all $x\in\RR,\ z\mapsto f(z,x)$ is measurable and for almost all $z\in\Omega,\ x\mapsto f(z,x)$ is continuous), which exhibits superlinear growth near $\pm\infty$. However, $f(z,\cdot)$ does not satisfy the (usual in such cases) Ambrosetti-Rabinowitz condition (AR-condition, for short). Instead, we employ a more general condition which incorporates in our framework superlinear functions with ``slower'' growth near $\pm\infty$, which fail to satisfy the AR-condition. Another nonstandard feature of our work is that $f(z,\cdot)$ does not have subcritical polynomial growth. In our case, the growth of $f(z,\cdot)$ is almost critical in the sense that $\lim\limits_{x\rightarrow\pm\infty}\frac{f(z,x)}{|x|^{2^*-2}x}=0$ uniformly for almost all $z\in\Omega$, with $2^*$ being the Sobolev critical exponent for 2, defined by \begin{eqnarray*} 2^*=\left\{\begin{array}{ll} \frac{2N}{N-2} & \mbox{if}\ N\geq 3\\ +\infty & \mbox{if}\ N=1,2. \end{array}\right. \end{eqnarray*} In the boundary condition, $\frac{\partial u}{\partial n}$ denotes the normal derivative of $u\in H^1(\Omega)$ defined by extension of the continuous linear map $$C^1(\overline{\Omega})\ni u\mapsto\frac{\partial u}{\partial n}=(Du,n)_{\RR^N},$$ with $n(\cdot)$ being the outward unit normal on $\partial\Omega$. The boundary coefficient is $\beta\in W^{1,\infty}(\partial\Omega)$ and we assume that $\beta(z)\geq0$ for all $z\in\partial\Omega$. When $\beta\equiv0$, we have the usual Neumann problem. Our aim in this paper is to prove existence and multiplicity results within this general analytical framework. Recently, there have been such results primarily for Dirichlet problems. We mention the works of Lan and Tang \cite{11} (with $\xi\equiv0$), Li and Wang \cite{12}, Miyagaki and Souto \cite{14} (with $\xi\equiv0$), Papageorgiou and Papalini \cite{18}, Qin, Tang and Zhang \cite{24}, Wu and An \cite{29}, Zhang-Liu \cite{30}. For Neumann and Robin problems, we mention the works of D'Agui, Marano and Papageorgiou \cite{3}, Papageorgiou and R\u adulescu {\cite{20, 21,pr17}}, {Papageorgiou, R\u adulescu and Repov\v{s} \cite{prr17}}, Papageorgiou and Smyrlis \cite{23}, {Pucci {\it et al.} \cite{apv13, cpv12}}, Shi and Li \cite{26}. Superlinear problems were treated by Lan and Tang \cite{11}, Li and Wang \cite{12}, Miyagaki and Souto \cite{14}, who proved only existence results. The superlinear case was not studied in the context of Neumann and Robin problems. Our approach uses variational methods based on the critical point theory, together with suitable truncation and perturbation techniques and Morse theory (critical groups). \section{Mathematical Background}\label{sec2} Let $X$ be a Banach space and $X^*$ its topological dual. By $\left\langle \cdot,\cdot\right\rangle$ we denote the duality brackets for the pair $(X^*,X)$. 
Given $\varphi\in C^1(X,\RR)$, we say that $\varphi$ satisfies the ``Cerami condition" (the ``C-condition" for short), if the following property holds: ``Every sequence $\{u_n\}_{n\geq1}\subseteq X$ such that $\{\varphi(u_n)\}_{n\geq1}\subseteq\RR$ is bounded and $$(1+||u_n||)\varphi'(u_n)\rightarrow0\ \mbox{in}\ X^*\ \mbox{as}\ n\rightarrow\infty,$$ \ \ \, admits a strongly convergent subsequence". This is a compactness-type condition on $\varphi$, which compensates for the fact that the ambient space $X$ is in general not locally compact. It leads to a deformation theorem from which one can derive the minimax theory of the critical values of $\varphi$. A fundamental result of this theory is the so-called ``mountain pass theorem", which we state here in a slightly more general form (see Gasinski and Papageorgiou \cite[p. 648]{6}). {We also point out that Theorem \ref{th1} is a direct consequence of Ekeland \cite[Corollaries 4 and 9]{eke}. } \begin{theorem}\label{th1} Let $X$ be a Banach space. Assume that $\varphi\in C^1(X,\RR)$ satisfies the C-condition and for some $u_0,u_1\in X$ with $||u_1-u_0||>r>0$ we have $$\max\{\varphi(u_0),\varphi(u_1)\}<\inf[\varphi(u):||u-u_0||=r]=m_r$$ and $c=\inf\limits_{\gamma\in \Gamma}\max\limits_{0\leq t\leq 1}\varphi(\gamma(t))\ \mbox{with}\ \Gamma=\{\gamma\in C([0,1],X):\gamma(0)=u_0,\gamma(1)=u_1\}$. Then $c\geq m_r$ and $c$ is a critical value of $\varphi$ (that is, there exists $u_0\in X$ such that $\varphi'(u_0)=0$ and $\varphi(u_0)=c$). \end{theorem} It is well known that when the functional $\varphi$ has symmetry properties, then we can have an infinity of critical points. In this direction, we mention two such results which we will use in the sequel. The first is the so-called ``symmetric mountain pass theorem" due to Rabinowitz \cite[Theorem 9.12, p. 55]{25} (see also Gasinski and Papageorgiou \cite[Corollary 5.4.35, p. 688]{6}). \begin{theorem}\label{th2} Let $X$ be an infinite dimensional Banach space such that $X=Y\oplus V$ with $Y$ finite dimensional. Assume that $\varphi\in C^1(X,\RR)$ satisfies the C-condition and that \begin{itemize} \item [(i)] there exist $\vartheta,\rho>0$ such that $\varphi|_{\partial B_{\rho}\cap V}\geq\vartheta>0$ (here $\partial B_\rho=\{x\in X:||x||=\rho\}$); \item [(ii)] for every finite dimensional subspace $E\subseteq X$, we can find $ R=R(E)$ such that $\varphi|_{X\backslash B_R}\leq 0$ (here $B_R=\{u\in X:||u||<R\}$). \end{itemize} Then $\varphi$ has an unbounded sequence of critical points. \end{theorem} The second such abstract multiplicity result that we will need, is a variant of a classical result of Clark \cite{2}, due to Heinz \cite{8} and Kajikiya \cite{10}. \begin{theorem}\label{th3} If $X$ is a Banach space, $\varphi\in C^1(X,\RR)$ satisfies the C-condition, is even and bounded below, $\varphi(0)=0$ and for every $n\in\NN$ there exist an $n$-dimensional subspace $Y_n$ of $X$ and $\rho_n>0$ such that $$\sup[\varphi(u):u\in Y_n\cap\partial B_{\rho_n}]<0$$ then there exists a sequence $\{u_n\}_{n\geq1}\subseteq X$ of critical points of $\varphi$ such that $u_n\rightarrow 0$ in $X$. \end{theorem} In the analysis of problem (\ref{eq1}), we will use the following three spaces: \begin{itemize} \item [$\bullet$] the Sobolev space $H^1(\Omega)$; \item [$\bullet$] the Banach space $C^1(\overline{\Omega})$; \item [$\bullet$] the ``boundary" Lebesgue spaces $L^q(\partial\Omega)$ with $1\leq q\leq \infty$. 
\end{itemize} The Sobolev space $H^1(\Omega)$ is a Hilbert space with inner product given by $$(u,h)_{H^1}=\int_\Omega uhdz+\int_\Omega(Du,Dh)_{\RR^N}dz\ \mbox{for all}\ u,h\in H^1(\Omega).$$ By $||\cdot||$ we denote the corresponding norm defined by $$||u||=\left[ ||u||^2_2 + ||Du||^2_2 \right]^{1/2}\ \mbox{for all}\ u\in H^1(\Omega).$$ The Banach space $C^1(\overline{\Omega})$ is an ordered Banach space with positive (order) cone given by $$C_+=\{u\in C^1(\overline{\Omega}):u(z)\geq0\ \mbox{for all}\ z\in\overline{\Omega}\}.$$ This cone has a nonempty interior given by $$D_+=\{u\in C_+:u(z)>0\ \mbox{for all}\ z\in\overline{\Omega}\}.$$ On $\partial\Omega$ we consider the $(N-1)$-dimensional Hausdorff (surface) measure $\sigma(\cdot)$. Using this measure, we can define in the usual way the ``boundary'' Lebesgue space $L^q(\partial\Omega),1\leq q\leq\infty$. From the theory of Sobolev spaces, we know that there exists a unique continuous linear map $\gamma_0:H^1(\Omega)\rightarrow L^2(\partial\Omega)$, known as the ``trace map'', such that $$\gamma_0(u)=u|_{\partial\Omega}\ \mbox{for all}\ u\in H^1(\Omega)\cap C(\overline{\Omega}).$$ So, the trace map extends the notion of boundary values to every Sobolev function. We know that the map $\gamma_0$ is compact into $L^q(\partial\Omega)$ for all $q\in\left[1,\frac{2(N-1)}{N-2}\right)$ if $N\geq3$ and into $L^q(\partial\Omega)$ for all $q\geq1$ if $N=1,2$. Moreover, we have $${\rm ker}\,\gamma_0=H^1_0(\Omega)\ \mbox{and}\ {\rm im}\,\gamma_0=H^{\frac{1}{2},2}(\partial\Omega).$$ In the sequel, for the sake of notational simplicity, we drop the use of the trace map. All restrictions of Sobolev functions on $\partial\Omega$ are understood in the sense of traces. We will need some facts about the spectrum of the differential operator $u\mapsto -\Delta u+\xi(z)u$ with Robin boundary condition. So, we consider the following linear eigenvalue problem: \begin{equation}\label{eq2} \left\{\begin{array}{l} -\Delta u(z)+\xi(z)u(z)=\hat{\lambda}u(z)\ \mbox{in}\ \Omega,\\ \frac{\partial u}{\partial n}+\beta(z)u=0\ \mbox{on}\ \partial\Omega. \end{array}\right\} \end{equation} We assume that $$\xi\in L^s(\Omega)\ \mbox{with}\ s>N\ \mbox{and}\ \beta\in W^{1,\infty}(\partial\Omega)\ \mbox{with}\ \beta(z)\geq0\ \mbox{for all}\ z\in\partial\Omega.$$ Let $\gamma:H^1(\Omega)\rightarrow\RR$ be the $C^1$-functional defined by $$\gamma(u)=||Du||^2_2+\int_\Omega\xi(z)u^2 dz+\int_{\partial\Omega}\beta(z)u^2d\sigma\ \mbox{for all}\ u\in H^1(\Omega).$$ From D'Agui, Marano and Papageorgiou \cite{3}, we know that we can find $\mu>0$ such that \begin{equation}\label{eq3} \gamma(u)+\mu||u||^2_2\geq c_0||u||^2\ \mbox{for all}\ u\in H^1(\Omega),\ \mbox{some}\ c_0>0. \end{equation} Using (\ref{eq3}) and the spectral theorem for compact self-adjoint operators on a Hilbert space, we show that the spectrum $\hat{\sigma}$(\ref{eq2}) of (\ref{eq2}) consists of a sequence $\{\hat{\lambda}_k\}_{k\in\NN}$ of eigenvalues such that $\hat{\lambda}_k\rightarrow+\infty$. By $E(\hat{\lambda}_k)$ $(k\in \NN)$ we denote the corresponding eigenspace. These items have the following properties: \begin{itemize} \item [$\bullet$] $\hat{\lambda}_1$ is simple (that is, ${\rm dim}\,E(\hat{\lambda}_1)=1$) and \begin{equation}\label{eq4} \hat{\lambda}_1=\inf\left[ \frac{\gamma(u)}{||u||^2_2}:u\in H^1(\Omega),u\neq0 \right]. 
\end{equation} \item [$\bullet$] For every $m\geq2$ we have \begin{eqnarray} \hat{\lambda}_m & = & \inf\left[ \frac{\gamma(u)}{||u||^2_2}:u\in \overline{\mathop{\oplus}\limits_{k\geq m}E(\hat{\lambda}_k)},u\neq0 \right] \nonumber\\ & = & \sup\left[ \frac{\gamma(u)}{||u||^2_2}:u\in \mathop{\oplus}\limits_{k=1}^{m}E(\hat{\lambda}_k), u\neq0 \right].\label{eq5} \end{eqnarray} \item [$\bullet$] For every $k\in\NN,\ E(\hat{\lambda}_k)$ is finite dimensional, $E(\hat{\lambda}_k)\subseteq C^1(\overline{\Omega})$ and it has the ``unique continuation property'' (UCP for short), that is, if $u\in E(\hat{\lambda}_k)$ vanishes on a set of positive measure, then $u\equiv0$. \end{itemize} Note that in (\ref{eq4}) the infimum is realized on $E(\hat{\lambda}_1)$ and in (\ref{eq5}) both the infimum and the supremum are realized on $E(\hat{\lambda}_m)$. The above properties imply that the nontrivial elements of $E(\hat{\lambda}_1)$ have constant sign, while the nontrivial elements of $E(\hat{\lambda}_m)$ (for $m\geq2$) are all nodal (that is, sign changing) functions. By $\hat{u}_1$ we denote the $L^2$-normalized (that is, $||\hat{u}_1||_2=1$) positive eigenfunction. We know that $\hat{u}_1\in C_+$ and by the Harnack inequality (see, for example, Motreanu, Motreanu and Papageorgiou \cite[p. 211]{15}), we have $\hat{u}_1(z)>0$ for all $z\in\Omega$. Moreover, assuming that $\xi^+\in L^\infty(\Omega)$ and using the strong maximum principle, we have $\hat{u}_1\in D_+$. Now let $f_0 :\Omega\times\RR \rightarrow\RR$ be a Carath\'eodory function satisfying $$|f_0(z,x)|\leq a_0(z)(1+|x|^{2^*-1})\ \mbox{for almost all}\ z\in\Omega,\ \mbox{all}\ x\in\RR\,.$$ We set $F_0(z,x)=\int^x_0 f_0(z,s)ds$ and consider the $C^1$-functional $\varphi_0:H^1(\Omega)\rightarrow\RR$ defined by $$\varphi_0(u)=\frac{1}{2}\gamma(u)-\int_\Omega F_0(z,u)dz\ \mbox{for all}\ u\in H^1(\Omega).$$ The next result is a special case of a more general result of Papageorgiou and R\u adulescu \cite{19, 22}. \begin{prop}\label{prop4} Assume that $u_0\in H^1(\Omega)$ is a local $C^1(\overline{\Omega})$-minimizer of $\varphi_0$, that is, there exists $\delta_1>0$ such that $$\varphi_0(u_0)\leq\varphi_0(u_0+h)\ \mbox{for all}\ h\in C^1(\overline{\Omega})\ \mbox{with}\ ||h||_{C^1(\overline{\Omega})}\leq\delta_1.$$ Then $u_0\in C^1(\overline{\Omega})$ and $u_0$ is also a local $H^1(\Omega)$-minimizer of $\varphi_0$, that is, there exists $\delta_2>0$ such that $$\varphi_0(u_0)\leq\varphi_0(u_0+h)\ \mbox{for all}\ h\in H^1(\Omega)\ \mbox{with}\ ||h||\leq\delta_2.$$ \end{prop} Next let us recall a few basic definitions and facts from Morse theory, which we will need in the sequel. So, let $X$ be a Banach space, $\varphi\in C^1(X,\RR)$ and $c\in\RR$. We introduce the following sets: $$\varphi^c=\{ u\in X:\varphi(u)\leq c \},\ K_\varphi=\{ u\in X:\varphi'(u)=0 \},\ K^c_\varphi=\{ u\in K_\varphi:\varphi(u)=c \}.$$ Let $(Y_1,Y_2)$ be a topological pair such that $Y_2\subseteq Y_1\subseteq X$. For $k\in\NN_0$, let $H_k(Y_1,Y_2)$ denote the $k$th relative singular homology group for the pair $(Y_1,Y_2)$ with integer coefficients (for $k\in-\NN$, we have $H_k(Y_1,Y_2)=0$). Given $u_0\in K^c_\varphi$ isolated, the critical groups of $\varphi$ at $u_0$ are defined by $$C_k(\varphi,u_0)=H_k(\varphi^c\cap U,\varphi^c\cap U\backslash\{u_0\})\ \mbox{for all}\ k\in\NN_0,$$ with $U$ being a neighbourhood of $u_0$ satisfying $\varphi^c\cap K_\varphi\cap U=\{u_0\}$. 
The excision property of singular homology implies that this definition of critical groups is independent of the choice of the neighbourhood $U$. Suppose that $\varphi$ satisfies the C-condition and that $\inf\varphi(K_\varphi)>-\infty$. Let $c<\inf\varphi(K_\varphi)$. The critical groups of $\varphi$ at infinity are defined by $$C_k(\varphi, \infty)=H_k(X,\varphi^c)\ \mbox{for all}\ k\in\NN_0.$$ This definition is independent of the choice of $c<\inf\varphi(K_\varphi)$. Indeed, let $c<\hat{c}<\inf\varphi(K_\varphi)$. Then from a corollary of the second deformation theorem (see Motreanu, Motreanu and Papageorgiou \cite[Corollary 5.35, p. 115]{15}) we have that $$\varphi^c\ \mbox{is a strong deformation retract of}\ \varphi^{\hat{c}}.$$ Therefore, we have $$H_k(X, \varphi^c)=H_k(X,\varphi^{\hat{c}})\ \mbox{for all}\ k\in\NN_0.$$ We assume that $K_\varphi$ is finite and introduce the following quantities: \begin{eqnarray*} M(t,u) & = & \sum_{k\geqslant0}\ \mbox{rank}\ C_k(\varphi, u)t^k\ \mbox{for all}\ t\in\RR \mbox{, all}\ u\in K_\varphi,\\ P(t,\infty) & = & \sum_{k\geqslant0}\ \mbox{rank}\ C_k(\varphi, \infty)t^k\ \mbox{for all}\ t\in\RR. \end{eqnarray*} Then the Morse relation says that \begin{equation}\label{eq6} \sum_{u\in K_\varphi}\ M(t,u) = P(t,\infty)+(1+t)Q(t), \end{equation} where $Q(t)=\sum_{k\geqslant0}\hat{\beta}_k t^k$ is a formal series in $t\in\RR$, with nonnegative integer coefficients $\hat{\beta}_k$. Finally, we fix our notation. So, for $x\in\RR$, we set $x^\pm=\max\{\pm x,0\}$. Then for $u\in H^1(\Omega)$ we define $u^\pm(\cdot)=u(\cdot)^\pm$ and we have $$u=u^+-u^-\ \mbox{,}\quad | u|=u^++u^-\ \mbox{,}\quad u^\pm\in H^1(\Omega).$$ Given a measurable function $g:\Omega\times\RR\rightarrow\RR$ (for example, a Carath\'eodory function), by $N_g$ we denote the Nemytskii map corresponding to $g$, that is, $$N_g(u)(\cdot)=g(\cdot,u(\cdot))\ \mbox{for all}\ u\in H^1(\Omega).$$ Evidently, $z\mapsto N_g(u)(z)$ is measurable on $\Omega$. By $|\cdot|_N$ we denote the Lebesgue measure on $\RR^N$. We set $$m_+=\min\{k\in\NN:\hat{\lambda}_k>0\}\ \mbox{and}\ m_-=\max\{k\in\NN:\hat{\lambda}_k<0\}.$$ Then we have the following orthogonal direct sum decomposition of the Sobolev space $H^1(\Omega)$: $$H^1(\Omega)=H_-\oplus E(0)\oplus H_+$$ with $H_-=\mathop{\oplus}\limits_{k=1}^{m_-} E(\hat{\lambda}_k)$, $H_+=\overline{\mathop{\oplus}\limits_{k\geqslant m_+}E(\hat{\lambda}_k)}$. So, every $u\in H^1(\Omega)$ admits a unique sum decomposition $$u=\overline{u}+u^0+\hat{u}\ \mbox{, with}\ \overline{u}\in H_-\ \mbox{, }\ u^0\in E(0),\ \hat{u}\in H_+$$ \begin{itemize} \item [] If $0\notin\hat{\sigma}(2)=\{\hat{\lambda}_k\}_{k\in\NN}$, then $E(0)=\{0\}$ and $m_-=m_+-1$. \item [] If $\xi\geqslant0$ and $\xi\not\equiv 0$ or $\beta\not\equiv0$, then $\hat{\lambda}_1>0$ and so $m_+=1$ and $m_-=0$. \item [] If $\xi\equiv0$, $\beta\equiv0$ (Neumann problem with zero potential), then $$m_+=2,\ m_-=0,\ E(0)=\RR.$$ \item [] If $u,v\in H^1(\Omega)$ and $v\leq u$, then by $[v,u]$ we denote the order interval defined by $$[v,u]=\{y\in H^1(\Omega):\ v(z)\leq y(z)\leq u(z)\ \mbox{for almost all}\ z\in\Omega\}.$$ \end{itemize} \section{Existence Theorems} In this section we prove two existence theorems. The two existence results differ in the geometry near the origin of the energy (Euler) functional. For the first existence theorem, we assume that $f(z,\cdot)$ is strictly sublinear near the origin. 
More precisely, our hypotheses on the data of problem (\ref{eq1}) are the following: \begin{itemize} \item [$H(\xi):$] $\xi\in L^s(\Omega)$ with $s>N$. \item [$H(\beta):$] $\beta\in W^{1,\infty}(\partial\Omega)$ with $\beta(z)\geqslant0$ for all $z\in\partial\Omega$. \begin{remark} When $\beta\equiv0$, we have the usual Neumann problem. \end{remark} \item [$H(f)_1:$] $f:\Omega\times\RR\rightarrow\RR$ is a Carath\'eodory function such that \begin{itemize} \item [$(i)$] for every $\rho>0$, there exists $a_\rho\in L^\infty(\Omega)_+$ such that $$| f(z,x)|\leq a_\rho(z)\ \mbox{for almost all}\ z\in\Omega\ \mbox{, all}\ | x|\leq\rho$$ $$\mbox{and}\ \lim_{x\rightarrow\pm\infty}\frac{f(z,x)}{| x|^{2^*-2}x}=0\ \mbox{uniformly for almost all}\ z\in\Omega;$$ \item [$(ii)$] if $F(z,x)=\int_0^xf(z,s)ds$ and $\tau(z,x)=f(z,x)x-2F(z,x)$, then\\ $\lim_{x\rightarrow\pm\infty}\frac{F(z,x)}{x^2}=+\infty$ uniformly for almost all $z\in\Omega$ and there exists $e\in L^1(\Omega)$ such that $$\tau(z,x)\leq\tau(z,y)+e(z)\ \mbox{for almost all}\ z\in\Omega\ \mbox{, all}\ 0\leq x\leq y\ \mbox{and all}\ y\leq x\leq 0;$$ \item [$(iii)$] $\lim_{x\rightarrow0}\frac{f(z,x)}{x}=0$ uniformly for almost all $z\in\Omega$ and there exists $\delta>0$ such that \begin{eqnarray*} &[a]& F(z,x)\leq0\ \mbox{for almost all}\ z\in\Omega \mbox{, all}\ | x|\leq\delta\mbox{,}\\ \mbox{or}&[b]& F(z,x)\geqslant0\ \mbox{for almost all}\ z\in\Omega \mbox{, all}\ |x|\leq\delta\mbox{.} \end{eqnarray*} \end{itemize} \end{itemize} \begin{remark} Hypothesis $H(f)_1(i)$ is more general than the usual subcritical polynomial growth condition, which says that $$| f(z,x)|\leq c_1(1+| x|^{r-1})\ \mbox{for almost all}\ z\in\Omega\ \mbox{, all}\ x\in\RR,$$ with $c_1>0$ and $1\leq r<2^*$. Here the growth of $f(z,\cdot)$ is almost critical and this means we face the difficulty that the embedding of $H^1(\Omega)$ into $L^{2^*}(\Omega)$ is not compact. We overcome this difficulty without use of the concentration-compactness principle. Instead we use Vitali's theorem. Hypothesis $H(f)_1(ii)$ implies that $$\lim_{x\rightarrow\pm\infty}\frac{f(z,x)}{x}=+\infty\ \mbox{uniformly for almost all}\ z\in\Omega.$$ \end{remark} Therefore $f(z,\cdot)$ is superlinear near $\pm\infty$. Usually such problems are studied using the so-called Ambrosetti-Rabinowitz condition (the AR-condition for short). We recall that the AR-condition says that there exist $q>2$ and $M>0$ such that \begin{align} 0 & < q\ F(z,x)\leq f(z,x)x\ \mbox{for almost all}\ z\in\Omega\ \mbox{, all}\ |x|\geqslant M\label{eq7a}\tag{7a}\\ 0 & \displaystyle < {\rm essinf}_\Omega F(\cdot,\pm M).\label{eq7b}\tag{7b} \end{align} \stepcounter{equation} Integrating (\ref{eq7a}) and using (\ref{eq7b}), we obtain the following weaker condition \begin{equation}\label{eq8} c_2 | x|^q\leq F(z,x)\ \mbox{for almost all}\ z\in\Omega\ \mbox{, all}\ | x|\geqslant M. \end{equation} From (\ref{eq7a}) and (\ref{eq8}), we see that $f(z,\cdot)$ has at least ($q-1$)-polynomial growth. This restriction removes from consideration superlinear functions with ``slower'' growth near $\pm\infty$. For example, consider a function $f(x)$ which satisfies: $$f(x)=x\left[\ln| x|+\frac{1}{2}\right]\ \mbox{for all}\ | x|\geqslant M.$$ In this case the primitive is $F(x)=\frac{1}{2}x^2\ln |x|$ for all $| x|\geq M$ and so (\ref{eq8}) fails. In particular, the AR-condition (see (\ref{eq7a}) and (\ref{eq7b})) does not hold. In contrast, $f(\cdot)$ satisfies our hypothesis $H(f)_1(ii)$.
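For completeness, we sketch the verification for this example (on the set $\{| x|\geqslant M\}$, where $f$ is specified). For every $q>2$ we have $$\frac{F(x)}{| x|^q}=\frac{\ln| x|}{2| x|^{q-2}}\rightarrow0\ \mbox{as}\ x\rightarrow\pm\infty,$$ so no constant $c_2>0$ can satisfy (\ref{eq8}). On the other hand, for $| x|\geqslant M$ we have $$\tau(x)=f(x)x-2F(x)=x^2\left[\ln| x|+\frac{1}{2}\right]-x^2\ln| x|=\frac{x^2}{2},$$ which is nondecreasing in $| x|$; hence the quasimonotonicity requirement in hypothesis $H(f)_1(ii)$ holds on $\{| x|\geqslant M\}$, with a suitable $e\in L^1(\Omega)$ absorbing the behaviour on $\{| x|<M\}$.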
This condition is a slightly more general form of a condition used by Li and Yang \cite{12}. It is satisfied if there exists $M>0$ such that \begin{eqnarray*} x\mapsto\frac{f(z,x)}{x}\ \mbox{is nondecreasing on}\ [M,+\infty),\\ x\mapsto\frac{f(z,x)}{x}\ \mbox{is nonincreasing on}\ (-\infty,-M]. \end{eqnarray*} Hypothesis $H(f)_1(iii)$ implies that $f(z,\cdot)$ is sublinear near zero. \textbf{Examples}: The following functions satisfy hypotheses $H(f)_1$. For the sake of simplicity, we drop the $z$-dependence: \begin{equation*} f_1(x)=x\left[\ln(1+|x|)+\frac{|x|}{2(1+|x|)}\right]\ \mbox{for all}\ x\in\RR \end{equation*} and \begin{equation*} f_2(x)=\left\{ \begin{array}{ll} \frac{| x|^{2^*-2}x}{\ln(1+|x|)} \left[1-\frac{1}{2^*}\frac{|x|}{(1+| x|)\ln(1+|x|)}\right]-c &\ \mbox{if}\ x<-1\\ |x|^{r-2}x &\ \mbox{if}\ -1\leq x\leq 1\\ \frac{| x|^{2^*-2}x}{\ln(1+| x|)} \left[1-\frac{1}{2^*}\frac{| x|}{(1+|x|)\ln(1+| x|)}\right]+c &\ \mbox{if}\ 1<x \end{array} \right. \end{equation*} with $r>2$ and $c=1-\frac{1}{\ln{2}}\left[1-\frac{1}{2^*}\frac{1}{2\ln{2}}\right]$. Note that we have \begin{equation*} F_1(x) = \frac{1}{2}x^2\ln{(1+|x|)}\ \mbox{for all}\ x\in\RR \end{equation*} and \begin{equation*} F_2(x) = \frac{1}{2^*}\frac{|x|^{2^*}}{\ln{(1+| x|)}} + c|x|\ \mbox{for all}\ |x| \geqslant1. \end{equation*} Observe that $f_1(\cdot)$, although superlinear, fails to satisfy the AR-condition, while $f_2$ has almost critical growth. Let $\varphi:H^1(\Omega)\rightarrow\RR$ be the energy (Euler) functional for problem (\ref{eq1}) defined by $$\varphi(u)=\frac{1}{2}\gamma(u)-\int_{\Omega}F(z,u)dz\ \mbox{for all}\ u\in H^1(\Omega).$$ Evidently, $\varphi\in C^1(H^1(\Omega))$. First we show that the functional $\varphi$ satisfies the C-condition. \begin{prop}\label{prop5} If hypotheses $H(\xi),H(\beta),H(f)_1(i),(ii)$ hold, then the functional $\varphi$ satisfies the $C$-condition. \end{prop} \begin{proof} We consider a sequence $\{u_n\}_{n\geq1}\subseteq H^1(\Omega)$ such that \begin{eqnarray} |\varphi(u_n)|\leq M_1\ \mbox{for some}\ M_1>0,\ \mbox{all}\ n\in\NN, \label{eq9}\\ (1+||u_n||)\varphi'(u_n)\rightarrow0\ \mbox{in}\ H^1(\Omega)^*\ \mbox{as}\ n\rightarrow\infty \label{eq10}. \end{eqnarray} From (\ref{eq10}) we have \begin{equation}\label{eq11} \left| \left\langle A(u_n),h \right\rangle+\int_\Omega\xi(z)u_nhdz+\int_{\partial\Omega}\beta(z)u_nhd\sigma-\int_\Omega f(z,u_n)hdz \right|\leq\frac{\varepsilon_n||h||}{1+||u_n||} \end{equation} for all $h\in H^1(\Omega)$, with $\varepsilon_n\rightarrow 0^+$. In (\ref{eq11}) we choose $h=u_n\in H^1(\Omega)$ and obtain \begin{equation}\label{eq12} -\gamma(u_n)+\int_\Omega f(z,u_n)u_n dz\leq\varepsilon_n\ \mbox{for all}\ n\in\NN. \end{equation} On the other hand, by (\ref{eq9}), we have \begin{equation}\label{eq13} \gamma(u_n)-\int_\Omega 2F(z,u_n)dz\leq2M_1\ \mbox{for all}\ n\in\NN. \end{equation} We add (\ref{eq12}) and (\ref{eq13}) and obtain \begin{equation}\label{eq14} \int_\Omega\tau(z,u_n)dz\leq M_2\ \mbox{for some}\ M_2>0,\ \mbox{all}\ n\in\NN. \end{equation} \begin{claim} $\{u_n\}_{n\geq1}\subseteq H^1(\Omega)$ is bounded. \end{claim} We argue by contradiction. So, we assume that the Claim is not true. Then by passing to a subsequence if necessary, we have \begin{equation}\label{eq15} ||u_n||\rightarrow+\infty . \end{equation} Let $y_n=\frac{u_n}{||u_n||}$ for all $n\in\NN$.
Then $||y_n||=1$ and so we may assume that \begin{equation}\label{eq16} y_n\xrightarrow{w}y\ \mbox{in}\ H^1(\Omega)\ \mbox{and}\ y_n\rightarrow y\ \mbox{in}\ L^{\frac{2s}{s-1}}(\Omega)\ \mbox{and in}\ L^2(\partial\Omega) \end{equation} (note that since $s>N$ (see hypothesis $H(\xi)$), we have $\frac{2s}{s-1}<2^*$). First, we assume that $y\neq0$. Let $\Omega^*=\{z\in\Omega:y(z)\neq0\}$. Then $|\Omega^*|_N>0$ and we have $$|u_n(z)|\rightarrow+\infty\ \mbox{for almost all}\ z\in\Omega^*\ \mbox{(see (\ref{eq15}))}.$$ Using hypothesis $H(f)_1(ii)$ we have $$\frac{F(z,u_n(z))}{||u_n||^2}=\frac{F(z,u_n(z))}{u_n(z)^2}y_n(z)^2\rightarrow+\infty \ \mbox{for almost all}\ z\in\Omega^*.$$ Using Fatou's lemma, we see that \begin{equation}\label{eq17} \int_{\Omega^*}\frac{F(z,u_n)}{||u_n||^2}dz\rightarrow+\infty\ \mbox{as}\ n\rightarrow\infty. \end{equation} Hypothesis $H(f)_1(ii)$ implies that we can find $M_3>0$ such that \begin{equation}\label{eq18} F(z,x)\geq0\ \mbox{for almost all}\ z\in\Omega,\ \mbox{all}\ |x|\geq M_3. \end{equation} From (\ref{eq15}) we see that we may assume that \begin{equation}\label{eq19} ||u_n||\geq1\ \mbox{for all}\ n\in\NN. \end{equation} Then we have \begin{eqnarray*} \int_\Omega\frac{F(z,u_n)}{||u_n||^2}dz & = & \int_{\Omega^*}\frac{F(z,u_n)}{||u_n||^2}dz+\int_{\Omega\backslash\Omega^*}\frac{F(z,u_n)}{||u_n||^2}dz \\ & = & \int_{\Omega^*}\frac{F(z,u_n)}{||u_n||^2}dz+\int_{(\Omega\backslash\Omega^*)\cap\{|u_n|\geq M_3\}} \frac{F(z,u_n)}{||u_n||^2}dz \\ & & + \int_{(\Omega\backslash\Omega^*)\cap\{|u_n|<M_3\}} \frac{F(z,u_n)}{||u_n||^2}dz\\ & \geq & \int_{\Omega^*}\frac{F(z,u_n)}{||u_n||^2}dz-c_3\ \mbox{for some}\ c_3>0,\ \mbox{all}\ n\in\NN\\ & & \mbox{(see (\ref{eq18}), (\ref{eq19}) and hypothesis $H(f)_1(i)$)} \end{eqnarray*} \begin{equation}\label{eq20} \Rightarrow\int_\Omega\frac{F(z,u_n)}{||u_n||^2}dz\rightarrow+\infty\ \mbox{as}\ n\rightarrow\infty\ \mbox{(see (\ref{eq17}))}. \end{equation} By (\ref{eq9}) we have \begin{equation}\label{eq21} \int_\Omega\frac{F(z,u_n)}{||u_n||^2}dz\leq\frac{M_1}{||u_n||^2}+\frac{1}{2}\gamma(y_n)\leq M_4\ \mbox{for some}\ M_4>0, \mbox{all}\ n\in\NN \end{equation} (see (\ref{eq19})). Comparing (\ref{eq20}) and (\ref{eq21}), we get a contradiction. Now suppose that $y=0$. Let $\eta>0$ and set $v_n=(2\eta)^{1/2}y_n\in H^1(\Omega)$ for all $n\in\NN$. Then from (\ref{eq16}) and since $y=0$, we have \begin{equation}\label{eq22} v_n\xrightarrow{w}0\ \mbox{in}\ H^1(\Omega)\ \mbox{and}\ v_n\rightarrow0\ \mbox{in}\ L^{\frac{2s}{s-1}}(\Omega)\ \mbox{and in}\ L^2(\partial\Omega). \end{equation} Let $c_4=\sup\limits_{n\geq1}||v_n||^{2^*}_{2^*}$ (see (\ref{eq22})). From hypothesis $H(f)_1(i)$ we see that given $\varepsilon>0$, we can find $c_5=c_5(\varepsilon)>0$ such that \begin{equation}\label{eq23} |F(z,x)|\leq\frac{\varepsilon}{2c_4}|x|^{2^*}+c_5\ \mbox{for almost all}\ z\in\Omega,\ \mbox{all}\ x\in\RR. \end{equation} Let $E\subseteq\Omega$ be a measurable set such that $|E|_N\leq\frac{\varepsilon}{2c_5}$.
Then we have \begin{eqnarray*} \left| \int_E F(z,v_n)dz \right| & \leq & \int_E |F(z,v_n)|dz \\ & \leq & \frac{\varepsilon}{2c_4}||v_n||^{2^*}_{2^*}+c_5|E|_N\ \mbox{(see (\ref{eq23}))} \\ & \leq & \varepsilon\ \mbox{for all}\ n\in\NN, \end{eqnarray*} $$\Rightarrow\{F(\cdot,v_n(\cdot))\}_{n\geq1}\subseteq L^1(\Omega)\ \mbox{is uniformly integrable}.$$ Also note that by (\ref{eq22}) and by passing to a subsequence if necessary, we have $$F(z,v_n(z))\rightarrow0\ \mbox{for almost all}\ z\in\Omega\ \mbox{as}\ n\rightarrow+\infty\,.$$ So, invoking Vitali's theorem (the extended dominated convergence theorem) we have \begin{equation}\label{eq24} \int_\Omega F(z,v_n)dz\rightarrow0\ \mbox{as}\ n\rightarrow\infty\,. \end{equation} From (\ref{eq15}), we see that we can find $n_0\in\NN$ such that \begin{equation}\label{eq25} 0<(2\eta)^{1/2}\frac{1}{||u_n||}\leq1\ \mbox{for all}\ n\geq n_0. \end{equation} We choose $t_n\in[0,1]$ such that \begin{equation}\label{eq26} \varphi(t_n u_n)=\max\left[ \varphi(tu_n):0\leq t\leq1 \right]. \end{equation} From (\ref{eq25}) and (\ref{eq26}) we have \begin{eqnarray} \varphi(t_n u_n) & \geq & \varphi(v_n)\nonumber \\ & = & \eta\left[ \gamma(y_n)+\mu||y_n||^2_2 \right]-\int_\Omega F(z,v_n)dz-\mu\eta||y_n||^2_2\nonumber \\ & \geq & \eta\left[ c_0-\mu||y_n||^2_2 \right]-\int_\Omega F(z,v_n)dz\ \mbox{for all}\ n\geq n_0\label{eq27} \\ & & \mbox{(see (\ref{eq3}))}.\nonumber \end{eqnarray} Recall that $y=0$. So, from (\ref{eq16}), (\ref{eq24}) and (\ref{eq27}), we see that we can find $n_1\in\NN,\ n_1\geq n_0$ such that $$\varphi(t_n u_n)\geq\frac{1}{2}\eta c_0\ \mbox{for all}\ n\geq n_1.$$ But $\eta>0$ was arbitrary. Therefore it follows that \begin{equation}\label{eq28} \varphi(t_n u_n)\rightarrow+\infty\ \mbox{as}\ n\rightarrow\infty\,. \end{equation} We have \begin{equation}\label{eq29} \varphi(0)=0\ \mbox{and}\ \varphi(u_n)\leq M_1\ \mbox{for all}\ n\in\NN\ \mbox{(see (\ref{eq9}))}. \end{equation} From (\ref{eq28}) and (\ref{eq29}) we see that we can find $n_2\in\NN$ such that \begin{equation}\label{eq30} t_n\in(0,1)\ \mbox{for all}\ n\geq n_2. \end{equation} From (\ref{eq26}) and (\ref{eq30}), we have \begin{eqnarray} & & \frac{d}{dt}\varphi(tu_n)|_{t=t_n}=0\ \mbox{for all}\ n\geq n_2, \nonumber \\ & \Rightarrow & \langle \varphi' (t_n u_n),t_n u_n\rangle =0\ \mbox{for all}\ n\geq n_2\ \mbox{(by the chain rule),} \nonumber \\ & \Rightarrow & \gamma(t_n u_n)=\int_\Omega f(z,t_n u_n)(t_n u_n)dz\ \mbox{for all}\ n\geq n_2. \label{eq31} \end{eqnarray} From hypothesis $H(f)_1(ii)$ and (\ref{eq30}), we have \begin{eqnarray} & & \tau(z,t_n u_n)\leq\tau(z,u_n)+e(z)\ \mbox{for almost all}\ z\in\Omega,\ \mbox{all}\ n\geq n_2, \nonumber \\ &\Rightarrow & \int_\Omega\tau(z,t_n u_n)dz\leq\int_\Omega\tau(z,u_n)dz + ||e||_1\ \mbox{for all}\ n\geq n_2 \nonumber \\ &\Rightarrow & \int_\Omega f(z,t_n u_n)(t_n u_n)dz\leq c_6 + \int_\Omega 2F(z,t_n u_n)dz \label{eq32} \\ & & \mbox{for some $c_6>0$, all $n\geq n_2$ (see (\ref{eq14}))}.\nonumber \end{eqnarray} We return to (\ref{eq31}) and use (\ref{eq32}). Then \begin{equation}\label{eq33} 2\varphi(t_n u_n)\leq c_6\ \mbox{for all}\ n\geq n_2. \end{equation} Comparing (\ref{eq28}) and (\ref{eq33}) we get a contradiction. This proves the Claim. On account of the Claim, we may assume that \begin{equation}\label{eq34} u_n\xrightarrow{w}u\ \mbox{in}\ H^1(\Omega)\ \mbox{and}\ u_n\rightarrow u\ \mbox{in}\ L^{\frac{2s}{s-1}}(\Omega)\ \mbox{and in}\ L^2(\partial\Omega).
\end{equation} In (\ref{eq11}) we choose $h=u_n-u\in H^1(\Omega)$, pass to the limit as $n\rightarrow\infty$ and use (\ref{eq34}). Then \begin{eqnarray*} & & \lim\limits_{n\rightarrow\infty}\langle A(u_n),u_n-u\rangle=0, \\ & \Rightarrow & ||Du_n||_2\rightarrow||Du||_2 \\ & \Rightarrow & u_n\rightarrow u\ \mbox{in}\ H^1(\Omega) \\ & & \mbox{(by the Kadec-Klee property, see (\ref{eq34}) and Gasinski and Papageorgiou \cite[p. 901]{6})}. \end{eqnarray*} This proves that $\varphi$ satisfies the $C$-condition. \end{proof} We assume that $K_\varphi$ is finite (otherwise we already have an infinity of nontrivial solutions for problem (\ref{eq1})). Then the finiteness of $K_\varphi$ and Proposition \ref{prop5} permit the computation of the critical groups of $\varphi$ at infinity. \begin{prop}\label{prop6} If hypotheses $H(\xi),H(\beta),H(f)_1(i),(ii)$ hold, then $C_k(\varphi,\infty)=0$ for all $k\in\NN_0$. \end{prop} \begin{proof} Hypotheses $H(f)_1 (i),(ii)$ imply that given any $\eta>0$, we can find $c_7=c_7(\eta)>0$ such that \begin{equation}\label{eq35} F(z,x) \geq\frac{\eta}{2}x^2-c_7\ \mbox{for almost all}\ z\in\Omega,\ \mbox{all}\ x\in\RR. \end{equation} Let $\partial B_1=\{u\in H^1(\Omega):||u||=1\}$. Then for $u\in \partial B_1$ and $t>0$, we have \begin{eqnarray} \varphi(tu) & = & \frac{t^2}{2}\gamma(u) - \int_\Omega F(z,tu)dz \nonumber \\ & \leq & \frac{t^2}{2}\left[\gamma(u)-\eta||u||_2^2\right]+c_7|\Omega|_N\ \mbox{(see (\ref{eq35}))} \nonumber\\ & \leq & \frac{t^2}{2}\left[c_8-\eta||u||_2^2\right]+c_7|\Omega|_N\ \mbox{for some}\ c_8>0 \label{eq36} \end{eqnarray} (see hypotheses $H(\xi),H(\beta)$). Recall that $\eta>0$ is arbitrary. So, we can choose $\eta>\frac{c_8}{||u||_2^2}$. Then it follows from (\ref{eq36}) that \begin{equation}\label{eq37} \varphi(tu) \rightarrow -\infty\ \mbox{as}\ t\rightarrow+\infty\ (u\in\partial B_1). \end{equation} For $u\in\partial B_1$ and $t>0$, we have \begin{eqnarray} \frac{d}{dt}\varphi(tu)&=& \langle\varphi'(tu),u\rangle\ \mbox{(by the chain rule)} \nonumber\\ & = & \frac{1}{t}\langle\varphi'(tu),tu\rangle \nonumber\\ & = & \frac{1}{t}\left[\gamma(tu)-\int_\Omega f(z,tu)(tu)dz \right] \nonumber\\ & \leq & \frac{1}{t}\left[\gamma(tu)-\int_\Omega2F(z,tu)dz+||e||_1 \right]\ \mbox{(see hypothesis}\ H(f)_1(ii)) \nonumber\\ & = & \frac{1}{t}\left[2\varphi(tu)+||e||_1 \right].\label{eq38} \end{eqnarray} From (\ref{eq37}) and (\ref{eq38}) we infer that $$\frac{d}{dt}\varphi(tu)< 0\ \mbox{for}\ t>0\ \mbox{big}.$$ Invoking the implicit function theorem, we can find $\vartheta\in C(\partial B_1)$ such that \begin{equation}\label{eq39} \vartheta>0\ \mbox{and}\ \varphi(\vartheta(u)u)=\rho_0<-\frac{||e||_1}{2}. \end{equation} We extend $\vartheta$ on $H^1(\Omega)\backslash\{0\}$ by defining $$\hat\vartheta(u)=\frac{1}{||u||}\vartheta\left(\frac{u}{||u||}\right)\ \mbox{for all}\ u\in H^1(\Omega)\backslash\{0\}. $$ We have that $\hat\vartheta\in C(H^1(\Omega)\backslash\{0\})$ and $\varphi(\hat\vartheta(u)u)=\rho_0$. Also \begin{equation}\label{eq40} \varphi(u)=\rho_0\Rightarrow\hat\vartheta(u)=1. \end{equation} Therefore, if we define \begin{equation}\label{eq41} \vartheta_0(u)=\left\{\begin{array}{ll} 1 & \mbox{if}\ \varphi(u)\leq \rho_0 \\ \hat\vartheta(u) & \mbox{if}\ \rho_0<\varphi(u) \end{array} \right. \end{equation} then $\vartheta_0\in C(H^1(\Omega)\backslash\{0\})\ \mbox{(see (\ref{eq40}))}$.
Consider the deformation $h:[0,1]\times(H^1(\Omega)\backslash\{0\})\rightarrow H^1(\Omega)\backslash\{0\}$ defined by $$h(t,u) = (1-t)u+t\vartheta_0(u)u\ \mbox{for all}\ t\in[0,1]\ \mbox{all}\ u\in H^1(\Omega)\backslash\{0\}.$$ We have \begin{eqnarray*} &&h(0,\cdot)=id|_{H^1(\Omega)\backslash\{0\}} \\ &&h(1,u)\in\varphi^{\rho_0}\ \mbox{(see (\ref{eq41})\ and (\ref{eq40}))}\\ &&h(t,\cdot)|_{\varphi^{\rho_0}} = id|_{\varphi^{\rho_0}}\ \mbox{(see (\ref{eq41}))}. \end{eqnarray*} These properties imply that \begin{equation}\label{eq42} \varphi^{\rho_0}\ \mbox{is a strong deformation retract of}\ H^1(\Omega)\backslash\{0\}. \end{equation} Consider the map $r_0:H^1(\Omega)\backslash\{0\}\rightarrow \partial B_1$ defined by $$r_0(u)=\frac{u}{||u||}\ \mbox{for all}\ u\in H^1(\Omega)\backslash\{0\}.$$ We see that \begin{eqnarray} &&r_0(\cdot)\ \mbox{is continuous and}\ r_0|_{\partial B_1}=id|_{\partial B_1}, \nonumber\\ &&\Rightarrow\partial B_1\ \mbox{is a retract of}\ H^1(\Omega)\backslash\{0\}. \label{eq43} \end{eqnarray} Also, if we consider the deformation $h_0:[0,1]\times(H^1(\Omega)\backslash\{0\})\rightarrow H^1(\Omega)\backslash\{0\}$ defined by $$h_0(t,u)=(1-t)u+tr_0(u)\ \mbox{for all}\ t\in[0,1],\ \mbox{all}\ u\in H^1(\Omega)\backslash\{0\}$$ then we see that \begin{equation}\label{eq44} H^1(\Omega)\backslash\{0\} \ \mbox{is deformable into}\ \partial B_1. \end{equation} From (\ref{eq43}), (\ref{eq44}) and Theorem 6.5 of Dugundji \cite[p. 325]{4} it follows that \begin{equation}\label{eq45} \partial B_1\ \mbox{is a deformation retract of}\ H^1(\Omega)\backslash\{0\}. \end{equation} From (\ref{eq42}) and (\ref{eq45}) we infer that \begin{eqnarray} \varphi^{\rho_0}\ \mbox{and}\ \partial B_1\ \mbox{are homotopy equivalent,} \nonumber\\ \Rightarrow H_k(H^1(\Omega),\varphi^{\rho_0}) = H_k(H^1(\Omega),\partial B_1)\ \mbox{for all}\ k\in\NN_0\label{eq46} \end{eqnarray} (see Motreanu, Motreanu and Papageorgiou \cite[p. 143]{15}). Since $\partial B_1$ is the unit sphere of the infinite dimensional Hilbert space $H^1(\Omega)$, it is contractible (see Gasinski and Papageorgiou \cite[Problems 4.154 and 4.159]{7}). Therefore \begin{equation}\label{eq47} H_k(H^1(\Omega),\partial B_1) = 0\ \mbox{for all}\ k\in\NN_0 \end{equation} (see Motreanu, Motreanu and Papageorgiou \cite[p. 147]{15}). From (\ref{eq46}) and (\ref{eq47}) it follows that \begin{equation}\label{eq48} H_k(H^1(\Omega),\varphi^{\rho_0}) = 0\ \mbox{for all}\ k\in\NN_0. \end{equation} Taking $\rho_0$ more negative if necessary (see (\ref{eq39})), we have \begin{eqnarray*} &&H_k(H^1(\Omega),\varphi^{\rho_0}) = C_k(\varphi,\infty) \ \mbox{for all}\ k\in\NN_0, \\ &\Rightarrow & C_k(\varphi,\infty ) = 0\ \mbox{for all}\ k\in\NN_0. \end{eqnarray*} \end{proof} We also compute the critical groups of $\varphi$ at the origin. Recall that we have the orthogonal direct sum decomposition $$H^1(\Omega)=H_-\oplus E(0)\oplus H_+$$ with $H_-=\mathop{\oplus}\limits_{k=1}^{m_-}E(\hat{\lambda}_k),H_+=\overline{\mathop{\oplus}\limits_{k\geq m_+}E(\hat{\lambda}_k)}$ (see Section \ref{sec2}). We set $$d_-={\rm dim}\,H_-\ \mbox{and}\ d^0_-={\rm dim}\, (H_-\oplus E(0)).$$ Note that: \begin{itemize} \item [$\bullet$] $d_-=0$ if $\hat{\lambda}_k\geq0$ for all $k\in\NN$ (that is, $H_-=\{0\}$). \item [$\bullet$] $d^0_-=0$ if $\hat{\lambda}_k>0$ for all $k\in\NN$ (that is, $H_-\oplus E(0)=\{0\}$).
\end{itemize} \begin{prop}\label{prop7} If hypotheses $H(\xi)$, $H(\beta)$, $H(f)_1$ hold, then $C_{d_{-}}(\varphi, 0)\neq0$ or $C_{d_{-}^0}(\varphi, 0)\neq0$. \end{prop} \begin{proof} First we assume that hypothesis $H(f)_1(iii)$ [a] holds. Hypotheses $H(f)_1$ imply that given $\varepsilon>0$, we can find $c_9=c_9(\varepsilon)>0$ such that \begin{equation}\label{eq49} |F(z,x)|\leq \frac{\varepsilon}{2}x^2+c_9|x|^{2^*}\ \mbox{for almost all}\ z\in\Omega,\ \mbox{all}\ x\in\RR \end{equation} (if $N=1,2$, then we replace $2^*$ by $r>2$). For $u\in H_-$ we have \begin{eqnarray} \varphi(u) & = & \frac{1}{2}\gamma(u) - \int_\Omega F(z,u)dz\nonumber \\ & \leq & \frac{\hat{\lambda}_{m_-}+\varepsilon}{2}||u||_2^2 + c_9||u||_{2^*}^{2^*}\ \mbox{(see (\ref{eq49}) and recall that}\ u\in H_-).\label{eq50} \end{eqnarray} Recall that $\hat{\lambda}_{m_-}<0$. So, if we choose $\varepsilon\in(0,-\hat{\lambda}_{m_-})$ and exploit the fact that since $H_-$ is finite dimensional all norms are equivalent, then from (\ref{eq50}) we have \begin{equation}\label{eq51} \varphi(u) \leq -c_{10}|| u||^2 + c_{11}|| u||^{2^*}\ \mbox{for some}\ c_{10}, c_{11}>0,\ \mbox{all}\ u\in H_-. \end{equation} Since $2<2^*$, we see from (\ref{eq51}) that we can find $\rho_1\in(0,1)$ small such that \begin{equation}\label{eq52} u\in H_-,\ || u|| \leq \rho_1 \Rightarrow \varphi(u)\leq0. \end{equation} Recall that $E(0)$ is finite dimensional. So, all norms are equivalent and we can find $\rho_0>0$ such that \begin{equation}\label{eq53} u\in E(0),\ || u|| \leq \rho_0 \Rightarrow |u(z)| \leq \frac{\delta}{2}\ \mbox{for almost all}\ z\in\Omega\,. \end{equation} Here, $\delta>0$ is as postulated by hypothesis $H(f)_1(iii)$. Let $u\in E(0)\oplus H_+$. Then $u$ admits a unique sum decomposition $$u = u^0+\hat{u}\ \mbox{with}\ u^0\in E(0),\ \hat{u}\in H_+.$$ Note that \begin{equation}\label{eq54} ||u|| \leq \rho_0 \Rightarrow ||u^0|| \leq \rho_0, \end{equation} since $u^0$ is the orthogonal projection of $u$ on $E(0)$ and the orthogonal projection operator has operator norm equal to 1. We define $\Omega_{\delta}=\left\{z\in\Omega : |\hat{u}(z)| \leq \frac{\delta}{2} \right\}.$ Then for $u\in E(0)\oplus H_+$ with $|| u|| \leq \rho_0$, we have \begin{eqnarray} & & |u(z)| \leq|u^0(z)| + |\hat{u}(z)| \leq \frac{\delta}{2}+\frac{\delta}{2}=\delta\ \mbox{for almost all}\ z\in\Omega_\delta \nonumber \\ & & \mbox{(see (\ref{eq53}), (\ref{eq54})),} \nonumber \\ & \Rightarrow & \int_{\Omega_\delta}F(z, u(z))dz \leq 0\ \mbox{(see hypothesis $H(f)_1(iii)[a]$)}. \label{eq55} \end{eqnarray} Also, for $u\in E(0)\oplus H_+$ with $|| u||\leq\rho_0$, we have \begin{eqnarray} | u(z)| \leq | u^0(z)| + |\hat{u}(z)| \leq 2|\hat{u}(z)|\ \mbox{for almost all}\ z\in\Omega \backslash\Omega_\delta \label{eq56} \\ \mbox{(see (\ref{eq53}), (\ref{eq54}))}.
\nonumber \end{eqnarray} So, for $u\in E(0)\oplus H_+$ with $|| u|| \leq \rho_0$, exploiting the orthogonality of the component spaces, we have \begin{eqnarray} \varphi(u) & = & \frac{1}{2}\gamma(u) - \int_{\Omega}F(z,u)dz \nonumber \\ & \geqslant & \frac{1}{2 }\gamma(\hat{u}) - \int_{\Omega\backslash\Omega_\delta} F(z,u)dz\ \mbox{(see (\ref{eq55}) and recall that $u^0\in E(0)$)} \nonumber \\ & \geqslant & \frac{1}{2 }\gamma(\hat{u}) - \varepsilon|| \hat{u}||_{2}^{2} - c_{12}||\hat{u}||^{2^*}\ \mbox{for some $c_{12}>0$ (see (\ref{eq49}), (\ref{eq56}))} \nonumber \\ & \geqslant & c_{13}||\hat{u}||^2 - c_{12}||\hat{u}||^{2^*}\ \mbox{for some $c_{13}>0$, choosing $\varepsilon >0$ small} \label{eq57} \\ & & \mbox{(recall that $\hat{u}\in H_+$)}. \nonumber \end{eqnarray} Since $2<2^*$, choosing $\rho_2\in(0,\rho_0]$ small, from (\ref{eq57}) we have \begin{equation}\label{eq58} \varphi(u)>0 = \varphi(0)\ \mbox{for all $u\in E(0)\oplus H_+$, $0<|| u|| \leq \rho_2$.} \end{equation} Then (\ref{eq52}) and (\ref{eq58}) imply that \begin{equation*} \left\{ \begin{array}{ll} \varphi \ \mbox{has a local linking at}\ u=0 \\ \mbox{(for the decomposition $H_-\oplus[E(0)\oplus H_+]$)} \\ u=0\ \mbox{is a strict local minimizer of}\ \varphi\vert_{E(0)\oplus H_+} \end{array} \right\} \end{equation*} Then by Motreanu, Motreanu and Papageorgiou \cite[pp. 169, 171]{15}, we have $$C_{d_-}(\varphi,0)\neq0.$$ Now assume that hypothesis $H(f)_1(iii)$ [b] holds. We consider the following orthogonal direct sum decomposition $$H^1(\Omega)=Z\oplus H_+\ \mbox{with}\ Z=H_-\oplus E(0).$$ Then for $u\in H_+$ we have \begin{eqnarray*} \varphi(u) & = & \frac{1}{2}\gamma(u) - \int_\Omega F(z,u)dz \\ & \geqslant & \frac{1}{2}\left[\gamma(u) - \varepsilon ||u||^2_2\right]-c_{14}||u||^{2^*}\ \mbox{for some $c_{14}>0$} \\ & & \mbox{(see (\ref{eq49}))}. \end{eqnarray*} Choosing $\varepsilon>0$ small, we have \begin{eqnarray*} \varphi(u)\geqslant c_{15} || u||^2 - c_{14}|| u||^{2^*}\ \mbox{for some $c_{15}>0$} \\ \mbox{(recall that $u\in H_+$).} \end{eqnarray*} Since $2<2^*$, we can find $\rho_1\in(0,1)$ small such that \begin{equation}\label{eq59} u\in H_+,\ 0<|| u|| \leq \rho_1 \Rightarrow \varphi(0) = 0<\varphi(u). \end{equation} Now suppose that $u\in Z=H_-\oplus E(0)$. The space $Z$ is finite dimensional and so all norms are equivalent. Therefore we can find $\rho_2>0$ such that \begin{equation}\label{eq60} u\in Z,\ || u|| \leq \rho_2 \Rightarrow | u(z)| \leq \delta\ \mbox{for almost all}\ z\in\Omega\,. \end{equation} Here, $\delta>0$ is as postulated in hypothesis $H(f)_1(iii)$. Every $u\in Z$ can be written in a unique way as $$u=\overline{u}+u^0\ \mbox{with}\ \overline{u}\in H_-,\ u^0\in E(0).$$ Exploiting the orthogonality of the component spaces, for $u\in Z$ with $|| u|| \leq \rho_2$, we have \begin{eqnarray*} \varphi(u) & = & \frac{1}{2}\gamma(u) - \int_\Omega F(z,u)dz \\ & = & \frac{1}{2}\gamma(\overline{u}) - \int_\Omega F(z,u)dz\ \mbox{(since $u^0\in E(0)$)} \\ & \leq & \frac{\hat{\lambda}_{m_-}}{2}||\overline{u}||_{2}^{2}\ \mbox{(see (\ref{eq60}) and use hypothesis $H(f)_1(iii)$ [b]).} \end{eqnarray*} Since $\hat{\lambda}_{m_-}<0$, it follows that \begin{equation}\label{eq61} \varphi(u)\leq0\ \mbox{for all}\ u\in Z=H_-\oplus E(0)\ \mbox{with}\ || u|| \leq \rho_2.
\end{equation} Then (\ref{eq59}) and (\ref{eq61}) imply that \begin{eqnarray*} \left \{ \begin{array}{l} \varphi\ \mbox{has a local linking at}\ u=0 \\ \mbox{(now for the decomposition $Z\oplus H_+$),} \\ 0\ \mbox{is a strict local minimizer of $\varphi|_{H_+}$} \end{array} \right \} \end{eqnarray*} As before, by Motreanu, Motreanu and Papageorgiou \cite[pp. 169, 171]{15}, we have \begin{equation*} C_{d_-^0}(\varphi,0)\neq0\ \mbox{(recall $d_-^0={\rm dim}\, Z$).} \end{equation*} \end{proof} Now we are ready for our first existence theorem. \begin{theorem}\label{th8} If hypotheses $H(\xi),H(\beta),H(f)_1$ hold, then problem \eqref{eq1} admits a nontrivial solution $\tilde{u}\in C^1(\overline{\Omega})$. \end{theorem} \begin{proof} From Proposition \ref{prop6}, we have that \begin{equation}\label{eq62} C_k(\varphi,\infty)=0\ \mbox{for all}\ k\in\NN_0. \end{equation} Also, Proposition \ref{prop7} says that \begin{equation}\label{eq63} C_{d_-}(\varphi,0)\neq0\ \mbox{or}\ C_{d_-^0}(\varphi,0)\neq0. \end{equation} Then (\ref{eq62}), (\ref{eq63}) and Corollary 6.92 of Motreanu, Motreanu and Papageorgiou \cite[p. 173]{15} imply that we can find $\tilde{u}\in K_\varphi\backslash\{0\}$. From Papageorgiou and R\u adulescu \cite{19}, we have \begin{equation}\label{eq64} \left\{\begin{array}{l} -\Delta\tilde{u}(z)+\xi(z)\tilde{u}(z)=f(z,\tilde{u}(z))\ \mbox{for almost all}\ z\in\Omega \\ \frac{\partial\tilde{u}}{\partial n}+\beta(z)\tilde{u}=0\ \mbox{on}\ \partial\Omega . \end{array}\right\} \end{equation} We define the following functions \begin{equation}\label{eq65} a(z)=\left\{\begin{array}{ccc} 0 & \mbox{if} & |\tilde{u}(z)|\leq1 \\ \frac{f(z,\tilde{u}(z))}{\tilde{u}(z)} & \mbox{if} & 1<|\tilde{u}(z)| \end{array}\right. \mbox{and}\ b(z)=\left\{\begin{array}{ccc} f(z,\tilde{u}(z)) & \mbox{if} & |\tilde{u}(z)|\leq1 \\ 0 & \mbox{if} & 1<|\tilde{u}(z)| \end{array} \right. \end{equation} Hypotheses $H(f)_1$ imply that given $\varepsilon>0$, we can find $c_{16}=c_{16}(\varepsilon)>0$ such that \begin{equation}\label{eq66} |f(z,x)|\leq\varepsilon|x|^{2^*-1}+c_{16}|x|\ \mbox{for almost all}\ z\in\Omega,\ \mbox{all}\ x\in\RR. \end{equation} Then from (\ref{eq65}), (\ref{eq66}) and the Sobolev embedding theorem, we have $$a\in L^{\frac{N}{2}}(\Omega).$$ Also, it is clear from (\ref{eq65}) and hypothesis $H(f)_1(i)$ that $$b\in L^\infty(\Omega).$$ We rewrite (\ref{eq64}) as follows \begin{equation*} \left\{\begin{array}{l} -\Delta\tilde{u}(z)=(a(z)-\xi(z))\tilde{u}(z)+b(z)\ \mbox{for almost all}\ z\in\Omega, \\ \frac{\partial\tilde{u}}{\partial n}+\beta(z)\tilde{u}=0\ \mbox{on}\ \partial\Omega. \end{array}\right\} \end{equation*} Invoking Lemma 5.1 of Wang \cite{28}, we have $$\tilde{u}\in L^\infty(\Omega).$$ Then hypotheses $H(\xi),H(f)_1(i)$ imply that $$f(\cdot,\tilde{u}(\cdot))-\xi(\cdot)\tilde{u}(\cdot)\in L^s(\Omega),\quad s>N.$$ Invoking Lemma 5.2 of Wang \cite{28} (the Calderon-Zygmund estimates), we have \begin{eqnarray*} & & \tilde{u}\in W^{2,s}(\Omega), \\ & \Rightarrow & \tilde{u}\in C^{1,\alpha}(\overline{\Omega})\ \mbox{with}\ \alpha=1-\frac{N}{s}>0 \\ & & \mbox{(by the Sobolev embedding theorem).} \end{eqnarray*} \end{proof} In Theorem \ref{th8}, the reaction term $f(z,\cdot)$ is strictly sublinear near zero (see hypothesis $H(f)_1(iii)$). In the next existence theorem, we change the geometry near zero and assume that $f(z,\cdot)$ is linear near zero. In fact we permit double resonance with respect to any nonprincipal spectral interval.
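To fix ideas, we mention a hypothetical model nonlinearity (dropping the $z$-dependence) that exhibits the behaviour we have in mind near the origin, namely $$f(x)=\hat{\lambda}_{m+1}x-x^3,$$ for which $$\hat{\lambda}_m x^2\leq f(x)x=\hat{\lambda}_{m+1}x^2-x^4\leq\hat{\lambda}_{m+1}x^2\ \mbox{for all}\ |x|\leq(\hat{\lambda}_{m+1}-\hat{\lambda}_m)^{1/2},$$ with $\frac{f(x)}{x}\rightarrow\hat{\lambda}_{m+1}$ as $x\rightarrow0$ (resonance at the right endpoint of the spectral interval); of course, near $\pm\infty$ such a model must be modified so that the conditions at infinity below also hold.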
The new hypotheses on the reaction term $f(z,x)$ are the following: \smallskip $H(f)_2:f:\Omega\times\RR\rightarrow\RR$ is a Carath\'eodory function such that hypotheses $H(f)_2(i),(ii)$ are the same as the corresponding hypotheses $H(f)_1(i),(ii)$ and\\ $(iii)$ there exist $m\in\NN$, $m\geq2$ and $\delta>0$ such that $$\hat{\lambda}_m x^2\leq f(z,x)x\leq\hat{\lambda}_{m+1}x^2\ \mbox{for almost all}\ z\in\Omega,\ \mbox{all}\ |x|\leq\delta.$$ \begin{remark} The behaviour of $f(z,\cdot)$ near $\pm\infty$ remains the same. However, near zero, the growth of $f(z,\cdot)$ has changed. In fact the new condition for $f(z,\cdot)$ near zero implies linear growth. Also, it permits resonance with respect to both endpoints of the nonprincipal spectral interval $[\hat{\lambda}_m,\hat{\lambda}_{m+1}]$, $m\geq2$ (double resonance). This means that the computation of the critical groups of $\varphi$ at the origin changes. \end{remark} \begin{prop}\label{prop9} If hypotheses $H(\xi),H(\beta),H(f)_2$ hold, then $C_k(\varphi,0)=\delta_{k,d_m}\ZZ$ for all $k\in\NN_0$ with $d_m={\rm dim}\,\mathop{\oplus}\limits_{i=1}^{m}E(\hat{\lambda}_i)$. \end{prop} \begin{proof} Let $\vartheta\in(\hat{\lambda}_m,\hat{\lambda}_{m+1})$ and consider the $C^2$-functional $\Psi: H^1(\Omega)\rightarrow\RR$ defined by $$\Psi(u) = \frac{1}{2}\gamma(u) - \frac{\vartheta}{2}|| u||_2^2\ \mbox{for all}\ u\in H^1(\Omega).$$ In this case we consider the following orthogonal direct sum decomposition of the Hilbert space $H^1(\Omega)$: \begin{equation}\label{eq67} H^1(\Omega) = \overline{H}_m \oplus \hat{H}_m\ \mbox{with}\ \overline{H}_m = \mathop{\oplus}\limits_{i=1}^m E(\hat{\lambda}_i), \hat{H}_m = \overline{H}_m^\perp = \overline{\mathop{\oplus}\limits_{i\geqslant m+1} E(\hat{\lambda}_{i})}. \end{equation} The choice of $\vartheta$ and (\ref{eq5}) imply that $$\Psi|_{\overline{H}_m} \leq 0\ \mbox{and}\ \Psi|_{\hat{H}_m\setminus\{0\}} > 0.$$ Then Proposition 2.3 of Su \cite{27} implies that \begin{equation}\label{eq68} C_k(\Psi,0) = \delta_{k,d_m}\ZZ\ \mbox{for all}\ k\in\NN_0. \end{equation} Consider the homotopy $h_*:[0,1]\times H^1(\Omega)\rightarrow\RR$ defined by $$h_*(t,u) = (1-t)\varphi(u) + t\Psi(u)\ \mbox{for all}\ t\in[0,1],\ \mbox{all}\ u\in H^1(\Omega).$$ As in the proof of Theorem \ref{th8}, using the regularity theorem of Wang \cite{28}, we have that $$K_{h_{*}(t,\cdot)} \subseteq C^1(\overline{\Omega})\ \mbox{for all}\ t\in[0,1].$$ Let $t\in(0,1]$ and suppose that $u\in C^1(\overline{\Omega})$ satisfies $0<|| u||_{C^1(\overline{\Omega})} \leq \delta$, with $\delta>0$ as postulated by hypothesis $H(f)_2(iii)$. Then \begin{equation}\label{eq69} \langle(h_*)'_u(t,u), v\rangle = (1-t)\langle\varphi'(u),v\rangle + t\langle \Psi'(u),v\rangle \ \mbox{for all}\ v\in H^1(\Omega). \end{equation} Using the orthogonal direct sum decomposition (\ref{eq67}), we can write $u$ in a unique way as $$u = \overline{u} + \hat{u}\ \mbox{with}\ \overline{u}\in\overline{H}_m,\ \hat{u}\in\hat{H}_m.$$ Exploiting the orthogonality of the component spaces, we have $$\langle\varphi'(u),\hat{u}-\overline{u}\rangle = \gamma(\hat{u}) - \gamma(\overline{u}) - \int_\Omega f(z,u)(\hat{u}-\overline{u})dz.$$ Recall the choice of $u\in C^1(\overline{\Omega})$ and use hypothesis $H(f)_2(iii)$.
Then $$f(z,u(z))(\hat{u}-\overline{u})(z) \leq \hat{\lambda}_{m+1}\hat{u}(z)^2 - \hat{\lambda}_m\overline{u}(z)^2\ \mbox{for almost all}\ z\in\Omega.$$ Therefore \begin{equation}\label{eq70} \langle\varphi'(u),\hat{u}-\overline{u}\rangle \geqslant \gamma(\hat{u}) - \hat{\lambda}_{m+1}|| \hat{u}||_2^2 - \left[ \gamma(\overline{u}) - \hat{\lambda}_m|| \overline{u}||_2^2 \right] \geqslant 0 \end{equation} (see (\ref{eq5})). Also, again via the orthogonality of the component spaces, we have \begin{equation}\label{eq71} \langle\Psi'(u),\hat{u}-\overline{u}\rangle = \gamma(\hat{u}) - \vartheta|| \hat{u}||_2^2 - \left[ \gamma(\overline{u}) - \vartheta|| \overline{u}||_2^2 \right] \geqslant c_{17}|| u||^2 \end{equation} for some $c_{17}>0$ (recall that $\vartheta\in(\hat{\lambda}_m,\hat{\lambda}_{m+1})$). Returning to (\ref{eq69}) and using $v = \hat{u}-\overline{u}\in H^1(\Omega)$ and relations (\ref{eq70}), (\ref{eq71}), we obtain $$\langle(h_*)_u'(t,u),\hat{u}-\overline{u}\rangle \geq t c_{17} || u||^2 > 0\ \mbox{for all}\ 0<t\leq 1.$$ For $t=0$, we have $h_*(0,\cdot) = \varphi(\cdot)$ and $0\in K_\varphi$ is isolated (recall that we assumed that $K_\varphi$ is finite or otherwise we already have an infinity of nontrivial solutions). Therefore from the homotopy invariance property of critical groups (see Gasinski and Papageorgiou \cite[Theorem 5.125, p. 836]{7}), we have \begin{eqnarray} & & C_k(h_*(0,\cdot),0) = C_k(h_*(1,\cdot),0)\ \mbox{for all}\ k\in\NN_0, \nonumber \\ & \Rightarrow & C_k(\varphi,0) = C_k(\Psi,0)\ \mbox{for all}\ k\in\NN_0, \nonumber \\ & \Rightarrow & C_k(\varphi,0) = \delta_{k,d_m}\ZZ\ \mbox{for all}\ k\in\NN_0\ \mbox{(see (\ref{eq68})).} \nonumber \end{eqnarray} \end{proof} Using this Proposition and reasoning as in the proof of Theorem \ref{th8}, we obtain the following existence theorem. \begin{theorem}\label{th10} If hypotheses $H(\xi),H(\beta),H(f)_2$ hold, then problem (\ref{eq1}) admits a nontrivial smooth solution $$\tilde{u}\in C^1(\overline{\Omega}).$$ \end{theorem} \section{Equations with Concave Terms} In this section we examine what happens when the reaction function exhibits a concave term (that is, a term which is strictly superlinear near zero). So, the geometry is different from both cases considered in Section 3. To deal with this new problem, we introduce a parameter $\lambda>0$ in the concave term and, for all $\lambda>0$ small, we prove multiplicity results for the equation. So, now the problem under consideration, is: \begin{equation} \left\{\begin{array}{ll} -\Delta u(z)+\xi(z)u(z)=\lambda|u(z)|^{q-2}u(z)+f(z,u(z))&\mbox{in}\ \Omega,\\ \frac{\partial u}{\partial n}+\beta(z)u=0\ \ \mbox{on}\ \partial\Omega,\ \lambda>0,\ 1<q<2.& \end{array}\right\}\tag{$P_{\lambda}$} \label{eqP} \end{equation} The hypotheses on the perturbation $f(z,x)$ are the following: \smallskip $H(f)_3:$ $f:\Omega\times\RR\rightarrow\RR$ is a Carath\'eodory function such that hypotheses $H(f)_3(i),(ii)$ are the same as the corresponding hypotheses $H(f)_1(i),(ii)$ and \begin{itemize} \item[(iii)] there exist functions $\eta,\eta_0\in L^{\infty}(\Omega)$ such that \begin{eqnarray*} &&\eta(z)\leq\hat{\lambda}_1\ \mbox{for almost all}\ z\in\Omega,\ \eta\not\equiv\hat{\lambda}_1,\\ &&\eta_0(z)\leq\liminf\limits_{x\rightarrow 0}\frac{f(z,x)}{x}\leq\limsup\limits_{x\rightarrow 0}\frac{f(z,x)}{x}\leq\eta(z)\ \mbox{uniformly for almost all}\ z\in\Omega.
\end{eqnarray*} \end{itemize} For every $\lambda>0$, let $\varphi_{\lambda}:H^1(\Omega)\rightarrow\RR$ be the energy (Euler) functional for problem \eqref{eqP} defined by $$\varphi_{\lambda}(u)=\frac{1}{2}\gamma(u)-\frac{\lambda}{q}||u||^q_q-\int_{\Omega}F(z,u)dz\ \mbox{for all}\ u\in H^1(\Omega).$$ Evidently, $\varphi_{\lambda}\in C^1(H^1(\Omega))$. Also, we consider the following truncations-perturbations of the reaction in problem \eqref{eqP}: \begin{eqnarray} &&\hat{f}^+_{\lambda}(z,x)=\left\{\begin{array}{ll} 0&\mbox{if}\ x\leq 0\\ \lambda x^{q-1}+f(z,x)+\mu x&\mbox{if}\ 0<x, \end{array}\right.\label{eq72}\\ &&\hat{f}^-_{\lambda}(z,x)=\left\{\begin{array}{ll} \lambda |x|^{q-2}x+f(z,x)+\mu x&\mbox{if}\ x\leq 0,\\ 0&\mbox{if}\ 0<x. \end{array}\right.\label{eq73} \end{eqnarray} Here $\mu>0$ is as in (\ref{eq3}). Both $\hat{f}^+_{\lambda}$ and $\hat{f}^-_{\lambda}$ are Carath\'eodory functions. We set $\hat{F}^{\pm}_{\lambda}(z,x)=\int^x_0\hat{f}^{\pm}_{\lambda}(z,s)ds$ and consider the $C^1$-functionals $\hat{\varphi}^{\pm}_{\lambda}:H^1(\Omega)\rightarrow\RR$ defined by $$\hat{\varphi}^{\pm}_{\lambda}(u)=\frac{1}{2}\gamma(u)+\frac{\mu}{2}||u||^2_2-\int_{\Omega}\hat{F}^{\pm}_{\lambda}(z,u)dz\ \mbox{for all}\ u\in H^1(\Omega).$$ Since hypotheses $H(f)_3(i),(ii)$ are the same as $H(f)_1(i),(ii)$ and the concave term $\lambda|x|^{q-2}x$ does not affect the behavior of the reaction near $\pm\infty$, from Proposition \ref{prop5}, we have: \begin{prop}\label{prop11} If hypotheses $H(\xi),\,H(\beta),\,H(f)_3$ hold and $\lambda>0$, then the functional $\varphi_{\lambda}$ satisfies the C-condition. \end{prop} Using similar arguments, we can also prove the following result: \begin{prop}\label{prop12} If hypotheses $H(\xi),H(\beta),H(f)_3$ hold and $\lambda>0$, then the functionals $\hat{\varphi}^{\pm}_{\lambda}$ satisfy the C-condition. \end{prop} \begin{proof} As we already indicated, the proof is basically the same as that of Proposition \ref{prop5}. So, we only present the first part of the proof, which differs a little due to the unilateral nature of the functionals $\hat{\varphi}^{\pm}_{\lambda}$ (see (\ref{eq72}) and (\ref{eq73})). So, let $\{u_n\}_{n\geq 1}\subseteq H^1(\Omega)$ be a sequence such that \begin{eqnarray} &&|\hat{\varphi}^{+}_{\lambda}(u_n)|\leq M_5\ \mbox{for some}\ M_5>0,\ \mbox{all}\ n\in\NN,\label{eq74}\\ &&(1+||u_n||)(\hat{\varphi}^+_{\lambda})'(u_n)\rightarrow 0\ \mbox{in}\ H^1(\Omega)^*\ \mbox{as}\ n\rightarrow\infty\,.\label{eq75} \end{eqnarray} From (\ref{eq75}) we have \begin{eqnarray}\label{eq76} &&\left|\left\langle A(u_n),h\right\rangle+\int_{\Omega}(\xi(z)+\mu)u_nhdz+\int_{\partial\Omega}\beta(z)u_nhd\sigma-\int_{\Omega}\hat{f}^+_{\lambda}(z,u_n)hdz\right|\leq\frac{\epsilon_n||h||}{1+||u_n||}\\ &&\mbox{for all}\ h\in H^1(\Omega),\ \mbox{with}\ \epsilon_n\rightarrow 0^+.\nonumber \end{eqnarray} In (\ref{eq76}) we choose $h=-u^-_n\in H^1(\Omega)$. Then \begin{eqnarray}\label{eq77} &&\gamma(u^-_n)+\mu||u^-_n||^2_2\leq\epsilon_n\ \mbox{for all}\ n\in\NN\ (\mbox{see\ (\ref{eq72})}),\nonumber\\ &\Rightarrow&c_0||u^-_n||^2\leq\epsilon_n\ \mbox{for all}\ n\in\NN\ (\mbox{see (\ref{eq3})}),\nonumber\\ &\Rightarrow&u^-_n\rightarrow 0\ \mbox{in}\ H^1(\Omega). \end{eqnarray} Using (\ref{eq72}) and reasoning as in the proof of Proposition \ref{prop5}, we show that \begin{equation}\label{eq78} \{u^+_n\}_{n\geq 1}\subseteq H^1(\Omega)\ \mbox{is bounded}. 
\end{equation} From (\ref{eq77}) and (\ref{eq78}) it follows that $$\{u_n\}_{n\geq 1}\subseteq H^1(\Omega)\ \mbox{is bounded}.$$ From this via the Kadec-Klee property, as in the proof of Proposition \ref{prop5}, we establish that $$\hat{\varphi}^+_{\lambda}\ \mbox{satisfies the C-condition}.$$ In a similar fashion, using this time (\ref{eq73}), we show that $\hat{\varphi}^-_{\lambda}$ satisfies the C-condition. \end{proof} Hypothesis $H(f)_3(ii)$ (the superlinearity condition) implies that the functionals $\hat{\varphi}^{\pm}_{\lambda}$ are unbounded below. \begin{prop}\label{prop13} If hypotheses $H(\xi),H(\beta),H(f)_3$ hold, $\lambda>0$ and $u\in D_+$, then $\hat{\varphi}^{\pm}_{\lambda}(tu)\rightarrow-\infty$ as $t\rightarrow\pm\infty$. \end{prop} The next result will help us to verify the mountain pass geometry (see Theorem \ref{th1}) for the functionals $\hat{\varphi}^{\pm}_{\lambda}$ when $\lambda>0$ is small. \begin{prop}\label{prop14} If hypotheses $H(\xi),H(\beta),H(f)_3$ hold, then we can find $\lambda^{\pm}_*>0$ such that for every $\lambda\in(0,\lambda^{\pm}_{*})$ we can find $\rho^{\pm}_{\lambda}>0$ for which we have $$\inf[\hat{\varphi}^{\pm}_{\lambda}(u):||u||=\rho^{\pm}_{\lambda}]=\hat{m}^{\pm}_{\lambda}>0.$$ \end{prop} \begin{proof} Hypotheses $H(f)_3(i),(iii)$ imply that given $\epsilon>0$, we can find $c_{\epsilon}>0$ such that \begin{equation}\label{eq79} F(z,x)\leq\frac{1}{2}(\eta(z)+\epsilon)x^2+c_{\epsilon}|x|^{2^*}\ \mbox{for almost all}\ z\in\Omega,\ \mbox{all}\ x\in\RR. \end{equation} For every $u\in H^1(\Omega)$, we have \begin{eqnarray*} \hat{\varphi}^{+}_{\lambda}(u)&\geq&\frac{1}{2}\gamma(u)+\frac{\mu}{2}||u^-||^2_2-\frac{1}{2}\int_{\Omega}\eta(z)(u^+)^2dz-\frac{\epsilon}{2}||u^+||^2-c_{18}(\lambda||u||^q+||u||^{2^*})\\ &&\mbox{for some}\ c_{18}>0\ (\mbox{see (\ref{eq72}), (\ref{eq79})})\\ &=&\frac{1}{2}\left[\gamma(u^+)-\int_{\Omega}\eta(z)(u^+)^2dz-\epsilon||u^+||^2\right]+\frac{1}{2}[\gamma(u^-)+\mu||u^-||^2_2]-\\ &&\hspace{0.5cm}c_{18}(\lambda||u||^q+||u||^{2^*})\\ &\geq&\frac{1}{2}(c_{19}-\epsilon)||u^+||^2+\frac{c_0}{2}||u^-||^2-c_{18}(\lambda||u||^q+||u||^{2^*}) \end{eqnarray*} for some $c_{19}>0$ (see Lemma 4.11 of Mugnai and Papageorgiou \cite{16} and (\ref{eq3})). So, choosing $\epsilon\in(0,c_{19})$, we have \begin{equation}\label{eq80} \hat{\varphi}^{\pm}_{\lambda}(u)\geq\left[c_{20}-c_{18}(\lambda||u||^{q-2}+||u||^{2^*-2})\right]||u||^2\ \mbox{for some}\ c_{20}>0. \end{equation} Consider the function $$\Im_{\lambda}(t)=\lambda t^{q-2}+t^{2^*-2}\ \mbox{for all}\ t>0,\ \lambda>0.$$ Since $q<2<2^*$, we see that $$\Im_{\lambda}(t)\rightarrow+\infty\ \mbox{as}\ t\rightarrow 0^+\ \mbox{and as}\ t\rightarrow+\infty.$$ So, we can find $t_0=t_0(\lambda)\in(0,+\infty)$ such that $$\Im_{\lambda}(t_0)=\min[\Im_{\lambda}(t):t>0].$$ We have \begin{eqnarray*} &&\Im'_{\lambda}(t_0)=0,\\ &\Rightarrow&\lambda(2-q)t^{q-3}_0=(2^*-2)t_0^{2^*-3},\\ &\Rightarrow&t_0=t_0(\lambda)=\left[\frac{\lambda(2-q)}{2^*-2}\right]^{\frac{1}{2^*-q}}. \end{eqnarray*} Hence it follows that $$\Im_{\lambda}(t_0)\rightarrow 0^+\ \mbox{as}\ \lambda\rightarrow 0^+.$$ So, we can find $\lambda^+_*>0$ such that $$\Im_{\lambda}(t_0)<\frac{c_{20}}{c_{18}}\ \mbox{for all}\ \lambda\in(0,\lambda^+_*).$$ Returning to (\ref{eq80}) and using this fact we obtain $$\hat{\varphi}^+_{\lambda}(u)\geq\hat{m}^+_{\lambda}>0\ \mbox{for all}\ u\in H^1(\Omega),\ ||u||=\rho^+_{\lambda}=t_0(\lambda).$$ Similarly for the functional $\hat{\varphi}^-_{\lambda}$.
\end{proof} To produce multiple solutions of constant sign, we need to strengthen the condition on the potential function. The new hypothesis on $\xi(\cdot)$ is: \smallskip $H(\xi)':$ $\xi\in L^s(\Omega)$ with $s>N$ and $\xi^+\in L^{\infty}(\Omega)$. \begin{prop}\label{prop15} If hypotheses $H(\xi)',H(\beta),H(f)_3$ hold, then \begin{itemize} \item[(a)] for every $\lambda\in(0,\lambda^+_*)$ problem \eqref{eqP} has two positive solutions $$u_0,\hat{u}\in D_+\ \mbox{with}\ u_0\ \mbox{being a local minimizer of}\ \varphi_{\lambda};$$ \item[(b)] for every $\lambda\in(0,\lambda^-_*)$ problem \eqref{eqP} has two negative solutions $$v_0,\hat{v}\in-D_+\ \mbox{with}\ v_0\ \mbox{being a local minimizer of}\ \varphi_{\lambda};$$ \item[(c)] for every $\lambda\in(0,\lambda_*)$ (here $\lambda_*=\min\{\lambda^+_*,\lambda^-_*\}$) problem \eqref{eqP} has at least four nontrivial solutions of constant sign \begin{eqnarray*} &&u_0,\hat{u}\in D_+,\ v_0,\hat{v}\in-D_+,\\ &&u_0\ \mbox{and}\ v_0\ \mbox{are local minimizers of}\ \varphi_{\lambda}. \end{eqnarray*} \end{itemize} \end{prop} \begin{proof} \textit{(a)} Let $\lambda\in(0,\lambda^+_*)$ and let $\rho^+_{\lambda}>0$ be as postulated by Proposition \ref{prop14}. We consider the set $$\bar{B}^+_{\lambda}=\{u\in H^1(\Omega):||u||\leq\rho^+_{\lambda}\}.$$ This set is weakly compact. Also, using the Sobolev embedding theorem and the compactness of the trace map, we see that $\hat{\varphi}^+_{\lambda}$ is sequentially weakly lower semicontinuous. So, by the Weierstrass-Tonelli theorem, we can find $u_0\in \bar{B}^+_{\lambda}$ such that \begin{equation}\label{eq81} \hat{\varphi}^+_{\lambda}(u_0)=\inf[\hat{\varphi}^+_{\lambda}(u):u\in \bar{B}^+_{\lambda}]. \end{equation} Since $q<2$, for $t\in(0,1)$ small we have \begin{eqnarray*} &&\hat{\varphi}^+_{\lambda}(t\hat{u}_1)<0,\\ &\Rightarrow&\hat{\varphi}^+_{\lambda}(u_0)<0=\hat{\varphi}^+_{\lambda}(0)\ (\mbox{see (\ref{eq81})}),\\ &\Rightarrow&u_0\neq 0\ \mbox{and}\ ||u_0||<\rho^+_{\lambda}\ (\mbox{see Proposition \ref{prop14}}). \end{eqnarray*} This fact and (\ref{eq81}) imply that \begin{eqnarray}\label{eq82} &&(\hat{\varphi}^+_{\lambda})'(u_0)=0,\nonumber\\ &\Rightarrow&\left\langle A(u_0),h\right\rangle+\int_{\Omega}(\xi(z)+\mu)u_0hdz+\int_{\partial\Omega}\beta(z)u_0hd\sigma=\int_{\Omega}\hat{f}^+_{\lambda}(z,u_0)hdz\\ &&\mbox{for all}\ h\in H^1(\Omega).\nonumber \end{eqnarray} In (\ref{eq82}) we choose $h=-u^-_0\in H^1(\Omega)$. Using (\ref{eq72}) we obtain \begin{eqnarray*} &&\gamma(u^-_0)+\mu||u^-_0||^2_2=0,\\ &\Rightarrow&c_0||u^-_0||^2\leq 0\ (\mbox{see (\ref{eq3})}),\\ &\Rightarrow&u_0\geq 0,\ u_0\neq 0.
\end{eqnarray*} Then because of (\ref{eq72}), equation (\ref{eq82}) becomes \begin{eqnarray}\label{eq83} &&\left\langle A(u_0),h\right\rangle+\int_{\Omega}\xi(z)u_0hdz+\int_{\partial\Omega}\beta(z)u_0hd\sigma=\int_{\Omega}[\lambda u^{q-1}_0+f(z,u_0)]hdz\nonumber\\ &&\mbox{for all}\ h\in H^1(\Omega),\nonumber\\ &\Rightarrow&-\Delta u_0(z)+\xi(z)u_0(z)=\lambda u_0(z)^{q-1}+f(z,u_0(z))\ \mbox{for almost all}\ z\in\Omega,\\ &&\frac{\partial u_0}{\partial n}+\beta(z)u_0=0\ \mbox{on}\ \partial\Omega\ \mbox{(see Papageorgiou and R\u adulescu \cite{19})}.\nonumber \end{eqnarray} As in the proof of Theorem \ref{th8}, from the regularity theory of Wang \cite{28}, we have $$u_0\in C_+\backslash\{0\}.$$ Hypotheses $H(f)_3(i),(iii)$ imply that $$|f(z,x)|\leq c_{21}|x|\ \mbox{for almost all}\ z\in\Omega,\ \mbox{all}\ |x|\leq ||u_0||_\infty,\ \mbox{some}\ c_{21}>0.$$ Then from (\ref{eq83}) and hypothesis $H(\xi)'$, we have \begin{eqnarray}\label{eq84} &&\Delta u_0(z)\leq[||\xi^+||_{\infty}+c_{21}]u_0(z)\ \mbox{for almost all}\ z\in\Omega,\nonumber\\ &\Rightarrow&u_0\in D_+\ (\mbox{by the strong maximum principle}). \end{eqnarray} From (\ref{eq72}) it is clear that $$\left.\varphi_{\lambda}\right|_{C_+}=\left.\hat{\varphi}^+_{\lambda}\right|_{C_+}.$$ So, from (\ref{eq84}) it follows that \begin{eqnarray*} &&u_0\ \mbox{is a local}\ C^1(\overline{\Omega})-\mbox{minimizer of}\ \varphi_{\lambda},\\ &\Rightarrow&u_0\ \mbox{is a local}\ H^1(\Omega)-\mbox{minimizer of}\ \varphi_{\lambda}\ (\mbox{see Proposition \ref{prop4}}). \end{eqnarray*} Next we will produce a second positive smooth solution. We know that \begin{equation}\label{eq85} \hat{\varphi}^+_{\lambda}(u_0)<0<\hat{m}^+_{\lambda}=\inf[\hat{\varphi}^+_{\lambda}(u):||u||=\rho^+_{\lambda}]\ (\mbox{see Proposition \ref{prop14}}). \end{equation} Also, from Propositions \ref{prop12} and \ref{prop13}, we have \begin{eqnarray} &&\hat{\varphi}^+_{\lambda}\ \mbox{satisfies the C-condition},\label{eq86}\\ &&\hat{\varphi}^+_{\lambda}(t\hat{u}_1)\rightarrow-\infty\ \mbox{as}\ t\rightarrow+\infty\,.\label{eq87} \end{eqnarray} Then (\ref{eq85}), (\ref{eq86}) and (\ref{eq87}) permit the use of Theorem \ref{th1} (the mountain pass theorem). So, we can find $\hat{u}\in H^1(\Omega)$ such that \begin{equation}\label{eq88} \hat{u}\in K_{\hat{\varphi}^+_{\lambda}}\ \mbox{and}\ \hat{m}^+_{\lambda}\leq \hat{\varphi}^+_{\lambda}(\hat{u}). \end{equation} From (\ref{eq85}) and (\ref{eq88}) we have that \begin{eqnarray*} &&\hat{u}\neq u_0,\ \hat{u}\neq 0,\\ &\Rightarrow&\hat{u}\in D_+\ \mbox{is a solution of \eqref{eqP} (as before).} \end{eqnarray*} \textit{(b)} Reasoning in a similar fashion, this time using the functional $\hat{\varphi}^-_{\lambda}$ and (\ref{eq73}), we produce two negative smooth solutions $$v_0,\hat{v}\in -D_+,\ v_0\neq\hat{v},$$ with $v_0$ being a local minimizer of $\varphi_{\lambda}$. \textit{(c)} Follows from \textsl{(a)} and \textsl{(b)}. \end{proof} In fact we can show that problem \eqref{eqP} has extremal constant sign solutions. So, for all $\lambda\in(0,\lambda^+_*)$ there exists a smallest positive solution $u^*_{\lambda}\in D_+$ and for all $\lambda\in(0,\lambda^-_*)$ there is a biggest negative solution $v^*_{\lambda}\in-D_+$. Let $S^+_{\lambda}$ (respectively $S^-_{\lambda}$) be the set of positive (respectively negative) solutions of problem \eqref{eqP}.
From Proposition \ref{prop15} we know that \begin{eqnarray*} &&\emptyset\neq S^+_{\lambda}\subseteq D_+\ \mbox{for all}\ \lambda\in(0,\lambda^+_*),\\ &&\emptyset\neq S^-_{\lambda}\subseteq -D_+\ \mbox{for all}\ \lambda\in(0,\lambda^-_*). \end{eqnarray*} Moreover, from Filippakis and Papageorgiou \cite{5} (see Lemmata 4.1 and 4.2), we have that \begin{itemize} \item $S^+_{\lambda}$ is downward directed (that is, if $u_1,u_2\in S^+_{\lambda}$, then there exists $u\in S^+_{\lambda}$ such that $u\leq u_1$ and $u\leq u_2$). \item $S^-_{\lambda}$ is upward directed (that is, if $v_1,v_2\in S^-_{\lambda}$, then there exists $v\in S^-_{\lambda}$ such that $v_1\leq v,v_2\leq v$). \end{itemize} Note that hypotheses $H(f)_3(i),(iii)$ imply that \begin{equation}\label{eq89} f(z,x)x\geq-c_{22}x^2-c_{23}|x|^{2^*}\ \mbox{for almost all}\ z\in\Omega,\ \mbox{all}\ x\in\RR,\ \mbox{some}\ c_{22},c_{23}>0. \end{equation} We may always assume that $c_{22}\geq\mu$ (see (\ref{eq3})). This unilateral growth condition on $f(z,\cdot)$ leads to the following auxiliary Robin problem: \begin{equation} \left\{\begin{array}{ll} -\Delta u(z)+\xi(z)u(z)=\lambda|u(z)|^{q-2}u(z)-c_{22}u(z)-c_{23}|u(z)|^{2^*-2}u(z)&\mbox{in}\ \Omega,\\ \frac{\partial u}{\partial n}+\beta(z)u=0\ \mbox{on}\ \partial\Omega\,.& \end{array}\right\}\tag{$Au_{\lambda}$}\label{eqA} \end{equation} \begin{prop}\label{prop16} If hypotheses $H(\xi)',H(\beta),H(f)_3$ hold and $\lambda>0$, then problem \eqref{eqA} has a unique positive solution $$\tilde{u}_{\lambda}\in D_+$$ and because problem \eqref{eqA} is odd it follows that $$\tilde{v}_{\lambda}=-\tilde{u}_{\lambda}\in-D_+$$ is the unique negative solution of \eqref{eqA}. \end{prop} \begin{proof} First, we show the existence of a positive solution. To this end, let $\psi^+_{\lambda}:H^1(\Omega)\rightarrow\RR$ be the $C^1$-functional defined by $$\psi^+_{\lambda}(u)=\frac{1}{2}\gamma(u)+\frac{\mu}{2}||u^-||^2_2+\frac{c_{22}}{2}||u^+||^2_2+\frac{c_{23}}{2^*}||u^+||^{2^*}_{2^*}-\frac{\lambda}{q}||u^+||^q_q\ \mbox{for all}\ u\in H^1(\Omega).$$ We have \begin{eqnarray*} \psi^+_{\lambda}(u)&\geq&\frac{1}{2}\left[\gamma(u)+\mu||u||^2_2\right]-\lambda c_{24}||u||^q\ \mbox{for some}\ c_{24}>0\ (\mbox{recall that}\ c_{22}\geq\mu),\\ &\geq&\frac{c_0}{2}||u||^2-\lambda c_{24}||u||^q\ (\mbox{see (\ref{eq3})}),\\ &\Rightarrow&\psi^+_{\lambda}(\cdot)\ \mbox{is coercive (recall that $q<2$)}. \end{eqnarray*} Also, $\psi^+_{\lambda}$ is sequentially weakly lower semicontinuous. So, we can find $\tilde{u}_{\lambda}\in H^1(\Omega)$ such that \begin{equation}\label{eq90} \psi^+_{\lambda}(\tilde{u}_{\lambda})=\inf[\psi^+_{\lambda}(u):u\in H^1(\Omega)]. \end{equation} Since $q<2<2^*$, for $t\in(0,1)$ small we have \begin{eqnarray}\label{eq91} &&\psi^+_{\lambda}(t\hat{u}_1)<0=\psi^+_{\lambda}(0),\nonumber\\ &\Rightarrow&\psi^+_{\lambda}(\tilde{u}_{\lambda})<0=\psi^+_{\lambda}(0)\ (\mbox{see (\ref{eq90})}),\nonumber\\ &\Rightarrow&\tilde{u}_{\lambda}\neq 0. \end{eqnarray} From (\ref{eq90}) we have \begin{eqnarray}\label{eq92} &&(\psi^+_{\lambda})'(\tilde{u}_{\lambda})=0,\nonumber\\ &\Rightarrow&\left\langle A(\tilde{u}_{\lambda}),h\right\rangle+\int_{\Omega}\xi(z)\tilde{u}^+_{\lambda}hdz- \int_{\Omega}(\xi(z)+\mu)\tilde{u}^-_{\lambda}hdz+\int_{\partial\Omega}\beta(z)\tilde{u}_{\lambda}hd\sigma\nonumber\\ &&=\lambda\int_{\Omega}(\tilde{u}^+_{\lambda})^{q-1}hdz-c_{22}\int_{\Omega}\tilde{u}^+_{\lambda}hdz-c_{23}\int_{\Omega}(\tilde{u}^+_{\lambda})^{2^*-1}hdz\ \mbox{for all}\ h\in H^1(\Omega). \end{eqnarray} In (\ref{eq92}) we choose $h=-\tilde{u}^-_{\lambda}\in H^1(\Omega)$.
Then \begin{eqnarray*} &&\gamma(\tilde{u}^-_{\lambda})+\mu||\tilde{u}^-_{\lambda}||^2_2=0,\\ &\Rightarrow&c_0||\tilde{u}^-_{\lambda}||^2\leq 0\ (\mbox{see (\ref{eq3})}),\\ &\Rightarrow&\tilde{u}_{\lambda}\geq 0,\tilde{u}_{\lambda}\neq 0\ (\mbox{see (\ref{eq91})}). \end{eqnarray*} Therefore equation (\ref{eq92}) becomes \begin{eqnarray}\label{eq93} &&\left\langle A(\tilde{u}_{\lambda}),h\right\rangle+\int_{\Omega}\xi(z)\tilde{u}_{\lambda}hdz+\int_{\partial\Omega}\beta(z)\tilde{u}_{\lambda}hd\sigma\nonumber\\ &&=\lambda\int_{\Omega}\tilde{u}^{q-1}_{\lambda}hdz-c_{22}\int_{\Omega}\tilde{u}_{\lambda}hdz-c_{23}\int_{\Omega}\tilde{u}^{2^*-1}_{\lambda}hdz\ \mbox{for all}\ h\in H^1(\Omega),\nonumber\\ &\Rightarrow&-\Delta\tilde{u}_{\lambda}(z)+\xi(z)\tilde{u}_{\lambda}(z)=\lambda\tilde{u}_{\lambda}(z)^{q-1}-c_{22}\tilde{u}_{\lambda}(z)-c_{23}\tilde{u}_{\lambda}(z)^{2^*-1}\ \mbox{for a.a.}\ z\in\Omega,\\ &&\frac{\partial \tilde{u}_{\lambda}}{\partial n}+\beta(z)\tilde{u}_{\lambda}=0\ \mbox{on}\ \partial\Omega\ (\mbox{see Papageorgiou and R\u adulescu \cite{19}}),\nonumber\\ &\Rightarrow&\tilde{u}_{\lambda}\in C_+\backslash\{0\}\nonumber \end{eqnarray} (as before using the regularity theory of Wang \cite{28}). From (\ref{eq93}) we have \begin{eqnarray*} &&\Delta\tilde{u}_{\lambda}(z)\leq[||\xi^+||_{\infty}+c_{23}||\tilde{u}_{\lambda}||^{2^*-2}_{\infty}+c_{22}]\tilde{u}_{\lambda}(z)\ \mbox{for almost all}\ z\in\Omega,\\ &\Rightarrow&\tilde{u}_{\lambda}\in D_+\ \mbox{(by the strong maximum principle)}. \end{eqnarray*} Next we show the uniqueness of this positive solution. So, suppose that $\tilde{u}_{\lambda},\bar{u}_{\lambda}\in D_+$ are two solutions of \eqref{eqA}. The solution set of \eqref{eqA} is downward directed (see \cite{5}). So, we may assume that $\bar{u}_{\lambda}\leq\tilde{u}_{\lambda}$, \begin{eqnarray} &&\int_{\Omega}(D\tilde{u}_{\lambda},D\bar{u}_{\lambda})_{\RR^N}dz+\int_{\Omega}\xi(z)\tilde{u}_{\lambda}\bar{u}_{\lambda}dz+\int_{\partial\Omega}\beta(z)\tilde{u}_{\lambda}\bar{u}_{\lambda}d\sigma=\lambda\int_{\Omega}\tilde{u}^{q-1}_{\lambda}\bar{u}_{\lambda}dz-\nonumber\\ &&\hspace{1cm}c_{22}\int_{\Omega}\tilde{u}_{\lambda}\bar{u}_{\lambda}dz-c_{23}\int_{\Omega}\tilde{u}^{2^*-1}_{\lambda}\bar{u}_{\lambda}dz\label{eq94}\\ &&\int_{\Omega}(D\bar{u}_{\lambda},D\tilde{u}_{\lambda})_{\RR^N}dz+\int_{\Omega}\xi(z)\bar{u}_{\lambda}\tilde{u}_{\lambda}dz+\int_{\partial\Omega}\beta(z)\bar{u}_{\lambda}\tilde{u}_{\lambda}d\sigma=\lambda\int_{\Omega}\bar{u}_{\lambda}^{q-1}\tilde{u}_{\lambda}dz-\nonumber\\ &&\hspace{1cm}c_{22}\int_{\Omega}\bar{u}_{\lambda}\tilde{u}_{\lambda}dz- c_{23}\int_{\Omega}\bar{u}^{2^*-1}_{\lambda}\tilde{u}_{\lambda}dz.\label{eq95} \end{eqnarray} We subtract (\ref{eq95}) from (\ref{eq94}) and obtain \begin{eqnarray*} &&\lambda\int_{\Omega}\tilde{u}_{\lambda}\bar{u}_{\lambda}\left[\frac{1}{\tilde{u}_{\lambda}^{2-q}}-\frac{1}{\bar{u}_{\lambda}^{2-q}}\right]dz=c_{23}\int_{\Omega}\tilde{u}_{\lambda}\bar{u}_{\lambda}\left(\tilde{u}_{\lambda}^{2^*-2}-\bar{u}^{2^*-2}_{\lambda}\right)dz\\ &\Rightarrow&\tilde{u}_{\lambda}=\bar{u}_{\lambda}\ (\mbox{recall}\ q<2<2^*\ \mbox{and}\ \bar{u}_{\lambda}\leq\tilde{u}_{\lambda}). \end{eqnarray*} This proves the uniqueness of the positive solution $\tilde{u}_{\lambda}\in D_+$ of \eqref{eqA}. Problem \eqref{eqA} is odd. So, it follows that $$\tilde{v}_{\lambda}=-\tilde{u}_{\lambda}\in-D_+$$ is the unique negative solution of \eqref{eqA}.
\end{proof} \begin{remark} We present an alternative way of proving the uniqueness of the positive solution $\tilde{u}_{\lambda}\in D_+$ of \eqref{eqA}. So, let $\tilde{u}_{\lambda},\bar{u}_{\lambda}\in D_+$ be two positive solutions of \eqref{eqA}. Let $t>0$ be the biggest positive real such that \begin{equation}\label{eq96} t\bar{u}_{\lambda}\leq\tilde{u}_{\lambda} \end{equation} (see Filippakis and Papageorgiou \cite[Lemma 3.6]{5}). Suppose that $t\in(0,1)$. Since $q<2<2^*$, we can find $\hat{\vartheta}>0$ such that the function $$x\mapsto\lambda x^{q-1}+(\hat{\vartheta}-c_{22})x-c_{23}x^{2^*-1}$$ is increasing on $[0,\max\{||\tilde{u}_{\lambda}||_{\infty},||\bar{u}_{\lambda}||_{\infty}\}]$. We have \begin{eqnarray}\label{eq97} &&-\Delta\tilde{u}_{\lambda}(z)+(\xi(z)+\hat{\vartheta})\tilde{u}_{\lambda}(z)\nonumber\\ &=&\lambda \tilde{u}_{\lambda}(z)^{q-1}+(\hat{\vartheta}-c_{22})\tilde{u}_{\lambda}(z)-c_{23}\tilde{u}_{\lambda}(z)^{2^*-1}\nonumber\\ &\geq&\lambda(t\bar{u}_{\lambda}(z))^{q-1}+(\hat{\vartheta}-c_{22})(t\bar{u}_{\lambda}(z))-c_{23}(t\bar{u}_{\lambda}(z))^{2^*-1}\ (\mbox{see (\ref{eq96})})\nonumber\\ &\geq&t\left[\lambda\bar{u}_{\lambda}(z)^{q-1}+(\hat{\vartheta}-c_{22})\bar{u}_{\lambda}(z)-c_{23}\bar{u}_{\lambda}(z)^{2^*-1}\right]\nonumber\\ &&(\mbox{since}\ q<2<2^*\ \mbox{and}\ t\in(0,1))\nonumber\\ &=&t\left[-\Delta\bar{u}_{\lambda}(z)+(\xi(z)+\hat{\vartheta})\bar{u}_{\lambda}(z)\right]\nonumber\\ &=&-\Delta(t\bar{u}_{\lambda})(z)+(\xi(z)+\hat{\vartheta})(t\bar{u}_{\lambda})(z)\ \mbox{for almost all}\ z\in\Omega,\nonumber\\ \Rightarrow&&\Delta(\tilde{u}_{\lambda}-t\bar{u}_{\lambda})(z)\leq(||\xi^+||_{\infty}+\hat{\vartheta})(\tilde{u}_{\lambda}-t\bar{u}_{\lambda})(z)\ \mbox{for almost all}\ z\in\Omega\\ &&(\mbox{see hypothesis}\ H(\xi)')\nonumber. \end{eqnarray} Note that $\tilde{u}_{\lambda}\not\equiv t\bar{u}_{\lambda}$. Indeed, if $\tilde{u}_{\lambda}\equiv t\bar{u}_{\lambda}$, then \begin{eqnarray*} &&\lambda(t\bar{u}_{\lambda})(z)^{q-1}-c_{22}(t\bar{u}_{\lambda})(z)-c_{23}(t\bar{u}_{\lambda})(z)^{2^*-1}\\ &>&t\left[-\Delta\bar{u}_{\lambda}(z)+\xi(z)\bar{u}_{\lambda}(z)\right]\ (\mbox{since}\ q<2<2^*\ \mbox{and}\ t\in(0,1))\\ &=&-\Delta\tilde{u}_{\lambda}(z)+\xi(z)\tilde{u}_{\lambda}(z)\ (\mbox{since}\ \tilde{u}_{\lambda}\equiv t\bar{u}_{\lambda})\\ &=&\lambda\tilde{u}_{\lambda}(z)^{q-1}-c_{22}\tilde{u}_{\lambda}(z)-c_{23}\tilde{u}_{\lambda}(z)^{2^*-1}\\ &=&\lambda(t\bar{u}_{\lambda})(z)^{q-1}-c_{22}(t\bar{u}_{\lambda})(z)-c_{23}(t\bar{u}_{\lambda})(z)^{2^*-1}, \end{eqnarray*} a contradiction. Then from (\ref{eq97}) and the strong maximum principle, we have $$\tilde{u}_{\lambda}-t\bar{u}_{\lambda}\in D_+,$$ which contradicts the maximality of $t>0$. Hence $t\geq 1$ and we have $$\bar{u}_{\lambda}\leq \tilde{u}_{\lambda}\ (\mbox{see (\ref{eq96})}).$$ Interchanging the roles of $\tilde{u}_{\lambda}$ and $\bar{u}_{\lambda}$ in the above argument, we also have \begin{eqnarray*} &&\tilde{u}_{\lambda}\leq\bar{u}_{\lambda},\\ &\Rightarrow&\tilde{u}_{\lambda}=\bar{u}_{\lambda}. \end{eqnarray*} This proves the uniqueness of the positive solution of \eqref{eqA}. \end{remark} Next we show that $\tilde{u}_{\lambda}\in D_+$ (respectively $\tilde{v}_{\lambda}\in -D_+$) is a lower bound (respectively an upper bound) for the elements of $S^+_{\lambda}$ (respectively $S^-_{\lambda}$).
\begin{prop}\label{prop17} If hypotheses $H(\xi)',H(\beta),H(f)_3$ hold, then \begin{itemize} \item[(a)] $\tilde{u}_{\lambda}\leq u$ for all $u\in S^+_{\lambda},\ \lambda\in(0,\lambda^+_*)$; \item[(b)] $v\leq\tilde{v}_{\lambda}$ for all $v\in S^-_{\lambda},\ \lambda\in(0,\lambda^-_*)$. \end{itemize} \end{prop} \begin{proof} \textit{(a)} Let $u\in S^+_{\lambda}$ and let $g^+_{\lambda}:\Omega\times\RR\rightarrow\RR$ be the Carath\'eodory function defined by \begin{eqnarray}\label{eq98} &&g^+_{\lambda}(z,x)=\left\{\begin{array}{ll} 0&\mbox{if}\ x<0\\ \lambda x^{q-1}+(\mu-c_{22})x-c_{23}x^{2^*-1}&\mbox{if}\ 0\leq x\leq u(z)\\ \lambda u(z)^{q-1}+(\mu-c_{22})u(z)-c_{23}u(z)^{2^*-1}&\mbox{if}\ u(z)<x. \end{array}\right. \end{eqnarray} We set $G^+_{\lambda}(z,x)=\int^x_0g^+_{\lambda}(z,s)ds$ and consider the $C^1$-functional $\hat{\psi}^+_{\lambda}:H^1(\Omega)\rightarrow\RR$ defined by $$\hat{\psi}^+_{\lambda}(y)=\frac{1}{2}\gamma(y)+\frac{\mu}{2}||y||^2_2-\int_{\Omega}G^+_{\lambda}(z,y)dz\ \mbox{for all}\ y\in H^1(\Omega).$$ From (\ref{eq3}) and (\ref{eq98}) it is clear that $\hat{\psi}^+_{\lambda}$ is coercive. Also, it is sequentially weakly lower semicontinuous. So, we can find $\bar{u}_{\lambda}\in H^1(\Omega)$ such that \begin{equation}\label{eq99} \hat{\psi}^+_{\lambda}(\bar{u}_{\lambda})=\inf[\hat{\psi}^+_{\lambda}(y):y\in H^1(\Omega)]. \end{equation} As before, since $q<2<2^*$, we have \begin{eqnarray}\label{eq100} &&\hat{\psi}^+_{\lambda}(\bar{u}_{\lambda})<0=\hat{\psi}^+_{\lambda}(0),\nonumber\\ &\Rightarrow&\bar{u}_{\lambda}\neq 0. \end{eqnarray} From (\ref{eq99}), we have \begin{eqnarray}\label{eq101} &&(\hat{\psi}^+_{\lambda})'(\bar{u}_{\lambda})=0,\nonumber\\ &\Rightarrow&\left\langle A(\bar{u}_{\lambda}),h\right\rangle+\int_{\Omega}(\xi(z)+\mu)\bar{u}_{\lambda}hdz+\int_{\partial\Omega}\beta(z)\bar{u}_{\lambda}hd\sigma=\int_{\Omega}g^+_{\lambda}(z,\bar{u}_{\lambda})hdz\\ &&\mbox{for all}\ h\in H^1(\Omega).\nonumber \end{eqnarray} In (\ref{eq101}) first we choose $h=-\bar{u}^-_{\lambda}\in H^1(\Omega)$. Then \begin{eqnarray*} &&\gamma(\bar{u}^-_{\lambda})+\mu||\bar{u}^-_{\lambda}||^2_2=0\ (\mbox{see (\ref{eq98})}),\\ &\Rightarrow&c_0||\bar{u}^-_{\lambda}||^2\leq 0\ (\mbox{see (\ref{eq3})}),\\ &\Rightarrow&\bar{u}_{\lambda}\geq 0,\ \bar{u}_{\lambda}\neq 0\ (\mbox{see (\ref{eq100})}). \end{eqnarray*} Also in (\ref{eq101}) we choose $h=(\bar{u}_{\lambda}-u)^+\in H^1(\Omega)$. Then \begin{eqnarray*} &&\left\langle A(\bar{u}_{\lambda}),(\bar{u}_{\lambda}-u)^+\right\rangle+\int_{\Omega}(\xi(z)+\mu)\bar{u}_{\lambda}(\bar{u}_{\lambda}-u)^+dz+\int_{\partial\Omega}\beta(z)\bar{u}_{\lambda}(\bar{u}_{\lambda}-u)^+d\sigma\\ &=&\int_{\Omega}\left[\lambda u^{q-1}+(\mu-c_{22})u-c_{23}u^{2^*-1}\right](\bar{u}_{\lambda}-u)^+dz\ (\mbox{see (\ref{eq98})})\\ &\leq&\int_{\Omega}[\lambda u^{q-1}+f(z,u)+\mu u](\bar{u}_{\lambda}-u)^+dz\ (\mbox{see (\ref{eq89})})\\ &=&\left\langle A(u),(\bar{u}_{\lambda}-u)^+\right\rangle+\int_{\Omega}(\xi(z)+\mu)u(\bar{u}_{\lambda}-u)^+dz+\int_{\partial\Omega}\beta(z)u(\bar{u}_{\lambda}-u)^+d\sigma\\ &&(\mbox{since}\ u\in S^+_{\lambda}),\\ \Rightarrow&&\gamma((\bar{u}_{\lambda}-u)^+)+\mu||(\bar{u}_{\lambda}-u)^+||^2_2\leq 0,\\ \Rightarrow&&c_0||(\bar{u}_{\lambda}-u)^+||^2\leq 0\ (\mbox{see (\ref{eq3})}),\\ \Rightarrow&&\bar{u}_{\lambda}\leq u. \end{eqnarray*} So, we have proved that \begin{equation}\label{eq102} \bar{u}_{\lambda}\in[0,u],\ \bar{u}_{\lambda}\neq 0.
\end{equation} On account of (\ref{eq98}) and (\ref{eq102}), equation (\ref{eq101}) becomes \begin{eqnarray*} &&\left\langle A(\bar{u}_{\lambda}),h\right\rangle+\int_{\Omega}\xi(z)\bar{u}_{\lambda}hdz+\int_{\partial\Omega}\beta(z)\bar{u}_{\lambda}hd\sigma=\int_{\Omega}\left[\lambda\bar{u}_{\lambda}^{q-1}-c_{22}\bar{u}_{\lambda}-c_{23}\bar{u}_{\lambda}^{2^*-1}\right]hdz\\ &&\mbox{for all}\ h\in H^1(\Omega),\\ &\Rightarrow&\bar{u}_{\lambda}\ \mbox{is a positive solution of \eqref{eqA}},\\ &\Rightarrow&\bar{u}_{\lambda}=\tilde{u}_{\lambda}\in D_+\ (\mbox{see Proposition \ref{prop16}}). \end{eqnarray*} From (\ref{eq102}) we conclude that $$\tilde{u}_{\lambda}\leq u\ \mbox{for all}\ u\in S^+_{\lambda}.$$ \textit{(b)} In a similar fashion we show that $$v\leq\tilde{v}_{\lambda}\ \mbox{for all}\ v\in S^-_{\lambda}.$$ \end{proof} Now we are ready to produce extremal constant sign solutions for problem \eqref{eqP}, that is, a smallest element for the set $S^+_{\lambda}$ $(\lambda\in (0,\lambda^+_*))$ and a biggest element for the set $S^-_{\lambda}$ $(\lambda\in(0,\lambda^-_*))$. \begin{theorem}\label{th18} If hypotheses $H(\xi)',H(\beta),H(f)_3$ hold, then \begin{itemize} \item[(a)] for every $\lambda\in(0,\lambda^+_*)$ the set $S^+_{\lambda}$ has a smallest element $u^*_{\lambda}\in D_+$; \item[(b)] for every $\lambda\in(0,\lambda^-_*)$ the set $S^-_{\lambda}$ has a biggest element $v^*_{\lambda}\in-D_+.$ \end{itemize} \end{theorem} \begin{proof} \textit{(a)} Recall that $S^+_{\lambda}$ $(\lambda\in(0,\lambda^+_*))$ is downward directed. So, invoking Lemma 3.10 of Hu and Papageorgiou \cite[p. 178]{9}, we can find a decreasing sequence $\{u_n\}_{n\geq 1}\subseteq S^+_{\lambda}$ such that $$\inf S^+_{\lambda}=\inf\limits_{n\geq 1}u_n.$$ Evidently, $\{u_n\}_{n\geq 1}\subseteq H^1(\Omega)$ is bounded. So, we may assume that \begin{equation}\label{eq103} u_n\stackrel{w}{\rightarrow}u^*_{\lambda}\ \mbox{in}\ H^1(\Omega)\ \mbox{and}\ u_n\rightarrow u^*_{\lambda}\ \mbox{in}\ L^{\frac{2s}{s-1}}(\Omega)\ \mbox{and in}\ L^2(\partial\Omega). \end{equation} We have \begin{eqnarray}\label{eq104} &&\left\langle A(u_n),h\right\rangle+\int_{\Omega}\xi(z)u_nhdz+\int_{\partial\Omega}\beta(z)u_nhd\sigma=\lambda\int_{\Omega}u^{q-1}_nhdz+\int_{\Omega}f(z,u_n)hdz\\ &&\mbox{for all}\ h\in H^1(\Omega),\ \mbox{all}\ n\in\NN.\nonumber \end{eqnarray} In (\ref{eq104}) we pass to the limit as $n\rightarrow\infty$ and use (\ref{eq103}). We obtain \begin{eqnarray}\label{eq105} &&\left\langle A(u^*_{\lambda}),h\right\rangle+\int_{\Omega}\xi(z)u^*_{\lambda}hdz+\int_{\partial\Omega}\beta(z)u^*_{\lambda}hd\sigma=\lambda\int_{\Omega}(u^*_{\lambda})^{q-1}hdz+\int_{\Omega}f(z,u^*_{\lambda})hdz\\ &&\mbox{for all}\ h\in H^1(\Omega).\nonumber \end{eqnarray} Also we have \begin{eqnarray}\label{eq106} &&\tilde{u}_{\lambda}\leq u_n\ \mbox{for all}\ n\in\NN\ (\mbox{see Proposition \ref{prop17}}),\nonumber\\ &\Rightarrow&\tilde{u}_{\lambda}\leq u^*_{\lambda}\ (\mbox{see (\ref{eq103})}). \end{eqnarray} Then from (\ref{eq105}) and (\ref{eq106}) we infer that $$u^*_{\lambda}\in S^+_{\lambda}\ \mbox{and}\ u^*_{\lambda}=\inf S^+_{\lambda}.$$ \textit{(b)} Similarly we produce $v^*_{\lambda}\in-D_+$, the biggest element of $S^-_{\lambda}$. \end{proof} Using these extremal constant sign solutions of \eqref{eqP}, we can generate nodal (that is, sign changing) solutions.
\begin{prop}\label{prop19} If hypotheses $H(\xi)',H(\beta),H(f)_3$ hold and $\lambda\in(0,\lambda_*)$ (recall $\lambda_*=\min\{\lambda^+_*,\lambda^-_*\}$), then problem \eqref{eqP} admits a nodal solution $\hat{y}\in C^1(\overline{\Omega})\backslash\{0\}$. \end{prop} \begin{proof} Let $u^*_{\lambda}\in D_+$ and $v^*_{\lambda}\in-D_+$ be the two extremal constant sign solutions of \eqref{eqP} produced in Theorem \ref{th18}. Using them we introduce the following Carath\'eodory function \begin{eqnarray}\label{eq107} k_{\lambda}(z,x)=\left\{\begin{array}{ll} \lambda|v^*_{\lambda}(z)|^{q-2}v^*_{\lambda}(z)+f(z,v^*_{\lambda}(z))+\mu v^*_{\lambda}(z)&\mbox{if}\ x<v^*_{\lambda}(z)\\ \lambda|x|^{q-2}x+f(z,x)+\mu x&\mbox{if}\ v^*_{\lambda}(z)\leq x\leq u^*_{\lambda}(z)\\ \lambda u^*_{\lambda}(z)^{q-1}+f(z,u^*_{\lambda}(z))+\mu u^*_{\lambda}(z)&\mbox{if}\ u^*_{\lambda}(z)<x. \end{array}\right. \end{eqnarray} Let $K_{\lambda}(z,x)=\int^x_0k_{\lambda}(z,s)ds$ and consider the $C^1$-functional $\eta_{\lambda}:H^1(\Omega)\rightarrow\RR$ defined by $$\eta_{\lambda}(u)=\frac{1}{2}\gamma(u)+\frac{\mu}{2}||u||^2_2-\int_{\Omega}K_{\lambda}(z,u)dz\ \mbox{for all}\ u\in H^1(\Omega).$$ Also we consider the positive and negative truncations of $k_{\lambda}(z,\cdot)$ (that is, $k^{\pm}_{\lambda}(z,x)=k_{\lambda}(z,\pm x^{\pm})$), set $K^{\pm}_{\lambda}(z,x)=\int^x_0k^{\pm}_{\lambda}(z,s)ds$ and consider the corresponding $C^1$-functionals $\eta^{\pm}_{\lambda}:H^1(\Omega)\rightarrow\RR$ defined by $$\eta^{\pm}_{\lambda}(u)=\frac{1}{2}\gamma(u)+\frac{\mu}{2}||u||^2_2-\int_{\Omega}K^{\pm}_{\lambda}(z,u)dz\ \mbox{for all}\ u\in H^1(\Omega).$$ \begin{claim}\label{cl1} $K_{\eta_{\lambda}}\subseteq[v^*_{\lambda},u^*_{\lambda}]\cap C^1(\overline{\Omega}),\ K_{\eta^+_{\lambda}}=\{0,u^*_{\lambda}\},\ K_{\eta^-_{\lambda}}=\{0,v^*_{\lambda}\}$. \end{claim} Let $u\in K_{\eta_{\lambda}}$. We have \begin{equation}\label{eq108} \left\langle A(u),h\right\rangle+\int_{\Omega}(\xi(z)+\mu)uhdz+\int_{\partial\Omega}\beta(z)uhd\sigma=\int_{\Omega}k_{\lambda}(z,u)hdz\ \mbox{for all}\ h\in H^1(\Omega). \end{equation} In (\ref{eq108}) we choose $h=(u-u^*_{\lambda})^+\in H^1(\Omega)$. Then \begin{eqnarray*} &&\left\langle A(u),(u-u^*_{\lambda})^+\right\rangle+\int_{\Omega}(\xi(z)+\mu)u(u-u^*_{\lambda})^+dz+\int_{\partial\Omega}\beta(z)u(u-u^*_{\lambda})^+d\sigma\\ &=&\int_{\Omega}[\lambda(u^*_{\lambda})^{q-1}+f(z,u^*_{\lambda})+\mu u^*_{\lambda}](u-u^*_{\lambda})^+dz\ (\mbox{see (\ref{eq107})})\\ &=&\left\langle A(u^*_{\lambda}),(u-u^*_{\lambda})^+\right\rangle+\int_{\Omega}(\xi(z)+\mu)u^*_{\lambda}(u-u^*_{\lambda})^+dz+\int_{\partial\Omega}\beta(z)u^*_{\lambda}(u-u^*_{\lambda})^+d\sigma\\ &&(\mbox{since}\ u^*_{\lambda}\in S^+_{\lambda}),\\ &\Rightarrow&u\leq u^*_{\lambda}. \end{eqnarray*} Similarly, choosing $h=(v^*_{\lambda}-u)^+\in H^1(\Omega)$ in (\ref{eq108}), we show that $$v^*_{\lambda}\leq u.$$ So, from the above and the regularity theory of Wang \cite{28}, we have \begin{eqnarray*} &&u\in[v^*_{\lambda},u^*_{\lambda}]\cap C^1(\overline{\Omega}),\\ &\Rightarrow&K_{\eta_{\lambda}}\subseteq[v^*_{\lambda},u^*_{\lambda}]\cap C^1(\overline{\Omega}). \end{eqnarray*} In a similar fashion, we show that $$K_{\eta^+_{\lambda}}\subseteq[0,u^*_{\lambda}]\ \mbox{and}\ K_{\eta^-_{\lambda}}\subseteq[v^*_{\lambda},0].$$ The extremality of $u^*_{\lambda}\in D_+$ and of $v^*_{\lambda}\in-D_+$ implies that $$K_{\eta^+_{\lambda}}=\{0,u^*_{\lambda}\}\ \mbox{and}\ K_{\eta^-_{\lambda}}=\{0,v^*_{\lambda}\}.$$ This proves Claim \ref{cl1}.
\begin{claim}\label{cl2} $u^*_{\lambda}\in D_+$ and $v^*_{\lambda}\in -D_+$ are local minimizers of $\eta_{\lambda}$. \end{claim} Evidently, $\eta^+_{\lambda}$ is coercive (see (\ref{eq107})) and sequentially weakly lower semicontinuous. So, we can find $\tilde{u}^*_{\lambda}\in H^1(\Omega)$ such that \begin{equation}\label{eq109} \eta^+_{\lambda}(\tilde{u}^*_{\lambda})=\inf[\eta^+_{\lambda}(u):u\in H^1(\Omega)]. \end{equation} As before, since $q<2<2^*$, we have \begin{eqnarray}\label{eq110} &&\eta^+_{\lambda}(\tilde{u}^*_{\lambda})<0=\eta^+_{\lambda}(0),\nonumber\\ &\Rightarrow&\tilde{u}^*_{\lambda}\neq 0. \end{eqnarray} From (\ref{eq109}) we have that \begin{equation}\label{eq111} \tilde{u}^*_{\lambda}\in K_{\eta^+_{\lambda}}. \end{equation} From (\ref{eq110}), (\ref{eq111}) and Claim \ref{cl1} it follows that $\tilde{u}^*_{\lambda}=u^*_{\lambda}\in D_+$. Note that \begin{eqnarray*} &&\left.\eta_{\lambda}\right|_{C_+}=\left.\eta^+_{\lambda}\right|_{C_+}\ (\mbox{see (\ref{eq107})}),\\ &\Rightarrow&u^*_{\lambda}\ \mbox{is a local}\ C^1(\overline{\Omega})-\mbox{minimizer of}\ \eta_{\lambda},\\ &\Rightarrow&u^*_{\lambda}\ \mbox{is a local}\ H^1(\Omega)-\mbox{minimizer of}\ \eta_{\lambda}\ (\mbox{see Proposition \ref{prop4}}). \end{eqnarray*} Similarly for $v^*_{\lambda}\in-D_+$, using this time the functional $\eta^-_{\lambda}$. This proves Claim \ref{cl2}. Without any loss of generality, we may assume that $$\eta_{\lambda}(v^*_{\lambda})\leq\eta_{\lambda}(u^*_{\lambda}).$$ The reasoning is similar if the opposite inequality holds. Also, we assume that $K_{\eta_{\lambda}}$ is finite; otherwise, on account of Claim \ref{cl1}, we already have an infinity of smooth nodal solutions and so we are done. Then Claim \ref{cl2} implies that we can find $\rho\in(0,1)$ small such that \begin{equation}\label{eq112} \eta_{\lambda}(v^*_{\lambda})\leq\eta_{\lambda}(u^*_{\lambda})<\inf[\eta_{\lambda}(u):||u-u^*_{\lambda}||=\rho]=m_{\lambda}, \ ||v^*_{\lambda}-u^*_{\lambda}||>\rho \end{equation} (see Aizicovici, Papageorgiou and Staicu \cite{1}, proof of Proposition 29). From (\ref{eq3}) and (\ref{eq107}) it is clear that $\eta_{\lambda}$ is coercive. Hence \begin{equation}\label{eq113} \eta_{\lambda}\ \mbox{satisfies the C-condition}. \end{equation} Then (\ref{eq112}) and (\ref{eq113}) permit the use of Theorem \ref{th1} (the mountain pass theorem). So, we can find $\hat{y}\in H^1(\Omega)$ such that \begin{equation}\label{eq114} \hat{y}\in K_{\eta_{\lambda}}\ \mbox{and}\ m_{\lambda}\leq\eta_{\lambda}(\hat{y}). \end{equation} Claim \ref{cl1} together with (\ref{eq112}) and (\ref{eq114}) imply that $$\hat{y}\in[v^*_{\lambda},u^*_{\lambda}]\cap C^1(\overline{\Omega}),\ \hat{y}\not\in\{u^*_{\lambda},v^*_{\lambda}\}.$$ Since $\hat{y}$ is a critical point of $\eta_{\lambda}$ of mountain pass type, we have \begin{equation}\label{eq115} C_1(\eta_{\lambda},\hat{y})\neq 0 \end{equation} (see Motreanu, Motreanu and Papageorgiou \cite[Corollary 6.81, p. 168]{15}). On the other hand, the presence of the concave term $\lambda|x|^{q-2}x\ (q<2)$ in the reaction function, hypothesis $H(f)_3(iii)$ and Lemma 3.4 of D'Agui, Marano and Papageorgiou \cite{3} imply that \begin{equation}\label{eq116} C_k(\eta_{\lambda},0)=0\ \mbox{for all}\ k\in\NN_0. \end{equation} Comparing (\ref{eq115}) and (\ref{eq116}), we conclude that \begin{eqnarray*} &&\hat{y}\neq 0,\\ &\Rightarrow&\hat{y}\in C^1(\overline{\Omega})\backslash\{0\}\ \mbox{is nodal}.
\end{eqnarray*} \end{proof} So, we can state the following multiplicity theorem for problem \eqref{eqP}. \begin{theorem}\label{th20} If hypotheses $H(\xi)',H(\beta),H(f)_3$ hold, then we can find a parameter value $\lambda_*>0$ such that for every $\lambda\in(0,\lambda_*)$ problem \eqref{eqP} has at least five nontrivial smooth solutions \begin{eqnarray*} &&u_0,\hat{u}\in D_+,\ v_0,\hat{v}\in-D_+,\\ &&\hat{y}\in C^1(\overline{\Omega})\ \mbox{nodal}. \end{eqnarray*} \end{theorem} If we strengthen the regularity of $f(z,\cdot)$, we can improve Theorem~\ref{th20} and produce a sixth nontrivial smooth solution. However, we do not provide any sign information for this sixth solution. The new conditions on the perturbation term $f(z,x)$ are the following: \smallskip $H(f)_4:$ $f:\Omega\times\RR\rightarrow\RR$ is a measurable function such that for almost all $z\in\Omega,\ f(z,\cdot)\in C^1(\RR)$ and \begin{itemize} \item[(i)] for every $\rho>0$, there exists $a_{\rho}\in L^{\infty}(\Omega)$ such that \begin{eqnarray*} &&|f'_x(z,x)|\leq a_{\rho}(z)\ \mbox{for almost all}\ z\in\Omega,\ \mbox{all}\ |x|\leq\rho\\ &\mbox{and}&\lim\limits_{x\rightarrow\pm\infty}\frac{f'_x(z,x)}{|x|^{2^*-2}}=0\ \mbox{uniformly for almost all}\ z\in\Omega; \end{eqnarray*} \item[(ii)] $\lim\limits_{x\rightarrow\pm\infty}\frac{F(z,x)}{x^2}=+\infty$ uniformly for almost all $z\in\Omega$ and there exists $e\in L^1(\Omega)$ such that $$\tau(z,x)\leq\tau(z,y)+e(z)\ \mbox{for almost all}\ z\in\Omega,\ \mbox{all}\ 0\leq x\leq y\ \mbox{and all}\ y\leq x\leq 0$$ (recall that $F(z,x)=\int^x_0f(z,s)ds$ and $\tau(z,x)=f(z,x)x-2F(z,x)$); \item[(iii)] $f(z,0)=0$ for almost all $z\in\Omega$, $f'_x(z,0)=\lim\limits_{x\rightarrow 0}\frac{f(z,x)}{x}$ uniformly for almost all $z\in\Omega$ and $$f'_x(\cdot,0)\in L^{\infty}(\Omega),\ f'_x(z,0)\leq\hat{\lambda}_1\ \mbox{for almost all}\ z\in\Omega,\ f'_x(\cdot,0)\not\equiv\hat{\lambda}_1.$$ \end{itemize} Under the above hypotheses, we have $\varphi_{\lambda}\in C^2(H^1(\Omega)\backslash\{0\})$. \begin{prop}\label{prop21} If hypotheses $H(\xi)',H(\beta),H(f)_4$ hold and $\lambda\in(0,\lambda_*)$, then problem \eqref{eqP} admits a sixth nontrivial smooth solution $$\tilde{y}\in C^1(\overline{\Omega}).$$ \end{prop} \begin{proof} From Proposition \ref{prop15} we know that \begin{equation}\label{eq117} C_k(\varphi_{\lambda},u_0)=C_k(\varphi_{\lambda},v_0)=\delta_{k,0}\ZZ\ \mbox{for all}\ k\in\NN_0. \end{equation} Also, recall (see the proof of Proposition \ref{prop15}) that $\hat{u}\in D_+$ is a critical point of mountain pass type for $\hat{\varphi}^+_{\lambda}$ and $\hat{v}\in-D_+$ is a critical point of mountain pass type for $\hat{\varphi}^-_{\lambda}$.
Note that \begin{eqnarray}\label{eq118} &&\left.\varphi_{\lambda}\right|_{C_+}=\left.\hat{\varphi}^+_{\lambda}\right|_{C_+}\ \mbox{and}\ \left.\varphi_{\lambda}\right|_{-C_+}=\left.\hat{\varphi}^-_{\lambda}\right|_{-C_+}\ (\mbox{see (\ref{eq72}), (\ref{eq73})}),\nonumber\\ &\Rightarrow&C_k(\left.\varphi_{\lambda}\right|_{C^1(\overline{\Omega})},\hat{u})=C_k(\left.\hat{\varphi}^+_{\lambda}\right|_{C^1(\overline{\Omega})},\hat{u})\ \mbox{and}\ C_k(\left.\varphi_{\lambda}\right|_{C^1(\overline{\Omega})},\hat{v})=C_k(\left.\hat{\varphi}^-_{\lambda}\right|_{C^1(\overline{\Omega})},\hat{v})\nonumber\\ &&\mbox{for all}\ k\in\NN_0\ (\mbox{recall that}\ \hat{u}\in D_+\ \mbox{and}\ \hat{v}\in-D_+)\nonumber\\ &\Rightarrow&C_k(\varphi_{\lambda},\hat{u})=C_k(\hat{\varphi}^+_{\lambda},\hat{u})\ \mbox{and}\ C_k(\varphi_{\lambda},\hat{v})=C_k(\hat{\varphi}^-_{\lambda},\hat{v})\ \mbox{for all}\ k\in\NN_0\ (\mbox{see Palais \cite{17}})\nonumber\\ &\Rightarrow&C_k(\varphi_{\lambda},\hat{u})=C_k(\varphi_{\lambda},\hat{v})=\delta_{k,1}\ZZ\\ &&(\mbox{see Motreanu, Motreanu and Papageorgiou \cite[Corollary 6.102, p. 177]{15}}).\nonumber \end{eqnarray} Also, as we already pointed out in the proof of Proposition \ref{prop19} (see (\ref{eq116}) and recall $\left.\eta_{\lambda}\right|_{[v^*_{\lambda},u^*_{\lambda}]}=\left.\varphi_{\lambda}\right|_{[v^*_{\lambda},u^*_{\lambda}]}$), we have \begin{equation}\label{eq119} C_k(\varphi_{\lambda},0)=0\ \mbox{for all}\ k\in\NN_0. \end{equation} From Proposition \ref{prop6}, we have \begin{equation}\label{eq120} C_k(\varphi_{\lambda},\infty)=0\ \mbox{for all}\ k\in\NN_0. \end{equation} Let $\hat{m}=\max\{||u_0||_{\infty},||\hat{u}||_{\infty},||v_0||_{\infty},||\hat{v}||_{\infty}\}$. Hypotheses $H(f)_4(i),(iii)$ imply that we can find $\hat{\xi}>0$ such that for almost all $z\in\Omega$, the function $$x\mapsto f(z,x)+\hat{\xi}x$$ is nondecreasing on $[-\hat{m},\hat{m}]$. With $\hat{y}\in C^1(\overline{\Omega})\backslash\{0\}$ being the nodal solution we have \begin{eqnarray*} &&-\Delta\hat{y}(z)+(\xi(z)+\hat{\xi})\hat{y}(z)\\ &=&\lambda|\hat{y}(z)|^{q-2}\hat{y}(z)+f(z,\hat{y}(z))+\hat{\xi}\hat{y}(z)\\ &\leq&\lambda u^*_{\lambda}(z)^{q-1}+f(z,u^*_{\lambda}(z))+\hat{\xi}u^*_{\lambda}(z)\ (\mbox{since}\ \hat{y}\leq u^*_{\lambda},\ \mbox{see Proposition \ref{prop19}})\\ &=&-\Delta u^*_{\lambda}(z)+(\xi(z)+\hat{\xi})u^*_{\lambda}(z)\ \mbox{for almost all}\ z\in\Omega,\\ \Rightarrow&&\Delta(u^*_{\lambda}-\hat{y})(z)\leq[||\xi^+||_{\infty}+\hat{\xi}](u^*_{\lambda}-\hat{y})(z)\ \mbox{for almost all}\ z\in\Omega\ (\mbox{see hypothesis}\ H(\xi)')\\ \Rightarrow&&u^*_{\lambda}-\hat{y}\in D_+\ \mbox{(by the strong maximum principle)}. \end{eqnarray*} Similarly we show that $$\hat{y}-v^*_{\lambda}\in D_+.$$ So, finally we have \begin{equation}\label{eq121} \hat{y}\in {\rm int}_{C^1(\overline{\Omega})}[v^*_{\lambda},u^*_{\lambda}]. \end{equation} Recall that \begin{eqnarray}\label{eq122} &&\eta_{\lambda}\left|_{[v^*_{\lambda},u^*_{\lambda}]}=\varphi_{\lambda}\right|_{[v^*_{\lambda},u^*_{\lambda}]}\ \mbox{see (\ref{eq107})},\nonumber\\ &\Rightarrow&C_k(\eta_{\lambda},\hat{y})=C_k(\varphi_{\lambda},\hat{y})\ \mbox{for all}\ k\in\NN_0\nonumber\\ &&(\mbox{as before from (\ref{eq121}) and Palais \cite{17}})\nonumber\\ &\Rightarrow&C_k(\varphi_{\lambda},\hat{y})=\delta_{k,1}\ZZ\ \mbox{for all}\ k\in\NN_0\\ &&(\mbox{since}\ \hat{y}\in K_{\eta_{\lambda}}\ \mbox{is of mountain pass type, see \cite[p.
177]{15}}).\nonumber \end{eqnarray} Suppose that $K_{\varphi_{\lambda}}=\{0,u_0,v_0,\hat{u},\hat{v},\hat{y}\}$. Then from (\ref{eq117}), (\ref{eq118}), (\ref{eq119}), (\ref{eq120}) and (\ref{eq122}) and using the Morse relation with $t=-1$ (see (\ref{eq6})), we have $$2(-1)^0+2(-1)^1+(-1)^1=0,$$ a contradiction. So, there exists $\tilde{y}\in H^1(\Omega)$ such that $$\tilde{y}\in K_{\varphi_{\lambda}}\subseteq C^1(\overline{\Omega})\ \mbox{and}\ \tilde{y}\notin\{0,u_0,v_0,\hat{u},\hat{v},\hat{y}\}.$$ This is the sixth nontrivial smooth solution of problem \eqref{eqP}. \end{proof} So, we can state the following new multiplicity theorem for problem \eqref{eqP}. \begin{theorem}\label{th22} If hypotheses $H(\xi)',H(\beta),H(f)_4$ hold, then there exists a parameter value $\lambda_*>0$ such that for every $\lambda\in(0,\lambda_*)$ problem \eqref{eqP} has at least six nontrivial smooth solutions \begin{eqnarray*} &&u_0,\hat{u}\in D_+,\ v_0,\hat{v}\in-D_+,\\ &&\hat{y}\in C^1(\overline{\Omega})\ \mbox{nodal and}\ \tilde{y}\in C^1(\overline{\Omega}). \end{eqnarray*} \end{theorem} \section{Infinitely Many Solutions} In this section, we generate an infinity of nontrivial smooth solutions by introducing symmetry on the reaction term. We prove two such results. The first concerns problem \eqref{eqP} and the solutions we produce are nodal. The second result deals with problem (\ref{eq1}) and produces an infinity of nontrivial smooth solutions but without any sign information. For the first theorem, the hypotheses on the perturbation term $f(z,x)$ are the following: \smallskip $H(f)_5:$ $f:\Omega\times\RR\rightarrow\RR$ is a Carath\'eodory function which satisfies hypotheses $H(f)_3$ and in addition for almost all $z\in\Omega,\ f(z,\cdot)$ is odd. \begin{theorem}\label{th23} If hypotheses $H(\xi)',H(\beta),H(f)_5$ hold, then we can find a parameter value $\lambda_*>0$ such that for every $\lambda\in(0,\lambda_*)$ problem \eqref{eqP} admits a sequence $\{u_n\}_{n\geq 1}\subseteq C^1(\overline{\Omega})$ of distinct nodal solutions such that $$u_n\rightarrow 0\ \mbox{in}\ C^1(\overline{\Omega})\ \mbox{as}\ n\rightarrow\infty.$$ \end{theorem} \begin{proof} Let $\lambda_*=\min\{\lambda^+_*,\lambda^-_*\}$ be as in Proposition \ref{prop15}(c) and consider the $C^1$-functional $\eta_{\lambda}:H^1(\Omega)\rightarrow\RR$ introduced in the proof of Proposition \ref{prop19}. We have the following properties: \begin{itemize} \item $\eta_{\lambda}$ is even; \item $\eta_{\lambda}$ is coercive (hence it is bounded below and it satisfies the C-condition); \item $\eta_{\lambda}(0)=0$. \end{itemize} Let $V$ be a finite dimensional subspace of $H^1(\Omega)$. So, all norms on $V$ are equivalent. Also, from (\ref{eq107}) and hypotheses $H(f)_5(i),(iii)=H(f)_3(i),(iii)$ we see that we can find $c_{24}>0$ such that \begin{equation}\label{eq123} K_{\lambda}(z,x)\geq\frac{\lambda}{q}|x|^q-c_{24}|x|^2\ \mbox{for almost all}\ z\in\Omega,\ \mbox{all}\ x\in\RR. \end{equation} So, for every $u\in V$ and recalling that all norms are equivalent, we have \begin{eqnarray*} &&\eta_{\lambda}(u)\leq c_{25}||u||^2-\lambda c_{26}||u||^q\ \mbox{for some}\ c_{25},c_{26}>0\\ &&(\mbox{see (\ref{eq123}) and hypotheses}\ H(\xi)',H(\beta)).
\end{eqnarray*} Since $q<2$, we can find $\rho_{\lambda}\in(0,1)$ small such that $$\eta_{\lambda}(u)<0\ \mbox{for all}\ u\in V\ \mbox{with}\ ||u||=\rho_{\lambda}.$$ Therefore we can apply Theorem \ref{th3} and find $\{u_n\}_{n\geq 1}\subseteq K_{\eta_\lambda}\subseteq[v^*_{\lambda},u^*_{\lambda}]\cap C^1(\overline{\Omega})$ (see Claim \ref{cl1} in the proof of Proposition \ref{prop19}) such that \begin{equation}\label{eq124} u_n\rightarrow 0\ \mbox{in}\ H^1(\Omega). \end{equation} From the regularity theory of Wang \cite{28} we know that \begin{eqnarray}\label{eq125} &&u_n\in C^{1,\alpha}(\overline{\Omega})\ \mbox{with}\ \alpha=1-\frac{N}{s}>0\ \mbox{and}\ ||u_n||_{C^{1,\alpha}(\overline{\Omega})}\leq c_{27}\\ &&\mbox{for all}\ n\in\NN\ \mbox{and some}\ c_{27}>0.\nonumber \end{eqnarray} Exploiting the compact embedding of $C^{1,\alpha}(\overline{\Omega})$ into $C^1(\overline{\Omega})$, from (\ref{eq124}) and (\ref{eq125}) we infer that $$u_n\rightarrow 0\ \mbox{in}\ C^1(\overline{\Omega})\ \mbox{as}\ n\rightarrow\infty .$$ Moreover, since $u_n\in[v^*_{\lambda},u^*_{\lambda}]$ for all $n\in\NN$, $u_n\in C^1(\overline{\Omega})$ is nodal. \end{proof} The second result of this section is about problem (\ref{eq1}). For this result the hypotheses on the reaction term $f(z,x)$ are the following: \smallskip $H(f)_6:$ $f:\Omega\times\RR\rightarrow\RR$ is a Carath\'eodory function such that for almost all $z\in\Omega$, $f(z,\cdot)$ is odd and hypotheses $H(f)_6(i),(ii)$ are the same as the corresponding hypotheses $H(f)_5(i),(ii)$. \begin{remark} We point out that in the above hypotheses there are no conditions on $f(z,\cdot)$ near zero. \end{remark} \begin{theorem}\label{th24} If hypotheses $H(\xi),H(\beta),H(f)_6$ hold, then problem (\ref{eq1}) admits an unbounded sequence of solutions $\{u_n\}_{n\geq 1}\subseteq C^1(\overline{\Omega})$. \end{theorem} \begin{proof} From Proposition \ref{prop5} we know that the energy (Euler) functional $\varphi$ satisfies the C-condition and $\varphi(0)=0$. We consider the following orthogonal direct sum decomposition of $H^1(\Omega)$ $$H^1(\Omega)=H_-\oplus E(0)\oplus H_+$$ with $H_-=\overset{m_-}{\underset{\mathrm{k=1}}\oplus}E(\hat{\lambda}_k),\ H_+=\overline{{\underset{\mathrm{k\geq m_+}}\oplus}E(\hat{\lambda}_k)}$ (see Section 2). Then every $u\in H^1(\Omega)$ admits a unique decomposition $$u=\bar{u}+u^0+\hat{u}\ \mbox{with}\ \bar{u}\in H_-,\ u^0\in E(0),\ \hat{u}\in H_+\,.$$ Hypothesis $H(f)_6(i)$ implies that given $\epsilon>0$, we can find $c_{28}=c_{28}(\epsilon)>0$ such that \begin{equation}\label{eq126} F(z,x)\leq\frac{\epsilon}{2}|x|^{2^*}+c_{28}|x|\ \mbox{for almost all}\ z\in\Omega,\ \mbox{all}\ x\in\RR. \end{equation} Let $u\in H_+$. We have \begin{eqnarray}\label{eq127} \varphi(u)&=&\frac{1}{2}\gamma(u)-\int_{\Omega}F(z,u)dz\nonumber\\ &\geq&\frac{1}{2}\gamma(u)-\frac{\epsilon}{2}||u||^{2^*}_{2^*}-c_{28}||u||_1\ (\mbox{see (\ref{eq126})})\nonumber\\ &\geq&\frac{1}{2}\left[\gamma(u)-\epsilon c_{29}||u||^{2^*}\right]-c_{28}||u||_1\ \mbox{for some}\ c_{29}>0\nonumber\\ &\geq&\left[2c_{30}||u||^2-\epsilon c_{29}||u||^{2^*}\right]-c_{28}||u||_1\ \mbox{for some}\ c_{30}>0\ (\mbox{recall that}\ u\in H_+)\nonumber\\ &=&\left[c_{30}||u||^2-\epsilon c_{29}||u||^{2^*}\right]+\left[c_{30}||u||^2-c_{28}||u||_1\right].
\end{eqnarray} If $u\in H_+$ is such that $$||u||=\hat{\rho}<\left(\frac{c_{30}}{\epsilon c_{29}}\right)^{\frac{1}{2^*-2}},$$ then we have $$c_{30}||u||^2-\epsilon c_{29}||u||^{2^*}>0.$$ Also, for $l\geq m_+$ and $u\in\overline{{\underset{\mathrm{k\geq l}}\oplus}E(\hat{\lambda}_k)}$, we have $$c_{28}||u||_1\leq c_{31}||u||_2\leq\frac{c_{31}}{\sqrt{\hat{\lambda}_l}}||u||\ \mbox{for some}\ c_{31}>0.$$ Therefore $$c_{30}||u||^2-c_{28}||u||_1\geq c_{30}||u||^2-\frac{c_{31}}{\sqrt{\hat{\lambda}_l}}||u||\ \mbox{for}\ u\in V_l=\overline{{\underset{\mathrm{k\geq l}}\oplus}E(\hat{\lambda}_k)}.$$ So, if $u\in V_l$ with $l\geq m_+$ big and with $||u||=\hat{\rho}$, we have $$c_{30}\hat{\rho}^2-\frac{c_{31}}{\sqrt{\hat{\lambda}_l}}\hat{\rho}>0.$$ Returning to (\ref{eq127}), we see that \begin{equation}\label{eq128} \left.\varphi\right|_{V_l\cap\partial B_{\hat{\rho}}}>0. \end{equation} Next let $Z\subseteq H^1(\Omega)$ be a finite dimensional subspace. From hypotheses $H(f)_6(i),(ii)$, we know that given any $\eta>0$, we can find $c_{32}=c_{32}(\eta)>0$ such that \begin{equation}\label{eq129} F(z,x)\geq\frac{\eta}{2}x^2-c_{32}\ \mbox{for almost all}\ z\in\Omega,\ \mbox{all}\ x\in\RR. \end{equation} For $u\in Z$ we have \begin{eqnarray}\label{eq130} \varphi(u)&=&\frac{1}{2}\gamma(u)-\int_{\Omega}F(z,u)dz\nonumber\\ &\leq&\frac{1}{2}\gamma(u)-\frac{\eta}{2}||u||^2_2+c_{32}|\Omega|_N\ (\mbox{see (\ref{eq129})})\nonumber\\ &\leq&c_{33}||u||^2-\eta c_{34}||u||^2+c_{32}|\Omega|_N\ \mbox{for some}\ c_{33},c_{34}>0 \end{eqnarray} (since $Z$ is finite dimensional all norms are equivalent). But $\eta>0$ is arbitrary. So, we choose $\eta>\frac{c_{33}}{c_{34}}>0$ and from (\ref{eq130}) we infer that \begin{equation}\label{eq131} \varphi|_{Z}\ \mbox{is anticoercive}. \end{equation} Then (\ref{eq128}) and (\ref{eq131}) permit the use of Theorem \ref{th2} (the symmetric mountain pass theorem). So, we can find $\{u_n\}_{n\geq 1}\subseteq H^1(\Omega)$ such that $$u_n\in K_{\varphi}\ \mbox{for all}\ n\in\NN\ \mbox{and}\ ||u_n||\rightarrow+\infty\,.$$ Hence each $u_n$ is a solution of (\ref{eq1}), $u_n\in C^1(\overline{\Omega})$, and $||u_n||_{C^1(\overline{\Omega})}\rightarrow+\infty$. \end{proof} \medskip{\bf Acknowledgments.} This research was supported by the Slovenian Research Agency grants P1-0292, J1-8131, J1-7025.
\section{Introduction} \label{S:0} Solution of any problem in perturbative quantum field theory includes several steps: \begin{enumerate} \item diagram generation, classification into topologies, routing momenta; \item tensor and Dirac algebra in numerators, reduction to scalar Feynman integrals; \item reduction of scalar Feynman integrals to master integrals; \item calculation of master integrals. \end{enumerate} For a sufficiently complicated problem, all of them must be completely automated. The number of Feynman diagrams in a problem can be very large. They can be classified into a moderate number of generic topologies. In each of them, a large number of scalar Feynman integrals is required (especially if expansion in some small parameter up to a high degree is involved). They are not independent: there are integration-by-parts (IBP) recurrence relations. These relations can be used to reduce all these scalar Feynman integrals to a small number of master integrals~\cite{CT:81} (see also textbooks~\cite{FIC,G:07}). The number of master integrals can be proved to be finite~\cite{SP:10}, but the proof is not constructive --- it provides no method of reduction. IBP relations are sometimes used together with recurrence relations in $d$~\cite{T:96,L:10}; we don't discuss these relations here. Several methods of calculation of master integrals also require the reduction problem to be solved: differential equations~\cite{K:91,R:97} (see also Chapter~7 in~\cite{FIC} and the review~\cite{AM:07}), recurrence relations in $d$~\cite{L:10}, gluing~\cite{BC:10}. Other methods, e.\,g., Mellin--Barnes representation, don't depend on IBP. Here we shall discuss only reduction (step 3); methods of calculation of master integrals are a separate (and large) topic. If the problem involves small ratios of external momenta and masses, there is an additional step --- expansion in these ratios using the method of regions~\cite{S:02}. It can produce new kinds of denominators in Feynman integrals. It is done after (or before) step 2. \section{Feynman graphs and Feynman integrals} \label{S:Intro} Let's consider a diagram with external momenta $p_1$, \dots, $p_E$. If we are considering a generic kinematic configuration, this means that the diagram has $E+1$ external legs (it can have a larger number of legs, if we are interested in a restricted kinematics with linearly dependent momenta of these legs). An $L$-loop diagram has $L$ loop (integration) momenta $k_1$, \dots, $k_L$. Let $q_i = k_1$, \dots, $k_L$, $p_1$, \dots, $p_E$ ($i\in[1,M]$, $M=L+E$) be all the momenta. The diagram has $I$ internal lines with the momenta $l_1$, \dots, $l_I$; they are linear combinations of $q_i$. Let $s_{ij}=q_i\cdot q_j$ ($j\ge i$); $N_E=E(E+1)/2$ scalar products $s_{ij}$ with $i>L$ are external kinematic quantities, and \begin{equation} N = \frac{L(L+1)}{2} + LE \label{Intro:N} \end{equation} scalar products with $i\le L$ are integration variables. Massless and massive lines in a Feynman graph for a scalar Feynman integral are \begin{equation} \raisebox{-6.3mm}{\begin{picture}(22,9) \put(11,6){\makebox(0,0){\includegraphics{l0.eps}}} \put(11,1.5){\makebox(0,0){$l$}} \end{picture}} = \frac{1}{- l^2 - i0}\,,\qquad \raisebox{-3.3mm}{\begin{picture}(22,6) \put(11,4.5){\makebox(0,0){\includegraphics{l1.eps}}} \put(11,1.5){\makebox(0,0){$l$}} \end{picture}} = \frac{1}{m^2 - l^2 - i0}\,.
\label{Intro:line} \end{equation} In HQET, the propagator \begin{equation} \raisebox{-3.3mm}{\begin{picture}(22,6) \put(11,4.5){\makebox(0,0){\includegraphics{lh.eps}}} \put(11,1.5){\makebox(0,0){$l$}} \end{picture}} = \frac{1}{-2 l \cdot v - i0} \label{Intro:HQET} \end{equation} appears, where the heavy-particle velocity $v$ ($v^2=1$) appears in the Lagrangian, see the textbooks~\cite{MW:00,G:04}. Similar propagators appear in SCET, but with $v^2=0$. In NRQED, NRQCD the denominator is more complicated: $- 2M l\cdot v + (l\cdot v)^2 - l^2 - i0$. Instead of using effective field theories, one may follow a more diagrammatic approach and expand the ordinary propagators in specific regions of $k_i$~\cite{S:02}. In all cases, the denominator of a propagator is quadratic or linear in the line momentum $l$. \begin{figure}[ht] \begin{center} \begin{picture}(110,80) \put(25,19.75){\makebox(0,0){\includegraphics{rout.eps}}} \put(25,33.5){\makebox(0,0)[b]{$k_1$}} \put(25,5){\makebox(0,0)[t]{$k_2+p$}} \put(25,19.49){\makebox(0,0)[b]{$k_2-k_1$}} \put(5,20){\makebox(0,0)[b]{$p$}} \put(45,20){\makebox(0,0)[b]{$p$}} \put(40.5,13.5){\makebox(0,0)[l]{$k_1+p$}} \put(9.5,13.5){\makebox(0,0)[r]{$k_1+p$}} \put(85,19.75){\makebox(0,0){\includegraphics{rout.eps}}} \put(85,33.5){\makebox(0,0)[b]{$k_1$}} \put(85,5){\makebox(0,0)[t]{$k_1+k_2$}} \put(85,19.49){\makebox(0,0)[b]{$k_2-p$}} \put(65,20){\makebox(0,0)[b]{$p$}} \put(105,20){\makebox(0,0)[b]{$p$}} \put(100.5,13.5){\makebox(0,0)[l]{$k_1+p$}} \put(69.5,13.5){\makebox(0,0)[r]{$k_1+p$}} \put(25,64.25){\makebox(0,0){\includegraphics{rout.eps}}} \put(25,78){\makebox(0,0)[b]{$k_1$}} \put(25,49.5){\makebox(0,0)[t]{$k_1+k_2+p$}} \put(25,64.45){\makebox(0,0)[b]{$k_2$}} \put(5,64.5){\makebox(0,0)[b]{$p$}} \put(45,64.5){\makebox(0,0)[b]{$p$}} \put(40.5,58){\makebox(0,0)[l]{$k_1+p$}} \put(9.5,58){\makebox(0,0)[r]{$k_1+p$}} \put(85,64.25){\makebox(0,0){\includegraphics{rout.eps}}} \put(85,78){\makebox(0,0)[b]{$k_1-p$}} \put(85,49.5){\makebox(0,0)[t]{$k_1+k_2$}} \put(85,64.45){\makebox(0,0)[b]{$k_2$}} \put(65,64.5){\makebox(0,0)[b]{$p$}} \put(105,64.5){\makebox(0,0)[b]{$p$}} \put(100.5,58){\makebox(0,0)[l]{$k_1$}} \put(69.5,58){\makebox(0,0)[r]{$k_1$}} \end{picture} \end{center} \caption{Momentum routings} \label{F:Routing} \end{figure} The choice of the integration momenta $k_i$ is not unique. Momentum conservation at each vertex is the only restriction. Some examples of different momentum routings for a single diagram are shown in Fig.~\ref{F:Routing}. It is not easy for a program to recognize that two Feynman integrals can be transformed into each other by linear substitutions of $k_i$. In fact, the freedom of choice is much larger than suggested by Fig.~\ref{F:Routing}, because all the momenta $p$, $k_1$, $k_2$ can flow along all lines with some continuous weights. The value of a given Feynman integral cannot depend on a choice of momentum routing (all choices agree with Feynman rules of the theory, and are equally good). Many diagrams have symmetries. For example, the non-planar self-energy diagram in Fig.~\ref{F:Sym}a may be reflected in a horizontal mirror. It also may be reflected in a vertical mirror; this operation changes the external momentum $p\to-p$, and hence, in order to get an identical integrand, we have to do the substitution $k_i\to-k_i$ of the integration momenta. One more symmetry is less obvious: you can keep the 3 left vertices at rest, and rotate the 3 right ones around the horizontal axis by $\pi$. 
The diagram in Fig.~\ref{F:Sym}b (where all the lines have the same mass) is very symmetrical. It can be represented as a tetrahedron, and thus has the full tetrahedron symmetry group: rotations by $n\pi/3$ around 4 axes and reflections in 6 planes. It is difficult for a program to find symmetries of a Feynman integral, because symmetry transformations often have to be followed by integration-momenta substitutions to write the transformed integral in its original form. \begin{figure}[ht] \begin{center} \begin{picture}(98,36) \put(28,20){\makebox(0,0){\includegraphics{nonplan.eps}}} \put(28,1.5){\makebox(0,0){a}} \put(82,20){\makebox(0,0){\includegraphics{mers.eps}}} \put(82,1.5){\makebox(0,0){b}} \end{picture} \end{center} \caption{Symmetric diagrams} \label{F:Sym} \end{figure} We shall consider Feynman graphs containing only 3-legged vertices (generic topologies). They have the maximum number of internal lines $I$. All the remaining diagrams can be obtained from these ones by shrinking some internal lines, i.\,e., raising some denominators to the power 0. They belong to reduced topologies. If the diagram we want contains a 4-legged vertex, we can split this vertex into 2 3-legged ones with an internal line between them, and say that this internal propagator is shrunk; thus, our diagram becomes a particular case of a generic one with 0 power of some internal line. This generalization is not unique: a diagram belonging to a reduced topology can be obtained from several different generic topologies by shrinking some internal lines. For $(E+1)$-legged tree diagrams, the number of internal lines is $I=E-2$ (just 1 vertex has $E=2$ and $I=0$; adding an external leg increases $I$ by 1). Adding a loop (i.\,e.\ connecting 2 points on some lines) increases $I$ by 3. So, $I$ for $(E+1)$-legged $L$-loop diagrams is \begin{equation} I = 3L + E - 2\,,\qquad N - I = \frac{(L-1)(L+2E-4)}{2}\,. \label{Intro:legged} \end{equation} Therefore, if $L\ge\max(2,5-2E)$, then there are more scalar products than denominators of propagators. In such a case, it is not possible to express all $s_{ij}$ as linear combinations of the denominators. For self-energy diagrams ($E=1$), this happens starting from $L=3$ loops; for diagrams with more legs --- starting from $L=2$ loops. Vacuum diagrams ($E=0$) have to be considered separately. The simplest diagram with 2 3-legged vertices has $L=2$ and $I=3$. Adding a loop increases $I$ by 3, and hence \begin{equation} I = 3 (L-1)\,,\qquad N-I = \frac{(L-2)(L-3)}{2}\,. \label{Intro:vac} \end{equation} Scalar products which cannot be expressed via denominators appear starting from $L=4$ loops. We want all scalar products $s_{ij}$ with $i\le L$ to be expressible as linear functions of the denominators $D_a$. Therefore, we add irreducible numerators (linear functions of $s_{ij}$) $D_{I+1}$, \dots, $D_N$: \begin{equation} D_a = \sum_{i=1}^L \sum_{j=i}^M A_a^{ij} s_{ij} + m^2_a\,. \label{Intro:Ds} \end{equation} The only requirement is that the full set $D_1$, \dots, $D_N$ is linearly independent. Then we can solve for $s_{ij}$: \begin{equation} s_{ij} = \sum_{a=1}^N A^a_{ij} (D_a - m^2_a)\,. \label{Intro:sD} \end{equation} If we view the pair $(ij)$ with $i\in[1,L]$ and $j\ge i$ as a single index (it has $N$ different values), then the matrix $A^a_{ij}$ is the inverse of $A_a^{ij}$. 
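As a simple example, consider the 1-loop massless self-energy topology used in Sect.~\ref{S:IBP} below: $L=E=1$, so $M=2$, $N=2$, and by~(\ref{Intro:legged}) $I=2$, $N-I=0$. Taking
\begin{equation*}
D_1 = - (k+p)^2\,,\qquad D_2 = - k^2\,,
\end{equation*}
the inversion~(\ref{Intro:sD}) reads
\begin{equation*}
s_{11} = k^2 = - D_2\,,\qquad
s_{12} = k\cdot p = \frac{1}{2} \left( D_2 - D_1 - p^2 \right)\,,
\end{equation*}
and no irreducible numerators are needed.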
Now we define the scalar Feynman integral \begin{equation} \begin{split} &I(n_1,\ldots,n_N) = \frac{1}{(i\pi^{d/2})^L} \int d^d k_1\cdots d^d k_L\,f(k_1,\ldots,k_L,p_1,\ldots,p_E)\,,\\ &f(k_1,\ldots,k_L,p_1,\ldots,p_E) = \frac{1}{D_1^{n_1}\cdots D_N^{n_N}}\,. \end{split} \label{Intro:I} \end{equation} Of course, for irreducible numerators $n_a\le0$ ($a\in[I+1,N]$). The argument of $I$ is a point in $N$-dimensional integer space. If a diagram contains a self-energy insertion into some internal line, then 2 lines carry the same momentum $l$, and can be joined: \begin{equation} \raisebox{-5.8mm}{\begin{picture}(28,15.5) \put(14,7.75){\makebox(0,0){\includegraphics{m2c.eps}}} \put(4.5,10){\makebox(0,0){$n_1$}} \put(23.5,10){\makebox(0,0){$n_2$}} \end{picture}} = \raisebox{-5.8mm}{\begin{picture}(37,14.5) \put(23,7.25){\makebox(0,0){\includegraphics{m2l.eps}}} \put(8.5,11){\makebox(0,0){$n_1+n_2$}} \end{picture}} = \raisebox{-5.8mm}{\begin{picture}(37,14.5) \put(14,7.25){\makebox(0,0){\includegraphics{m2r.eps}}} \put(28,11){\makebox(0,0){$n_1+n_2$}} \end{picture}}\,. \label{Intro:Insert} \end{equation} These 3 different graphs correspond to 1 Feynman integral. If the lines around a self-energy insertion have different masses (e.\,g., $\gamma$ and $Z^0$), e.\,g., \begin{equation*} \raisebox{-5.8mm}{\begin{picture}(28,15.5) \put(14,7.75){\makebox(0,0){\includegraphics{m2z.eps}}} \end{picture}}\,, \end{equation*} then their denominators are different but linearly dependent. We shall see that it's easy to kill one of these lines using partial-fraction decomposition. It's best to do this before further calculations; otherwise, expressing scalar products in the numerator via the denominators becomes non-unique. However, this is not possible if these lines are raised to non-integer powers (see below). If some subdiagram is connected to the rest of the diagram only at 1 vertex (and has no external legs), it can be completely separated and considered a vacuum diagram: \begin{equation} \raisebox{-11.3mm}{\begin{picture}(25,20) \put(12.5,10){\makebox(0,0){\includegraphics{touch1.eps}}} \end{picture}} = \raisebox{-11.3mm}{\begin{picture}(25,25) \put(12.5,4){\makebox(0,0){\includegraphics{touch2.eps}}} \put(12.5,18){\makebox(0,0){\includegraphics{touch3.eps}}} \end{picture}}\,. \label{Intro:c1} \end{equation} In particular, \begin{equation} \raisebox{-8.8mm}{\begin{picture}(25,20) \put(12.5,10){\makebox(0,0){\includegraphics{touch4.eps}}} \end{picture}} = 0\,. \label{Intro:c10} \end{equation} If such a subdiagram has external legs, it still can be separated. The momentum enters the rest of the diagram through a new external leg: \begin{equation} \raisebox{-15.55mm}{\begin{picture}(25,23.5) \put(12.5,11.75){\makebox(0,0){\includegraphics{touch5.eps}}} \end{picture}} = \raisebox{-15.55mm}{\begin{picture}(25,37.5) \put(12.5,5.75){\makebox(0,0){\includegraphics{touch6.eps}}} \put(12.5,27){\makebox(0,0){\includegraphics{touch7.eps}}} \end{picture}}\,. \label{Intro:c2} \end{equation} In particular, \begin{equation} \raisebox{-10.55mm}{\begin{picture}(25,23.5) \put(12.5,11.75){\makebox(0,0){\includegraphics{touch5.eps}}} \put(14,22){\makebox(0,0)[l]{$p^2=0$}} \end{picture}} = 0\,. \label{Intro:c20} \end{equation} Suppose some subdiagram is connected to the rest of the diagram only at 2 vertices, and has no external legs (let's call this subdiagram $a$). We can always choose the total momentum flowing through this subdiagram to be $k_1$.
Let the loop momenta $k_2$, \dots, $k_A$ live in the subdiagram $a$; then the remaining loop momenta $k_{A+1}$, \dots, $k_L$ live in the remaining subdiagram $b$. Suppose the total degree of the numerator in mixed scalar products $k_i\cdot q_j$ (with $i\le A$, $j>A$) is $n$. Then the subdiagram $a$ is an integral in $k_2$, \dots, $k_A$ with $n$ tensor indices, depending only on $k_1$. If we detach this subdiagram and then attach it the other way round, this means $k_1\to-k_1$. Let's make the substitution $k_2\to-k_2$, \dots, $k_A\to-k_A$. All the denominators in the subdiagram $a$ remain unchanged, and \begin{equation} \raisebox{-8.8mm}{\begin{picture}(38,20) \put(19,10){\makebox(0,0){\includegraphics{rev1.eps}}} \end{picture}} = (-1)^n \raisebox{-8.8mm}{\begin{picture}(38,20) \put(19,10){\makebox(0,0){\includegraphics{rev2.eps}}} \end{picture}}\,. \label{Intro:c3} \end{equation} The second equality in~(\ref{Intro:Insert}) is a particular case of this property. The $N$-dimensional integer space in which Feynman integrals~(\ref{Intro:I}) live can be subdivided into sectors (Fig.~\ref{F:Sect}). If $n_a>0$, $D_a$ is in the denominator; if $n_a\le0$, it is in the numerator (the line is shrunk). The set of sectors is partially ordered: the $++$ sector is higher than $+-$ which is higher than $--$; the $++$ sector is higher than $-+$ which is higher than $--$; but the sectors $+-$ and $-+$ cannot be compared (Fig.~\ref{F:Sect}). Each sector has a \textit{corner} --- the point with $n_a=1$ (if $n_a>0$ in the sector) or $n_a=0$ (if $n_a\le0$ in the sector). Generally speaking, there are $2^N$ sectors. But for an irreducible numerator $D_a$, we always have $n_a\le0$, and opposite sectors don't exist. Some sectors are trivial, i.\,e.\ $I=0$ at all points. At least, the purely negative sector (where all $n_a\le0$) is trivial. Often there are more trivial sectors, when shrinking some lines produces a scale-free vacuum subdiagram~(\ref{Intro:c10}) (some sectors are trivial only at some specific kinematics, e.\,g., (\ref{Intro:c20})). In sectors just above trivial ones (i.\,e.\ when the diagram vanishes after contracting any line) a general expression for the integral (usually via $\Gamma$ functions) can often be obtained. \begin{figure}[ht] \begin{center} \begin{picture}(76,76) \put(37,37){\makebox(0,0){\includegraphics{sect.eps}}} \put(73,34){\makebox(0,0){$n_1$}} \put(34,73){\makebox(0,0){$n_2$}} \end{picture} \end{center} \caption{Sectors in the $N$-dimensional integer space} \label{F:Sect} \end{figure} Some sectors are transformed into each other by symmetries. All the denominators of the original integral become those of the symmetric integral, possibly, after an appropriate integration-momenta substitution. The numerators of the original integral (i.\,e., $D_a$ whose $n_a<0$), after the symmetry transformation and the loop-momenta substitution, can be expressed as linear combinations of the numerators $D_a$ in the new sector. Expanding the product of powers of such numerators, we can express the original integral as a linear combination of integrals in the new sector having different powers of the new numerators. A sector may be transformed into itself by some symmetries. Let's consider a self-energy insertion into a massless line~(\ref{Intro:line}) (the HQET propagator~(\ref{Intro:HQET}) is also ``massless'' in this sense, because it contains no dimensional parameters except $l$).
If this insertion contains no massive lines, then, by dimensionality, this insertion just shifts the power of the propagator: \begin{equation} \begin{split} &\raisebox{-8.8mm}{\begin{picture}(40,20) \put(20,10){\makebox(0,0){\includegraphics{ins0.eps}}} \end{picture}} \Rightarrow \raisebox{-0.3mm}{\begin{picture}(22,6) \put(11,1.5){\makebox(0,0){\includegraphics{l0a.eps}}} \put(11,7){\makebox(0,0){$\displaystyle-L\frac{d}{2}+n$}} \end{picture}}\,,\\ &\raisebox{-8.8mm}{\begin{picture}(40,20) \put(20,10){\makebox(0,0){\includegraphics{insh.eps}}} \end{picture}} \Rightarrow \raisebox{-0.3mm}{\begin{picture}(22,6) \put(11,1.5){\makebox(0,0){\includegraphics{lha.eps}}} \put(11,4.5){\makebox(0,0){$-Ld+n$}} \end{picture}}\,, \end{split} \label{Intro:Ins0} \end{equation} where $L$ is the number of loops in the insertion, and the integer $n$ is determined by the powers of the denominators in it. A diagram with such insertion[s] can be considered as a lower-loop diagram~(\ref{Intro:I}) with non-integer ($d$-dependent) power[s] of some denominator[s]. Such a power cannot be compared to integers, and hence there is no subdivision into sectors along the corresponding $n$ axis. This factor is always in the denominator, and can never become a numerator. I know of no circumstances when a \emph{massive} propagator gets raised to a non-integer power% \footnote{Except the situation when the $d$-dimensional space is separated into an $n$-dimensional subspace and a transverse $(d-n)$-dimensional one; integrating a massive propagator in transverse momentum components produces a massive propagator in the $n$-dimensional subspace raised to a non-integer power.}. \section{Integration by parts} \label{S:IBP} The Feynman integral~(\ref{Intro:I}) does not change if we do a substitution \begin{equation} k_i \to M_{ij} q_j = A_{ij} k_j + B_{ij} p_j \label{IBP:M} \end{equation} of its integration momenta, where \begin{equation*} M = \left( \begin{array}{cccccc} A_{11} & \cdots & A_{1L} & B_{11} & \cdots & B_{1E} \\ \vdots & \ddots & \vdots & \vdots & \ddots & \vdots \\ A_{L1} & \cdots & A_{LL} & B_{L1} & \cdots & B_{LE} \end{array} \right) \end{equation*} is an $L\times M$ matrix, provided that the substitution is invertible: \begin{equation} \det A \ne 0\,. \label{IBP:det} \end{equation} Such substitutions form a Lie symmetry group of the Feynman integral. Let's consider an infinitesimal transformation $k_i\to k_i+\alpha q_j$. The integrand transforms as \begin{equation} f \to f + \alpha q_j \cdot \partial_i f\,. \label{IBP:f} \end{equation} If $j=i$, the integration measure also changes: \begin{equation} d^d k_i \to (1 + \alpha d) d^d k_i\,. \label{IBP:dk} \end{equation} These infinitesimal transformations form the Lie algebra~\cite{L:08} \begin{equation} \begin{split} &\int d^d k_1 \cdots d^d k_L\,O_{ij} f = 0\,,\\ &O_{ij} = \partial_i \cdot q_j\qquad (i\le L,\;j\ge i)\,,\qquad \partial_i = \frac{\partial}{\partial q_i}\,. \end{split} \label{IBP:Lie} \end{equation} The generators obey the commutation relation \begin{equation} [O_{ij},O_{i'j'}] = \delta_{ij'} O_{i'j} - \delta_{i'j} O_{ij'} \label{IBP:comm} \end{equation} (structure constants).
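These structure constants are easy to verify: writing $O_{ij} = \partial_i \cdot q_j = d\,\delta_{ij} + q_j\cdot\partial_i$ (see~(\ref{IBP:Oij}) below) and using $[\partial_i^\mu, q_j^\nu] = \delta_{ij}\,g^{\mu\nu}$, we get
\begin{equation*}
[q_j\cdot\partial_i, q_{j'}\cdot\partial_{i'}]
= \delta_{ij'}\, q_j\cdot\partial_{i'} - \delta_{i'j}\, q_{j'}\cdot\partial_i\,,
\end{equation*}
and the constant terms $d\,\delta_{ij}$ cancel in the commutator, reproducing~(\ref{IBP:comm}).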
Let's find an explicit form of the operator \begin{equation} \begin{split} &O_{ij} = d \delta_{ij} + q_j \cdot \partial_i = d \delta_{ij} + \sum_{m=1}^M (1+\delta_{mi}) s_{mj} \frac{\partial}{\partial s_{mi}}\,,\\ &\frac{\partial}{\partial s_{mi}} = \sum_{a=1}^N A_a^{mi} \frac{\partial}{\partial D_a} \end{split} \label{IBP:Oij} \end{equation} (the extra factor 2 at $m=i$ comes from $\partial_i s_{ii} = 2 k_i$). We assume that whenever $s_{ij}$ with $i>j$ appears in an equation, it is immediately replaced by $s_{ji}$; the same holds for $\partial/\partial s_{ij}$, $A^a_{ij}$, $A_a^{ij}$. If $j\le L$ ($q_j=k_j$), then from~(\ref{Intro:sD}) we have \begin{equation} O_{ij} = d \delta_{ij} + \sum_{a=1}^N \sum_{b=1}^N \sum_{m=1}^M A_a^{mi} A^b_{mj} (1+\delta_{mi}) (D_b - m_b^2) \frac{\partial}{\partial D_a}\,; \label{IBP:IBPk} \end{equation} if $j>L$ ($q_j=p_{j-L}$), then $s_{mj}$ with $m,j>L$ are external kinematic quantities: \begin{equation} O_{ij} = \sum_{a=1}^N \left[ \sum_{m=1}^L \sum_{b=1}^N A_a^{mi} A^b_{mj} (1+\delta_{mi}) (D_b - m_b^2) + \sum_{m=L+1}^M A_a^{mi} s_{mj} \right] \frac{\partial}{\partial D_a}\,. \label{IBP:IBPp} \end{equation} The operator $\partial/\partial D_a$ acting on the integrand $f$~(\ref{Intro:I}) raises the power $n_a$ by 1 (and multiplies the integrand by $-n_a$); $D_b$ lowers $n_b$ by 1. Let's introduce operators which act on functions of $N$ integer variables and produce new functions: \begin{equation} \begin{split} (\mathbf{n}_a F) (n_1,\ldots,n_a,\ldots,n_N) &{}= n_a F(n_1,\ldots,n_a\ldots,n_N)\,,\\ (\mathbf{a}^+ F) (n_1,\ldots,n_a,\ldots,n_N) &{}= F(n_1,\ldots,n_a+1,\ldots,n_N)\,,\\ (\mathbf{a}^- F) (n_1,\ldots,n_a,\ldots,n_N) &{}= F(n_1,\ldots,n_a-1,\ldots,n_N)\,. \end{split} \label{IBP:Oper} \end{equation} The shift operators are inverse to each other: \begin{equation} \mathbf{a}^+ \mathbf{a}^- = \mathbf{a}^- \mathbf{a}^+ = 1\,. \label{IBP:inv} \end{equation} They don't commute with the number operators $\mathbf{n}_a$: \begin{equation} [\mathbf{a}^\pm,\mathbf{n}_b] = \pm \delta_{ab} \mathbf{a}^\pm\,. \label{IBP:comm2} \end{equation} Some authors prefer to use the operators $\hat{\mathbf{a}}^+ = \mathbf{n}_a \mathbf{a}^+$, \begin{equation} (\hat{\mathbf{a}}^+ F) (n_1,\ldots,n_a,\ldots,n_N) = n_a F(n_1,\ldots,n_a+1,\ldots,n_N) \label{IBP:hat} \end{equation} instead of $\mathbf{a}^+$, because \begin{equation} \frac{\partial}{\partial D_a} \Rightarrow - \hat{\mathbf{a}}^+\,,\qquad D_b \Rightarrow \mathbf{b}^- \label{IBP:subs} \end{equation} are the only combinations appearing in~(\ref{IBP:IBPk}), (\ref{IBP:IBPp}). These operators obey the commutation relation \begin{equation} [\hat{\mathbf{a}}^+,\mathbf{b}^-] = \delta_{ab}\,. \label{IBP:comm3} \end{equation} Then the operators $\mathbf{n}_a = \hat{\mathbf{a}}^+ \mathbf{a}^-$ are not independent; it is sufficient to use $\hat{\mathbf{a}}^+$ and $\mathbf{a}^-$.
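This operator algebra is easy to model on a computer. The following minimal \texttt{sympy} sketch (purely illustrative, not a fragment of any existing reduction program) realizes $\mathbf{a}^-$ and $\hat{\mathbf{a}}^+$ as transformations of expressions in the indices and checks~(\ref{IBP:comm3}) at $N=2$:
\begin{verbatim}
import sympy as sp

n1, n2 = sp.symbols('n1 n2')   # the indices n_a of I(n_1, n_2)
F = sp.Function('F')(n1, n2)   # a generic function of the indices

def a_minus(expr, n):          # (a^- F)(..., n_a, ...) = F(..., n_a - 1, ...)
    return expr.subs(n, n - 1)

def a_plus_hat(expr, n):       # (hat a^+ F)(..., n_a, ...) = n_a F(..., n_a + 1, ...)
    return n * expr.subs(n, n + 1)

# [hat a^+, b^-] = delta_ab:
same = a_plus_hat(a_minus(F, n1), n1) - a_minus(a_plus_hat(F, n1), n1)
print(sp.simplify(same - F))   # -> 0, i.e. [hat 1^+, 1^-] = 1
diff = a_plus_hat(a_minus(F, n2), n1) - a_minus(a_plus_hat(F, n1), n2)
print(sp.simplify(diff))       # -> 0, i.e. [hat 1^+, 2^-] = 0
\end{verbatim}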
Now the identities~(\ref{IBP:Lie}) can be rewritten as the IBP recurrence relations \begin{equation} \mathbf{O}_{ij}(\hat{\mathbf{a}}^+,\mathbf{a}^-) I(n_1,\ldots,n_N) = \frac{1}{(i\pi^{d/2})^L} \int d^d k_1 \cdots d^d k_L\,O_{ij} f = 0\,, \label{IBP:IBP} \end{equation} where the operators $\mathbf{O}_{ij}$ are obtained from~(\ref{IBP:IBPk}), (\ref{IBP:IBPp}) using the substitutions~(\ref{IBP:subs}): \begin{align} &\mathbf{O}_{ij} = d \delta_{ij} - \sum_{a=1}^N \sum_{b=1}^N \sum_{m=1}^M A_a^{mi} A^b_{mj} (1+\delta_{mi}) \hat{\mathbf{a}}^+ \left(\mathbf{b}^- - m_b^2\right)\qquad (i\le L)\,, \label{IBP:k}\\ &\mathbf{O}_{ij} = \sum_{a=1}^N \left[ \sum_{b=1}^N \sum_{m=1}^L A_a^{mi} A^b_{mj} (1+\delta_{mi}) \hat{\mathbf{a}}^+ \left(\mathbf{b}^- - m_b^2\right) - \sum_{m=L+1}^M A_a^{mi} s_{mj} \hat{\mathbf{a}}^+ \right]\qquad (i>L) \label{IBP:p} \end{align} (note the order of $\hat{\mathbf{a}}^+$ and $\mathbf{b}^-$). The operators $\mathbf{O}_{ij}(\hat{\mathbf{a}}^+,\mathbf{a}^-)$ obey the commutation relations~(\ref{IBP:comm}) by virtue of~(\ref{IBP:comm3}). The simplest example is the 1-loop vacuum diagram (Fig.~\ref{F:v1}) \begin{equation} \frac{1}{i\pi^{d/2}} \int \frac{d^d k}{D^n} = V(n) m^{d-2n}\,,\qquad D = m^2 - k^2 - i0\,. \label{IBP:V1} \end{equation} We may set $m=1$; the power of $m$ can be reconstructed by dimensionality. The sector $n\le0$ is trivial (Fig.~\ref{F:v1}). The IBP relation \begin{equation} (d - 2 n + 2 n \mathbf{1}^+ ) V(n) = 0 \label{IBP:V1rel} \end{equation} relates 2 neighbouring values of $n$ (except the $n=0$ relation where $V(1)$ does not appear, and we obtain the known fact $V(0)=0$). In the positive sector, we can express $V(n)$ via $V(n-1)$, then $V(n-2)$, and so on, until we reach $V(1)$. The explicit solution of~(\ref{IBP:V1rel}) is \begin{equation} V(n) = \frac{1}{\Gamma(n)} \frac{\Gamma\left(n-\frac{d}{2}\right)}{\Gamma\left(1-\frac{d}{2}\right)} V(1)\,. \label{IBP:V1sol} \end{equation} \begin{figure}[ht] \begin{center} \begin{picture}(114,35) \put(16,16){\makebox(0,0){\includegraphics{v1.eps}}} \put(16,33.5){\makebox(0,0)[b]{$k$}} \put(16,29.5){\makebox(0,0)[t]{$n$}} \put(77,16){\makebox(0,0){\includegraphics{sect1.eps}}} \put(113,13){\makebox(0,0){$n$}} \end{picture} \end{center} \caption{The 1-loop vacuum diagram and its sectors} \label{F:v1} \end{figure} The 1-loop vacuum diagram with masses $m$ and 0 (Fig.~\ref{F:m0}) contains linearly dependent denominators \begin{equation} D_1 = 1 - k^2\,,\qquad D_2 = - k^2\,,\qquad D_1 - D_2 = 1 \label{IBP:m0den} \end{equation} (we set $m=1$). In the positive sector, we solve the recurrence relation \begin{equation} (1 - \mathbf{1}^- + \mathbf{2}^-) I = 0 \label{IBP:m0rel} \end{equation} for $I$; each reduction step decreases $n_1+n_2$ by 1, and eventually we get $n_1=0$ (where $I=0$) or $n_2=0$. In the sector $n_2<0$, we solve for $\mathbf{2}^- I$; each step decreases $|n_2|$ by 1, and we end up at $n_2=0$ or $n_1=0$ (Fig.~\ref{F:m0}). This reduction is equivalent to partial fraction decomposition. So, this problem reduces to the previous one (Fig.~\ref{F:v1}) (unless $n_2$ is non-integer).
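For example, at $n_1=n_2=1$ the relation~(\ref{IBP:m0rel}) gives $I(1,1)=I(0,1)-I(1,0)=-V(1)$, because $I(0,1)$ is a scale-free (and hence vanishing) massless tadpole. The reduction~(\ref{IBP:V1rel}) itself is a one-line recursion; here is a minimal \texttt{sympy} sketch (illustrative only, not a production program) together with a check against the closed form~(\ref{IBP:V1sol}):
\begin{verbatim}
import sympy as sp

d = sp.Symbol('d')

def reduce_V(n):
    """Coefficient c(d) in V(n) = c(d) V(1), integer n >= 1,
    from (IBP:V1rel): V(m) = -(d - 2(m-1)) / (2(m-1)) V(m-1)."""
    c = sp.Integer(1)
    for m in range(2, n + 1):
        c *= -(d - 2*(m - 1)) / (2*(m - 1))
    return sp.cancel(c)

# compare with the explicit solution (IBP:V1sol) at n = 5
n = 5
exact = sp.gamma(n - d/2) / (sp.gamma(n) * sp.gamma(1 - d/2))
print(sp.simplify(reduce_V(n) - sp.gammasimp(exact)))  # -> 0
\end{verbatim}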
\begin{figure}[ht] \begin{center} \begin{picture}(108,76) \put(11,37){\makebox(0,0){\includegraphics{vm.eps}}} \put(11,24.5){\makebox(0,0){1}} \put(11,49.5){\makebox(0,0){2}} \put(69,37){\makebox(0,0){\includegraphics{sect2.eps}}} \put(105,34){\makebox(0,0){$n_1$}} \put(66,73){\makebox(0,0){$n_2$}} \end{picture} \end{center} \caption{The 1-loop vacuum diagram with masses $m$, 0 and its sectors} \label{F:m0} \end{figure} Linearly dependent denominators often appear in HQET diagrams~\cite{BG:91}. For example, the vertex diagram in Fig.~\ref{F:HQETpf} has \begin{align*} &D_1 = - 2 (k+p_1) \cdot v = - 2 (k\cdot v + \omega_1)\,,\qquad D_2 = - 2 (k+p_2) \cdot v = - 2 (k\cdot v + \omega_2)\,,\\ &D_1 - D_2 + 2 (\omega_1 - \omega_2) = 0\,. \end{align*} The recurrence relation \begin{equation} \left[ 2(\omega_1-\omega_2) + \mathbf{1}^- - \mathbf{2}^- \right] I = 0 \label{IBP:HQETpf} \end{equation} allows one to kill one of the heavy lines, similarly to Fig.~\ref{F:m0}. If an $L$-loop diagram contains more than $L$ HQET lines (with the same $v$), then its HQET denominators are linearly dependent: there are only $L$ scalar products $k_i\cdot v$ (see, e.\,g., Fig.~\ref{F:HQETpf}). Some of these HQET lines can be easily killed by partial fraction decomposition, unless their powers are non-integer. \begin{figure}[ht] \begin{center} \begin{picture}(98,20) \put(17,11.5){\makebox(0,0){\includegraphics{hpf1.eps}}} \put(77,10){\makebox(0,0){\includegraphics{hpf2.eps}}} \end{picture} \end{center} \caption{HQET diagrams with linearly dependent denominators} \label{F:HQETpf} \end{figure} The 1-loop massless self-energy diagram (Fig.~\ref{F:p1}) has \begin{equation*} D_1 = - (k+p)^2\,,\qquad D_2 = -k^2\,. \end{equation*} It has only one non-trivial sector, and is symmetric with respect to $1\leftrightarrow2$. We may put $p^2=-1$ (its power can be restored by dimensionality). The $\partial\cdot k$ IBP relation is \begin{equation} \left[d - n_1 - 2 n_2 + n_1 \mathbf{1}^+ (1 - \mathbf{2}^-)\right] G = 0\,. \label{IBP:p1} \end{equation} If $n_1>1$, we can use~(\ref{IBP:p1}) to lower $n_1+n_2$ by one; if $n_2>1$, the mirror-symmetric $\partial\cdot(k+p)$ relation can be used instead. All integrals reduce to 1 master integral $G(1,1)$% \footnote{Reduction for the 1-loop massless triangle diagram has been considered in~\cite{D:92}.}. \begin{figure}[ht] \begin{center} \begin{picture}(124,76) \put(19,37){\makebox(0,0){\includegraphics{p1.eps}}} \put(19,25.5){\makebox(0,0){$k+p$}} \put(19,48.5){\makebox(0,0){$k$}} \put(4,34.5){\makebox(0,0){$p$}} \put(34,34.5){\makebox(0,0){$p$}} \put(85,37){\makebox(0,0){\includegraphics{sect3.eps}}} \put(121,34){\makebox(0,0){$n_1$}} \put(82,73){\makebox(0,0){$n_2$}} \end{picture} \end{center} \caption{The 1-loop massless self-energy diagram and its sectors} \label{F:p1} \end{figure} In general, when solving the reduction problem in some sector, integrals from lower sectors (where some lines are contracted) are considered known, i.\,e., reduction problems in those sectors are assumed to be solved. In other words, the reduction problem is recursive: to solve it for the fully positive sector, one has to solve it for all the sectors just below it, and so on, until we reach trivial sectors (where $I=0$). Automatic identification of trivial sectors is a useful feature for programs of IBP reduction.
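To illustrate both points on the example of Fig.~\ref{F:p1}: taking $n_1=n_2=1$ in~(\ref{IBP:p1}) we get
\begin{equation*}
(d-3)\,G(1,1) + G(2,1) - G(2,0) = 0\,,
\end{equation*}
and $G(2,0)$ belongs to a trivial sector (it is a scale-free integral), so that $G(2,1)=-(d-3)\,G(1,1)$. The complete reduction fits in a few lines; the following \texttt{sympy} sketch is purely illustrative (the function name is ours, not from any existing package):
\begin{verbatim}
import sympy as sp

d = sp.Symbol('d')

def reduce_G(n1, n2):
    """G(n1, n2) as c(d) * G(1, 1) for integer n1, n2."""
    if n1 < 1 or n2 < 1:
        return sp.Integer(0)   # trivial sectors: scale-free integrals
    if (n1, n2) == (1, 1):
        return sp.Integer(1)   # the master integral
    if n2 > n1:                # use the 1 <-> 2 mirror symmetry
        n1, n2 = n2, n1
    # (IBP:p1) at (n1 - 1, n2), solved for G(n1, n2):
    m = n1 - 1
    return sp.cancel(reduce_G(n1, n2 - 1)
                     - (d - m - 2*n2) / m * reduce_G(m, n2))

print(reduce_G(2, 1))          # -> 3 - d
print(reduce_G(2, 2))          # -> d**2 - 9*d + 18 = (d-3)(d-6)
\end{verbatim}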
If there exists a substitution $M(\alpha)$~(\ref{IBP:M}) such that the integral is multiplied by $\alpha^n$ (where the exponent $n\ne0$ is $d$-dependent), then the integral obviously vanishes (contains a scale-free subdiagram). The same holds if any numerator is inserted into this integral. Therefore, if the integral at the corner of the sector vanishes, the whole sector is trivial. This gives the criterion~\cite{L:08}: if solving all IBP relations at the corner point results in $I=0$, the sector is trivial. This criterion finds all cases like~(\ref{Intro:c10}), but not sectors which are trivial only in specific kinematics, like~(\ref{Intro:c20}).

When considering some specific sector, we should express more complicated integrals via simpler ones. To do this systematically, we need to accept some convention as to which integrals are more complicated and which are simpler. This means introducing a total order of integer points in the sector. It is natural to require this order to be translation invariant: if $\vec{n}_1 < \vec{n}_2$ then $\vec{n}_1+\vec{m} < \vec{n}_2+\vec{m}$. The choice of such an admissible order is, of course, not unique. We want the reduction process to move integrals closer to the corner of the sector. Therefore, it is reasonable to compare the sums $n = \sum_a |n_a|$ first: if the first integral has a higher $n$ than the second one, the first integral is more complicated. After that, we should decide how to order integrals with equal values of $n$. This can be done in different ways (e.\,g., lexicographically).

Suppose we have fixed some admissible order in the sector. Let's pick some IBP relation and solve it for the most complicated integral. This solution can be used for reduction of all integrals in the sector. They either move into lower sectors (when some $n_a=1$ and $\mathbf{a}^-$ is applied), or end up in a hyperplane of dimensionality $N-1$ in our sector. Indeed, the most complicated integral in our IBP relation is $n_a \mathbf{a}^+ I$ for some $a$; we cannot reduce $n_a$ from 1 to 0 using this relation. So, we are left with the hyperplane $n_a=1$. Next we update the remaining IBP relations to contain only integrals on this hyperplane (and trivial integrals from lower sectors: $\mathbf{a}^-$ acting on an integral with $n_a=1$ makes it trivial). Whenever an IBP relation contains $n_a=2$, we lower it back to $n_a=1$ using the first selected relation. Now the problem is much simpler: the dimensionality of space is $N-1$ instead of $N$. Let's select one of those $(N-1)$-dimensional recurrence relations, and solve it for the most complicated integral. Then we can use this solution to further reduce all integrals on the hyperplane. There is a catch, however: the coefficient of this most complicated integral may vanish on some complicated subset of points, and the reduction fails on this subset (this can never happen during the first stage: there this subset is always the hyperplane $n_a=1$). This subset is still much smaller than the whole hyperplane: it consists of some lower-dimensional parts. It is better to choose a relation for which this subset is simple (e.\,g., a coordinate hyperplane). Next we treat these $(N-2)$-dimensional subsets in a similar way, and so on.

This is a sketch of what people usually do when constructing a reduction algorithm by hand. This strategy is implemented in a \texttt{\textit{Mathematica}} program by R.\,N.~Lee based on~\cite{L:08}. This program is not guaranteed to construct a reduction algorithm for a given topology; however, it works successfully (and efficiently) for a large number of highly non-trivial examples. It is not publicly available.
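The admissible order described above is easy to realize concretely; here is a minimal sketch (Python; representing an integral simply by the tuple of its indices $n_a$ is an illustrative choice, not the representation used by any of the programs mentioned):
\begin{verbatim}
def order_key(n):
    # compare the sums of |n_a| first; break ties lexicographically
    # (both criteria are translation invariant within a sector,
    # where the signs of the n_a are fixed)
    return (sum(abs(a) for a in n), n)

# the most complicated of several integrals in a sector:
print(max([(2, 1, 1), (1, 1, 1), (1, 2, 1)], key=order_key))
# -> (2, 1, 1)
\end{verbatim}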
In some rare cases, approaches based on sectors can fail to detect a relation between two integrals in some sector, and declare both to be masters, when in fact they are dependent; this dependence can only be obtained via a higher sector.

\section{Homogeneity and Lorentz-invariance relations}
\label{S:Add}

Scalar Feynman integrals obey some relations which can be derived independently of IBP but are not really independent --- they appear to be linear combinations of the IBP relations.

By dimensionality, an $L$-loop integral $I$ is a homogeneous function of external momenta $p_i$ and masses $m_i$ of degree $Ld - 2\sum n_i$ (we assume that all the denominators are quadratic). Therefore,
\begin{equation}
\left( \sum_i p_i \cdot \frac{\partial}{\partial p_i} + \sum_i m_i \frac{\partial}{\partial m_i} \right) I = \left( L d - 2 \sum_i n_i \right) I\,.
\label{Add:Hom0}
\end{equation}
On the other hand, the derivatives on the left-hand side can be calculated explicitly. Equating these two expressions, we obtain a homogeneity relation~\cite{CT:81}.

Let's move all the terms in~(\ref{Add:Hom0}) to the left-hand side, and apply this operator to the integrand $f$ instead of the integral $I$. Adding and subtracting the sum over the loop momenta $k_i$, we get
\begin{align*}
&\left( \sum_i p_i \cdot \frac{\partial}{\partial p_i} + \sum_i m_i \frac{\partial}{\partial m_i} - L d + 2 \sum_i n_i \right) f\\
&{} = \left( \sum_i q_i \cdot \frac{\partial}{\partial q_i} + \sum_i m_i \frac{\partial}{\partial m_i} - \sum_i k_i \cdot \frac{\partial}{\partial k_i} - L d + 2 \sum_i n_i \right) f\,.
\end{align*}
The integrand $f$ is a homogeneous function of the momenta $q_i$ (both loop and external) and the masses $m_i$ of degree $-2\sum n_i$, and this expression simplifies to
\begin{equation}
\left( - \sum_i k_i \cdot \frac{\partial}{\partial k_i} - L d \right) f = - \left( \sum_i \frac{\partial}{\partial k_i} \cdot k_i \right) f\,.
\label{Add:Hom1}
\end{equation}
Thus the homogeneity relation is a linear combination of the IBP relations $\sum \partial_i\cdot k_i$.

A scalar integral $I$ does not change if we rotate the external momenta $p_i$. In other words, a Lorentz-transformation generator applied to $I$ gives 0. If there are at least 2 external momenta ($E\ge2$), we can contract this tensor equation with $p_i^\mu p_j^\nu$ ($i\ne j$) to obtain a scalar relation
\begin{equation}
2 p_i^\mu p_j^\nu \left( \sum_n p_n^{[\mu} \frac{\partial}{\partial p_n^{\nu]}} \right) I = 0
\label{Add:LI0}
\end{equation}
(square brackets mean antisymmetrization). On the other hand, the derivatives can be calculated explicitly. This gives Lorentz-invariance relations~\cite{GR:99}.

Let's apply this operator to the integrand $f$. Adding and subtracting the sum over the loop momenta $k_i$, we get
\begin{equation*}
2 p_i^\mu p_j^\nu \left( \sum_n p_n^{[\mu} \frac{\partial}{\partial p_n^{\nu]}} \right) f = 2 p_i^\mu p_j^\nu \left( \sum_n q_n^{[\mu} \frac{\partial}{\partial q_n^{\nu]}} - \sum_n k_n^{[\mu} \frac{\partial}{\partial k_n^{\nu]}} \right) f\,.
\end{equation*}
The integrand $f$ is a scalar function of the momenta $q_i$, and hence the first operator gives 0 when acting on $f$.
Writing the antisymmetrization explicitly and commuting derivatives to the left, we obtain
\begin{equation}
\begin{split}
- 2 p_i^\mu p_j^\nu \left( \sum_n k_n^{[\mu} \frac{\partial}{\partial k_n^{\nu]}} \right) f
&{} = \sum_n \left( p_j\cdot k_n\,p_i\cdot\frac{\partial}{\partial k_n} - p_i\cdot k_n\,p_j\cdot\frac{\partial}{\partial k_n} \right) f\\
&{} = \sum_n \frac{\partial}{\partial k_n} \cdot \left( p_i\,p_j\cdot k_n - p_j\,p_i\cdot k_n \right) f
\end{split}
\label{Add:LI1}
\end{equation}
(the extra terms from commutation cancel). Thus Lorentz-invariance relations are linear combinations of IBP relations~\cite{L:08}.

\section{Massless self-energy diagrams}
\label{S:p2}

Let's consider the 2-loop integral~\cite{CT:81}%
\footnote{The IBP and homogeneity relations for this integral had been derived in~\cite{VPK:81} slightly earlier than in~\cite{CT:81}; however, the reduction algorithm had not been formulated.}
(Fig.~\ref{F:Q2})
\begin{equation}
\begin{split}
&\frac{1}{(i\pi^{d/2})^2} \int d^d k_1 d^d k_2\,f(k_1,k_2,p) = G(n_1,\ldots,n_5) (-p^2)^{d-n_1-\cdots-n_5}\,,\\
&f(k_1,k_2,p) = \frac{1}{D_1^{n_1}\cdots D_5^{n_5}}\,,\\
&D_1=-(k_1+p)^2\,,\qquad D_2=-(k_2+p)^2\,,\qquad D_3=-k_1^2\,,\qquad D_4=-k_2^2\,,\\
&D_5=-(k_1-k_2)^2\,.
\end{split}
\label{p2:G}
\end{equation}
It is symmetric with respect to the interchanges $(1\leftrightarrow2,3\leftrightarrow4)$ and $(1\leftrightarrow3,2\leftrightarrow4)$. It vanishes when the indices of two adjacent lines are non-positive integers, because then it contains a no-scale subdiagram.

\begin{figure}[ht]
\begin{center}
\begin{picture}(64,32)
\put(32,18){\makebox(0,0){\includegraphics{Q2.eps}}}
\put(16,30){\makebox(0,0){$k_1$}}
\put(48,30){\makebox(0,0){$k_2$}}
\put(14,5){\makebox(0,0){$k_1+p$}}
\put(50,5){\makebox(0,0){$k_2+p$}}
\put(39,18){\makebox(0,0){$k_1-k_2$}}
\put(20,10){\makebox(0,0){$n_1$}}
\put(44,10){\makebox(0,0){$n_2$}}
\put(20,24){\makebox(0,0){$n_3$}}
\put(44,24){\makebox(0,0){$n_4$}}
\put(29,18){\makebox(0,0){$n_5$}}
\end{picture}
\end{center}
\caption{Two-loop massless self-energy diagram (all lines are massless)}
\label{F:Q2}
\end{figure}

When one of the indices is zero, the problem becomes trivial. If $n_5=0$, it is the product of two one-loop diagrams:
\begin{equation}
\raisebox{-10.3mm}{\begin{picture}(52,23)
\put(26,11.5){\makebox(0,0){\includegraphics{q2a.eps}}}
\put(16,1){\makebox(0,0)[b]{$n_1$}}
\put(36,1){\makebox(0,0)[b]{$n_2$}}
\put(16,22){\makebox(0,0)[t]{$n_3$}}
\put(36,22){\makebox(0,0)[t]{$n_4$}}
\end{picture}}\,.
\label{p2:n5}
\end{equation}
If $n_1=0$, we first calculate the inner loop (it shifts the power of the upper propagator in the outer loop), and then the outer one:
\begin{equation}
\raisebox{-10.55mm}{\begin{picture}(32,23.5)
\put(16,11.5){\makebox(0,0){\includegraphics{q2b.eps}}}
\put(16,1){\makebox(0,0)[b]{$n_2$}}
\put(8,19){\makebox(0,0){$n_3$}}
\put(24,19){\makebox(0,0){$n_4$}}
\put(15,12.5){\makebox(0,0){$n_5$}}
\end{picture}} =
\raisebox{-10.55mm}{\begin{picture}(32,23.5)
\put(16,11.5){\makebox(0,0){\includegraphics{q1.eps}}}
\put(16,1){\makebox(0,0)[b]{$n_5$}}
\put(16,22.5){\makebox(0,0)[t]{$\vphantom{d}n_3$}}
\end{picture}} \times
\raisebox{-10.55mm}{\begin{picture}(32,23.5)
\put(16,11.5){\makebox(0,0){\includegraphics{q1.eps}}}
\put(16,1){\makebox(0,0)[b]{$n_2$}}
\put(16,22.5){\makebox(0,0)[t]{$n_4+n_3+n_5-d/2$}}
\end{picture}}\,.
\label{p2:n1}
\end{equation}
The cases $n_2=0$, $n_3=0$, $n_4=0$ are symmetric.
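Both trivial cases are products of one-loop integrals and are easy to handle symbolically, using the standard $\Gamma$-function expression for the one-loop integral (a textbook formula, quoted here rather than derived); the same expression also provides a quick sanity check of relations like~(\ref{IBP:p1}). A minimal \texttt{sympy} sketch, with illustrative names:
\begin{verbatim}
import sympy as sp

d, n1, n2, n3, n4, n5 = sp.symbols('d n1 n2 n3 n4 n5')

def G1(a, b):
    # standard one-loop massless self-energy integral,
    # in units of (-p^2)^(d/2-a-b)
    return (sp.gamma(a + b - d/2) * sp.gamma(d/2 - a) * sp.gamma(d/2 - b)
            / (sp.gamma(a) * sp.gamma(b) * sp.gamma(d - a - b)))

# the case n1 = 0, eq. (p2:n1): inner loop first, then the outer one
G_n1_0 = G1(n3, n5) * G1(n2, n4 + n3 + n5 - d/2)

# sanity check of the one-loop IBP relation (IBP:p1):
lhs = (d - n1 - 2*n2)*G1(n1, n2) + n1*(G1(n1 + 1, n2) - G1(n1 + 1, n2 - 1))
print(sp.simplify(sp.gammasimp(lhs)))   # -> 0
\end{verbatim}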
If all $n_i$ are integer, then all integrals~(\ref{p2:n5}) are proportional to $G_1^2$, and all integrals~(\ref{p2:n1}) to $G_2$:
\begin{equation}
\raisebox{-7.6mm}{\includegraphics{q2b1.eps}} = G_1^2\,,\qquad
\raisebox{-7.6mm}{\includegraphics{q2b2.eps}} = G_2\,,
\label{p2:masters}
\end{equation}
where
\begin{equation}
G_n = \raisebox{-11.8mm}{\begin{picture}(44,26)
\put(22,13){\makebox(0,0){\includegraphics{qs.eps}}}
\put(22,17){\makebox(0,0){$\cdots$}}
\end{picture}}
= \frac{\Gamma\left(n+1-n\frac{d}{2}\right) \Gamma^{n+1}\left(\frac{d}{2}-1\right)}%
{\Gamma\left((n+1)\left(\frac{d}{2}-1\right)\right)}
\label{p2:Gn}
\end{equation}
is the $n$-loop massless sunset integral.

Now we shall consider the positive sector (all $n_i>0$). The $\partial_2\cdot k_2$ IBP relation is
\begin{equation}
[ d-n_2-n_5-2n_4 + n_2 \mathbf{2}^+ (1 - \mathbf{4}^-) + n_5 \mathbf{5}^+ (\mathbf{3}^- - \mathbf{4}^-) ] G = 0\,;
\label{p2:tri1}
\end{equation}
the $\partial_1\cdot k_1$ relation is mirror-symmetric. The $\partial_2\cdot(k_2-k_1)$ IBP relation is
\begin{equation}
[d-n_2-n_4-2n_5 + n_2 \mathbf{2}^+ (\mathbf{1}^- - \mathbf{5}^-) + n_4 \mathbf{4}^+ (\mathbf{3}^- - \mathbf{5}^-)] G = 0\,.
\label{p2:tri2}
\end{equation}
Let's express $G(n_1,\ldots,n_5)$ (with unshifted indices) from this last relation. Each application of this relation reduces $n_1+n_3+n_5$ by 1 (Fig.~\ref{F:IBP}). Therefore, sooner or later one of the indices $n_1$, $n_3$, $n_5$ will vanish, and we'll get a trivial case~(\ref{p2:n5}), (\ref{p2:n1}), or symmetric to it. This means that all integrals in this sector reduce to 2 master integrals~(\ref{p2:masters}). Analysis of the remaining non-zero sectors is simple but somewhat lengthy; all integrals in these sectors reduce to~(\ref{p2:masters}) too.

\begin{figure}[ht]
\begin{center}
\begin{picture}(150,50)
\put(25,25){\makebox(0,0){\includegraphics[width=50mm]{i1.eps}}}
\put(75,25){\makebox(0,0){\includegraphics[width=50mm]{i2.eps}}}
\put(125,25){\makebox(0,0){\includegraphics[width=50mm]{i3.eps}}}
\end{picture}
\end{center}
\caption{One step of IBP reduction: projection onto the $n_1$, $n_3$, $n_5$ subspace}
\label{F:IBP}
\end{figure}

Applying $p\cdot(\partial/\partial p)$ to~(\ref{p2:G}) we get the homogeneity relation
\begin{equation}
[ 2(d-n_3-n_4-n_5)-n_1-n_2 + n_1 \mathbf{1}^+ (1 - \mathbf{3}^-) + n_2 \mathbf{2}^+ (1 - \mathbf{4}^-) ] G = 0\,.
\label{p2:hom}
\end{equation}
It is nothing but the sum of the $\partial_2\cdot k_2$ relation~(\ref{p2:tri1}) and its mirror-symmetric $\partial_1\cdot k_1$ relation.

Another interesting relation is obtained by inserting $(k_1+p)^\mu$ into the integrand of~(\ref{p2:G}) and taking the derivative $\partial/\partial p^\mu$ of the integral. On the one hand, the vector integral must be proportional to $p^\mu$, and we can make the substitution
\begin{equation*}
k_1+p \to \frac{(k_1+p)\cdot p}{p^2} p = \left(1 + \frac{D_1-D_3}{-p^2}\right) \frac{p}{2}
\end{equation*}
in the integrand. Taking $\partial/\partial p^\mu$ of this vector integral produces~(\ref{p2:G}) with
\begin{equation*}
\left(\tfrac{3}{2}d-\sum n_i\right) \left(1 + \frac{D_1-D_3}{-p^2}\right)
\end{equation*}
inserted into the integrand. On the other hand, explicit differentiation in $p$ gives
\begin{align*}
&d + \frac{n_1}{D_1} 2(k_1+p)^2 + \frac{n_2}{D_2} 2(k_2+p)\cdot(k_1+p)\,,\\
&2(k_2+p)\cdot(k_1+p) = D_5-D_1-D_2\,.
\end{align*}
Therefore, we obtain Larin's relation~\cite{Larin}
\begin{equation}
\Bigl[\tfrac{1}{2}d+n_1-n_3-n_4-n_5 + \left(\tfrac{3}{2}d-\sum n_i\right) (\mathbf{1}^- - \mathbf{3}^-) + n_2 \mathbf{2}^+ (\mathbf{1}^- - \mathbf{5}^-)\Bigr] G = 0
\label{p2:Larin}
\end{equation}
(three more relations follow from the symmetries). This relation can be used instead of~(\ref{p2:tri2}) to reduce all integrals in the positive sector to the master integrals~(\ref{p2:masters}), see Fig.~\ref{F:IBP}. It is surely some linear combination of the IBP relations, but I have no explicit proof of this fact.

The IBP relations~(\ref{p2:tri1}), (\ref{p2:tri2}) are particular cases of the triangle relation~\cite{CT:81}. Suppose a diagram we are considering contains a subdiagram in Fig.~\ref{F:tri}:
\begin{align*}
&D_1 = m_1^2 - (k_1+k_3)^2\,,\qquad D_2 = m_1^2 - k_1^2\,,\\
&D_3 = m_2^2 - (k_2+k_3)^2\,,\qquad D_4 = m_2^2 - k_2^2\,,\qquad D_5 = - k_3^2
\end{align*}
(any number of lines of any kind may be attached to the left vertex; however, it is essential that only one line is attached to each of the two right vertices of the triangle). The $\partial_3\cdot k_3$ IBP relation is
\begin{equation}
\left[ d - n_1 - n_3 - 2 n_5 + n_1 \mathbf{1}^+ (\mathbf{2}^- - \mathbf{5}^-) + n_3 \mathbf{3}^+ (\mathbf{4}^- - \mathbf{5}^-) \right] I = 0
\label{p2:tri}
\end{equation}
(e.\,g., (\ref{p2:tri2})). A single application of the triangle relation~(\ref{p2:tri}) reduces $n_2+n_4+n_5$ by 1, and this always allows one to kill one of the lines 2, 4, 5. If the line 2 is external instead of internal, the lowering operator $\mathbf{2}^-$ becomes just the factor $(m_1^2-k_1^2)$ (where $k_1$ is now an external momentum), and there is no subset of $n$'s whose sum is reduced (e.\,g., (\ref{p2:tri1})).

\begin{figure}[ht]
\begin{center}
\begin{picture}(42,42)
\put(21,21){\makebox(0,0){\includegraphics{tri.eps}}}
\put(30,39){\makebox(0,0){$k_1$}}
\put(32,33.5){\makebox(0,0){2}}
\put(30,3.5){\makebox(0,0){$k_2$}}
\put(32,8){\makebox(0,0){4}}
\put(7,30){\makebox(0,0){$k_1+k_3$}}
\put(12,23.5){\makebox(0,0){1}}
\put(7,13){\makebox(0,0){$k_2+k_3$}}
\put(12,18){\makebox(0,0){3}}
\put(26.5,21){\makebox(0,0){$k_3$}}
\put(18.5,21){\makebox(0,0){5}}
\end{picture}
\end{center}
\caption{Triangle subdiagram}
\label{F:tri}
\end{figure}

There are 3 generic topologies of 3-loop massless propagator diagrams:
\begin{equation}
\raisebox{-11.05mm}{\includegraphics{q3t1.eps}}\,,\qquad
\raisebox{-11.05mm}{\includegraphics{q3t2.eps}}\,,\qquad
\raisebox{-11.05mm}{\includegraphics{q3t3.eps}}\,.
\label{p2:Q3t}
\end{equation}
Each has 8 denominators. There are 9 scalar products of 3 loop momenta $k_i$ and the external momentum $p$. Therefore, for each topology, all scalar products in the numerator can be expressed via the denominators and one selected scalar product. IBP recurrence relations for these diagrams have been investigated in~\cite{CT:81}.
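Returning for a moment to the two-loop diagram: its whole positive-sector reduction --- the trivial cases~(\ref{p2:n5}), (\ref{p2:n1}) combined with the triangle relation~(\ref{p2:tri2}) --- fits into a short recursive \texttt{sympy} sketch (a sketch under stated assumptions: non-negative integer indices only, illustrative names, and the standard one-loop $\Gamma$-function formula quoted before):
\begin{verbatim}
import sympy as sp

d = sp.symbols('d')

def G1(a, b):
    # standard one-loop massless self-energy integral (as before)
    return (sp.gamma(a + b - d/2) * sp.gamma(d/2 - a) * sp.gamma(d/2 - b)
            / (sp.gamma(a) * sp.gamma(b) * sp.gamma(d - a - b)))

def G(n1, n2, n3, n4, n5):
    # trivial cases (p2:n5), (p2:n1) and their symmetric images;
    # a vanishing integral comes out as 0 automatically, since
    # gamma(0) appears in a denominator
    if n5 == 0: return G1(n1, n3) * G1(n2, n4)
    if n1 == 0: return G1(n3, n5) * G1(n2, n4 + n3 + n5 - d/2)
    if n2 == 0: return G1(n4, n5) * G1(n1, n3 + n4 + n5 - d/2)
    if n3 == 0: return G1(n1, n5) * G1(n4, n2 + n1 + n5 - d/2)
    if n4 == 0: return G1(n2, n5) * G1(n3, n1 + n2 + n5 - d/2)
    # triangle relation (p2:tri2) solved for the unshifted integral;
    # each recursive call lowers n1 + n3 + n5 by 1, so we terminate
    return -(n2*(G(n1 - 1, n2 + 1, n3, n4, n5)
                 - G(n1, n2 + 1, n3, n4, n5 - 1))
             + n4*(G(n1, n2, n3 - 1, n4 + 1, n5)
                   - G(n1, n2, n3, n4 + 1, n5 - 1))) / (d - n2 - n4 - 2*n5)

print(sp.gammasimp(G(1, 1, 1, 1, 1)))   # a combination of Gamma functions
                                        # built from the 2 masters
\end{verbatim}
Indices in the numerator (negative $n_a$) would require treating the other sectors as well, which this sketch does not attempt.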
The IBP relations for these three topologies can be used to reduce all integrals~(\ref{p2:Q3t}), with arbitrary integer powers of denominators and arbitrary (non-negative) powers of the selected scalar product in the numerators, to linear combinations of 6 master integrals
\begin{equation}
\begin{split}
&\raisebox{-4.0mm}{\includegraphics{q3b1.eps}} = G_1^3\,,\qquad
\raisebox{-7.4mm}{\includegraphics{q3b2.eps}} = G_1 G_2\,,\\
&\raisebox{-11.4mm}{\includegraphics{q3b3.eps}} = G_3\,,\qquad
\raisebox{-11.4mm}{\includegraphics{q3b4.eps}} \sim \frac{G_1^2}{G_2} G_3\,,\\
&\raisebox{-11.4mm}{\includegraphics{q3b5.eps}} = G_1 G(1,1,1,1,2-\tfrac{d}{2})\,,\qquad
\raisebox{-11.4mm}{\includegraphics{q3b6.eps}}\,.
\end{split}
\label{p2:Q3b}
\end{equation}
This algorithm has been implemented in the \texttt{SCHOONSCHIP}~\cite{SCHOONSCHIP} package \texttt{Mincer}~\cite{Mincer:1} and later re-implemented~\cite{Mincer:2} in \texttt{FORM}~\cite{FORM}%
\footnote{Unfortunately, \texttt{Mincer} does not produce linear combinations of 6 master integrals~(\ref{p2:Q3b}); recursively 1-loop integrals are expressed via $\Gamma$ functions and expanded in $\varepsilon=2-d/2$, so that the contributions from the first 4 master integrals cannot be separated.}. It has also been implemented in the \texttt{REDUCE}~\cite{REDUCE,G:97} package \texttt{Slicer}~\cite{Slicer}. Only the last, non-planar, topology in~(\ref{p2:Q3t}) involves the last, non-planar, master integral in~(\ref{p2:Q3b}).

The first 4 master integrals are trivial: they are expressed via $G_n$~(\ref{p2:Gn}), and hence via $\Gamma$-functions. The 4th one differs from the 3rd one ($G_3$) by replacing the two-loop subdiagram: the second one in~(\ref{p2:masters}) ($G_2/(-k^2)^{3-d}$) by the first one ($G_1^2/(-k^2)^{4-d}$). Therefore, it can be obtained from $G_3$ by multiplying by
\begin{equation*}
\frac{G_1^2 G(1,4-d)}{G_2 G(1,3-d)} = \frac{2d-5}{d-3} \frac{G_1^2}{G_2}\,.
\end{equation*}
The 5th master integral is proportional to the two-loop diagram $G(1,1,1,1,n)$ with a non-integer index of the middle line $n=2-d/2$. The 6th one, non-planar, is truly three-loop and most difficult.

\section{HQET self-energy diagrams}
\label{S:HQET}

There are 2 generic topologies of 2-loop HQET self-energy diagrams:
\begin{equation}
\raisebox{0.2mm}{\includegraphics{h2t1.eps}}\,,\qquad
\raisebox{-7.3mm}{\includegraphics{h2t2.eps}}\,.
\label{p2:H2t}
\end{equation}
The denominators of the second one are linearly dependent, and there is one irreducible scalar product in the numerator. All these integrals, with any powers of denominators (and with any power of the numerator of the second diagram), can be reduced~\cite{BG:91} (see also~\cite{G:00}) to 2 master integrals
\begin{equation}
\raisebox{-3.825mm}{\includegraphics{h2b1.eps}} = I_1^2\,,\quad
\raisebox{-4.2mm}{\includegraphics{h2b2.eps}} = I_2\,,
\label{p2:Hb2}
\end{equation}
where
\begin{equation}
I_n = \raisebox{-8mm}{\begin{picture}(42,17)
\put(21,8.5){\makebox(0,0){\includegraphics{hs.eps}}}
\put(21,11){\makebox(0,0){{$\cdots$}}}
\end{picture}}
= \Gamma(2n+1-nd) \Gamma^n\left({\textstyle\frac{d}{2}}-1\right)
\label{p2:In}
\end{equation}
is the $n$-loop HQET sunset integral.
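As an aside, $\Gamma$-function ratios like the factor $(2d-5)/(d-3)$ relating the 4th massless master to $G_3$ in the previous section are one-line symbolic checks, with the same standard one-loop function as in the earlier sketches:
\begin{verbatim}
import sympy as sp

d = sp.symbols('d')

def G1(a, b):
    return (sp.gamma(a + b - d/2) * sp.gamma(d/2 - a) * sp.gamma(d/2 - b)
            / (sp.gamma(a) * sp.gamma(b) * sp.gamma(d - a - b)))

print(sp.simplify(sp.gammasimp(G1(1, 4 - d) / G1(1, 3 - d))))
# -> (2*d - 5)/(d - 3)
\end{verbatim}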
There are 10 generic topologies of 3-loop HQET self-energy diagrams: \begin{equation} \begin{split} &\raisebox{0.2mm}{\includegraphics{h3t1.eps}}\,,\qquad \raisebox{0.2mm}{\includegraphics{h3t2.eps}}\,,\\ &\raisebox{0.2mm}{\includegraphics{h3t3.eps}}\,,\qquad \raisebox{0.2mm}{\includegraphics{h3t4.eps}}\,,\\ &\raisebox{-3.4mm}{\includegraphics{h3t5.eps}}\,,\qquad \raisebox{-3.4mm}{\includegraphics{h3t6.eps}}\,,\qquad \raisebox{-7mm}{\includegraphics{h3t7.eps}}\,,\\ &\raisebox{-3.4mm}{\includegraphics{h3t8.eps}}\,,\qquad \raisebox{-5.2mm}{\includegraphics{h3t9.eps}}\,,\qquad \raisebox{0.2mm}{\includegraphics{h3t10.eps}}\,. \end{split} \label{p2:H3t} \end{equation} Diagrams in the first two rows have one scalar product which cannot be expressed via denominators; those in the third row have one linear relation among heavy denominators, and hence two independent scalar products in the numerator; those in the last row have two relations among heavy denominators, and hence three independent scalar products in the numerator. All these integrals, with any powers of denominators and irreducible numerators, can be reduced~\cite{G:00} to 8 master integrals \begin{equation} \begin{split} &\raisebox{-0.25mm}{\includegraphics{h3b1.eps}} = I_1^3\,,\qquad \raisebox{-0.25mm}{\includegraphics{h3b2.eps}} = I_1 I_2\,,\\ &\raisebox{-0.25mm}{\includegraphics{h3b3.eps}} = I_3\,,\qquad \raisebox{-0.25mm}{\includegraphics{h3b4.eps}} \sim \frac{I_1^2}{I_2} I_3\,,\\ &\raisebox{-0.25mm}{\includegraphics{h3b5.eps}} \sim \frac{G_1^2}{G_2} I_3\,,\qquad \raisebox{-0.25mm}{\includegraphics{h3b6.eps}} = G_1 I(1,1,1,1,2-\tfrac{d}{2})\,,\\ &\raisebox{-8.25mm}{\includegraphics{h3b7.eps}} = I_1 J(1,1,3-d,1,1)\,,\qquad \raisebox{-0.25mm}{\includegraphics{h3b8.eps}}\,. \end{split} \label{p2:H3b} \end{equation} using integration by parts. This reduction algorithm has been implemented as a \texttt{REDUCE} package \texttt{Grinder}~\cite{G:00}. The first 5 master integrals can be easily expressed via $\Gamma$ functions, exactly in $d$ dimensions. The next 2 ones reduce to two-loop ones with a single $d$-dependent index; the last one is truly three-loop. \section{Equivalence of IBP relations for integrals with the\\ same total number of loop and external momenta} \label{S:Equiv} IBP relations for integrals with the same $M=L+E$ but different $E$ can be made equivalent~\cite{BS:00} (then they differ only by boundary conditions --- sets of trivial sectors). In other words, the IBP relations for an $L$-loop Feynman integral~(\ref{Intro:I}) with $E$ external momenta can be reduced (after some re-definitions) to exactly the same form~(\ref{IBP:k}) as for the $M$-loop vacuum integral (in which we also integrate in $d^d p_1\cdots d^d p_E$). Consider some $L$-loop integral $I(n_1,\ldots,n_N)$ with $E$ external momenta at some kinematic point $s_{ij}=s^0_{ij}$ ($i,j>L$, $j\ge i$). We number all such $(i,j)$ pairs by integers $a\in[N+1,K]$ in some way, where \begin{equation} K = N + N_E = \frac{M(M+1)}{2} \label{Equiv:NE} \end{equation} is the number of all scalar products $s_{ij}$ of $M$ vectors $q_i$ ($j\ge i$), and introduce the quantities $D_a=-s_{ij}+s^0_{ij}=-s_{ij}+m_a^2$ ($a\in[N+1,K]$) for these $(i,j)$ pairs. Now all the quantities $D_a$ can be written in a uniform way~(\ref{Intro:Ds}) (with both sums up to $M$). All the scalar products $s_{ij}$ can be expressed via $D_a$ in a uniform way~(\ref{Intro:sD}) with the sum up to $K$. 
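For example, for the 1-loop self-energy with a single external momentum considered below ($L=E=1$, $M=2$, Fig.~\ref{F:m1}a) we have $N=2$, $N_E=1$, $K=3$, and the single ``external'' quantity is $D_3=-s_{22}+s^0_{22}=1-p^2$ at the on-shell point $s^0_{22}=m^2=1$.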
Let's expand this integral $I(n_1,\ldots,n_N)$~(\ref{Intro:I}) in a formal series in $s_{ij}-s^0_{ij}$ ($L<i\le j\le M$):
\begin{equation}
I(n_1,\ldots,n_N) = \sum_{n_{N+1}=1}^\infty \cdots \sum_{n_K=1}^\infty I(n_1,\ldots,n_N,n_{N+1},\ldots,n_K) D_{N+1}^{n_{N+1}-1} \cdots D_K^{n_K-1}\,.
\label{Equiv:exp}
\end{equation}
In fact, we are mainly interested in the value of our integral at this kinematic point ($n_{N+1}=\cdots=n_K=1$); the derivatives (given by higher values of these indices) are not our primary goal. We assume $I=0$ if any index $n_{N+1}$, \dots, $n_K$ is $\le0$, so that there is only one sector with respect to each of these indices.

Applying $N$ operators $O_{ij}$ with $i\in[1,L]$, $j\ge i$ to the integrand $f$, we get the usual IBP relations~(\ref{IBP:k}). In other words, the ``evolution'' of $I(n_1,\ldots,n_N,n_{N+1},\ldots,n_K)$ with respect to the ``internal'' indices $n_1$, \dots, $n_N$ is governed by $N$ ``vacuum'' IBP relations~(\ref{IBP:k}). We need $N_E$ additional relations which govern the ``evolution'' with respect to the ``external'' indices $n_{N+1}$, \dots, $n_K$. To derive them, we apply $N_E$ operators $O_{ij}=q_j\cdot \partial_i$ ($L<i\le j\le M$) to the definition~(\ref{Equiv:exp}). On the left-hand side, we get
\begin{equation*}
- \sum_{a=1}^N \sum_{b=1}^N \sum_{m=1}^L A_a^{mi} A^b_{mj} \hat{\mathbf{a}}^+ \left(\mathbf{b}^- - m_b^2\right) I(n_1,\ldots,n_N)\,;
\end{equation*}
here each integral can be expanded like~(\ref{Equiv:exp}). On the right-hand side, $O_{ij}$ acts on the product of $D_a$ with $a\in[N+1,K]$. To get the coefficient of $D_{N+1}^{n_{N+1}-1} \cdots D_K^{n_K-1}$, we shift the index $n_a\to n_a+1$ in terms with $1/D_a$, and $n_b\to n_b-1$ in terms with $D_b$, and obtain
\begin{equation*}
\sum_{a=N+1}^K \sum_{b=N+1}^K \sum_{m=L+1}^M A_a^{mi} A^b_{mj} (1+\delta_{mi}) \left(\mathbf{b}^- - m_b^2\right) \hat{\mathbf{a}}^+ I(n_1,\ldots,n_N,n_{N+1},\ldots,n_K)
\end{equation*}
(note the opposite order of the operators). In order to combine this with the left-hand side, we commute the operators~(\ref{IBP:comm3}) and use
\begin{equation*}
\sum_a A_a^{ij} A^a_{i'j'} = \delta^i_{i'} \delta^j_{j'}\qquad (j\ge i,j'\ge i')\,.
\end{equation*}
Finally, the relations for the ``external'' indices are
\begin{equation}
\begin{split}
&\left[ (E+1) \delta_{ij} - \sum_{a=1}^K \sum_{b=1}^K \sum_{m=1}^M A_a^{mi} A^b_{mj} (1+\delta_{mi}) \hat{\mathbf{a}}^+ \left(\mathbf{b}^- - m_b^2\right) \right]\\
&\qquad{}\times I(n_1,\ldots,n_N,n_{N+1},\ldots,n_K) = 0\,.
\end{split}
\label{Equiv:ext}
\end{equation}
They are similar to the ``internal'' ones~(\ref{IBP:k}), but contain $E+1$ instead of $d$.

\begin{sloppypar}
We can make these relations exactly ``vacuum'' ones using the substitution
\begin{equation}
\begin{split}
&I(n_1,\ldots,n_N) = D^{(E+1-d)/2} \tilde{I}(n_1,\ldots,n_N)\,,\\
&D = \frac{\det s_{ij}}{\det s^0_{ij}}\,,\qquad i,j\in[L+1,M]\,.
\end{split}
\label{Equiv:D}
\end{equation}
The new functions $\tilde{I}(n_1,\ldots,n_N)$ are expanded in $D_a$ with $a>N$ in the same way as in~(\ref{Equiv:exp}), with the coefficients $\tilde{I}(n_1,\ldots,n_N,n_{N+1},\ldots,n_K)$. At $s_{ij}=s^0_{ij}$, $\tilde{I}$ coincides with $I$: $\tilde{I}(n_1,\ldots,n_N,1,\ldots,1)=I(n_1,\ldots,n_N,1,\ldots,1)$; higher $\tilde{I}(n_1,\ldots,n_N,n_{N+1},\ldots,n_K)$ are linear combinations of $I(n_1,\ldots,n_N,n_{N+1},\ldots,n_K)$.
The operator $O_{ij}$ can be written as \begin{equation*} O_{ij} = \sum_{m=L+1}^M (1+\delta_{mi}) s_{mj} \partial_{mi}\,,\qquad \partial_{mi} = \frac{\partial}{\partial s_{mi}}\,. \end{equation*} The derivatives of $D$ are \begin{equation*} \partial_{ij} D = (2-\delta_{ij}) D \cdot (s^{-1})_{ji}\qquad (i,j\in[L+1,M]) \end{equation*} (the extra factor 2 at $i\ne j$ comes from the fact that $s_{ij}$ appears in the determinant $D$ twice, at the positions $ij$ and $ji$). Therefore, \begin{equation*} O_{ij} D^{(E+1-d)/2} = (E+1-d) D^{(E+1-d)/2} \delta_{ij}\,, \end{equation*} and an additional term $(d-E-1) \delta_{ij}$ appears in~(\ref{Equiv:ext}). All relations for $\tilde{I}(n_1,\ldots,n_N,n_{N+1},\ldots,n_K)$ have the same form as for the $M=L+E$ loop vacuum integral $I_0(n_1,\ldots,n_K)$. \end{sloppypar} The boundary conditions may differ. In the case of $\tilde{I}(n_1,\ldots,n_N,n_{N+1},\ldots,n_K)$, all sectors with $n_a\le0$ ($a\in[N+1,K]$) are trivial; this does not have to be so for the corresponding vacuum diagram. Now we shall consider a few examples. Let's consider the 1-loop self-energy diagram (Fig.~\ref{F:m1}a; we set $m=1$): \begin{equation} M(n_1,n_2) = \frac{1}{i\pi^{d/2}} \int \frac{d^d k}{D_1^{n_1} D_2^{n_2}}\,,\qquad D_1 = 1 - (k+p)^2\,,\qquad D_2 = -k^2\,. \label{Equiv:m1} \end{equation} This integral vanishes at $n_1\le0$. Suppose we want to calculate it on the mass shell $p^2=1$. We introduce $D_3=1-p^2$, and re-express the integral as \begin{equation} M(n_1,n_2) = (p^2)^{(2-d)/2} \tilde{M}(n_1,n_2)\,, \label{Equiv:m1D} \end{equation} according to~(\ref{Equiv:D}). Then we expand it in $D_3$: \begin{equation} \tilde{M}(n_1,n_2) = \sum_{n_3=1}^\infty \tilde{M}(n_1,n_2,n_3) D_3^{n_3-1} \label{Equiv:m1exp} \end{equation} ($\tilde{M}(n_1,n_2,n_3)$ vanishes at $n_3\le0$). There is 1 master integral: \begin{equation} \tilde{M}(n_1,n_2,n_3) = c(n_1,n_2,n_3) \tilde{M}(1,0,1)\,. \label{Equiv:Mmaster} \end{equation} \begin{figure}[ht] \begin{center} \begin{picture}(74,36) \put(19,21.25){\makebox(0,0){\includegraphics{M1.eps}}} \put(19,10){\makebox(0,0)[t]{$k+p$}} \put(19,32.5){\makebox(0,0)[b]{$k$}} \put(19,12){\makebox(0,0)[b]{1}} \put(19,28){\makebox(0,0)[t]{2}} \put(4,19){\makebox(0,0)[t]{$p$}} \put(34,19){\makebox(0,0)[t]{$p$}} \put(19,0){\makebox(0,0)[b]{a}} \put(61,20){\makebox(0,0){\includegraphics{V2.eps}}} \put(61,7){\makebox(0,0)[t]{$k_1+k_2$}} \put(61,32){\makebox(0,0)[b]{$k_2$}} \put(61,23.5){\makebox(0,0)[b]{$k_1$}} \put(61,9){\makebox(0,0)[b]{1}} \put(61,31){\makebox(0,0)[t]{3}} \put(61,19){\makebox(0,0)[t]{2}} \put(61,0){\makebox(0,0)[b]{b}} \end{picture} \end{center} \caption{Massive 1-loop self-energy and 2-loop vacuum diagram} \label{F:m1} \end{figure} The $\partial\cdot k$ and $\partial\cdot(k+p)$ IBP relations for $M(n_1,n_2)$ are \begin{equation*} \begin{split} &\left[ d - \mathbf{n}_1 - 2 \mathbf{n}_2 + \mathbf{n}_1 \mathbf{1}^+ \left( D_3 - \mathbf{2}^- \right) \right] M(n_1,n_2) = 0\,,\\ &\left[ d - 2 \mathbf{n}_1 - \mathbf{n}_2 + 2 \mathbf{n}_1 \mathbf{1}^+ + \mathbf{n}_2 \mathbf{2}^+ \left( D_3 - \mathbf{1}^- \right) \right] M(n_1,n_2) = 0 \end{split} \end{equation*} (here $D_3$ is an external kinematic parameter). 
For the expansion coefficients this means \begin{equation} \begin{split} &\left[ d - \mathbf{n}_1 - 2 \mathbf{n}_2 + \mathbf{n}_1 \mathbf{1}^+ \left( \mathbf{3}^- - \mathbf{2}^- \right) \right] \tilde{M}(n_1,n_2,n_3) = 0\,,\\ &\left[ d - 2 \mathbf{n}_1 - \mathbf{n}_2 + 2 \mathbf{n}_1 \mathbf{1}^+ + \mathbf{n}_2 \mathbf{2}^+ \left( \mathbf{3}^- - \mathbf{1}^- \right) \right] \tilde{M}(n_1,n_2,n_3) = 0 \end{split} \label{Equiv:IBPMk} \end{equation} (we shift the summation index $n_3\to n_3-1$ in the terms with $D_3$). In order to find the ``evolution'' in $n_3$, we apply $p\cdot\partial/\partial p$ to~(\ref{Equiv:m1exp}): \begin{align*} &p \cdot \frac{\partial}{\partial p} M(n_1,n_2) = \left[ - \mathbf{n}_1 + \mathbf{n}_1 \mathbf{1}^+ \left( 2 + \mathbf{2}^- - D_3 \right) \right] M(n_1,n_2)\,,\\ &p \cdot \frac{\partial}{\partial p} (p^2)^{(d-2)/2} = (d-2) (p^2)^{(d-2)/2}\,,\\ &\sum_{n_3=1}^\infty \tilde{M}(n_1,n_2,n_3) p \cdot \frac{\partial}{\partial p} D_3^{n_3-1} = 2 \sum_{n_3=1}^\infty \tilde{M}(n_1,n_2,n_3) (n_3-1) \left( D_3^{n_3-1} - D_3^{n_3-2} \right)\\ &\qquad{} = 2 \sum_{n_3=1}^\infty \left[ \left( \mathbf{n}_3 - 1 - \mathbf{n}_3 \mathbf{3}^+ \right) \tilde{M}(n_1,n_2,n_3) \right] D_3^{n_3-1}\,. \end{align*} Collecting these pieces together, we get \begin{equation} \left[ d - \mathbf{n}_1 - 2 \mathbf{n}_3 + \mathbf{n}_1 \mathbf{1}^+ \left( 2 + \mathbf{2}^- - \mathbf{3}^- \right) + 2 \mathbf{n}_3 \mathbf{3}^+ \right] \tilde{M}(n_1,n_2,n_3) = 0\,. \label{Equiv:IBPMp} \end{equation} Now let's consider the 2-loop vacuum diagram (Fig.~\ref{F:m1}b, $m=1$): \begin{equation} \begin{split} &V(n_1,n_2,n_3) = \frac{1}{(i\pi^{d/2})^2} \int \frac{d^d k_1 d^d k_2}{D_1^{n_1} D_2^{n_2} D_3^{n_3}}\,,\\ &D_1 = 1 - (k_1+k_2)^2\,,\qquad D_2 = - k_1^2\,,\qquad D_3 = 1 - k_2^2\,. \end{split} \label{Equiv:v2} \end{equation} This integral vanishes at $n_1\le0$ or $n_3\le0$. The $\partial_1\cdot k_1$, $\partial_1\cdot(k_1-k_2)$, and $\partial_2\cdot k_2$ IBP relations for $V(n_1,n_2,n_3)$ have exactly the same form as~(\ref{Equiv:IBPMk}), (\ref{Equiv:IBPMp}), as expected. There is 1 master integral: \begin{equation} V(n_1,n_2,n_3) = c_0(n_1,n_2,n_3) V(1,0,1)\,. \label{Equiv:Vmaster} \end{equation} In this problem, the boundary conditions for $\tilde{M}(n_1,n_2,n_3)$ and $V(n_1,n_2,n_3)$ coincide, and hence $c(n_1,n_2,n_3)=c_0(n_1,n_2,n_3)$. Therefore, the 1-loop on-shell self-energy diagram $M(n_1,n_2)=\tilde{M}(n_1,n_2,1)$ is related to the 2-loop vacuum diagram $V(n_1,n_2,1)$ (in which the index of the ``former external'' line is $n_3=1$): \begin{equation} \frac{M(n_1,n_2)}{M(1,0)} = \frac{V(n_1,n_2,1)}{V(1,0,1)}\,. \label{Equiv:MV} \end{equation} Explicit expressions for $M(n_1,n_2)$ and $V(n_1,n_2,n_3)$ can be found, e.\,g., in the textbook~\cite{G:07}. It is easy to check the relation~(\ref{Equiv:MV}) using \begin{equation*} \Gamma(z) \Gamma(1-z) = \frac{\pi}{\sin \pi z}\,. \end{equation*} Reduction of 2-loop on-shell self-energy diagrams to master integrals has been considered in~\cite{B:92}: \begin{align} \raisebox{-6.2mm}{\includegraphics{m2.eps}} ={}& c_1(n_1,\ldots,n_5) \raisebox{-9.2mm}{\includegraphics{m2b1.eps}} + c_2(n_1,\ldots,n_5) \raisebox{-6.2mm}{\includegraphics{m2b2.eps}}\,, \label{Equiv:M2}\\ \raisebox{-6.2mm}{\includegraphics{n2.eps}} ={}& c_3(n_1,\ldots,n_5) \raisebox{-9.2mm}{\includegraphics{m2b1.eps}} + c_4(n_1,\ldots,n_5) \raisebox{-6.2mm}{\includegraphics{m2b2.eps}} \nonumber\\ &{} + c_5(n_1,\ldots,n_5) \raisebox{-6.2mm}{\includegraphics{m2b3.eps}}\,. 
\label{Equiv:N2} \end{align} These IBP reduction relations are equivalent to those for 3-loop vacuum diagrams (also considered in~\cite{B:92}): \begin{align} \raisebox{-7.2mm}{\includegraphics{v3t1.eps}} ={}& c'_1(n_1,\ldots,n_6) \raisebox{-9.7mm}{\includegraphics{v3b1.eps}} + c'_2(n_1,\ldots,n_6) \raisebox{-7.2mm}{\includegraphics{v3b2.eps}}\,, \label{Equiv:VM}\\ \raisebox{-7.2mm}{\includegraphics{v3t2.eps}} ={}& c'_3(n_1,\ldots,n_6) \raisebox{-9.7mm}{\includegraphics{v3b1.eps}} + c'_4(n_1,\ldots,n_6) \raisebox{-7.2mm}{\includegraphics{v3b2.eps}} \nonumber\\ &{} + c'_5(n_1,\ldots,n_6) \raisebox{-7.2mm}{\includegraphics{v3b3.eps}}\,. \label{Equiv:VN} \end{align} However, boundary conditions differ: on-shell integrals vanish at $n_6\le0$, but vacuum integrals don't. Therefore, it is not possible to obtain $c_k(n_1,\ldots,n_5)$ from $c'_k(n_1,\ldots,n_5,1)$. Reduction for both classes of diagrams is implemented in a \texttt{REDUCE} package \texttt{Recursor}~\cite{B:92}. For 2-loop on-shell diagrams, $n_6=1$ is specified, and a special control flag is set which is responsible for nullifying all integrals with $n_6\le0$ during reduction% \footnote{The redefinition~(\ref{Equiv:D}) is not used in this package; therefore, some terms in the vacuum IBP relations are absent in the on-shell ones. Their inclusion is controlled by the same flag.}. \begin{figure}[ht] \begin{center} \begin{picture}(42.8,48.8) \put(21.4,24.4){\makebox(0,0){\includegraphics{t1.eps}}} \put(36,24.4){\makebox(0,0)[l]{$k$}} \put(40,7.2){\makebox(0,0)[l]{$p_1$}} \put(40,41.6){\makebox(0,0)[l]{$p_2$}} \put(24,31){\makebox(0,0)[br]{$k+p_2$}} \put(24,17.8){\makebox(0,0)[tr]{$k+p_1$}} \end{picture} \end{center} \caption{1-loop vertex diagram (all lines are massless)} \label{F:t1} \end{figure} Our next example is the 1-loop massless vertex diagram (Fig.~\ref{F:t1}) with 2 on-shell legs: $p_1^2=p_2^2=0$. Its IBP relations are equivalent to those for the 2-loop massless self-energy diagram (Fig.~\ref{F:Q2}), because both are equivalent to the same 3-loop vacuum diagram with 1 massive line. These 2-loop self-energy diagrams are expressed via 2 master integrals (Sect.~\ref{S:p2}): \begin{equation} \begin{split} \raisebox{-7.6mm}{\begin{picture}(42,17) \put(21,8.5){\makebox(0,0){\includegraphics{q2t.eps}}} \put(11,14.5){\makebox(0,0){$n_2$}} \put(11,2.5){\makebox(0,0){$n_1$}} \put(31,14.5){\makebox(0,0){$n_5$}} \put(31,2.5){\makebox(0,0){$n_4$}} \put(24,8.5){\makebox(0,0){$n_3$}} \end{picture}} ={}& c_1(n_1,\ldots,n_5) \raisebox{-7.6mm}{\includegraphics{q2b1.eps}}\\ &{}+ c_2(n_1,\ldots,n_5) \raisebox{-7.6mm}{\includegraphics{q2b2.eps}}\,. \end{split} \label{Equiv:q2} \end{equation} The boundary conditions for the 1-loop vertex are different: integrals with $n_{4,5}\le0$ vanish. Such integrals produce the second master integral in~(\ref{Equiv:q2}). This leads to a very simple prescription: replace the second master integral in~(\ref{Equiv:q2}) by $0$ and the first one by the vertex master integral, \begin{equation} \raisebox{-11.3mm}{\begin{picture}(23,24.4) \put(10.7,12.2){\makebox(0,0){\includegraphics{t1t.eps}}} \put(20,12.2){\makebox(0,0){$n_3$}} \put(11,18.2){\makebox(0,0){$n_2$}} \put(11,6.2){\makebox(0,0){$n_1$}} \end{picture}} = c_1(n_1,n_2,n_3,1,1) \raisebox{-7.6mm}{\includegraphics{t1b.eps}}\,. \label{Equiv:t1} \end{equation} Similarly, reduction for 2-loop massless vertex diagrams with $p_1^2=p_2^2=0$ is equivalent to 3-loop massless self-energy diagrams~\cite{BS:00}. 
This method is applicable to diagrams with any number of external legs.

\section{Gr\"obner-bases methods}
\label{S:SBases}

For complicated problems, constructing reduction algorithms by hand becomes impractical, and it is desirable to automate this process. Some such systematic approaches are based on Gr\"obner bases. A simple introduction to Gr\"obner bases for systems of polynomial equations with commuting variables is given in Appendix~\ref{S:Groebner}. In the IBP reduction problem, polynomials of non-commuting operators are used.

An approach using the shift operators~(\ref{IBP:Oper}) was proposed in~\cite{SS:06,S:06} and implemented in a \texttt{\textit{Mathematica}} program \texttt{FIRE}~\cite{FIRE}. Suppose we are considering the sector $n_1\le0$, $n_2>0$ in Fig.~\ref{F:Normal}. Any integral in the sector can be expressed via the integral at its corner using the shift operators $\mathbf{1}^-$, $\mathbf{2}^+$:
\begin{equation}
I(n_1,n_2) = \left(\mathbf{1}^-\right)^{-n_1} \left(\mathbf{2}^+\right)^{n_2-1} I(0,1)\,.
\label{SBases:corner}
\end{equation}

\begin{figure}[ht]
\begin{center}
\begin{picture}(76,76)
\put(37,37){\makebox(0,0){\includegraphics{sect4.eps}}}
\put(73,34){\makebox(0,0){$n_1$}}
\put(34,73){\makebox(0,0){$n_2$}}
\end{picture}
\end{center}
\caption{Normal form of IBP relations in a sector}
\label{F:Normal}
\end{figure}

All IBP relations in this sector can be written in the normal form containing only $\mathbf{1}^-$ and $\mathbf{2}^+$: we act on the original form of these relations by $\mathbf{1}^-$ and $\mathbf{2}^+$ sufficiently many times to get rid of $\mathbf{1}^+$, $\mathbf{2}^-$ (but not too many times, see Fig.~\ref{F:Normal}). They are
\begin{equation}
\sum_{j_1,j_2\ge0} C_{j_1 j_2}(\mathbf{n}_a) \left(\mathbf{1}^-\right)^{j_1} \left(\mathbf{2}^+\right)^{j_2} \sim 0
\label{SBases:Normal}
\end{equation}
(this means that these operator polynomials give 0 when applied to $I$; note that $\mathbf{n}_a$ don't commute with the shift operators). We need to fix some total order of the monomials constructed from $\mathbf{1}^-$ and $\mathbf{2}^+$; this is equivalent to fixing a shift-invariant total order for the integrals in our sector (Sect.~\ref{S:IBP}). Now we can construct the Gr\"obner basis for the system~(\ref{SBases:Normal}) (more exactly, the $S$-basis~\cite{SS:06,S:06}), reduce the monomial $\left(\mathbf{1}^-\right)^{-n_1} \left(\mathbf{2}^+\right)^{n_2-1}$~(\ref{SBases:corner}) with respect to it, and apply it to $I(0,1)$. Integrals from lower sectors are considered trivial and already known. Irreducible monomials give master integrals in this sector.

The idea to use Gr\"obner bases for IBP reduction was originally proposed in~\cite{T:98} in a somewhat different setting. Let's assume that each line has a separate mass $m_a$, and consider integrals without numerators (numerators can be eliminated by shifting $d$~\cite{T:96}). Differentiation in $m_a^2$ is equivalent to the raising operator~(\ref{IBP:hat}):
\begin{equation}
\frac{\partial}{\partial m_a^2} \Rightarrow - \hat{\mathbf{a}}^+\,.
\label{SBases:dm}
\end{equation}
Any integral in the positive sector can be expressed via the integral at its corner:
\begin{equation}
I(n_1,\ldots,n_N) = \left(-\frac{\partial}{\partial m_1^2}\right)^{n_1-1} \cdots \left(-\frac{\partial}{\partial m_N^2}\right)^{n_N-1} I(1,\ldots,1)\,.
\label{SBases:cornerd}
\end{equation}
The IBP relations can be written as
\begin{equation}
\sum C_{j_1\ldots j_N}(m_1^2,\ldots,m_N^2) \left(\frac{\partial}{\partial m_1^2}\right)^{j_1} \cdots \left(\frac{\partial}{\partial m_N^2}\right)^{j_N} \sim 0\,.
\label{SBases:pde}
\end{equation}
Defining a total order for monomials constructed from $\partial/\partial m_a^2$, we can find the Gr\"obner basis of the system~(\ref{SBases:pde}). Then we can reduce the monomial in~(\ref{SBases:cornerd}) with respect to this basis. Irreducible monomials give the master integrals. This method has been implemented in \texttt{Maple} and successfully applied~\cite{T:04} to the problem of reduction of 2-loop self-energy diagrams with all 5 masses different and arbitrary $p^2$~\cite{T:97}. If there are many zero (or equal) masses in the problem, this approach requires one to solve a more difficult problem with all masses different first, and this may lead to very lengthy intermediate expressions.

\section{Baikov's method}
\label{S:Baikov}

The Feynman integral~(\ref{Intro:I}) can be written as an integral in scalar products $s_{ij}$%
\footnote{Here we consider the Euclidean case, in order not to have complications with signs and $\pm i0$.}. The integration measure is
\begin{equation}
d^d k_1 d^d k_2 \cdots d^d k_L = d^{M-1} k_{1||} d^{d-M+1} k_{1\bot} d^{M-2} k_{2||} d^{d-M+2} k_{2\bot} \cdots d^{M-L} k_{L||} d^{d-M+L} k_{L\bot}\,,
\label{Baikov:dk}
\end{equation}
where $k_{1||}$ lies in the subspace spanned by $k_2$, \dots, $k_L$, $p_1$, \dots, $p_E$; $k_{2||}$ lies in the subspace spanned by $k_3$, \dots, $k_L$, $p_1$, \dots, $p_E$; and so on. The volume elements $d^{M-i} k_{i||}$ are
\begin{equation}
\begin{split}
&d^{M-1} k_{1||} = \frac{d s_{12} d s_{13} \cdots d s_{1M}}{G^{1/2}(k_2,\ldots,k_L,p_1,\ldots,p_E)}\,,\\
&d^{M-2} k_{2||} = \frac{d s_{23} d s_{24} \cdots d s_{2M}}{G^{1/2}(k_3,\ldots,k_L,p_1,\ldots,p_E)}\,,\\
&\cdots\\
&d^{M-L} k_{L||} = \frac{d s_{L,L+1} d s_{L,L+2} \cdots d s_{LM}}{G^{1/2}(p_1,\ldots,p_E)}\,,
\end{split}
\label{Baikov:par}
\end{equation}
where
\begin{equation}
G(q_1,\ldots,q_n) = \det q_i\cdot q_j
\label{Baikov:Gram}
\end{equation}
is the Gram determinant ($G^{1/2}(q_1,\ldots,q_n)$ is the volume of the parallelepiped formed by $q_1$, \dots, $q_n$). We can integrate over the angles in $d^n k_{i\bot}$:
\begin{equation*}
d^n k_{i\bot} = \frac{1}{2} \Omega_n k_{i\bot}^{n-2} d k^2_{i\bot}\,,
\end{equation*}
where
\begin{equation}
\Omega_n = \frac{2 \pi^{n/2}}{\Gamma(n/2)}
\label{Baikov:Omega}
\end{equation}
is the $n$-dimensional full solid angle. Then we replace $d k^2_{i\bot} = d s_{ii}$; $k_{i\bot}$ is the height of the parallelepiped formed by the base vectors $k_{i+1}$, \dots, $k_L$, $p_1$, \dots, $p_E$ and the extra vector $k_i$, and this height is the volume of the whole parallelepiped divided by the volume of its base. Therefore,
\begin{equation}
\begin{split}
&d^{d-M+1} k_{1\bot} = \frac{1}{2} \Omega_{d-M+1} \left(\frac{G(k_1,\ldots,k_L,p_1,\ldots,p_E)}{G(k_2,\ldots,k_L,p_1,\ldots,p_E)}\right)^{(d-M-1)/2} d s_{11}\,,\\
&d^{d-M+2} k_{2\bot} = \frac{1}{2} \Omega_{d-M+2} \left(\frac{G(k_2,\ldots,k_L,p_1,\ldots,p_E)}{G(k_3,\ldots,k_L,p_1,\ldots,p_E)}\right)^{(d-M)/2} d s_{22}\,,\\
&\cdots\\
&d^{d-M+L} k_{L\bot} = \frac{1}{2} \Omega_{d-M+L} \left(\frac{G(k_L,p_1,\ldots,p_E)}{G(p_1,\ldots,p_E)}\right)^{(d-M+L-2)/2} d s_{LL}\,.
\end{split}
\label{Baikov:perp}
\end{equation}
All the Gram determinants except the first and the last ones cancel in the measure~(\ref{Baikov:dk}), and~\cite{L:10}
\begin{equation}
\begin{split}
&I(n_1,\ldots,n_N) = \frac{1}{\pi^{Ld/2}} \int d^d k_1 \cdots d^d k_L f ={}\\
&\frac{\pi^{-L(L-1)/4-LE/2}}{\prod_{i=1}^L \Gamma\left(\frac{d-M+i}{2}\right)} G(p_1,\ldots,p_E)^{(-d+E+1)/2}\\
&{}\times \int \prod_{i=1}^L \prod_{j=i}^M d s_{ij} G(k_1,\ldots,k_L,p_1,\ldots,p_E)^{(d-M-1)/2} f\,.
\end{split}
\label{Baikov:Lee}
\end{equation}
The integration region has a complicated shape; the Gram determinant vanishes on its boundaries. This formula can be used as a definition of the $d$-dimensional integral. Usually, another definition based on the $\alpha$ (or Feynman) parametrization is used. But then many simple properties, like the possibility to cancel identical brackets in the numerator and the denominator of the integrand $f$, become theorems needing non-trivial proofs. Here the possibility to cancel brackets (depending on $s_{ij}$) is obvious.

The integration variables $x_a=D_a$ ($a\in[1,N]$)~\cite{B:97} can be used instead of $s_{ij}$:
\begin{equation}
I(n_1,\ldots,n_N) = C \int \frac{d x_1 \cdots d x_N}{x_1^{n_1} \cdots x_N^{n_N}} P(x_1-m_1^2,\ldots,x_N-m_N^2)^{(d-M-1)/2}\,,
\label{Baikov:Baikov}
\end{equation}
where the Baikov polynomial is
\begin{equation}
P(x_1,\ldots,x_N) = \det \sum_{a=1}^N A^a_{ij} x_a\,,
\label{Baikov:P}
\end{equation}
and the normalization constant is
\begin{equation*}
C = \frac{\pi^{-L(L-1)/4-LE/2}}{\prod_{i=1}^L \Gamma\left(\frac{d-M+i}{2}\right)} G(p_1,\ldots,p_E)^{(-d+E+1)/2} \det A^a_{ij}
\end{equation*}
(in the last determinant, the pair $(i,j)$ with $j\ge i$ is considered as a single index).

Acting by the operator $\mathbf{a}^-$ on~(\ref{Baikov:Baikov}) multiplies the integrand by $x_a$; acting by $\hat{\mathbf{a}}^+$ replaces $1/x_a^{n_a}$ by $n_a/x_a^{n_a+1}=-\partial_a(1/x_a^{n_a})$; we then integrate by parts, taking into account the fact that the polynomial $P$ vanishes at the boundaries:
\begin{equation}
\begin{split}
&\mathbf{a}^- I = C \int \frac{d x_1 \cdots d x_N}{x_1^{n_1} \cdots x_N^{n_N}} x_a P^{(d-M-1)/2}\,,\\
&\hat{\mathbf{a}}^+ I = C \int \frac{d x_1 \cdots d x_N}{x_1^{n_1} \cdots x_N^{n_N}} \partial_a P^{(d-M-1)/2}\,.
\end{split}
\label{Baikov:Oper}
\end{equation}
Note that $[\partial_a,x_b]=\delta_{ab}$, in accordance with~(\ref{IBP:comm3}). The IBP relations $\mathbf{O}_{ij}(\hat{\mathbf{a}}^+,\mathbf{a}^-) I = 0$ become~\cite{B:96,B:97}
\begin{equation}
\mathbf{O}_{ij}(\partial_a,x_a) P^{(d-M-1)/2}(x_a-m_a^2) = 0\,.
\label{Baikov:IBP}
\end{equation}
Let's check that $P(x_a)$~(\ref{Baikov:P}) indeed satisfies this requirement. If $j\le L$ ($q_j=k_j$),
\begin{equation*}
\begin{split}
\mathbf{O}_{ij}(\partial_a,x_a) &= d \delta_{ij} - \sum_{a=1}^N \sum_{b=1}^N \sum_{m=1}^M A_a^{mi} A^b_{mj} (1+\delta_{mi}) \partial_a (x_b-m_b^2)\\
&= (d-M-1) \delta_{ij} - \sum_{a=1}^N \sum_{b=1}^N \sum_{m=1}^M A_a^{mi} A^b_{mj} (1+\delta_{mi}) (x_b-m_b^2) \partial_a
\end{split}
\end{equation*}
(see~(\ref{IBP:k}), note the order of operators).
We substitute
\begin{equation*}
\sum_{a=1}^N A_a^{mi} \partial_a = \partial_{mi} = \frac{\partial}{\partial s_{mi}}\,,\qquad
\sum_{b=1}^N A^b_{mj} (x_b-m_b^2) = s_{mj}\,,
\end{equation*}
and get
\begin{equation*}
\mathbf{O}_{ij} = (d-M-1) \delta_{ij} - \sum_{m=1}^M (1+\delta_{mi}) s_{mj} \partial_{mi}
\end{equation*}
(this form can be directly obtained from~(\ref{IBP:Oij}) if one takes into account that the order of $s_{mj}$ and $\partial_{mi}$ in~(\ref{IBP:Oij}) is interchanged due to integration by parts with respect to $x_a$ in~(\ref{Baikov:Oper})). The derivatives of $P=\det s_{ij}$ are proportional to elements of the inverse matrix $s^{-1}$:
\begin{equation*}
\partial_{ij} P = (2-\delta_{ij}) P\cdot (s^{-1})_{ji}\,.
\end{equation*}
And now we see that~(\ref{Baikov:IBP}) is satisfied. The proof at $j>L$ is similar; the only difference is that $s_{ij}$ with $i,j>L$ (external kinematic quantities) are not expressed via $x_a$ but kept intact.

The Feynman integral~(\ref{Intro:I}) can be expressed as
\begin{equation}
I(\vec{n}) = \sum_k c_k(\vec{n}) I(\vec{n}_k)\,,\qquad c_k(\vec{n}_l)=\delta_{kl}\,,
\label{Baikov:masters}
\end{equation}
where $\vec{n}_k$ are the indices of the master integrals. The coefficients $c_k(\vec{n})$ obey the same IBP relations
\begin{equation}
\mathbf{O}_{ij}(\hat{\mathbf{a}}^+,\mathbf{a}^-) c_k(\vec{n}) = 0
\label{Baikov:IBPc}
\end{equation}
as the integral $I(\vec{n})$, but with different boundary conditions.

Let's consider a sector $n_{1,\ldots,m}>0$, $n_{m+1,\ldots,N}\le0$ (any sector has this form after a suitable re-numbering of $n$'s). Suppose there is 1 master integral in this sector: its corner $n_{1,\ldots,m}=1$, $n_{m+1,\ldots,N}=0$. Then only integrals from this sector and higher ones can contain this master integral $I(\vec{n}_k)$. In other words, the coefficient $c_k(\vec{n})$ vanishes if any index $n_{1,\ldots,m}$ is $\le0$. Let's consider~\cite{B:96,B:97}
\begin{equation}
\oint \frac{d x_1}{x_1^{n_1}} \cdots \oint \frac{d x_m}{x_m^{n_m}} \int \frac{d x_{m+1}}{x_{m+1}^{n_{m+1}}} \cdots \int \frac{d x_N}{x_N^{n_N}} P^{(d-M-1)/2}\,,
\label{Baikov:sol}
\end{equation}
where the contours in the first $m$ integrals are small circles around the origin. This integral satisfies the IBP relations due to~(\ref{Baikov:IBP}) (because boundary terms from integration by parts in $x_a$ vanish), and vanishes when any index $n_{1,\ldots,m}$ is $\le0$. It is natural to assume that it is a linear combination of $c_k(\vec{n})$ and $c_{k'}(\vec{n})$ for master integrals in lower sectors (they also vanish when any of the indices $n_{1,\ldots,m}$ is $\le0$). In other words, $c_k(\vec{n})$ is a linear combination of~(\ref{Baikov:sol}) and similar integrals where some more $\int$ are replaced by $\oint$. Coefficients in this linear combination are fixed by the boundary conditions~(\ref{Baikov:masters}).

As the simplest example, let's consider the 1-loop massive vacuum diagram (Fig.~\ref{F:v1}) (with $m=1$). In Euclidean space, $D=k^2+1$. The Baikov polynomial is $P(x)=x$, and
\begin{equation}
V(n) = C \int_1^\infty \frac{d x}{x^n} (x-1)^{(d-2)/2}
\label{Baikov:v1}
\end{equation}
(it is easy to check that this formula reproduces the standard result for $V(n)$). There is 1 master integral $V(1)$: $V(n)=c(n)V(1)$.
The coefficient $c(n)$ is
\begin{equation}
\begin{split}
c(n) &\sim \oint \frac{d x}{x^n} (x-1)^{(d-2)/2} \sim \frac{1}{(n-1)!} \left.\left(\frac{d}{d x}\right)^{n-1} (x-1)^{(d-2)/2} \right|_{x=0}\\
&\sim \frac{1}{(n-1)!} \left(-\frac{d}{2}+1\right) \cdots \left(-\frac{d}{2}+n-1\right)\,.
\end{split}
\label{Baikov:c1}
\end{equation}
Taking into account $c(1)=1$, we reproduce~(\ref{IBP:V1sol}).

Now let's consider the 1-loop massless self-energy (Fig.~\ref{F:p1}). In Euclidean space, $D_1=k^2$, $D_2=(k+p)^2$ (we set $p^2=1$), therefore the Baikov polynomial is
\begin{equation}
P(x_1,x_2) = \left| \begin{array}{cc} x_1 & \frac{x_2-x_1-1}{2}\\ \frac{x_2-x_1-1}{2} & 1 \end{array} \right| = x_1 - \frac{(x_2-x_1-1)^2}{4}\,.
\label{Baikov:Pp1}
\end{equation}
There is only 1 non-trivial sector, and 1 master integral $G(1,1)$. The coefficients of this master integral are
\begin{equation}
\begin{split}
c(n_1,n_2) &\sim \oint \frac{d x_1}{x_1^{n_1}} \oint \frac{d x_2}{x_2^{n_2}} P(x_1,x_2)^{(d-3)/2}\\
&\sim \frac{1}{(n_1-1)!\,(n_2-1)!} \left.\left(\frac{d}{d x_1}\right)^{n_1-1} \left(\frac{d}{d x_2}\right)^{n_2-1} P(x_1,x_2)^{(d-3)/2} \right|_{x_1=x_2=0}\,.
\end{split}
\label{Baikov:c2}
\end{equation}
Application of this formalism to reduction of 3-loop vacuum diagrams is discussed in detail in~\cite{BS:98}. It was used in the formidable task of reduction of 4-loop massless self-energy diagrams~\cite{BC:10}. Some additional examples can be found in~\cite{SS:03} and Chapter~6 of~\cite{FIC}.

Using this formalism, P.\,A.~Baikov~\cite{B:00} has constructed an elegant proof that the 3-loop massless non-planar integral (the last one in~(\ref{p2:Q3b})) cannot be reduced to lower sectors. Indeed, suppose such a reduction exists, i.\,e., the equation
\begin{equation}
\raisebox{-6.0mm}{\includegraphics{q3n.eps}} - c_1 \raisebox{-6.0mm}{\includegraphics{q3n1.eps}} - c_2 \raisebox{-6.0mm}{\includegraphics{q3n2.eps}} - \cdots = 0
\label{Baikov:nonplan}
\end{equation}
is a linear combination of the IBP identities. Then it must hold not only for the Feynman integrals $I(n_1,\ldots,n_8,n_9)$, but also for any solution of the IBP relations (the indices $n_{1\ldots8}$ correspond to 8 denominators, and $n_9$ to the irreducible numerator). Let's apply it to
\begin{equation*}
s(n_1,\ldots,n_8,n_9) = \oint \frac{d x_1}{x_1^{n_1}} \cdots \oint \frac{d x_8}{x_8^{n_8}} \int \frac{d x_9}{x_9^{n_9}} P(x_1,\ldots,x_8,x_9)^{(d-5)/2}\,.
\end{equation*}
This integral vanishes if any of the indices $n_{1\ldots8}$ is $\le0$ (all terms in~(\ref{Baikov:nonplan}) except the first one are absent). For the first term we have
\begin{equation*}
s(1,\ldots,1,n_9) \sim \int \frac{d x_9}{x_9^{n_9}} P(0,\ldots,0,x_9)^{(d-5)/2}\,,
\end{equation*}
where $P(0,\ldots,0,x_9)\sim x_9^2(1-x_9)^2$ (for a suitable choice of the irreducible numerator $D_9$~\cite{B:00}). This means that we may choose the integration contour in $x_9$ to go from 0 to 1 (so that there are no boundary terms, and $s$ satisfies the IBP relations). Then $s(1,\ldots,1,0)\ne0$, and we have a contradiction. This means that~(\ref{Baikov:nonplan}) cannot follow from the IBP relations. Why doesn't a similar reasoning apply to the planar diagram (the second one in~(\ref{p2:Q3t})), which is reducible? For this diagram, $P(0,\ldots,0,x_9)\sim x_9^2$, and we cannot choose a suitable integration contour for $x_9$.

A form of the irreducibility criterion suitable for application in complicated problems (such as 4-loop massless self-energies) has been proposed in~\cite{B:06}.
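Coefficient computations like~(\ref{Baikov:c1}) are also easy to do symbolically; a minimal \texttt{sympy} sketch for the 1-loop vacuum diagram (we may use $(1-x)^{(d-2)/2}$ instead of $(x-1)^{(d-2)/2}$, since overall constants drop out after normalizing to $c(1)=1$):
\begin{verbatim}
import sympy as sp

d, x = sp.symbols('d x')

def c(n):
    # Taylor coefficient (Baikov:c1): the (n-1)-th derivative of the
    # Baikov polynomial raised to the power (d-2)/2, taken at x = 0
    return (sp.diff((1 - x)**((d - 2)/2), x, n - 1).subs(x, 0)
            / sp.factorial(n - 1))

print(sp.factor(c(3)))   # -> (d - 4)*(d - 2)/8 = V(3)/V(1),
                         # in agreement with (IBP:V1sol)
\end{verbatim}
The result also agrees with the output of the reduction sketch for $V(n)$ given earlier.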
\section{Conclusion}
\label{S:Conc}

Approaches to the problem of reduction of Feynman integrals can be classified as follows:
\begin{itemize}
\item Generic $n_i$
\begin{itemize}
\item Construct an algorithm and implement by hand: \texttt{Mincer}, \dots
\item More automated approaches
\begin{itemize}
\item Gr\"obner-bases methods
\item Lie-algebra based method
\item Baikov's method
\end{itemize}
\end{itemize}
\item Specific numeric $n_i$: Laporta algorithm (\texttt{AIR}, \texttt{FIRE}, \texttt{Reduze}\dots)
\end{itemize}

The most straightforward approach to reduction is to substitute specific integer values for all the indices $n_a$ and to solve the resulting huge linear system. If we consider some region of size $R$ around the origin, there are $L(L+E)$ relations per point in this region, and only 1 unknown per point (plus some unknowns outside the region near its surface, but their number is proportional to the area of the surface, and becomes negligible as compared to the volume of the region at sufficiently large $R$). The system of IBP relations is highly redundant. In each sector, we can use these relations to reduce more complicated integrals to simpler ones (with respect to some total order); integrals from lower sectors are considered trivial and already known. A few integrals which cannot be reduced any further are the master integrals. This is called the Laporta algorithm~\cite{L:00}. It may require solving huge linear systems. There are several publicly available implementations: \texttt{AIR}~\cite{AIR} (in \texttt{Maple}), \texttt{FIRE}~\cite{FIRE} (in \texttt{\textit{Mathematica}}), \texttt{Reduze}~\cite{Reduze} (in \texttt{C++}, using the \texttt{GiNaC} library~\cite{GiNaC}). There are also numerous implementations which are not publicly available (several in Karlsruhe, several in Edmonton, by M.~Czakon, etc.; \texttt{FIRE} version 2 is rewritten in \texttt{C++} (with \texttt{Fermat}~\cite{Fermat}), but it is also not public). Many implementations of the Laporta algorithm are written in \texttt{C++}; for algebraic operations they either use \texttt{GiNaC}, or talk to an external CAS (e.\,g., \texttt{Fermat}). They have to handle a very large number of expressions; in order not to exceed the size of the main memory, key--value databases (e.\,g., Kyoto Cabinet~\cite{Kyoto}) may be used.

It is possible to reduce the redundancy of the system of IBP relations~\cite{L:08} using its Lie algebra structure~(\ref{IBP:comm}). The set of relations
\begin{equation}
\begin{split}
&\partial_i \cdot k_{i+1}\qquad (i\in[1,L],\;k_{L+1} \equiv k_1)\,;\\
&\partial_1 \cdot p_j\qquad (j\in[1,E])\,;\\
&\sum_{i=1}^L \partial_i \cdot k_i
\end{split}
\label{Conc:Lee}
\end{equation}
is sufficient: all the other $O_{ij}$ can be obtained by commutators from this smaller subset. It is still redundant: there are $L+E+1$ relations per point, but this is smaller than $L(L+E)$ relations per point in the complete set.

The Laporta algorithm is universal. However, it requires one to solve very large linear systems if the indices $|n_a|$ are not small, because the dimensionality of our integer space is large. Also, if only one $|n_a|\sim R$ is rather large, the algorithm typically requires one to consider a region of size $\sim R$ in all directions around the origin. If a special-purpose algorithm for a family of Feynman integrals can be constructed, then reduction can be done much more efficiently, and higher $|n_a|$ are attainable.
Historically, reduction algorithms were constructed by hand, and implemented in various computer algebra systems (CASs). The pioneering program of this kind is \texttt{Mincer}~\cite{Mincer:1,Mincer:2}. It was used (and is still being used) for solving many physical problems. Some examples of reduction programs for various classes of Feynman integrals are listed in Table~\ref{T:Prog}. It is impossible to produce a complete list, because special-purpose reduction programs for various diagrams were written and used in very many papers; the table just contains typical examples.
\begin{table}[ht]
\caption{Some programs for reducing specific classes of Feynman integrals}
\label{T:Prog}
\begin{center}
\begin{tabular}{|l|l|l|l|}
\hline
Program & Ref. & CAS & Diagrams\\
\hline
\multirow{2}*{\texttt{Mincer}} & \cite{Mincer:1} & \texttt{SCHOONSCHIP} & \multirow{3}*{3-loop massless self-energies}\\
& \cite{Mincer:2} & \texttt{FORM} &\\
\cline{1-3}
\texttt{Slicer} & \cite{Slicer} & \texttt{REDUCE} &\\
\hline
\multirow{2}*{\texttt{Recursor}} & \multirow{2}*{\cite{B:92}} & \multirow{2}*{\texttt{REDUCE}} & 2-loop massive on-shell self-energies\\
&&&3-loop massive vacuum diagrams\\
\hline
\texttt{SHELL2} & \cite{SHELL2} & \multirow{2}*{\texttt{FORM}} & \multirow{2}*{2-loop massive on-shell self-energies}\\
\texttt{ONSHELL2} & \cite{ONSHELL2} &&\\
\hline
& \cite{A:95} & \multirow{2}*{\texttt{FORM}} & \multirow{2}*{3-loop massive vacuum diagrams}\\
\texttt{MATAD} & \cite{MATAD} &&\\
\hline
\texttt{Grinder} & \cite{G:00} & \texttt{REDUCE} & 3-loop HQET self-energies\\
\hline
\texttt{SHELL3} & \cite{SHELL3} & \texttt{FORM} & 3-loop massive on-shell self-energies\\
\hline
& \cite{T:97} & \texttt{FORM} & \multirow{2}*{2-loop self-energies with arbitrary masses}\\
\texttt{Tarcer} & \cite{Tarcer} & \textit{\texttt{Mathematica}} &\\
\hline
\texttt{Loops} & \cite{Loops} & \texttt{REDUCE} & 2-loop massless self-energies\\
\hline
\end{tabular}
\end{center}
\end{table}

In recent years, the problems being considered have become much more complicated (4-loop vacuum and self-energy diagrams). They cannot be solved using this traditional approach. Several attempts to automate the construction of IBP reduction algorithms have been made (see Sects.~\ref{S:IBP}, \ref{S:SBases}, \ref{S:Baikov}). Unfortunately, the problem has not been completely solved. It seems to be a well-defined mathematical problem, and I do hope that some universal and elegant solution will appear.

A short comment on CASs used for implementing IBP reduction algorithms is in order. They form a wide spectrum. On one side, there is \texttt{\textit{Mathematica}}: very convenient, with a huge amount of built-in mathematical knowledge and an advanced GUI, but very hungry with respect to both memory and CPU time (and also expensive). On the opposite end of this spectrum there is \texttt{FORM}~\cite{FORM}: low-level, with nearly no built-in mathematical knowledge, but efficient and suitable for huge calculations. It has recently been released under the GPL. \texttt{REDUCE}~\cite{REDUCE,G:97} is in the middle: a convenient high-level language, a lot of mathematical knowledge (integrals, expansion in series, and much more), much more efficient than \texttt{\textit{Mathematica}}; it has no fancy GUI. A few years ago it became free software (BSD license).
If the problem being considered is so large that the size of expressions is larger than the main memory of the computer, \texttt{FORM} is the only choice. It can work efficiently with expressions stored on disk. If any other system begins to swap, the situation is hopeless. On the other hand, if the problem fits in the main memory, \texttt{REDUCE} is a very reasonable candidate. Its language is well suited to this kind of problem (I've written a large package, \texttt{Grinder}, in it).

Several years ago, an interesting benchmark was run on many different CASs~\cite{Fateman}. It was multiplication of large sparse multivariate polynomials. Unsurprisingly, specialized polynomial systems (\texttt{Pari}~\cite{Pari}, \texttt{Fermat}~\cite{Fermat}) were at the top. The fastest general-purpose system was \texttt{REDUCE} (closely followed by \texttt{maxima} compiled with \texttt{CMUCL}, an efficient Common Lisp compiler). \texttt{FORM} was 4 times slower than \texttt{REDUCE}, \texttt{Maple} 10 times slower, and \texttt{\textit{Mathematica}} nearly 20 times slower. \texttt{REDUCE} has good implementations of polynomial GCD (important for working with rational functions, and hence for IBP reduction) and of polynomial factorization, including multivariate (both of these things are impossible or very difficult in \texttt{FORM}). \texttt{REDUCE} was the most widely used CAS in physics at the dawn of the CAS era, but later its use declined dramatically; now that it is free software, it would be good to use this excellent system more often. But, as I said, \texttt{FORM} has a unique advantage: it is the only system that can work with expressions which are larger than the main memory of the computer.

\begin{sloppypar}
\textbf{Acknowledgements}. I am grateful to P.\,A.~Baikov, K.\,G.~Chetyrkin, A.\,V.~Kotikov, R.\,N.~Lee, A.\,V.~Smirnov, and V.\,A.~Smirnov for reading the manuscript and suggesting improvements; to D.\,J.~Broadhurst and M.~Steinhauser for discussions of various aspects of multiloop calculations; and to S.-O.~Moch for inviting me to give lectures at CAPP-2011. This work was supported by the BMBF through grant No.\ 05H09VKE.
\end{sloppypar}
\section{Introduction}

It is a widely known fact from the late 80s/early 90s that symmetric monoidal closed categories model $\mathtt{MILL}$, multiplicative intuitionistic linear logic, whose logical connectives comprise a multiplicative verum $\mathsf{I}$, conjunction $\otimes$ and linear implication $\multimap$ \cite{mellies:categorical:09}. A probably lesser-known, though quite straightforward, variation of this observation (which actually predates the inception of linear logic altogether) is the fact that (not necessarily symmetric) monoidal biclosed categories provide a semantics for the non-commutative variant of $\mathtt{MILL}$\ (for which we use the abbreviation $\mathtt{NMILL}$) \cite{abrusci:noncommutative:1990}, where the structural rule of exchange is absent and there are two ordered implications $\multimap$ and $\rotatebox[origin=c]{180}{$\multimap$}$. The sequent calculus of $\mathtt{NMILL}$\ without verum is known as the \emph{Lambek calculus} \cite{lambek:deductive:68}. From the beginning, the Lambek calculus has been employed in formal investigations of natural languages \cite{lambek:mathematics:58}. Lambek refers to the implications $\multimap$ and $\rotatebox[origin=c]{180}{$\multimap$}$ as residuals, and monoidal biclosed categories without unit are therefore also called residual categories. By dropping one of the ordered implications of $\mathtt{NMILL}$, one obtains a fragment of the logic interpretable in every monoidal closed category.

More recently, Ross Street introduced the new notion of (left) skew monoidal closed categories \cite{street:skew-closed:2013}. These are a weakening of monoidal closed categories: their structure includes a unit $\mathsf{I}$, a tensor $\otimes$, an internal hom $\multimap$ and an adjunction relating the latter two operations, as in usual non-skew monoidal closed categories. The difference lies in the three structural laws of left and right unitality, $\lambda_A : \mathsf{I} \otimes A \to A$ and $\rho_A : A \to A \otimes \mathsf{I}$, and associativity, $\alpha_{A,B,C} : (A \otimes B) \otimes C \to A \otimes (B \otimes C)$, which are usually required to be natural isomorphisms, but in the skew variant are merely natural transformations with the specified orientation. Street originally proposed this weaker notion to reach a better understanding of, and to fix, a famous imbalance, first noticed by Eilenberg and Kelly \cite{eilenberg:closed:1966}, present in the adjunction relating monoidal and closed structures \cite{street:skew-closed:2013,uustalu:eilenberg-kelly:2020}. In the last decade, skew monoidal closed categories, together with their non-closed/non-monoidal variants, have been thoroughly studied, with applications ranging from algebra and homotopy theory to programming language semantics \cite{szlachanyi:skew-monoidal:2012,lack:skew:2012,lack:triangulations:2014,altenkirch:monads:2014,buckley:catalan:2015,bourke:skew:2017,bourke:skew:2018,tomita:realizability:21}.

A question arises naturally: is it possible to characterize skew monoidal closed categories as categorical models of (a deductive system for) a logic? This paper provides a positive answer to this question. We introduce a cut-free sequent calculus for a skew variant of $\mathtt{NMILL}$, which we name $\mathtt{SkNMILL}$.
Sequents are peculiarly defined as triples $S \mid \Gamma \vdash A$, where the succedent is a single formula $A$ (as in intuitionistic linear logic), but the antecedent is divided into two parts: an optional formula $S$, called the \emph{stoup} \cite{girard:constructive:91}, and an ordered list of formulae $\Gamma$, the \emph{context}. Inference rules are similar to the ones of $\mathtt{NMILL}$\ but with specific structural restrictions for accurately capturing the structural laws of skew monoidal closed categories and nothing more. In particular, and in analogy with $\mathtt{NMILL}$, the structural rules of weakening, contraction and exchange are all absent. Sets of derivations are quotiented by a congruence relation $\circeq$, carefully crafted to serve as the sequent-calculus counterpart of the equational theory of skew monoidal closed categories. The design of the sequent calculus draws inspiration from, and further advances, the line of work of Uustalu, Veltri and Zeilberger on proof systems for various categories with skew structure: skew semigroup (a.k.a.\ the Tamari order) \cite{zeilberger:semiassociative:19}, skew monoidal (non-closed) \cite{uustalu:sequent:2021,uustalu:proof:nodate} and its symmetric variant \cite{veltri:coherence:2021}, skew prounital (non-monoidal) closed \cite{uustalu:deductive:nodate}. The metatheory of $\mathtt{SkNMILL}$\ is developed in two different but related directions:
\begin{itemize}
\item We study the categorical semantics of $\mathtt{SkNMILL}$, by showing that the cut-free sequent calculus admits an interpretation of its formulae, derivations and equational theory $\circeq$ in any skew monoidal closed category as soon as one fixes the intended interpretation in it of the atoms. Moreover, the sequent calculus is the initial model in this semantics; it is a particular presentation of the free skew monoidal closed category. This can be shown directly or by first introducing a Hilbert-style calculus which directly presents the free skew monoidal closed category and proving that derivations in the two calculi are in a bijective correspondence.
\item We investigate the proof-theoretic semantics of $\mathtt{SkNMILL}$, by defining a normalization strategy for sequent calculus derivations wrt.\ the congruence $\circeq$, when the latter is considered as a locally confluent and strongly normalizing reduction relation. The shape of normal forms is made explicit in a new \emph{focused} sequent calculus, whose derivations act as the target of the normalization procedure. The sequent calculus is ``focused'' in the sense of Andreoli \cite{andreoli:logic:1992}, as it describes a sound and complete root-first proof search strategy for the original sequent calculus. The focused system in this paper builds on and elaborates the previously developed normal forms for skew monoidal categories \cite{uustalu:sequent:2021} and skew prounital closed categories \cite{uustalu:deductive:nodate}. The presence of both positive ($\mathsf{I}$, $\otimes$) and negative ($\multimap$) connectives requires some extra care in the design of the proof search strategy, reflected in the focused sequent calculus, in particular when aiming at removing all possible undesired non-deterministic choices leading to $\circeq$-equivalent derivations that can arise during proof search. This is technically realized in the focused sequent calculus by the peculiar employment of a system of \emph{tags} for keeping track of new formulae appearing in the antecedent.
The focused sequent calculus can also be seen as an optimized presentation of the initial model for $\mathtt{SkNMILL}$, and as such can be used for solving the coherence problem in an effective way: deciding equality of two maps in this model (or equality of two canonical maps in every model) is equivalent to deciding \emph{syntactic} equality of the corresponding focused derivations.
\end{itemize}
The equivalence between sequent calculus derivations, quotiented by the equivalence relation $\circeq$, and focused derivations, which we present in Section \ref{sec:focus}, has been formalized in the Agda proof assistant. The associated code is available at \url{https://github.com/niccoloveltri/code-skewmonclosed}.

\section{A Sequent Calculus for Skew Non-Commutative $\mathtt{MILL}$}\label{sec2}

We begin by introducing a sequent calculus for a skew variant of non-commutative multiplicative intuitionistic linear logic ($\mathtt{NMILL}$), that we call $\mathtt{SkNMILL}$. Formulae are inductively generated by the grammar $A,B ::= X \ | \ \mathsf{I} \ | \ A \otimes B \ | \ A \multimap B$, where $X$ comes from a fixed set $\mathsf{At}$ of atoms, $\mathsf{I}$ is a multiplicative verum, $\otimes$ is a multiplicative conjunction and $\multimap$ is a linear implication.

A sequent is a triple of the form $S \mid \Gamma \vdash A$, where the succedent $A$ is a single formula (as in $\mathtt{NMILL}$) and the antecedent is divided into two parts: an optional formula $S$, called \emph{stoup} \cite{girard:constructive:91}, and an ordered list of formulae $\Gamma$, called \emph{context}. The peculiar design of sequents, involving the presence of the stoup in the antecedent, comes from previous work on deductive systems with skew structure by Uustalu, Veltri and Zeilberger \cite{uustalu:sequent:2021,uustalu:proof:nodate,uustalu:deductive:nodate,veltri:coherence:2021}. The metavariable $S$ always denotes a stoup, i.e., $S$ can be a single formula or empty, in which case we write $S = {-}$, and $X,Y,Z$ are always names of atomic formulae.

Derivations of a sequent $S \mid \Gamma \vdash A$ are inductively generated by the following rules:
\begin{equation}\label{eq:seqcalc}
\def\arraystretch{2.5}
\begin{array}{c}
\infer[\mathsf{ax}]{A \mid \quad \vdash A}{}
\qquad
\infer[\mathsf{pass}]{{-} \mid A , \Gamma \vdash C}{A \mid \Gamma \vdash C}
\qquad
\infer[{\multimap}\mathsf{L}]{A \multimap B \mid \Gamma , \Delta \vdash C}{ {-} \mid \Gamma \vdash A & B \mid \Delta \vdash C }
\qquad
\infer[{\multimap}\mathsf{R}]{S \mid \Gamma \vdash A \multimap B}{S \mid \Gamma , A \vdash B}
\\
\infer[\mathsf{IL}]{\mathsf{I} \mid \Gamma \vdash C}{{-} \mid \Gamma \vdash C}
\qquad
\infer[\otimes \mathsf{L}]{A \otimes B \mid \Gamma \vdash C}{A \mid B , \Gamma \vdash C}
\qquad
\infer[\mathsf{IR}]{{-} \mid \quad \vdash \mathsf{I}}{}
\qquad
\infer[\otimes \mathsf{R}]{S \mid \Gamma , \Delta \vdash A \otimes B}{ S \mid \Gamma \vdash A & {-} \mid \Delta \vdash B }
\end{array}
\end{equation}
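Concretely, this syntax admits a direct representation as datatypes. The following Python sketch (purely illustrative; the formalization accompanying this paper is written in Agda) is one possible encoding of formulae and sequents, with the empty stoup ${-}$ rendered as \texttt{None}:
\begin{verbatim}
from dataclasses import dataclass
from typing import Optional, Tuple

# Formulae: A, B ::= X | I | A (*) B | A -o B
@dataclass(frozen=True)
class Atom:
    name: str

@dataclass(frozen=True)
class Unit:              # the multiplicative verum I
    pass

@dataclass(frozen=True)
class Tensor:            # A (*) B
    a: "Fma"
    b: "Fma"

@dataclass(frozen=True)
class Lolli:             # A -o B
    a: "Fma"
    b: "Fma"

Fma = Atom | Unit | Tensor | Lolli   # requires Python >= 3.10

# A sequent S | Gamma |- A
@dataclass(frozen=True)
class Sequent:
    stoup: Optional[Fma]       # None encodes the empty stoup "-"
    context: Tuple[Fma, ...]   # ordered list of formulae
    succedent: Fma
\end{verbatim}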
The inference rules in (\ref{eq:seqcalc}) are reminiscent of the ones in the sequent calculus for $\mathtt{NMILL}$\ \cite{abrusci:noncommutative:1990}, but there are some crucial differences.
\begin{enumerate}
\item The left logical rules $\mathsf{IL}$, $\otimes \mathsf{L}$ and ${\multimap}\mathsf{L}$, read bottom-up, are only allowed to be applied on the formula in the stoup position. In particular, there is no general way to remove a unit $\mathsf{I}$ or to decompose a tensor $A \otimes B$ when these formulae are located in the context rather than in the stoup (we will see in (\ref{eq:lleft:gen}) that something can actually be done to deal with implications $A \multimap B$ in the context).
\item The right tensor rule $\otimes \mathsf{R}$, read bottom-up, splits the antecedent of the conclusion between the two premises, whereby the formula in the stoup, if present, has to be moved to the stoup of the first premise. In particular, the stoup formula of the conclusion cannot be moved to the antecedent of the second premise even if $\Gamma$ is chosen to be empty.
\item The presence of the stoup implies a distinction between antecedents of forms $A \mid \Gamma$ and ${-} \mid A, \Gamma$. The structural rule $\mathsf{pass}$ (for `passivation'), read bottom-up, allows moving the leftmost formula in the context to the stoup position whenever the stoup is initially empty.
\item The logical connectives of $\mathtt{NMILL}$\ typically include two ordered implications $\multimap$ and $\rotatebox[origin=c]{180}{$\multimap$}$, which are two variants of linear implication arising from the removal of the exchange rule from intuitionistic linear logic. In $\mathtt{SkNMILL}$\ only one of the ordered implications (the left implication $\multimap$) is present. It is currently not clear to us whether the inclusion of the second implication in our logic is a meaningful addition and whether it corresponds to some particular categorical notion.
\end{enumerate}
The restrictions in 1--4 are essential for precisely capturing all the features of skew monoidal closed categories and nothing more, as we discuss in Section \ref{sec:catsem}. Notice also that, similarly to the case of $\mathtt{NMILL}$, the structural rules of weakening, contraction and exchange are all absent.

We give names to derivations and we write $f : S \mid \Gamma \vdash A$ when $f$ is a particular derivation of the sequent $S \mid \Gamma \vdash A$. Examples of valid derivations in the sequent calculus, corresponding to the structural laws $\lambda$, $\rho$ and $\alpha$ of skew monoidal closed categories (see Definition \ref{def:skewcat}), are given below.
\begin{equation}\label{eq:lra}
\small
\begin{array}{c@{\;\quad}cc}
(\lambda) & (\rho) & (\alpha) \\
\infer[\otimes \mathsf{L}]{\mathsf{I} \otimes A \mid \quad \vdash A}{ \infer[\mathsf{IL}]{\mathsf{I} \mid A \vdash A}{ \infer[\mathsf{pass}]{{-} \mid A \vdash A}{ \infer[\mathsf{ax}]{A \mid \quad \vdash A}{} } } }
&
\infer[\otimes \mathsf{R}]{A \mid \quad \vdash A \otimes \mathsf{I}}{ \infer[\mathsf{ax}]{A \mid \quad \vdash A}{} & \infer[\mathsf{IR}]{{-} \mid \quad \vdash \mathsf{I}}{} }
&
\infer[\otimes \mathsf{L}]{(A \otimes B) \otimes C \mid \quad \vdash A \otimes (B \otimes C)}{ \infer[\otimes \mathsf{L}]{A \otimes B \mid C \vdash A \otimes (B \otimes C)}{ \infer[\otimes \mathsf{R}]{A \mid B , C \vdash A \otimes (B \otimes C)}{ \infer[\mathsf{ax}]{A \mid \quad \vdash A}{} & \infer[\mathsf{pass}]{{-} \mid B , C \vdash B \otimes C}{ \infer[\otimes \mathsf{R}]{B \mid C \vdash B \otimes C}{ \infer[\mathsf{ax}]{B \mid \quad \vdash B}{} & \infer[\mathsf{pass}]{{-} \mid C \vdash C}{ \infer[\mathsf{ax}]{C \mid \quad \vdash C}{} } } } } } }
\end{array}
\end{equation}
Examples of non-derivable sequents include the ``inverses'' of the conclusions in (\ref{eq:lra}), obtained by swapping the stoup formula with the succedent formula.
More precisely, the three sequents $X \mid ~ \vdash \mathsf{I} \otimes X$, $X \otimes \mathsf{I} \mid ~ \vdash X$ and $X \otimes (Y \otimes Z) \mid ~ \vdash (X \otimes Y) \otimes Z$ do not have any derivation. All possible attempts at constructing a valid derivation for each of them end in failure.
\begin{displaymath}
\small
\begin{array}{ccc}
(\lambda^{-1}) & (\rho^{-1}) & (\alpha^{-1}) \\[6pt]
\infer[\otimes \mathsf{R}]{X \mid ~\vdash \mathsf{I} \otimes X}{ \deduce[??]{X \mid ~ \vdash \mathsf{I}}{ } & \deduce[??]{{-} \mid ~ \vdash X}{ } }
&
\infer[\otimes \mathsf{L}]{X \otimes \mathsf{I} \mid \quad \vdash X}{ \deduce{X \mid \mathsf{I} \vdash X}{??} }
&
\infer[\otimes \mathsf{L}]{X \otimes (Y \otimes Z) \mid ~\vdash (X \otimes Y) \otimes Z}{ \deduce{X \mid Y \otimes Z \vdash (X \otimes Y) \otimes Z}{??} }
\\
(\text{$\otimes \mathsf{R}$ sends $X$ to 1st premise}) & (\text{$\mathsf{IL}$ does not act on $\mathsf{I}$ in context}) & (\text{$\otimes \mathsf{L}$ does not act on $\otimes$ in context})
\end{array}
\end{displaymath}
Analogously, the sequents $\mathsf{I} \multimap A \mid ~ \vdash A$ and $(A \otimes B) \multimap C \mid ~ \vdash A \multimap (B \multimap C)$ are derivable, while generally their ``inverses'' are not. Also, a derivation of $A \mid ~ \vdash B$ always yields a derivation of $\mathsf{I} \mid ~ \vdash A \multimap B$, but there are $A$, $B$ such that $\mathsf{I} \mid ~ \vdash A \multimap B$ is derivable while $A \mid ~ \vdash B$ is not (take, e.g., $A = X$, $B = \mathsf{I} \otimes X$).

Sets of derivations are quotiented by a congruence relation $\circeq$, generated by the following pairs of derivations.
\begin{equation}
\label{fig:circeq}
\begin{array}{rlll}
\mathsf{ax}_{\mathsf{I}} &\circeq \mathsf{IL} \text{ } (\mathsf{IR}) \\
\mathsf{ax}_{A \otimes B} &\circeq \otimes \mathsf{L} \text{ } (\otimes \mathsf{R} \text{ } (\mathsf{ax}_{A} , \mathsf{pass} \text{ } \mathsf{ax}_{B})) \\
\mathsf{ax}_{A \multimap B} &\circeq {\multimap}\mathsf{R} \text{ } ({\multimap}\mathsf{L} \text{ } (\mathsf{pass} \text{ } \mathsf{ax}_{A}, \mathsf{ax}_{B} )) \\
\otimes \mathsf{R} \text{ } (\mathsf{pass} \text{ } f, g) &\circeq \mathsf{pass} \text{ } (\otimes \mathsf{R} \text{ } (f, g)) &&f : A' \mid \Gamma \vdash A, g : {-} \mid \Delta \vdash B \\
\otimes \mathsf{R} \text{ } (\mathsf{IL} \text{ } f, g) &\circeq \mathsf{IL} \text{ } (\otimes \mathsf{R} \text{ } (f , g)) &&f : {-} \mid \Gamma \vdash A , g : {-} \mid \Delta \vdash B \\
\otimes \mathsf{R} \text{ } (\otimes \mathsf{L} \text{ } f, g) &\circeq \otimes \mathsf{L} \text{ } (\otimes \mathsf{R} \text{ } (f , g)) &&f : A' \mid B' , \Gamma \vdash A , g : {-} \mid \Delta \vdash B \\
\otimes \mathsf{R} \text{ } ({\multimap}\mathsf{L} \text{ } (f , g), h) & \circeq {\multimap}\mathsf{L} \text{ } (f, \otimes \mathsf{R} \text{ } (g, h)) &&f: {-} \mid \Gamma \vdash A, g : B \mid \Delta \vdash C, h : {-} \mid \Lambda \vdash D \\
\mathsf{pass} \text{ } ({\multimap}\mathsf{R} \text{ } f) &\circeq {\multimap}\mathsf{R} \text{ } (\mathsf{pass} \text{ } f) &&f : A' \mid \Gamma , A \vdash B \\
\mathsf{IL} \text{ } ({\multimap}\mathsf{R} \text{ } f) &\circeq {\multimap}\mathsf{R} \text{ } (\mathsf{IL} \text{ } f) &&f : {-} \mid \Gamma , A \vdash B \\
\otimes \mathsf{L} \text{ } ({\multimap}\mathsf{R} \text{ } f) &\circeq {\multimap}\mathsf{R} \text{ } (\otimes \mathsf{L} \text{ } f) &&f : A \mid B , \Gamma , C \vdash D \\
{\multimap}\mathsf{L} \text{ } (f, {\multimap}\mathsf{R} \text{ } g) &\circeq {\multimap}\mathsf{R} \text{ } ({\multimap}\mathsf{L} \text{ } (f, g)) &&f : {-} \mid \Gamma \vdash A', g : B' \mid \Delta , A \vdash B
\end{array}
\end{equation}
The first three equations above are $\eta$-conversions, completely characterizing the $\mathsf{ax}$ rule on non-atomic formulae. The remaining equations are permutative conversions. The congruence $\circeq$ has been carefully chosen to serve as the proof-theoretic counterpart of the equational theory of skew monoidal closed categories, introduced in Definition \ref{def:skewcat}. The subsystem of equations involving only $(\mathsf{I},\otimes)$ originated in \cite{uustalu:sequent:2021} while the subsystem involving only $\multimap$ is from \cite{uustalu:deductive:nodate}.
\begin{theorem}
The sequent calculus enjoys cut admissibility: the following two cut rules are admissible.
\begin{displaymath}
\infer[\mathsf{scut}]{S \mid \Gamma , \Delta \vdash C}{ S \mid \Gamma \vdash A & A \mid \Delta \vdash C }
\qquad
\infer[\mathsf{ccut}]{S \mid \Delta_0 , \Gamma , \Delta_1 \vdash C}{ {-} \mid \Gamma \vdash A & S \mid \Delta_0 , A , \Delta_1 \vdash C }
\end{displaymath}
\end{theorem}
The two cut rules satisfy a large number of $\circeq$-equations, see, e.g., \cite[Figures 5 and 6]{uustalu:sequent:2021} for the list of such equations not involving $\multimap$. In particular, the cut rules respect $\circeq$, in the sense that $\mathsf{scut}(f,g) \circeq \mathsf{scut}(f',g')$ whenever $f \circeq f'$ and $g \circeq g'$, and similarly for $\mathsf{ccut}$.

Here are some other interesting admissible rules relevant for the metatheory of this calculus.
\begin{itemize}
\item The left rules for $\mathsf{I}$ and $\otimes$ are invertible up to $\circeq$, and similarly the right rule for $\multimap$. No other rule is invertible; in particular, the passivation rule $\mathsf{pass}$ is not.
\item Applications of the invertible left logical rules can be iterated, and similarly for the invertible right ${\multimap}\mathsf{R}$ rule, resulting in the two admissible rules
\begin{equation}\label{eq:inter:ante}
\infer[\mathsf{L}^\star]{[\![ S \mid \Gamma ]\!]_{\otimes} \mid \Delta \vdash C}{ S \mid \Gamma ,\Delta \vdash C }
\qquad
\infer[{\multimap}\mathsf{R}^\star]{S \mid \Gamma \vdash [\![ \Delta \mid C ]\!]_{\multimap}}{ S \mid \Gamma , \Delta \vdash C }
\end{equation}
The interpretation of antecedents $[\![ S \mid \Gamma ]\!]_{\otimes}$ in (\ref{eq:inter:ante}) is the formula obtained by substituting the separator $\mid$ and the commas with tensors, $[\![ S \mid A_1,\dots,A_n ]\!]_{\otimes} = (\dots (([\![ S \llangle \otimes A_1) \otimes A_2) \dots ) \otimes A_n$, where the interpretation of stoups is defined by $[\![ {-} \llangle = \mathsf{I}$ and $[\![ A \llangle = A$. Dually, the formula $[\![ \Delta \mid C ]\!]_{\multimap}$ in (\ref{eq:inter:ante}) is obtained by substituting $\mid$ and commas with implications: $[\![ A_1,\dots,A_n \mid C ]\!]_{\multimap} = A_1 \multimap (A_2 \multimap (\dots \multimap (A_n \multimap C)))$.
\item Another left implication rule, acting on a formula $A \multimap B$ in the context, is derivable using cut:
\begin{equation}\label{eq:lleft:gen}
\small
\!\!\!
\proofbox{ \infer[{\multimap}\mathsf{L}_{\mathsf{C}}]{S \mid \Delta_0, A \multimap B, \Gamma , \Delta_1 \vdash C}{ \deduce{{-} \mid \Gamma \vdash A}{f} & \deduce{S \mid \Delta_0, B , \Delta_1 \vdash C}{g} } }
{=}
\proofbox{ \infer[\mathsf{ccut}]{S \mid \Delta_0, A \multimap B, \Gamma , \Delta_1 \vdash C}{ \infer[\mathsf{pass}]{{-} \mid A \multimap B, \Gamma \vdash B}{ \infer[{\multimap}\mathsf{L}]{A \multimap B \mid \Gamma \vdash B}{ \deduce{{-} \mid \Gamma \vdash A}{f} & \infer[\mathsf{ax}]{B \mid ~ \vdash B}{} } } & \deduce{S \mid \Delta_0, B , \Delta_1 \vdash C}{g} } }
\end{equation}
\end{itemize}

\paragraph{$\mathtt{SkNMILL}$\ as a Logic of Resources} Similarly to other substructural logics like $\mathtt{MILL}$\ and $\mathtt{NMILL}$,\linebreak $\mathtt{SkNMILL}$\ can be understood as a logic of resources. Under this perspective, formulae of the sequent calculus in (\ref{eq:seqcalc}) correspond to resources: atomic formulae are primitive resources; the formula $A \otimes B$ is read as ``resource $A$ before resource $B$''; $\mathsf{I}$ is ``nothing''; the formula $A \multimap B$ can be described as a method for turning the resource $A$ into the resource $B$ (if $A \multimap B$ is provided before $A$). As in other substructural systems lacking the structural rules of weakening and contraction, every resource is usable exactly once. The antecedent of a sequent contains the resources at hand, while the succedent contains the resource that needs to be produced. A derivation is then a particular procedure for turning the available resources into the goal resource. Under this interpretation, derivations are naturally read and built from the conclusion to the premises.

Resources in the antecedent are ordered, meaning that they need to be utilized in the order they appear. If a resource $A$ precedes another resource $B$ in the antecedent, then $A$ must be consumed before $B$. The stoup position, when it is non-empty, contains the resource that is immediately usable. The resources in the context can only be spent after the resource in the stoup has been used. Time flows bottom-up in proof trees, from conclusion to premises, and the proof of a left premise always takes place before the proof of the right premise.

The context shares similarities with the stack (and also the queue) data structure. If, at some moment, there is no resource immediately consumable, because the stoup is empty, then the next available resource can be promoted to this status: the top (left-most position) of the context can be popped and moved to the stoup by the $\mathsf{pass}$ rule. A new resource can be pushed to the bottom (right-most position) of the context using the ${\multimap}\mathsf{R}$ rule. Using the $\otimes \mathsf{L}$ rule, the immediately usable resource $A \otimes B$ can be decomposed into two parts $A$ and $B$; the resource $A$ becomes immediately usable (remains in the stoup) while $B$ will become usable next (is pushed to the top of the context). The resource $\mathsf{I}$, if immediately usable, can be disposed of by the rule $\mathsf{IL}$, making it possible to pop the next available resource from the context. The resource $\mathsf{I}$ can always be produced free-of-charge with the rule $\mathsf{IR}$.

In the rule ${\multimap}\mathsf{L}$, we have a resource $A \multimap B$ immediately usable and need to produce $C$. This gives us access to the resource $B$, but only after we have produced $A$ by spending a part of the other resources available. The context is split into two parts $\Gamma$ and $\Delta$.
The first part $\Gamma$ is used to make $A$. Once this has been accomplished, production of $C$ can continue with $B$ immediately usable and $\Delta$ usable thereafter. The succedent of the rule $\otimes \mathsf{R}$ is of the form $A \otimes B$, which implies that first $A$ and then $B$ need to be produced. This justifies the splitting of the context into two parts again: the first part $\Gamma$, consisting of resources that can be spent sooner, is used to produce $A$ in the left premise, while the second part $\Delta$ is spent subsequently for production of $B$ in the right premise. Crucially, and this is one central ``skew'' aspect of $\mathtt{SkNMILL}$, if we have a resource immediately usable in the stoup $S$, this must be spent for the construction of the first resource we are required to produce, namely $A$; it cannot be saved for producing $B$ even if producing $A$ needs no resources at all. The other central ``skew'' aspect of $\mathtt{SkNMILL}$\ is that the left rules only act on stoup formulae. This can be understood under the resources-as-formulae correspondence to say that we are only allowed to decompose an available resource when it has become immediately consumable (has entered the stoup). We are precluded from decomposing the resources in the context ahead of their time. \section{Categorical Semantics via Skew Monoidal Closed Categories} \label{sec:catsem} Next we present a categorical semantics for the sequent calculus of $\mathtt{SkNMILL}$. \begin{defn}\label{def:skewcat} A \emph{(left) skew monoidal closed category} $\mathbb{C}$ is a category with a unit object $\mathsf{I}$ and two functors $\otimes : \mathbb{C} \times \mathbb{C} \rightarrow \mathbb{C}$ and $\multimap : \mathbb{C}^{\mathsf{op}} \times \mathbb{C} \rightarrow \mathbb{C}$ forming an adjunction ${-} \otimes B \dashv B \multimap {-}$ for all $B$, and three natural transformations $\lambda$, $\rho$, $\alpha$ typed $\lambda_A : \mathsf{I} \otimes A \to A$, $\rho_A : A \to A \otimes \mathsf{I}$ and $\alpha_{A,B,C} : (A \otimes B) \otimes C \to A \otimes (B \otimes C)$, satisfying the following equations due to Mac Lane \cite{maclane1963natural}: \begin{center} \begin{tikzcd} & {\mathsf{I} \otimes \mathsf{I}} \\[-.2cm] \mathsf{I} && \mathsf{I} \arrow["{\rho_{\mathsf{I}}}", from=2-1, to=1-2] \arrow["{\lambda_{\mathsf{I}}}", from=1-2, to=2-3] \arrow[Rightarrow, no head, from=2-1, to=2-3] \end{tikzcd} \qquad \begin{tikzcd} {(A \otimes \mathsf{I}) \otimes B} & {A \otimes (\mathsf{I} \otimes B)} \\[-.3cm] {A \otimes B} & {A \otimes B} \arrow[Rightarrow, no head, from=2-1, to=2-2] \arrow["{\rho_A \otimes B}", from=2-1, to=1-1] \arrow["{A \otimes \lambda_{B}}", from=1-2, to=2-2] \arrow["{\alpha_{A , \mathsf{I} , B}}", from=1-1, to=1-2] \end{tikzcd} \begin{tikzcd} {(\mathsf{I} \otimes A ) \otimes B} && {\mathsf{I} \otimes (A \otimes B)} \\[-.3cm] & {A \otimes B} \arrow["{\alpha_{\mathsf{I} , A ,B}}", from=1-1, to=1-3] \arrow["{\lambda_{A \otimes B}}", from=1-3, to=2-2] \arrow["{\lambda_{A} \otimes B}"', from=1-1, to=2-2] \end{tikzcd} \qquad \begin{tikzcd} {(A \otimes B) \otimes \mathsf{I}} && {A \otimes (B \otimes \mathsf{I})} \\[-.3cm] & {A \otimes B} \arrow["{\alpha_{A , B, \mathsf{I}}}", from=1-1, to=1-3] \arrow["{A \otimes \rho_B}"', from=2-2, to=1-3] \arrow["{\rho_{A \otimes B}}", from=2-2, to=1-1] \end{tikzcd} \begin{tikzcd} {(A\otimes (B\otimes C)) \otimes D} && {A \otimes ((B \otimes C) \otimes D)} \\[-.2cm] {((A \otimes B)\otimes C) \otimes D} & {(A \otimes B) \otimes (C \otimes D)} & {A \otimes (B \otimes (C 
\otimes D))} \arrow["{\alpha_{A , B\otimes C , D}}", from=1-1, to=1-3] \arrow["{A \otimes \alpha_{B , C ,D}}", from=1-3, to=2-3] \arrow["{\alpha_{A ,B ,C\otimes D}}"', from=2-2, to=2-3] \arrow["{\alpha_{A \otimes B , C , D}}"', from=2-1, to=2-2] \arrow["{\alpha_{A , B ,C} \otimes D}", from=2-1, to=1-1] \end{tikzcd}
\end{center}
\end{defn}
The notion of skew monoidal closed category admits other equivalent characterizations \cite{street:skew-closed:2013,uustalu:eilenberg-kelly:2020}. Tuples of natural transformations $(\lambda , \rho , \alpha)$ are in bijective correspondence with tuples of (extra)natural transformations $(j, i, L)$ typed $j_A : \mathsf{I} \to A \multimap A$, $i_A : \mathsf{I} \multimap A \to A$, $L_{A,B,C} : B \multimap C \to (A \multimap B) \multimap (A \multimap C)$. Moreover, $\alpha$ and $L$ are interdefinable with a natural transformation $\mathsf{p}$ typed $\mathsf{p}_{A , B , C} : (A \otimes B) \multimap C \to \linebreak A \multimap (B \multimap C)$, embodying an internal version of the adjunction between $\otimes$ and $\multimap$.
\begin{example}[from \cite{uustalu:eilenberg-kelly:2020}]
This example explains how to turn every categorical model of $\mathtt{MILL}$\ extended with a $\Box$-like modality of necessity (or something like the exponential modality $!$ of linear logic) into a model of $\mathtt{SkNMILL}$. Let $(\mathbb{C},\mathsf{I},\otimes,\multimap)$ be a (possibly symmetric) monoidal closed category and let $(D,\varepsilon, \delta)$ be a comonad on $\mathbb{C}$, where $\varepsilon_A : D\,A \to A$ and $\delta_A : D\,A \to D\,(D\,A)$ are the counit and comultiplication of $D$. Suppose the comonad $D$ to be \emph{lax monoidal}, i.e., coming with a map $\mathsf{e} : \mathsf{I} \to D\,\mathsf{I}$ and a natural transformation $\mathsf{m}$ typed $\mathsf{m}_{A,B} : D \,A \otimes D\,B \to D\,(A \otimes B)$ cohering suitably with $\lambda$, $\rho$, $\alpha$, $\varepsilon$ and $\delta$. Then $\mathbb{C}$ also has a skew monoidal closed structure $(\mathsf{I}, \ot^D, \;\textsuperscript{$D$}\!\!\lolli)$ given by $A \ot^D B = A \otimes D\,B$ and $B \;\textsuperscript{$D$}\!\!\lolli C = D\,B \multimap C$. The adjunction ${-} \otimes D\,B \dashv D\,B \multimap {-}$ yields an adjunction ${-} \ot^D B \dashv B \;\textsuperscript{$D$}\!\!\lolli {-}$. The structural laws are
\[
\begin{array}{c}
\lambda^D_A \, = \, \xymatrix@C=3pc{\mathsf{I} \otimes D\,A \ar[r]^-{\mathsf{I} \otimes \varepsilon_A} & \mathsf{I} \otimes A \ar[r]^-{\lambda_A} & A}
\hspace*{1.5cm}
\rho^D_A \, = \, \xymatrix@C=3pc{A \ar[r]^-{\rho_A} & A \otimes \mathsf{I} \ar[r]^-{A \otimes \mathsf{e}} & A \otimes D\,\mathsf{I}}
\\
\begin{array}{rcl}
\alpha^D_{A,B,C} & = & \xymatrix@C=5pc{(A \otimes D\,B) \otimes D\,C \ar[r]^-{(A \otimes D B) \otimes \delta_C} & (A \otimes D\,B) \otimes D\,(D\,C)}
\\
& & \hspace*{1.5cm} \xymatrix@C=5pc{\ar[r]^-{\alpha_{A,DB,D(DC)}} & A \otimes (D\,B \otimes D\,(D\,C)) \ar[r]^-{A \otimes \mathsf{m}_{B,DC}} & A \otimes D\,(B \otimes D\,C)}
\end{array}
\end{array}
\]
$(\mathbb{C},\mathsf{I},\ot^D,\;\textsuperscript{$D$}\!\!\lolli)$ is a ``genuine'' skew monoidal closed category, in the sense that $\lambda^D$, $\rho^D$ and $\alpha^D$ are all generally non-invertible.
\end{example}
\begin{defn}
A \emph{(strict) skew monoidal closed functor} $F : \mathbb{C} \rightarrow \mathbb{D}$ between skew monoidal closed categories $(\mathbb{C} , \mathsf{I} , \otimes , \multimap)$ and $(\mathbb{D} , \mathsf{I}' , \otimes' , \multimap')$ is a functor from $\mathbb{C}$ to $\mathbb{D}$ satisfying $F \mathsf{I} = \mathsf{I}'$, $F (A \otimes B) = \linebreak F A \otimes' F B$ and $F(A \multimap B) = F A \multimap' F B$, also preserving the structural laws $\lambda$, $\rho$ and $\alpha$ on the nose.
\end{defn}
The formulae, derivations and the equivalence relation $\circeq$ of the sequent calculus for $\mathtt{SkNMILL}$\ determine a skew monoidal closed category $\mathsf{FSkMCl}(\mathsf{At})$.
\begin{defn}\label{def:fskmcc}
The skew monoidal closed category $\mathsf{FSkMCl}(\mathsf{At})$ has as objects formulae; the operations $\mathsf{I}$, $\otimes$ and $\multimap$ are the logical connectives. The set of maps between objects $A$ and $B$ is the set of derivations $A \mid ~ \vdash B$ quotiented by the equivalence relation $\circeq$. The identity map on $A$ is the equivalence class of $\mathsf{ax}_A$, while composition is given by $\mathsf{scut}$. The structural laws $\lambda$, $\rho$, $\alpha$ are given by derivations in (\ref{eq:lra}).
\end{defn}
This is a good definition since all equations of a skew monoidal closed category turn out to hold. Skew monoidal closed categories with given interpretations of atoms into them constitute models of the sequent calculus of $\mathtt{SkNMILL}$, in the sense specified by the following theorem.
\begin{theorem}\label{thm:models}
Let $\mathbb{D}$ be a skew monoidal closed category. Given $F_{\mathsf{At}} : \mathsf{At} \rightarrow |\mathbb{D}|$ providing evaluation of atomic formulae as objects of $\mathbb{D}$, there exists a skew monoidal closed functor $F : \mathsf{FSkMCl}(\mathsf{At}) \rightarrow \mathbb{D}$.
\end{theorem}
\begin{proof}
Let $(\mathbb{D} , \mathsf{I}' , \otimes' , \multimap')$ be a skew monoidal closed category. The action on objects $F_0$ of the functor $F$ is defined by induction on the input formula:
\begin{equation*}
F_0X = F_{\mathsf{At}}X \qquad F_0\mathsf{I} = \mathsf{I}' \qquad F_0(A \otimes B) = F_0A \otimes' F_0B \qquad F_0(A \multimap B) = F_0A \multimap' F_0B
\end{equation*}
The encoding of antecedents as formulae $[\![ S \mid \Gamma ]\!]_{\otimes}$, introduced immediately after (\ref{eq:inter:ante}), can also be replicated in $\mathbb{D}$ by simply replacing $\mathsf{I}$ and $\otimes$ with $\mathsf{I}'$ and $\otimes'$ in the definition, where now $S$ is an optional object and $\Gamma$ is a list of objects of $\mathbb{D}$. Using this encoding, it is possible to show that each rule in (\ref{eq:seqcalc}) is derivable in $\mathbb{D}$. As an illustrative case, consider the rule $\mathsf{pass}$. Assume given a map $f : [\![ A \mid \Gamma ]\!]_{\otimes'} \to C$ in $\mathbb{D}$. Then, assuming $\Gamma = A_1,\dots,A_n$, we can define the passivation of $f$ typed $[\![ {-} \mid A, \Gamma ]\!]_{\otimes'} \to C$ as
\[\xymatrixcolsep{7pc}
\xymatrix{ (\dots ((\mathsf{I}' \otimes' A_1) \otimes' A_2) \dots ) \otimes' A_n \ar[r]^-{(\dots (\lambda'_{A_1} \otimes' A_2) \dots ) \otimes' A_n} & (\dots (A_1 \otimes' A_2) \dots ) \otimes' A_n \ar[r]^-{f} & C }
\]
This implies the existence of a function $F_1$, sending each derivation $f : S \mid \Gamma \vdash A$ to a map $F_1f : \linebreak F_0([\![ S \mid \Gamma ]\!]_{\otimes}) \to FA$ in $\mathbb{D}$, defined by induction on the derivation $f$.
When restricted to sequents of the form $A \mid ~ \vdash B$, the function $F_1$ provides the action on maps of $F$. It is possible to show that $F$ preserves the skew monoidal closed structure, so it is a skew monoidal closed functor.
\end{proof}
Moreover, the sequent calculus as a presentation of a skew monoidal closed category is the \emph{initial} one among these models, or, equivalently, $\mathsf{FSkMCl}(\mathsf{At})$ is the \emph{free} such category on the set $\mathsf{At}$.
\begin{theorem}\label{thm:unique}
The skew monoidal closed functor $F$ constructed in Theorem \ref{thm:models} is the unique one for which $F_0 X = F_{\mathsf{At}} X$ for any atom $X$.
\end{theorem}
Alternatively, following the strategy of our previous papers \cite{uustalu:sequent:2021,uustalu:proof:nodate,uustalu:deductive:nodate,veltri:coherence:2021}, this result can be shown by going via a Hilbert-style deductive system that directly presents the free skew monoidal closed category on $\mathsf{At}$. Its formulae are the same as those of the sequent calculus; its sequents are pairs $A \Rightarrow B$, with $A$ and $B$ single formulae. Derivations are generated by the following rules, in which one can recognize all the structure of Definition \ref{def:skewcat} (in particular $\pi$ and $\pi^{-1}$ are the adjunction data).
\begin{displaymath}
\def\arraystretch{1.5}
\begin{array}{c}
\infer[\mathsf{id}]{A \Rightarrow A}{}
\quad
\infer[\mathsf{comp}]{A \Rightarrow C}{ A \Rightarrow B & B \Rightarrow C }
\qquad
\infer[\otimes]{A \otimes B \Rightarrow C \otimes D}{ A \Rightarrow C & B \Rightarrow D }
\qquad
\infer[\multimap]{A \multimap B \Rightarrow C \multimap D}{ C \Rightarrow A & B \Rightarrow D }
\\
\infer[\lambda]{\mathsf{I} \otimes A \Rightarrow A}{}
\quad
\infer[\rho]{A \Rightarrow A \otimes \mathsf{I}}{}
\quad
\infer[\alpha]{(A \otimes B) \otimes C \Rightarrow A \otimes (B \otimes C)}{}
\qquad
\infer[\pi]{A \Rightarrow B \multimap C}{A \otimes B \Rightarrow C}
\quad
\infer[\pi^{-1}]{A \otimes B \Rightarrow C}{A \Rightarrow B \multimap C}
\end{array}
\end{displaymath}
There exists a congruence relation $\doteq$ on each set of derivations $A \Rightarrow B$ with generators directly encoding the equations of the theory of skew monoidal closed categories. With the Hilbert-style calculus in place, one can show that there is a bijection between the set of derivations of the sequent $A \mid ~ \vdash B$ modulo $\circeq$ and the set of derivations of the sequent $A \Rightarrow B$ modulo $\doteq$.
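For illustration, the Hilbert-style rules are also easily rendered concretely. The following Python fragment (a sketch only, not the Agda formalization; it assumes the \texttt{Fma} datatypes \texttt{Unit}, \texttt{Tensor}, \texttt{Lolli} from the sketch in Section~\ref{sec2}) represents a derivation coarsely by its endpoints, with each rule checking that its premises fit:
\begin{verbatim}
# Hilbert-style rules as endpoint-checked constructors: a derivation
# of A => B is represented (coarsely) by the pair (A, B).

def ident(a):                 # id : A => A
    return (a, a)

def comp(f, g):               # from A => B and B => C infer A => C
    (a, b1), (b2, c) = f, g
    assert b1 == b2
    return (a, c)

def tens(f, g):               # from A => C, B => D infer A(*)B => C(*)D
    (a, c), (b, d) = f, g
    return (Tensor(a, b), Tensor(c, d))

def lam(a):                   # lambda : I (*) A => A
    return (Tensor(Unit(), a), a)

def rho(a):                   # rho : A => A (*) I
    return (a, Tensor(a, Unit()))

def alpha(a, b, c):           # alpha : (A(*)B)(*)C => A(*)(B(*)C)
    return (Tensor(Tensor(a, b), c), Tensor(a, Tensor(b, c)))

def pi(f):                    # from A (*) B => C infer A => B -o C
    (ab, c) = f
    assert isinstance(ab, Tensor)
    return (ab.a, Lolli(ab.b, c))
\end{verbatim}
Representing a derivation by its endpoints of course identifies too much (it forgets which map it denotes); a faithful encoding, like the Agda one, keeps the full derivation trees and quotients them by $\doteq$.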
\section{Proof-Theoretic Semantics via Focusing}
\label{sec:focus}

The equivalence relation $\circeq$ from (\ref{fig:circeq}) can also be viewed as an abstract rewrite system, by orienting every equation from left to right. The resulting rewrite system is locally confluent and strongly normalizing, thus confluent with unique normal forms. Derivations in normal form then correspond to canonical representatives of $\circeq$-equivalence classes. These representatives can be organized in a \emph{focused sequent calculus} in the sense of \cite{andreoli:logic:1992}, which describes, in a declarative fashion, a root-first proof search strategy for the (original, unfocused) sequent calculus.

\subsection{A First (Na{\"i}ve) Focused Sequent Calculus}

As a first attempt at focusing, we na{\"i}vely merge together the rules of the focused sequent calculi of skew monoidal categories \cite{uustalu:sequent:2021} and skew prounital closed categories \cite{uustalu:deductive:nodate}. In the resulting calculus, sequents have one of four possible subscript annotations, corresponding to four different phases of proof search: $\mathsf{RI}$ for `right invertible', $\mathsf{LI}$ for `left invertible', $\mathsf{P}$ for `passivation' and $\mathsf{F}$ for `focusing'. We will see soon that this focused sequent calculus is too permissive, in the sense that two distinct derivations in the focused system can correspond to $\circeq$-equivalent sequent calculus derivations.
\begin{equation}\label{eq:naive}
\begin{array}{lc}
\text{(right invertible)} & \proofbox{ \infer[{\multimap}\mathsf{R}]{S \mid \Gamma \vdash_{\mathsf{RI}} A \multimap B}{S \mid \Gamma , A \vdash_{\mathsf{RI}} B} \qquad \infer[\mathsf{LI} 2 \mathsf{RI}]{S \mid \Gamma \vdash_{\mathsf{RI}} P}{S \mid \Gamma \vdash_{\mathsf{LI}} P} }
\\[9pt]
\text{(left invertible)} & \proofbox{ \infer[\mathsf{IL}]{\mathsf{I} \mid \Gamma \vdash_{\mathsf{LI}} P}{{-} \mid \Gamma \vdash_{\mathsf{LI}} P} \qquad \infer[\otimes \mathsf{L}]{A \otimes B \mid \Gamma \vdash_{\mathsf{LI}} P}{A \mid B , \Gamma \vdash_{\mathsf{LI}} P} \qquad \infer[\mathsf{P} 2 \mathsf{LI}]{T \mid \Gamma \vdash_{\mathsf{LI}} P}{T \mid \Gamma \vdash_{\mathsf{P}} P} }
\\[9pt]
\text{(passivation)} & \proofbox{ \infer[\mathsf{pass}]{{-} \mid A , \Gamma \vdash_{\mathsf{P}} P }{ A\mid \Gamma \vdash_{\mathsf{LI}} P } \qquad \infer[\mathsf{F} 2 \mathsf{P}]{T \mid \Gamma \vdash_{\mathsf{P}} P}{ T \mid \Gamma \vdash_{\mathsf{F}} P } }
\\
\text{(focusing)} & \\
\multicolumn{2}{c}{ \hspace*{3mm} \infer[\mathsf{ax}]{X \mid \quad \vdash_{\mathsf{F}} X}{} \quad \infer[\mathsf{IR}]{{-} \mid \quad \vdash_{\mathsf{F}} \mathsf{I}}{} \quad \infer[\otimes \mathsf{R}]{T \mid \Gamma , \Delta \vdash_{\mathsf{F}} A \otimes B}{ T \mid \Gamma \vdash_{\mathsf{RI}} A & {-} \mid \Delta \vdash_{\mathsf{RI}} B } \quad \infer[{\multimap}\mathsf{L}]{A \multimap B \mid \Gamma , \Delta \vdash_{\mathsf{F}} P}{ {-} \mid \Gamma \vdash_{\mathsf{RI}} A & B \mid \Delta \vdash_{\mathsf{LI}} P } }
\end{array}
\end{equation}
In the rules above and the rest of the paper, the metavariable $P$ denotes a \emph{positive} formula, i.e.\ $P \not= A \multimap B$, while the metavariable $T$ indicates a \emph{negative} stoup, i.e.\ $T \not= \mathsf{I}$ and $T\not= A \otimes B$ ($T$ can also be empty).

We explain the rules of the focused sequent calculus from the perspective of root-first proof search. The starting phase is `right invertible' $\mathsf{RI}$.
\begin{itemize}
\item[($\vdash_\mathsf{RI}$)] We repeatedly apply the right invertible rule ${\multimap}\mathsf{R}$ with the goal of reducing the succedent to a positive formula $P$. When the succedent formula becomes positive, we move to phase $\mathsf{LI}$ via $\LI2\mathsf{RI}$.
\item[($\vdash_\mathsf{LI}$)] We repeatedly destruct the stoup formula via application of left invertible rules $\otimes \mathsf{L}$ and $\mathsf{IL}$ with the goal of making it negative. When this happens, we move to phase $\mathsf{P}$ via $\Pass2\mathsf{LI}$.
\item[($\vdash_\mathsf{P}$)] We have the possibility of applying the passivation rule and moving the leftmost formula $A$ in the context to the stoup when the latter is empty. This allows us to start decomposing $A$ using left invertible rules in phase $\mathsf{LI}$. Otherwise, we move to phase $\mathsf{F}$ via $\F2\mathsf{P}$.
\item[($\vdash_\mathsf{F}$)] We apply one of the four remaining rules $\mathsf{ax}$, $\mathsf{IR}$, $\otimes \mathsf{R}$ or ${\multimap}\mathsf{L}$.
The premises of $\otimes \mathsf{R}$ are both in phase $\mathsf{RI}$ since $A$ and $B$ are generic formulae, in particular they could be implications. The first premise of ${\multimap}\mathsf{L}$ is in phase $\mathsf{RI}$ for the same reason while the second premise is in $\mathsf{LI}$ because the succedent formula $P$ is positive.
\end{itemize}
The focused calculus in (\ref{eq:naive}) is sound and complete wrt.\ the sequent calculus in (\ref{eq:seqcalc}) as regards derivability, but not \emph{equationally complete}, i.e., there exist $\circeq$-equivalent sequent calculus derivations that correspond to multiple distinct derivations built using the rules in (\ref{eq:naive}). In other words, the rules in (\ref{eq:naive}) are too permissive. They facilitate two forms of non-determinism in root-first proof search that should not be there.
\begin{enumerate}
\item[(i)] The first premise of the $\otimes \mathsf{R}$ rule is in phase $\mathsf{RI}$, since $A$ is potentially an implication which the invertible right rule ${\multimap}\mathsf{R}$ could act upon. Proof search for the first premise eventually hits phase $\mathsf{P}$, where we have the possibility of applying the $\mathsf{pass}$ rule if the stoup is empty. This implies the existence of situations where either of the rules $\otimes \mathsf{R}$ and $\mathsf{pass}$ can be applied first, in both cases resulting in valid focused derivations. As an example, consider the two distinct derivations of ${-} \mid X , \Gamma , \Delta \vdash_{\mathsf{P}} P \otimes C$ under assumptions $f : X \mid \Gamma \vdash_{\mathsf{LI}} P$ and $g : {-} \mid \Delta \vdash_{\mathsf{RI}} C$.
\begin{equation}\label{eq:passtr}
\small
\proofbox{ \infer[\mathsf{pass}]{{-} \mid X , \Gamma , \Delta \vdash_{\mathsf{P}} P \otimes C}{ \infer[\mathsf{sw}]{X \mid \Gamma , \Delta \vdash_{\mathsf{LI}} P \otimes C}{ \infer[\otimes \mathsf{R}]{X \mid \Gamma , \Delta \vdash_{\mathsf{F}} P \otimes C}{ \infer[\mathsf{sw}]{X \mid \Gamma \vdash_{\mathsf{RI}} P}{ \deduce{X \mid \Gamma \vdash_{\mathsf{LI}} P}{f} } & \deduce{{-} \mid \Delta \vdash_{\mathsf{RI}} C}{g} } } }
\qquad
\infer[\mathsf{sw}]{{-} \mid X , \Gamma , \Delta \vdash_{\mathsf{P}} P \otimes C}{ \infer[\otimes \mathsf{R}]{{-} \mid X , \Gamma , \Delta \vdash_{\mathsf{F}} P \otimes C}{ \infer[\mathsf{sw}]{{-} \mid X , \Gamma \vdash_{\mathsf{RI}} P}{ \infer[\mathsf{pass}]{{-} \mid X , \Gamma \vdash_{\mathsf{P}} P}{ \deduce{X \mid \Gamma \vdash_{\mathsf{LI}} P}{f} } } & \deduce{{-} \mid \Delta \vdash_{\mathsf{RI}} C}{g} } } }
\end{equation}
Here and in the rest of the paper the rule $\mathsf{sw}$ above stands for a sequence of (appropriately typed) phase switching inferences by $\LI2\mathsf{RI}$, $\Pass2\mathsf{LI}$ and $\F2\mathsf{P}$. The corresponding sequent calculus derivations are equated by the congruence relation $\circeq$ because of the fourth equation in (\ref{fig:circeq}), i.e., the permutative conversion involving $\otimes \mathsf{R}$ and $\mathsf{pass}$.
\item[(ii)] Rules $\otimes \mathsf{R}$ and ${\multimap}\mathsf{L}$ appear in the same phase $\mathsf{F}$, though there are situations where either rule can be applied first, which can lead to two distinct focused derivations. More precisely, there are cases when $\otimes \mathsf{R}$ and ${\multimap}\mathsf{L}$ can be interchangeably applied.
As an example, consider the following two valid derivations of $A \multimap X \mid \Gamma , \Delta , \Lambda \vdash_{\mathsf{F}} P \otimes D$ under the assumptions $f : {-} \mid \Gamma \vdash_{\mathsf{RI}} A$, $g : X \mid \Delta \vdash_{\mathsf{LI}} P$ and $h : {-} \mid \Lambda \vdash_{\mathsf{RI}} D$.
\begin{equation}\label{eq:llefttr}
\small
\hspace{-.5cm}
\proofbox{ \infer[{\multimap}\mathsf{L}]{A \multimap X \mid \Gamma , \Delta , \Lambda \vdash_{\mathsf{F}} P \otimes D}{ \deduce{{-} \mid \Gamma \vdash_{\mathsf{RI}} A}{f} &\hspace{-.6cm} \infer[\mathsf{sw}]{X \mid \Delta , \Lambda \vdash_{\mathsf{LI}} P \otimes D}{ \infer[\otimes \mathsf{R}]{X \mid \Delta , \Lambda \vdash_{\mathsf{F}} P\otimes D}{ \infer[\mathsf{sw}]{X \mid \Delta \vdash_{\mathsf{RI}} P}{ \deduce{X \mid \Delta \vdash_{\mathsf{LI}} P}{g} } & \deduce{{-} \mid \Lambda \vdash_{\mathsf{RI}} D}{h} } } }
\hspace{.2cm}
\infer[\otimes \mathsf{R}]{A \multimap X \mid \Gamma , \Delta , \Lambda \vdash_{\mathsf{F}} P \otimes D}{ \infer[\mathsf{sw}]{A \multimap X \mid \Gamma , \Delta \vdash_{\mathsf{RI}} P}{ \infer[{\multimap}\mathsf{L}]{A \multimap X \mid \Gamma , \Delta \vdash_{\mathsf{F}} P}{ \deduce{{-} \mid \Gamma \vdash_{\mathsf{RI}} A}{f} & \deduce{X \mid \Delta \vdash_{\mathsf{LI}} P}{g} } } &\hspace{-.6cm} \deduce{{-} \mid \Lambda \vdash_{\mathsf{RI}} D}{h} } }
\end{equation}
The corresponding sequent calculus derivations, at the same time, are $\circeq$-equivalent because of the seventh equation in (\ref{fig:circeq}), the permutative conversion for $\otimes \mathsf{R}$ and ${\multimap}\mathsf{L}$.
\end{enumerate}
To get rid of type (i) undesired non-determinism, one might try an idea similar to the one that works in the skew monoidal non-closed case \cite{uustalu:sequent:2021}, namely, to prioritize $\mathsf{pass}$ over $\otimes \mathsf{R}$ by requiring the first premise of the latter to be a sequent in phase $\mathsf{F}$. But this does not do the right thing in the skew monoidal closed case. E.g., the sequent ${-} \mid Y \vdash_{\mathsf{F}} (X \multimap X) \otimes Y$ becomes underivable while its counterpart is derivable in (\ref{eq:seqcalc}).
\begin{equation*}\label{eq:counterexample0}
\infer[\otimes \mathsf{R}]{{-} \mid Y \vdash_{\mathsf{F}} (X \multimap X) \otimes Y}{ \deduce{{-} \mid \quad \vdash_{\mathsf{F}} X \multimap X}{??} & \infer[\mathsf{sw}]{{-} \mid Y \vdash_{\mathsf{RI}} Y}{ \infer[\mathsf{pass}]{{-} \mid Y \vdash_{\mathsf{P}} Y}{ \infer[\mathsf{sw}]{Y \mid \quad \vdash_{\mathsf{LI}} Y}{ \infer[\mathsf{ax}]{Y \mid \quad \vdash_{\mathsf{F}} Y}{} } } } }
\end{equation*}
An impulsive idea for eliminating undesired non-determinism of type (ii) is to prioritize the application of ${\multimap}\mathsf{L}$ over $\otimes \mathsf{R}$, e.g., by forcing the application of ${\multimap}\mathsf{L}$ in phase $\mathsf{F}$ whenever the stoup formula is an implication and restricting the application of $\otimes \mathsf{R}$ to sequents where the stoup is empty or atomic. This too leads to an incomplete calculus, since the sequent $X \multimap Y \mid Z \vdash_{\mathsf{F}} (X \multimap Y) \otimes Z$, which has a derivable correspondent in (\ref{eq:seqcalc}), would not be derivable by first applying the ${\multimap}\mathsf{L}$ rule.
\begin{equation*}\label{eq:counterexample1}
\infer[{\multimap}\mathsf{L}]{X \multimap Y \mid Z \vdash_{\mathsf{F}} (X \multimap Y) \otimes Z}{ \deduce{{-} \mid \quad \vdash_{\mathsf{RI}} X}{??} & \infer[\mathsf{sw}]{Y \mid Z \vdash_{\mathsf{LI}} (X \multimap Y) \otimes Z}{ \infer[\otimes \mathsf{R}]{Y \mid Z \vdash_{\mathsf{F}} (X \multimap Y) \otimes Z}{ \infer[{\multimap}\mathsf{R}]{Y \mid \quad \vdash_{\mathsf{RI}} X \multimap Y}{ \deduce{Y \mid X \vdash_{\mathsf{RI}} Y}{??} } & \infer[\mathsf{sw}]{{-} \mid Z \vdash_{\mathsf{RI}} Z}{ \infer[\mathsf{pass}]{{-} \mid Z \vdash_{\mathsf{P}} Z}{ \infer[\mathsf{sw}]{Z \mid \quad \vdash_{\mathsf{LI}} Z}{ \infer[\mathsf{ax}]{Z \mid \quad \vdash_{\mathsf{F}} Z}{} } } } } } }
\end{equation*}
Dually, prioritizing the application of $\otimes \mathsf{R}$ over ${\multimap}\mathsf{L}$ leads to similar issues, e.g., the sequent $X \multimap (Y \otimes Z) \mid X \vdash_\mathsf{F} Y \otimes Z$ would not be derivable by first applying the $\otimes \mathsf{R}$ rule while its counterpart is derivable in (\ref{eq:seqcalc}).
\begin{equation*}\label{eq:counterexample2}
\infer[\otimes \mathsf{R}]{X \multimap (Y \otimes Z) \mid X \vdash_{\mathsf{F}} Y \otimes Z}{ \infer[\mathsf{sw}]{X \multimap (Y \otimes Z) \mid X \vdash_{\mathsf{RI}} Y}{ \infer[{\multimap}\mathsf{L}]{X \multimap (Y \otimes Z) \mid X \vdash_{\mathsf{F}} Y}{ \infer[\mathsf{sw}]{{-} \mid X \vdash_{\mathsf{RI}} X}{ \infer[\mathsf{pass}]{{-} \mid X \vdash_{\mathsf{P}} X}{ \infer[\mathsf{sw}]{X \mid \quad \vdash_{\mathsf{LI}} X}{ \infer[\mathsf{ax}]{X \mid \quad \vdash_{\mathsf{F}} X}{} } } } & \infer[\otimes \mathsf{L}]{Y \otimes Z \mid \quad \vdash_{\mathsf{LI}} Y}{ \deduce{Y \mid Z \vdash_{\mathsf{LI}} Y}{??} } } } & \deduce{{-} \mid \quad \vdash_{\mathsf{RI}} Z}{??} }
\end{equation*}

\subsection{A Focused System with Tag Annotations}\label{sec:tag}

In order to eliminate undesired non-determinism of type (i) between $\mathsf{pass}$ and $\otimes \mathsf{R}$, we need to restrict applications of $\mathsf{pass}$ in the derivation of the first premise of an application of $\otimes \mathsf{R}$. One way to achieve this is to allow such an application of $\mathsf{pass}$ only if the leftmost formula of the context is \emph{new}, in the sense that it was not already present in the context before the $\otimes \mathsf{R}$ application. For example, with this restriction in place, the application of $\mathsf{pass}$ in the second derivation of (\ref{eq:passtr}) would be invalid, since the formula $X$ was already present in the context before the application of $\otimes \mathsf{R}$.

Analogously, undesired non-determinism of type (ii) between ${\multimap}\mathsf{L}$ and $\otimes \mathsf{R}$ can be eliminated by restricting applications of ${\multimap}\mathsf{L}$ after an application of $\otimes \mathsf{R}$. This can be achieved by forcing the subsequent application of ${\multimap}\mathsf{L}$ to split the context into two parts $\Gamma,\Delta$ in such a way that $\Gamma$, i.e., the context of the first premise, necessarily contains some \emph{new} formula occurrences that were not in the context before the first $\otimes \mathsf{R}$ application. Under this restriction, the application of ${\multimap}\mathsf{L}$ in the second derivation of (\ref{eq:llefttr}) would become invalid, since all formulae in $\Gamma$ are already present in the context before the application of $\otimes \mathsf{R}$.
One way to distinguish between old and new formula occurrences in the above cases is to mark with a \emph{tag} $\bullet$ each new formula appearing in the context during the construction of a focused derivation. We call a formula occurrence ``new'' whenever it is moved from the succedent to the context via an application of the right implication rule ${\multimap}\mathsf{R}$. In order to remember when we are building a derivation of a sequent arising as the first premise of $\otimes \mathsf{R}$, in which the distinction between old and new formulae is relevant, we mark such sequents with a tag $\bullet$ as well. More generally, we write $S \mid \Gamma \vdash^{x}_{ph} C$ for a sequent that can be untagged or tagged, i.e., the turnstile can be of the form $\vdash_{ph}$ or $\vdash^\bullet_{ph}$, for $ph \in \{ \mathsf{RI},\mathsf{LI},\mathsf{P},\mathsf{F}\}$. This implies that there are a total of eight sequent phases, corresponding to the possible combinations of four subscript phases with the untagged/tagged state.

In tagged sequents $S \mid \Gamma \vdash_{ph}^{\bullet} C$, the formulae in the context $\Gamma$ can be untagged or tagged, i.e., they can be of the form $A$ or $A^\bullet$; to be precise, all untagged formulae in $\Gamma$ must precede all tagged formulae (i.e., the context splits into untagged and tagged parts and, instead of possibly tagged formulae, we could alternatively work with contexts with two compartments). The formulae in the context of an untagged sequent $S \mid \Gamma \vdash_{ph} C$ must all be untagged (or, alternatively, the tagged compartment must be empty). Given a context $\Gamma$, we write $\Gamma^{\circ}$ for the same context where all tags have been removed from the formulae in it.

Derivations in the focused sequent calculus with tag annotations are generated by the rules
\begin{equation}\label{eq:focus}
\begin{array}{lc}
\text{(right invertible)} & \proofbox{ \infer[{\multimap}\mathsf{R}]{S \mid \Gamma \vdash^{x}_{\mathsf{RI}} A \multimap B}{S \mid \Gamma , A^{x} \vdash^{x}_{\mathsf{RI}} B} \qquad \infer[\mathsf{LI} 2 \mathsf{RI}]{S \mid \Gamma \vdash^{x}_{\mathsf{RI}} P}{S \mid \Gamma \vdash^{x}_{\mathsf{LI}} P} }
\\[10pt]
\text{(left invertible)} & \proofbox{ \infer[\mathsf{IL}]{\mathsf{I} \mid \Gamma \vdash_{\mathsf{LI}} P}{{-} \mid \Gamma \vdash_{\mathsf{LI}} P} \qquad \infer[\otimes \mathsf{L}]{A \otimes B \mid \Gamma \vdash_{\mathsf{LI}} P}{A \mid B , \Gamma \vdash_{\mathsf{LI}} P} \qquad \infer[\mathsf{P} 2 \mathsf{LI}]{T \mid \Gamma \vdash^{x}_{\mathsf{LI}} P}{T \mid \Gamma \vdash^{x}_{\mathsf{P}} P} }
\\[10pt]
\text{(passivation)} & \proofbox{ \infer[\mathsf{pass}]{{-} \mid A^{x} , \Gamma \vdash^{x}_{\mathsf{P}} P }{ A\mid \Gamma^{\circ} \vdash_{\mathsf{LI}} P } \qquad \infer[\mathsf{F} 2 \mathsf{P}]{T \mid \Gamma \vdash^{x}_{\mathsf{P}} P}{ T \mid \Gamma \vdash^{x}_{\mathsf{F}} P } }
\\[10pt]
\text{(focusing)} & \proofbox{\infer[\mathsf{ax}]{X \mid \quad \vdash^{x}_{\mathsf{F}} X}{} \qquad \infer[\mathsf{IR}]{{-} \mid \quad \vdash^{x}_{\mathsf{F}} \mathsf{I}}{} }
\\[6pt]
\multicolumn{2}{c}{ \infer[\otimes \mathsf{R}]{T \mid \Gamma , \Delta \vdash^{x}_{\mathsf{F}} A \otimes B}{ T \mid \Gamma^{\circ} \vdash^{\bullet}_{\mathsf{RI}} A & {-} \mid \Delta^{\circ} \vdash_{\mathsf{RI}} B } \qquad \infer[{\multimap}\mathsf{L}]{A \multimap B \mid \Gamma , \Delta \vdash^{x}_{\mathsf{F}} P}{ {-} \mid \Gamma^{\circ} \vdash_{\mathsf{RI}} A & B \mid \Delta^{\circ} \vdash_{\mathsf{LI}} P & x = \bullet \supset \bullet \in \Gamma } }
\end{array}
\end{equation}
Remember
that $P$ is a positive formula and $T$ is a negative stoup. The side condition in rule ${\multimap}\mathsf{L}$ reads: if $x = \bullet$, then some formula in $\Gamma$ must be tagged. For the rule $\mathsf{pass}$ notice that, if $x = \bullet$, it is actually forced that all formulae of $\Gamma$ are tagged, since the preceding context formula $A^\bullet$ is tagged. For the rules $\otimes \mathsf{R}$ and ${\multimap}\mathsf{L}$ similarly notice that, if some formula of $\Gamma$ is tagged, then all formulae of $\Delta$ must be tagged. The rules in (\ref{eq:focus}), when stripped of all the tags, are equivalent to the rules in the na{\"i}ve calculus (\ref{eq:naive}). When building a derivation of an untagged sequent $S \mid \Gamma \vdash_\mathsf{RI} A$, the only possible way to enter a tagged phase is via an application of the $\otimes \mathsf{R}$ rule, so that sequents with turnstile marked $\vdash_{ph}^\bullet$ denote the fact that we are performing proof search for the first premise of an $\otimes \mathsf{R}$ inference (and the stoup is negative). The search for a proof of a tagged sequent $T \mid \Gamma^\circ \vdash^\bullet_\mathsf{RI} A$ proceeds as follows: \begin{itemize} \item[($\vdash^\bullet_\mathsf{RI}$)] We eagerly apply the right invertible rule ${\multimap}\mathsf{R}$ with the goal of reducing $A$ to a positive formula $P$. All formulae that get moved to the right end of the context are ``new'', and are therefore marked with $\bullet$. When the succedent formula becomes positive, we move to the tagged $\mathsf{LI}$ phase via $\LI2\mathsf{RI}$. \item[($\vdash^\bullet_\mathsf{LI}$)] Since $T$ is a negative stoup, we can only move to the tagged $\mathsf{P}$ phase via $\Pass2\mathsf{LI}$. \item[($\vdash^\bullet_\mathsf{P}$)] If the stoup is empty, we have the possibility of applying the $\mathsf{pass}$ rule and moving the leftmost formula $A$ of the context to the stoup, but only when this formula is marked by $\bullet$. This restriction makes it possible to remove the undesired non-determinism of type (i). We then strip the context of all tags and jump to the untagged $\mathsf{LI}$ phase. If we do not (or cannot) apply $\mathsf{pass}$, we move to the tagged $\mathsf{F}$ phase via $\F2\mathsf{P}$. \item[($\vdash^\bullet_\mathsf{F}$)] The possible rules to apply (depending on the stoup and succedent formula) are $\mathsf{ax}$, $\mathsf{IR}$, $\otimes \mathsf{R}$ or ${\multimap}\mathsf{L}$. If we apply $\otimes \mathsf{R}$, we remove all tags from the context $\Gamma, \Delta$ and move the first premise to the tagged $\mathsf{RI}$ phase again. The tags are removed from the context in order to reset the tracking of new formulae. The most interesting case is ${\multimap}\mathsf{L}$, which can only be applied if the $\Gamma$ part of the context $\Gamma, \Delta$ contains at least one tagged formula. This side condition implements the restriction allowing the elimination of the undesired non-determinism of type (ii). All tags are removed from $\Gamma, \Delta$ and proof search continues in the appropriate untagged phases. \end{itemize} The employment of tag annotations eliminates the two types of undesired non-determinism. For example, only one of the two derivations in (\ref{eq:passtr}) is valid using the rules in (\ref{eq:focus}), and similarly for (\ref{eq:llefttr}).
\begin{displaymath} \small \begin{array}{cc} \infer[\mathsf{pass}]{{-} \mid X , \Gamma , \Delta \vdash_{\mathsf{P}} P \otimes C}{ \infer[\mathsf{sw}]{X \mid \Gamma , \Delta \vdash_{\mathsf{LI}} P \otimes C}{ \infer[\otimes \mathsf{R}]{X \mid \Gamma , \Delta \vdash_{\mathsf{F}} P \otimes C}{ \infer[\mathsf{sw}]{X \mid \Gamma \vdash^{\bullet}_{\mathsf{RI}} P}{ \deduce{X \mid \Gamma \vdash^{\bullet}_{\mathsf{LI}} P}{f} } & \deduce{{-} \mid \Delta \vdash_{\mathsf{RI}} C}{g} } } } & \infer[\mathsf{sw}]{{-} \mid X , \Gamma , \Delta \vdash_{\mathsf{P}} P \otimes C}{ \infer[\otimes \mathsf{R}]{{-} \mid X , \Gamma , \Delta \vdash_{\mathsf{F}} P \otimes C}{ \infer[\mathsf{sw}]{{-} \mid X , \Gamma \vdash^{\bullet}_{\mathsf{RI}} P}{ \deduce{{-} \mid X , \Gamma \vdash^{\bullet}_{\mathsf{P}} P}{??} } & \deduce{{-} \mid \Delta \vdash_{\mathsf{RI}} C}{g} } } \\ (\text{same derivation as in (\ref{eq:passtr})}) & (\text{$\mathsf{pass}$ not applicable since $X$ is not tagged}) \\[10pt] \infer[{\multimap}\mathsf{L}]{A \multimap X \mid \Gamma , \Delta , \Lambda \vdash_{\mathsf{F}} P \otimes D}{ \deduce{{-} \mid \Gamma \vdash_{\mathsf{RI}} A}{f} & \infer[\mathsf{sw}]{X \mid \Delta , \Lambda \vdash_{\mathsf{LI}} P \otimes D}{ \infer[\otimes \mathsf{R}]{X \mid \Delta , \Lambda \vdash_{\mathsf{F}} P \otimes D}{ \infer[\mathsf{sw}]{X \mid \Delta \vdash^{\bullet}_{\mathsf{RI}} P}{ \deduce{X \mid \Delta \vdash^{\bullet}_{\mathsf{LI}} P}{g} } & \deduce{{-} \mid \Lambda \vdash_{\mathsf{RI}} D}{h} } } } & \infer[\otimes \mathsf{R}]{A \multimap X \mid \Gamma , \Delta , \Lambda \vdash_{\mathsf{F}} P \otimes D}{ \infer[\mathsf{sw}]{A \multimap X \mid \Gamma , \Delta \vdash^{\bullet}_{\mathsf{RI}} P}{ \deduce{A \multimap X \mid \Gamma , \Delta \vdash^{\bullet}_{\mathsf{F}} P}{??} } & \deduce{{-} \mid \Lambda \vdash_{\mathsf{RI}} D}{h} } \\ (\text{same derivation as in (\ref{eq:llefttr})}) & (\text{${\multimap}\mathsf{L}$ not applicable since $\Gamma$ is tag-free}) \end{array} \end{displaymath} The extra restrictions on sequents and formulae that the rules in (\ref{eq:focus}) impose in comparison to those in (\ref{eq:naive}) do not reduce derivability, e.g., the sequents ${-} \mid Y \vdash_{\mathsf{F}} (X \multimap X) \otimes Y$, whose proof requires passivation of a new formula in the derivation of the first premise of a $\otimes \mathsf{R}$ application, $X \multimap Y \mid Z \vdash_{\mathsf{RI}} (X \multimap Y) \otimes Z$, whose proof requires application of $\otimes \mathsf{R}$ before ${\multimap}\mathsf{L}$, and $X \multimap (Y \otimes Z) \mid X \vdash_\mathsf{F} Y \otimes Z$, which needs ${\multimap}\mathsf{L}$ invoked before $\otimes \mathsf{R}$, are all derivable.
\begin{displaymath} \footnotesize \begin{array}{c} \proofbox{ \infer[\otimes \mathsf{R}]{{-} \mid Y \vdash_{\mathsf{F}} (X \multimap X) \otimes Y}{ \infer[{\multimap}\mathsf{R}]{{-} \mid \quad \vdash^{\bullet}_{\mathsf{RI}} X \multimap X}{ \infer[\mathsf{sw}]{{-} \mid X^{\bullet} \vdash^{\bullet}_{\mathsf{RI}} X}{ \infer[\mathsf{pass}]{{-} \mid X^{\bullet} \vdash^{\bullet}_{\mathsf{P}} X}{ \infer[\mathsf{sw}]{X \mid \quad \vdash_{\mathsf{LI}} X}{ \infer[\mathsf{ax}]{X \mid \quad \vdash_{\mathsf{F}} X}{} } } } } & \infer[\mathsf{sw}]{{-} \mid Y \vdash_{\mathsf{RI}} Y}{ \infer[\mathsf{pass}]{{-} \mid Y \vdash_{\mathsf{P}} Y}{ \infer[\mathsf{sw}]{Y \mid \quad \vdash_{\mathsf{LI}} Y}{ \infer[\mathsf{ax}]{Y \mid \quad \vdash_{\mathsf{F}} Y}{} } } } } } \\ \proofbox{ \infer[\otimes \mathsf{R}]{X \multimap Y \mid Z \vdash_{\mathsf{F}} (X \multimap Y) \otimes Z}{ \infer[{\multimap}\mathsf{R}]{X \multimap Y \mid \quad \vdash^{\bullet}_{\mathsf{RI}} X \multimap Y}{ \infer[\mathsf{sw}]{X \multimap Y \mid X^{\bullet} \vdash^{\bullet}_{\mathsf{RI}} Y}{ \infer[{\multimap}\mathsf{L}]{X \multimap Y \mid X^{\bullet} \vdash^{\bullet}_{\mathsf{F}} Y}{ \infer[\mathsf{sw}]{{-} \mid X \vdash_{\mathsf{RI}} X}{ \infer[\mathsf{pass}]{{-} \mid X \vdash_{\mathsf{P}} X}{ \infer[\mathsf{sw}]{X \mid \quad \vdash_{\mathsf{LI}} X}{ \infer[\mathsf{ax}]{X \mid \quad \vdash_{\mathsf{F}} X}{} } } } & \infer[\mathsf{sw}]{Y \mid \quad \vdash_{\mathsf{LI}} Y}{ \infer[\mathsf{ax}]{Y \mid \quad \vdash_{\mathsf{F}} Y}{} } } } } & \infer[\mathsf{sw}]{{-} \mid Z \vdash_{\mathsf{RI}} Z}{ \infer[\mathsf{pass}]{{-} \mid Z \vdash_{\mathsf{P}} Z}{ \infer[\mathsf{sw}]{Z \mid \quad \vdash_{\mathsf{LI}} Z}{ \infer[\mathsf{ax}]{Z \mid \quad \vdash_{\mathsf{F}} Z}{} } } } } \quad \infer[{\multimap}\mathsf{L}]{X \multimap (Y \otimes Z) \mid X \vdash_{\mathsf{F}} Y \otimes Z}{ \infer[\mathsf{sw}]{{-} \mid X \vdash_{\mathsf{RI}} X}{ \infer[\mathsf{pass}]{{-} \mid X \vdash_{\mathsf{P}} X}{ \infer[\mathsf{sw}]{X \mid \quad \vdash_{\mathsf{LI}} X}{ \infer[\mathsf{ax}]{X \mid \quad \vdash_{\mathsf{F}} X}{} } } } & \infer[\otimes \mathsf{L}]{Y \otimes Z \mid \quad \vdash_{\mathsf{LI}} Y \otimes Z}{ \infer[\mathsf{sw}]{Y \mid Z \vdash_{\mathsf{LI}} Y \otimes Z}{ \infer[\otimes \mathsf{R}]{Y \mid Z \vdash_{\mathsf{F}} Y \otimes Z}{ \infer[\mathsf{sw}]{Y \mid \quad \vdash_{\mathsf{RI}}^{\bullet} Y}{ \infer[\mathsf{ax}]{Y \mid \quad \vdash_{\mathsf{F}}^\bullet Y}{} } & \infer[\mathsf{sw}]{{-} \mid Z \vdash_{\mathsf{RI}} Z}{ \infer[\mathsf{pass}]{{-} \mid Z \vdash_{\mathsf{P}} Z}{ \infer[\mathsf{sw}]{Z \mid \quad \vdash_{\mathsf{LI}} Z}{ \infer[\mathsf{ax}]{Z \mid \quad \vdash_{\mathsf{F}} Z}{} } } } } } } } } \end{array} \normalsize \end{displaymath} We should point out that although the focused calculus is free of the undesired non-determinism that the na\"ive attempt (\ref{eq:naive}) suffered from, it is still non-deterministic and this has to be so. In particular, the focused calculus (\ref{eq:focus}) keeps the following two types of non-determinism of the focused calculus of \cite{uustalu:sequent:2021} (see the analysis in \cite{uustalu:proof:nodate}). We call these types of non-determinism \emph{essential} because they reflect the fact that there are sequents with multiple derivations in $\mathtt{SkNMILL}$\ that are not $\circeq$-equivalent. \begin{enumerate} \item[1.] In phase $\mathsf{P}$, when the stoup is empty, there is a choice of whether to apply $\mathsf{pass}$ or $\F2\mathsf{P}$ and sometimes both options lead to a derivation.
For example, the sequent $X \mid \mathsf{I} \otimes Y \vdash_{\mathsf{F}} X \otimes (\mathsf{I} \otimes Y)$ has two distinct derivations in the focused system and the corresponding derivations in $\mathtt{SkNMILL}$\ are not $\circeq$-equivalent. \begin{equation*} \footnotesize \begin{array}{cc} \infer[\otimes \mathsf{R}]{X \mid \mathsf{I} \otimes Y \vdash_{\mathsf{F}} X \otimes (\mathsf{I} \otimes Y)}{ \infer[\mathsf{sw}]{X \mid \quad \vdash^{\bullet}_{\mathsf{RI}} X}{ \infer[\mathsf{ax}]{X \mid \quad \vdash_{\mathsf{F}}^\bullet X}{} } & \infer[\mathsf{sw}]{{-} \mid \mathsf{I} \otimes Y \vdash_{\mathsf{RI}} \mathsf{I} \otimes Y}{ \infer[\mathsf{pass}]{{-} \mid \mathsf{I} \otimes Y \vdash_{\mathsf{P}} \mathsf{I} \otimes Y}{ \infer[\otimes \mathsf{L}]{\mathsf{I} \otimes Y \mid \quad \vdash_{\mathsf{LI}} \mathsf{I} \otimes Y}{ \infer[\mathsf{IL}]{\mathsf{I} \mid Y \vdash_{\mathsf{LI}} \mathsf{I} \otimes Y}{ \infer[\mathsf{sw}]{{-} \mid Y \vdash_{\mathsf{LI}} \mathsf{I} \otimes Y}{ \infer[\otimes \mathsf{R}]{{-} \mid Y \vdash_{\mathsf{F}} \mathsf{I} \otimes Y}{ \infer[\mathsf{sw}]{{-} \mid \quad \vdash^{\bullet}_{\mathsf{RI}} \mathsf{I}}{ \infer[\mathsf{IR}]{{-} \mid \quad \vdash_{\mathsf{F}}^\bullet \mathsf{I}}{} } & \infer[\mathsf{sw}]{{-} \mid Y \vdash_{\mathsf{RI}} Y}{ \infer[\mathsf{pass}]{{-} \mid Y \vdash_{\mathsf{P}} Y}{ \infer[\mathsf{sw}]{Y \mid \quad \vdash_{\mathsf{LI}} Y}{ \infer[\mathsf{ax}]{Y \mid \quad \vdash_{\mathsf{F}} Y}{} } } } } } } } } } } & \infer[\otimes \mathsf{R}]{X \mid \mathsf{I} \otimes Y \vdash_{\mathsf{F}} X \otimes (\mathsf{I} \otimes Y)}{ \infer[\mathsf{sw}]{X \mid \quad \vdash^{\bullet}_{\mathsf{RI}} X}{ \infer[\mathsf{ax}]{X \mid \quad \vdash_{\mathsf{F}}^\bullet X}{} } & \infer[\mathsf{sw}]{{-} \mid \mathsf{I} \otimes Y \vdash_{\mathsf{RI}} \mathsf{I} \otimes Y}{ \infer[\otimes \mathsf{R}]{{-} \mid \mathsf{I} \otimes Y \vdash_{\mathsf{F}} \mathsf{I} \otimes Y}{ \infer[\mathsf{sw}]{{-} \mid \quad \vdash^{\bullet}_{\mathsf{RI}} \mathsf{I}}{ \infer[\mathsf{IR}]{{-} \mid \quad \vdash_{\mathsf{F}}^\bullet \mathsf{I}}{} } & \infer[\mathsf{sw}]{{-} \mid \mathsf{I} \otimes Y \vdash_{\mathsf{RI}} Y}{ \infer[\mathsf{pass}]{{-} \mid \mathsf{I} \otimes Y \vdash_{\mathsf{P}} Y}{ \infer[\otimes \mathsf{L}]{\mathsf{I} \otimes Y \mid \quad \vdash_{\mathsf{LI}} Y}{ \infer[\mathsf{IL}]{\mathsf{I} \mid Y \vdash_{\mathsf{LI}} Y}{ \infer[\mathsf{sw}]{{-} \mid Y \vdash_{\mathsf{LI}} Y}{ \infer[\mathsf{pass}]{{-} \mid Y \vdash_{\mathsf{P}} Y}{ \infer[\mathsf{sw}]{Y \mid \quad \vdash_{\mathsf{LI}} Y}{ \infer[\mathsf{ax}]{Y \mid \quad \vdash_{\mathsf{F}} Y}{} } } } } } } } } } \end{array} \normalsize \end{equation*} \item[2.] In phase $\mathsf{F}$, if the succedent formula is $A \otimes B$, and the rule $\otimes \mathsf{R}$ is to be applied, the context can be split anywhere. Sometimes several of these splits can lead to a derivation. For example, the sequent $X \mid \mathsf{I} , Y \vdash_{\mathsf{F}} (X \otimes \mathsf{I}) \otimes Y$ has two distinct derivations.
\begin{equation*} \footnotesize \begin{array}{cc} \infer[\otimes \mathsf{R}]{X \mid \mathsf{I} , Y \vdash_{\mathsf{F}} (X \otimes \mathsf{I}) \otimes Y}{ \infer[\mathsf{sw}]{X \mid \mathsf{I} \vdash^{\bullet}_{\mathsf{RI}} X \otimes \mathsf{I}}{ \infer[\otimes \mathsf{R}]{X \mid \mathsf{I} \vdash_{\mathsf{F}}^\bullet X \otimes \mathsf{I}}{ \infer[\mathsf{sw}]{X \mid \quad \vdash^{\bullet}_{\mathsf{RI}} X}{ \infer[\mathsf{ax}]{X \mid \quad \vdash_{\mathsf{F}}^\bullet X}{} } & \infer[\mathsf{sw}]{{-} \mid \mathsf{I} \vdash_{\mathsf{RI}} \mathsf{I}}{ \infer[\mathsf{pass}]{{-} \mid \mathsf{I} \vdash_{\mathsf{P}} \mathsf{I}}{ \infer[\mathsf{IL}]{\mathsf{I} \mid \quad \vdash_{\mathsf{LI}} \mathsf{I}}{ \infer[\mathsf{sw}]{{-} \mid \quad \vdash_{\mathsf{LI}} \mathsf{I}}{ \infer[\mathsf{IR}]{{-} \mid \quad \vdash_{\mathsf{F}} \mathsf{I}}{} } } } } } } & \infer[\mathsf{sw}]{{-} \mid Y \vdash_{\mathsf{RI}} Y}{ \infer[\mathsf{pass}]{{-} \mid Y \vdash_{\mathsf{P}} Y}{ \infer[\mathsf{sw}]{Y \mid \quad \vdash_{\mathsf{LI}} Y}{ \infer[\mathsf{ax}]{Y \mid \quad \vdash_{\mathsf{F}} Y}{} } } } } & \infer[\otimes \mathsf{R}]{X \mid \mathsf{I} , Y \vdash_{\mathsf{F}} (X \otimes \mathsf{I}) \otimes Y}{ \infer[\mathsf{sw}]{X \mid \quad \vdash^{\bullet}_{\mathsf{RI}} X \otimes \mathsf{I}}{ \infer[\otimes \mathsf{R}]{X \mid \quad \vdash_{\mathsf{F}}^\bullet X \otimes \mathsf{I}}{ \infer[\mathsf{sw}]{X \mid \quad \vdash^{\bullet}_{\mathsf{RI}} X}{ \infer[\mathsf{ax}]{X \mid \quad \vdash_{\mathsf{F}}^\bullet X}{} } & \infer[\mathsf{sw}]{{-} \mid \quad \vdash_{\mathsf{RI}} \mathsf{I}}{ \infer[\mathsf{IR}]{{-} \mid \quad \vdash_{\mathsf{F}} \mathsf{I}}{} } } } & \infer[\mathsf{sw}]{{-} \mid \mathsf{I} , Y \vdash_{\mathsf{RI}} Y}{ \infer[\mathsf{pass}]{{-} \mid \mathsf{I} , Y \vdash_{\mathsf{P}} Y}{ \infer[\mathsf{IL}]{\mathsf{I} \mid Y \vdash_{\mathsf{LI}} Y}{ \infer[\mathsf{sw}]{{-} \mid Y \vdash_{\mathsf{LI}} Y}{ \infer[\mathsf{pass}]{{-} \mid Y \vdash_{\mathsf{P}} Y}{ \infer[\mathsf{sw}]{Y \mid \quad \vdash_{\mathsf{LI}} Y}{ \infer[\mathsf{ax}]{Y \mid \quad \vdash_{\mathsf{F}} Y}{} } } } } } } } \end{array} \normalsize \end{equation*} \end{enumerate} The presence of $\multimap$ adds two further types of essential non-determinism. \begin{enumerate} \item[3.] This type is similar to type 2. In phase $\mathsf{F}$, if the stoup formula is $A \multimap B$, and the rule ${\multimap}\mathsf{L}$ is to be applied, the context can be split anywhere. Again, sometimes several of these splits lead to a derivation. For example, the sequent $\mathsf{I} \multimap (X \multimap Y) \mid \mathsf{I} , X \vdash_{\mathsf{F}} Y$ has two derivations.
\begin{equation*} \footnotesize \begin{array}{cc} \infer[{\multimap}\mathsf{L}]{\mathsf{I} \multimap (X \multimap Y) \mid \mathsf{I} , X \vdash_{\mathsf{F}} Y}{ \infer[\mathsf{sw}]{{-} \mid \mathsf{I} \vdash_{\mathsf{RI}} \mathsf{I}}{ \infer[\mathsf{pass}]{{-} \mid \mathsf{I} \vdash_{\mathsf{P}} \mathsf{I}}{ \infer[\mathsf{IL}]{\mathsf{I} \mid \quad \vdash_{\mathsf{LI}} \mathsf{I}}{ \infer[\mathsf{sw}]{{-} \mid \quad \vdash_{\mathsf{LI}} \mathsf{I}}{ \infer[\mathsf{IR}]{{-} \mid \quad \vdash_{\mathsf{F}} \mathsf{I}}{} } } } } & \infer[\mathsf{sw}]{X \multimap Y \mid X \vdash_{\mathsf{LI}} Y}{ \infer[{\multimap}\mathsf{L}]{X \multimap Y \mid X \vdash_{\mathsf{F}} Y}{ \infer[\mathsf{sw}]{{-} \mid X \vdash_{\mathsf{RI}} X}{ \infer[\mathsf{pass}]{{-} \mid X \vdash_{\mathsf{P}} X}{ \infer[\mathsf{sw}]{X \mid \quad \vdash_{\mathsf{LI}} X}{ \infer[\mathsf{ax}]{X \mid \quad \vdash_{\mathsf{F}} X}{} } } } & \infer[\mathsf{sw}]{Y \mid \quad \vdash_{\mathsf{LI}} Y}{ \infer[\mathsf{ax}]{Y \mid \quad \vdash_{\mathsf{F}} Y}{} } } } } & \infer[{\multimap}\mathsf{L}]{\mathsf{I} \multimap (X \multimap Y) \mid \mathsf{I} , X \vdash_{\mathsf{F}} Y}{ \infer[\mathsf{sw}]{{-} \mid \quad \vdash_{\mathsf{RI}} \mathsf{I}}{ \infer[\mathsf{IR}]{{-} \mid \quad \vdash_{\mathsf{F}} \mathsf{I}}{} } & \infer[\mathsf{sw}]{X \multimap Y \mid \mathsf{I} , X \vdash_{\mathsf{LI}} Y}{ \infer[{\multimap}\mathsf{L}]{X \multimap Y \mid \mathsf{I} , X \vdash_{\mathsf{F}} Y}{ \infer[\mathsf{sw}]{{-} \mid \mathsf{I} , X \vdash_{\mathsf{RI}} X}{ \infer[\mathsf{pass}]{{-} \mid \mathsf{I} , X \vdash_{\mathsf{P}} X}{ \infer[\mathsf{IL}]{\mathsf{I} \mid X \vdash_{\mathsf{LI}} X}{ \infer[\mathsf{sw}]{{-} \mid X \vdash_{\mathsf{LI}} X}{ \infer[\mathsf{pass}]{{-} \mid X \vdash_{\mathsf{P}} X}{ \infer[\mathsf{sw}]{X \mid \quad \vdash_{\mathsf{LI}} X}{ \infer[\mathsf{ax}]{X \mid \quad \vdash_{\mathsf{F}} X}{} } } } } } } & \infer[\mathsf{sw}]{Y \mid \quad \vdash_{\mathsf{LI}} Y}{ \infer[\mathsf{ax}]{Y \mid \quad \vdash_{\mathsf{F}} Y}{} } } } } \end{array} \normalsize \end{equation*} \item[4.] Finally, in phase $\mathsf{F}$, if the succedent formula is $A \otimes B$ and the stoup formula is $A' \multimap B'$, then both $\otimes \mathsf{R}$ and ${\multimap}\mathsf{L}$ can be applied first and sometimes both options lead to a derivation. For example, the sequent $\mathsf{I} \multimap \mathsf{I} \mid Z \vdash_{\mathsf{F}} (\mathsf{I} \multimap \mathsf{I}) \otimes Z$ has two derivations. 
\begin{equation*} \footnotesize \begin{array}{cc} \infer[\otimes \mathsf{R}]{\mathsf{I} \multimap \mathsf{I} \mid Z \vdash_{\mathsf{F}} (\mathsf{I} \multimap \mathsf{I}) \otimes Z}{ \infer[{\multimap}\mathsf{R}]{\mathsf{I} \multimap \mathsf{I} \mid \quad \vdash^{\bullet}_{\mathsf{RI}} \mathsf{I} \multimap \mathsf{I}}{ \infer[\mathsf{sw}]{\mathsf{I} \multimap \mathsf{I} \mid \mathsf{I}^{\bullet} \vdash^{\bullet}_{\mathsf{RI}} \mathsf{I}}{ \infer[{\multimap}\mathsf{L}]{\mathsf{I} \multimap \mathsf{I} \mid \mathsf{I}^{\bullet} \vdash^{\bullet}_{\mathsf{F}} \mathsf{I}}{ \infer[\mathsf{sw}]{{-} \mid \mathsf{I} \vdash_{\mathsf{RI}} \mathsf{I}}{ \infer[\mathsf{pass}]{{-} \mid \mathsf{I} \vdash_{\mathsf{P}} \mathsf{I}}{ \infer[\mathsf{IL}]{\mathsf{I} \mid \quad \vdash_{\mathsf{LI}} \mathsf{I}}{ \infer[\mathsf{sw}]{{-} \mid \quad \vdash_{\mathsf{LI}} \mathsf{I}}{ \infer[\mathsf{IR}]{{-} \mid \quad \vdash_{\mathsf{F}} \mathsf{I}}{} } } } } & \infer[\mathsf{IL}]{\mathsf{I} \mid \quad \vdash_{\mathsf{LI}} \mathsf{I}}{ \infer[\mathsf{sw}]{{-} \mid \quad \vdash_{\mathsf{LI}} \mathsf{I}}{ \infer[\mathsf{IR}]{{-} \mid \quad \vdash_{\mathsf{F}} \mathsf{I}}{} } } } } } & \infer[\mathsf{sw}]{{-} \mid Z \vdash_{\mathsf{RI}} Z}{ \infer[\mathsf{pass}]{{-} \mid Z \vdash_{\mathsf{P}} Z}{ \infer[\mathsf{sw}]{Z \mid \quad \vdash_{\mathsf{LI}} Z}{ \infer[\mathsf{ax}]{Z \mid \quad \vdash_{\mathsf{F}} Z}{} } } } } & \infer[{\multimap}\mathsf{L}]{\mathsf{I} \multimap \mathsf{I} \mid Z \vdash_{\mathsf{F}} (\mathsf{I} \multimap \mathsf{I}) \otimes Z}{ \infer[\mathsf{sw}]{{-} \mid \quad \vdash_{\mathsf{RI}} \mathsf{I}}{ \infer[\mathsf{IR}]{{-} \mid \quad \vdash_{\mathsf{F}} \mathsf{I}}{} } & \infer[\mathsf{IL}]{\mathsf{I} \mid Z \vdash_{\mathsf{LI}} (\mathsf{I} \multimap \mathsf{I}) \otimes Z}{ \infer[\mathsf{sw}]{{-} \mid Z \vdash_{\mathsf{LI}} (\mathsf{I} \multimap \mathsf{I}) \otimes Z}{ \infer[\otimes \mathsf{R}]{{-} \mid Z \vdash_{\mathsf{F}} (\mathsf{I} \multimap \mathsf{I}) \otimes Z}{ \infer[{\multimap}\mathsf{R}]{{-} \mid \quad \vdash^{\bullet}_{\mathsf{RI}} \mathsf{I} \multimap \mathsf{I}}{ \infer[\mathsf{sw}]{{-} \mid \mathsf{I}^{\bullet} \vdash^{\bullet}_{\mathsf{RI}} \mathsf{I}}{ \infer[\mathsf{pass}]{{-} \mid \mathsf{I}^{\bullet} \vdash^{\bullet}_{\mathsf{P}} \mathsf{I}}{ \infer[\mathsf{IL}]{\mathsf{I} \mid \quad \vdash_{\mathsf{LI}} \mathsf{I}}{ \infer[\mathsf{sw}]{{-} \mid \quad \vdash_{\mathsf{LI}} \mathsf{I}}{ \infer[\mathsf{IR}]{{-} \mid \quad \vdash_{\mathsf{F}} \mathsf{I}}{} } } } } } & \infer[\mathsf{sw}]{{-} \mid Z \vdash_{\mathsf{RI}} Z}{ \infer[\mathsf{pass}]{{-} \mid Z \vdash_{\mathsf{P}} Z}{ \infer[\mathsf{sw}]{Z \mid \quad \vdash_{\mathsf{LI}} Z}{ \infer[\mathsf{ax}]{Z \mid \quad \vdash_{\mathsf{F}} Z}{} } } } } } } } \end{array} \normalsize \end{equation*} Note that, in the second derivation, the rule $\mathsf{pass}$ applies to the sequent ${-} \mid \mathsf{I}^{\bullet} \vdash^{\bullet}_{\mathsf{P}} \mathsf{I}$ only because the context formula $\mathsf{I}^\bullet$ is tagged. \end{enumerate} \begin{theorem} The focused sequent calculus is sound and complete wrt.\ the sequent calculus of Section \ref{sec2}: there is a bijective correspondence between the set of derivations of $S \mid \Gamma \vdash A$ quotiented by $\circeq$ and the set of derivations of $S \mid \Gamma \vdash_\mathsf{RI} A$.
\end{theorem} Soundness is immediate: there exist functions $\mathsf{emb}_{ph} : S \mid \Gamma \vdash^x_{ph} A \to S \mid \Gamma \vdash A$, for all $ph \in \{\mathsf{RI},\mathsf{LI},\mathsf{P}, \mathsf{F} \}$, which simply erase all phase annotations and tags. Completeness follows from the fact that the following rules are all admissible: \begin{equation}\label{eq:admis} \small \begin{array}{c} \infer[\mathsf{IL}^{\mathsf{RI}}]{\mathsf{I} \mid \Gamma \vdash_{\mathsf{RI}} C}{{-} \mid \Gamma \vdash_{\mathsf{RI}} C} \quad \infer[\otimes \mathsf{L}^{\mathsf{RI}}]{A \otimes B \mid \Gamma \vdash_{\mathsf{RI}} C}{A \mid B, \Gamma \vdash_{\mathsf{RI}} C} \quad \infer[\mathsf{pass}^{\mathsf{RI}}]{{-} \mid A , \Gamma \vdash_{\mathsf{RI}} C}{A \mid \Gamma \vdash_{\mathsf{RI}} C} \quad \infer[\mathsf{ax}^{\mathsf{RI}}]{A \mid \quad \vdash_{\mathsf{RI}} A}{} \quad \infer[\mathsf{IR}^{\mathsf{RI}}]{{-} \mid \quad \vdash_{\mathsf{RI}} \mathsf{I}}{} \\[6pt] \infer[{\multimap}\mathsf{L}^{\mathsf{RI}}]{A \multimap B \mid \Gamma , \Delta \vdash_{\mathsf{RI}} C}{ {-} \mid \Gamma \vdash_{\mathsf{RI}} A & B \mid \Delta \vdash_{\mathsf{RI}} C } \qquad \infer[\otimes \mathsf{R}_{\Gamma'}^{\mathsf{RI}}]{S \mid \Gamma , \Delta \vdash_{\mathsf{RI}} [\![ \Gamma' \mid A ]\!]_{\multimap} \otimes B}{ S \mid \Gamma , \Gamma' \vdash_{\mathsf{RI}} A & {-} \mid \Delta \vdash_{\mathsf{RI}} B } \end{array} \end{equation} The interesting one is $\otimes \mathsf{R}_{\Gamma'}^{\mathsf{RI}}$. The tensor right rule $\otimes \mathsf{R}^\mathsf{RI}$, with premises and conclusion in phase $\mathsf{RI}$, is an instance of the latter with empty $\Gamma'$. Without this generalization including the extra context $\Gamma'$, one quickly discovers that finding a proof of $\otimes \mathsf{R}^\mathsf{RI}$, proceeding by induction on the structure of the derivation of the first premise, is not possible when this derivation ends with an application of ${\multimap}\mathsf{R}$: \begin{displaymath} \small \proofbox{ \infer[\otimes \mathsf{R}^{\mathsf{RI}}]{S \mid \Gamma , \Delta \vdash_{\mathsf{RI}} (A' \multimap B') \otimes B}{ \infer[{\multimap}\mathsf{R}]{S \mid \Gamma \vdash_{\mathsf{RI}} A' \multimap B'}{ \deduce{S \mid \Gamma , A' \vdash_{\mathsf{RI}} B'}{f} } & \deduce{{-} \mid \Delta \vdash_{\mathsf{RI}} B}{g} } } = \quad ?? \end{displaymath} The inductive hypothesis applied to $f$ and $g$ would produce a derivation of the wrong sequent. The use of $\Gamma'$ in the generalized rule $\otimes \mathsf{R}_{\Gamma'}^{\mathsf{RI}}$ is there to fix precisely this issue. The admissibility of the rules in (\ref{eq:admis}) allows the construction of a function $\mathsf{focus} : S \mid \Gamma \vdash A \to \linebreak S \mid \Gamma \vdash_\mathsf{RI} A$, replacing applications of each rule in (\ref{eq:seqcalc}) with inferences by the corresponding admissible focused rule in phase $\mathsf{RI}$. It is possible to prove that the function $\mathsf{focus}$ maps $\circeq$-equivalent derivations in the sequent calculus for $\mathtt{SkNMILL}$\ to syntactically identical derivations in the focused sequent calculus. One can also show that $\mathsf{focus}$ is the inverse of $\mathsf{emb}_\mathsf{RI}$, i.e.\ $\mathsf{focus}\;(\mathsf{emb}_\mathsf{RI} \;f) = f$ for all $f : S \mid \Gamma \vdash_\mathsf{RI} A$ and $\mathsf{emb}_\mathsf{RI}\;(\mathsf{focus}\;g) \circeq g$ for all $g : S \mid \Gamma \vdash A$. In other words, each $\circeq$-equivalence class in the sequent calculus corresponds uniquely to a single derivation in the focused sequent calculus.
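This correspondence can also be read operationally. The following Python-style sketch is purely illustrative and entirely our own (derivations are assumed to be encoded as nested tuples of rule names and premises; neither this encoding nor the functions below are part of the formal development): it records how equality up to $\circeq$ reduces to syntactic equality of focused normal forms.
\begin{verbatim}
# Hedged sketch: a derivation is a pair (rule_name, premises), where
# premises is a tuple of derivations.  "focus" stands for the
# normalization function assembled from the admissible rules displayed
# above; it is passed in as a parameter, not implemented here.

def decide_circeq(f, g, focus):
    # focus maps circeq-equivalent derivations to syntactically equal
    # focused derivations, and emb_RI . focus is the identity up to
    # circeq; hence comparing normal forms decides circeq.
    return focus(f) == focus(g)

# Example encoding: ("pass", (("ax", ()),)) for pass applied to ax.
# Tuple equality then implements syntactic equality of derivations.
\end{verbatim}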
The focused sequent calculus solves the \emph{coherence problem} for skew monoidal closed categories. As proved in Theorems \ref{thm:models}, \ref{thm:unique}, the sequent calculus for $\mathtt{SkNMILL}$\ is a presentation of the free skew monoidal closed category $\mathsf{FSkMCl}(\mathsf{At})$ on the set $\mathsf{At}$. The coherence problem is the problem of deciding whether two parallel maps in $\mathsf{FSkMCl}(\mathsf{At})$ are equal. This is equivalent to deciding whether two sequent calculus derivations $f,g : A \mid ~ \vdash B$ are in the same $\circeq$-equivalence class. But that in turn is the same as deciding whether $\mathsf{focus}\;f = \mathsf{focus}\;g$ in the focused sequent calculus, and deciding syntactic equality of focused derivations is straightforward. The Hilbert-style calculus is a direct presentation of $\mathsf{FSkMCl}(\mathsf{At})$, but thanks to the bijection (up to $\doteq$ resp.\ $\circeq$) between Hilbert-style and sequent calculus derivations, we can also decide if two Hilbert-style derivations $f, g : A \Rightarrow B$ are in the same $\doteq$-equivalence class. \section{Conclusion} The paper describes a sequent calculus for $\mathtt{SkNMILL}$, a skew variant of non-commutative multiplicative intuitionistic linear logic. The introduction of the logic $\mathtt{SkNMILL}$\ via this sequent calculus is motivated by the categorical notion of skew monoidal closed category, which yields the intended categorical semantics for the logic. Sequent calculus derivations admit unique normal forms wrt.\ a congruence relation $\circeq$ capturing the equational theory of skew monoidal closed categories at the level of derivations. Normal forms can be organized in a focused sequent calculus, where each focused derivation uniquely corresponds to (and so represents) a $\circeq$-equivalence class in the unfocused sequent calculus. In order to deal with all the permutative conversions of $\circeq$ and to consequently eliminate the related sources of non-determinism in root-first proof search, the focused sequent calculus employs a system of tags for keeping track of the new formulae occurring in the context while building a derivation. The focused sequent calculus solves the coherence problem for skew monoidal closed categories: deciding if a canonical diagram commutes in every skew monoidal closed category reduces to deciding equality of focused derivations. We plan to investigate alternative presentations of $\mathtt{SkNMILL}$, such as natural deduction. Analogously to the case of the sequent calculus studied in this paper, we expect natural deduction derivations to be strongly normalizing wrt.\ an appropriately defined $\beta\!\eta$-conversion. We are interested in directly comparing the resulting $\beta\!\eta$-long normal forms with the focused derivations of Section \ref{sec:focus}. The system of tags in focused proofs, used for taming the non-deterministic choices arising in proof search to be able to canonically represent each equivalence class of $\circeq$, appears to be a new idea. There exist other techniques for drastically reducing non-determinism in proof search, such as multi-focusing \cite{chaudhuri:canonical:2008} and saturated focusing \cite{scherer:simple:2015}, and we wonder if our system of tags is in any way related to these.
As a general disclaimer, we should state that we have interpreted focusing broadly as the idea of root-first proof search defining a normal form, based on a careful discipline of application of invertible and non-invertible rules to reduce non-determinism, but not necessarily driven by a polarity-centric analysis. In this respect, our approach is similar in spirit to, e.g., \cite{dyckhoff:ljq}. This paper represents the latest installment of a large project aiming at the development of the proof theory of categories with skew structure. So far the project, promoted and advanced by Uustalu, Veltri and Zeilberger, has investigated proof systems for skew semigroup categories \cite{zeilberger:semiassociative:19}, (non-symmetric and symmetric) skew monoidal categories \cite{uustalu:sequent:2021,uustalu:proof:nodate,veltri:coherence:2021} and skew prounital closed categories \cite{uustalu:deductive:nodate}. From a purely proof-theoretic perspective, the main lesson gained from the study of these skew systems consists in first insights into ways of \emph{modular} construction of focusing calculi. This is particularly highlighted in Uustalu et al.'s study of \emph{partially normal} skew monoidal categories \cite{uustalu:proof:nodate}, where one or more structural laws among $\lambda$, $\rho$ and $\alpha$ can be required to be invertible. The focused sequent calculus of partially normal skew monoidal categories is obtained from the focused sequent calculus of skew monoidal categories by modularly adding new rules for each enforced normality condition. We expect similar modularity to show up in the case of partially normal skew monoidal closed categories. We are also planning to develop an extension of $\mathtt{SkNMILL}$\ with an exponential modality $!$. This in turn requires the study of linear exponential comonads on skew monoidal closed categories, extending the work of Hasegawa \cite{hasegawa:linear:2017}. We would also like to consider modalities for exchange \cite{jiang:lambek:2019} and associativity/unitality. \paragraph{Acknowledgements} N.V.\ and T.U.\ were supported by the Estonian Research Council grants no. \linebreak PSG659, PSG749 and PRG1210, N.V.\ and C.-S.W.\ by the ESF funded Estonian IT Academy research measure (project 2014-2020.4.05.19-0001). T.U.\ was supported by the Icelandic Research Fund grant no.~196323-053. Participation of C.-S.W.\ in the conference was supported by the EU COST action CA19135 (CERCIRAS). \bibliographystyle{eptcs}
\section{Additional Network Details} \subsection{Spiral Operator} \par In our mesh decoder, we use the spiral operator~\citep{spiralnet++} to process vertex features in the spatial domain. The spiral operator defines a spiral selection of neighbors for a center vertex $v$ by imposing an order on the elements of the $k$-disk. A $k$-ring and a $k$-disk around a center vertex $v$ can be defined as follows: \begin{equation} \begin{split} 0\text{-ring}(v)&=\{v\}, \\ k\text{-disk}(v)&=\bigcup_{i=0...k}i\text{-ring}(v), \\ (k+1)\text{-ring}(v)&=\mathcal{N}(k\text{-ring}(v)) \setminus k\text{-disk}(v),\\ \end{split} \end{equation} where $\mathcal{N}(\mathcal{V})$ denotes the vertex neighborhood. \par Denoting the spiral length by $l$, we can get an ordered set $O(v, l)$ consisting of $l$ vertices by concatenation of $k$-rings: \begin{equation} \begin{split} O(v, l) \subset (0\text{-ring}(v), 1\text{-ring}(v),...,k\text{-ring}(v)). \\ \end{split} \end{equation} \par After getting the sequence, we can define the convolution in the manner of the Euclidean convolution. The spiral convolution operator for a node $i$ can be defined as: \begin{equation} \begin{split} G_{s + 1} ^ i = \gamma_{s + 1}(\parallel_{j \in O(i, l)}G_{s} ^ j) \end{split} \end{equation} where $\gamma$ denotes an MLP and $\parallel$ denotes the concatenation operation (a code sketch of this operator is given later in this section). \subsection{Mesh Sampling} \par We apply a multi-scale hand mesh representation to capture both global and local information. At each stage of the mesh decoder, the number of mesh vertices is changed by a factor of 2. We follow COMA~\citep{ranjan2018generating} to perform the in-network sampling operations (down-sampling and up-sampling) using pre-computed transform matrices. The down-sampling matrix $D$ is obtained by iteratively contracting mesh vertex pairs while maintaining surface error approximation using quadric metrics. The up-sampling matrix $U$ is obtained by including barycentric coordinates of the vertices which are discarded during the down-sampling. \begin{table}[!htbp] \caption{\small Analysis on different backbones, evaluated on FreiHAND. All backbones are pre-trained on ImageNet.} \label{compare backbone} \begin{center} \begin{tabular}{lll} \multicolumn{1}{c}{Backbone} &\multicolumn{1}{c}{PA-MPJPE$\downarrow$} &\multicolumn{1}{c}{PA-MPVPE$\downarrow$} \\ \hline \\ HRNet-W64 &6.4 &6.5\\ ResNet-50 &6.7 &6.9\\ \end{tabular} \end{center} \end{table} \subsection{Network Architecture details} \par For the feature encoder, we adopt different backbones including HRNet-W64~\citep{hrnet} and ResNet-50~\citep{resnet}. The backbone is followed by a series of deconvolution layers to obtain multi-scale feature maps $F_s$ and $H_s$. The feature maps of the same stage have the same channels and sizes, \emph{i.e.}, 2048$\times$7$\times$7, 1024$\times$14$\times$14, 512$\times$28$\times$28, 256$\times$56$\times$56. We employ four 1$\times$1 convolution layers to get $Q \in \{512\times7\times7, 256\times14\times14, 128\times28\times28, 64\times56\times56\}$. The feature maps are then passed to four mesh decoding blocks. The output channels of each block are the same as the size of $Q$ at each stage $s$. A block consists of a spiral convolution layer and a self-attention module. For the spiral convolution, we set the spiral length $l$ to 27. For the self-attention module, we set the number of heads to 4. \par We study the behavior of different encoder backbones. Both the HRNet~\citep{hrnet} and ResNet~\citep{resnet} models are pre-trained on ImageNet.
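As promised above, the following is a minimal PyTorch-style sketch of the spiral convolution layer used in our decoding blocks. It is an illustration under our own assumptions (in particular, the tensor \texttt{spiral\_indices} holding the precomputed orderings $O(v, l)$ and all dimensions are placeholders), not our exact implementation.
\begin{verbatim}
import torch
import torch.nn as nn

class SpiralConv(nn.Module):
    # Gather the l ordered spiral neighbors of every vertex, flatten
    # their features, and apply a shared linear layer (gamma).
    def __init__(self, in_ch, out_ch, spiral_indices):
        super().__init__()
        self.indices = spiral_indices            # (N, l), LongTensor
        self.layer = nn.Linear(in_ch * spiral_indices.size(1), out_ch)

    def forward(self, x):                        # x: (B, N, in_ch)
        b, n, _ = x.size()
        nb = x[:, self.indices.view(-1)]         # (B, N*l, in_ch)
        return self.layer(nb.view(b, n, -1))     # (B, N, out_ch)
\end{verbatim}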
Table~\ref{compare backbone} shows that the HRNet backbone performs better than ResNet-50. We attribute this further improvement to the high-resolution feature pyramids used by HRNet. \begin{figure}[!t] \centering \includegraphics[width=\textwidth,trim={0 0.2cm 0 0cm},clip]{figs/new_2d_extractor.jpg} \caption{\small The re-designed 2D image feature extractor.} \label{new_2d_extractor} \end{figure} \subsection{The re-designed 2D feature extractor for GCN} \par Inspired by \citep{handmesh}, we propose to add semantic aggregation to our 2D feature extractor. Specifically, we combine all the predicted joint heatmaps to aggregate 2D joint locations, which takes advantage of our 2D auxiliary tasks. Then, we use another backbone to encode the combined heatmaps and exploit the resulting feature maps in the same way as before. \Figref{new_2d_extractor} shows the pipeline. \begin{figure}[!htbp] \centering \includegraphics[width=0.75\textwidth,trim={0 0cm 0 0cm},clip]{figs/transformer_based.jpg} \caption{\small Architecture of transformer-based method.} \label{fig_transformer_based} \end{figure} \section{Architecture of Transformer-based method} \par We inject our feature mapping and the hierarchical decoding into the multi-layer transformer encoder~\citep{metro}. Given the joint and mesh vertex queries, the encoder maps the input to 3D hand joints and mesh vertices. \par As shown in \Figref{fig_transformer_based}, the encoder consists of three blocks. All three blocks have the same number of tokens. The hidden dimensions are 1024, 256, and 64. For the first two blocks, we use two additional mesh-conv layers to predict intermediate mesh results. The pixel-aligned features with the same dimension as the blocks are obtained using the mapping module and then concatenated to the mesh features. \section{Failure Cases} \par \Figref{fig_failure} shows three typical failure cases of our method. In the first row, the hand is severely occluded and not fully contained in the crop, so that parts of the hand fall outside the image; our method fails to recover a correct hand mesh. In the second row, we observe that when only a small portion of the hand is visible, our method predicts the wrong 2D pose and silhouette as well as the wrong hand mesh. In the last row, although the overall shape seems reasonable, it is difficult to obtain an accurate 3D mesh due to the heavy self-occlusion. Self-occlusion is one of the biggest challenges for 3D hand mesh reconstruction and pose estimation. \begin{figure}[!htbp] \centering \includegraphics[width=\textwidth,trim={0 1cm 0 0cm},clip]{figs/failure_cases.jpg} \caption{\small Failure cases of our method.} \label{fig_failure} \end{figure} \section{Additional Results} \subsection{Full result comparison on FreiHAND testset} \par As a supplement to our results on the FreiHAND test set, \Figref{fig_pck} plots PCK curves of 3D joints and 3D mesh vertices. The curves of our method achieve state-of-the-art performance for hand mesh reconstruction on the FreiHAND dataset. \begin{figure}[!htbp] \centering \includegraphics[width=\textwidth]{figs/acc.jpg} \caption{\small The pose and mesh AUC curve comparison with the state-of-the-art methods on the FreiHAND dataset.} \label{fig_pck} \end{figure} \subsection{Additional Qualitative Results} \par \Figref{fig_examples} illustrates comprehensive qualitative results of our method. The inputs include challenging cases such as complicated hand poses and hand-object interactions.
Overcoming these difficulties, our method predicts accurate hand meshes as well as silhouettes and 2D poses. \begin{figure}[!htbp] \centering \includegraphics[width=\textwidth,trim={1.2cm 0.1cm 0 0}, clip]{figs/freihand_examples.jpg} \caption{\small Qualitative results on FreiHAND dataset.} \label{fig_examples} \end{figure} \section{Conclusions} \par We present a new pipeline to reconstruct a hand from a single RGB image. Specifically, we introduce a simple and compact architecture that helps align the mesh to the image, together with a self-attention module that improves vertex interactions. Comprehensive experiments show that our method achieves the state-of-the-art on the FreiHAND dataset and verify the effectiveness of our proposed key components. We further demonstrate that our design can also improve the performance of a transformer-based method, which shows that our proposed components generalize well across non-parametric models. \section{Introduction} \par Reconstructing a 3D hand mesh from a single RGB image has attracted tremendous attention, as it has numerous applications in human-computer interaction (HCI), VR/AR, robotics, \emph{etc}. Recent studies have made great efforts toward accurate hand mesh reconstruction and achieved very promising results~\citep{metro,meshnet,weakly,ge,hasson}. Recent state-of-the-art approaches address the problem mainly by deep learning. These learning-based methods can be roughly divided into two categories according to the representation of the hand meshes, \emph{i.e.}, the parametric approaches and the non-parametric ones. The parametric approaches use a parametric model that projects hand meshes into a low-dimensional space (\emph{e.g.}, MANO~\citep{mano}) and regress the coefficients in that space (\emph{e.g.}, the shape and pose parameters of MANO) to recover the 3D hand~\citep{hasson}. The non-parametric ones instead directly regress the mesh vertices using graph convolution neural networks~\citep{meshnet,weakly,handmesh} or transformers~\citep{metro}. \par Non-parametric approaches have shown substantial improvement over the parametric ones in recent work, owing to the fact that the mapping between the image and the vertices is less non-linear than that between the image and the coefficients of the hand models~\citep{goal}. Their pipelines~\citep{weakly,handmesh,metro} usually consist of three stages: a 2D encoder extracts the global image feature, which is mapped to 3D mesh vertices before being fed into a 3D mesh decoder that operates on the vertices and edges to get the final mesh. \par Despite the success, the potential of non-parametric approaches has not been fully uncovered with this pipeline. In parametric methods, vertices and edges are not visible to the network, and no operation is carried out in the manifold of the meshes; 2D image features are extracted only to learn a mapping between the image content and the hand model parameters. Conversely, in non-parametric methods, operations on vertices and edges, such as graph convolutions or attention modules, are designed to aggregate the geometric features of the meshes. With these operations, vertices and edges are visible to the networks; thus direct connections between pixels of the 2D image feature space and vertices of the 3D mesh can be established, and operations in the decoder can aggregate both image features and geometric features, which cannot be realized in the parametric methods.
This connection and the aggregation, however, have not been fully explored by previous work. \par In this paper, we seek to establish the connections and merge the local hand features from the appearance in the input and the geometry in the output. To this end, we utilize a pixel-aligned mapping module to establish the connections and propose a simple and compact architecture to deploy them. We design our network by making the following philosophical choices: 1) For the 2D feature extractor, we keep feature maps of different scales in the encoder instead of using the final global feature, to enable mapping 2D local information to 3D. 2) We decode the mesh in a coarse-to-fine manner to make the best use of the multi-scale information. 3) Both image features and geometric features are aggregated in the operations of the mesh decoder, rather than only geometric features. \par Our design is shown in \Figref{fig_pipeline}. Multi-scale image features are naturally passed to the 3D mesh decoder. Our experiments show that the design enables better alignment between the image and the reconstructed mesh. The aggregation of features not only improves the graph convolution network substantially but also outperforms the attention mechanism with global features by a large margin. \par To summarize, our key contributions are: 1) Operations capable of aggregating both local 2D image features and 3D geometric features on meshes at different scales. 2) Connections between pixels of the 2D image appearance in the encoder and vertices of the 3D meshes in the decoder, established by a pixel-vertex mapping module. 3) A novel graph convolution architecture that achieves state-of-the-art results on the FreiHAND benchmark. \section{Methodology} \par Given a monocular RGB image $I$, our goal is to predict the 3D positions of all the $N$ vertices $\mathcal{V} = \{v_i\}^N_{i=1}$ of the predefined hand mesh $\mathcal{M}$. The overall architecture of our network, as shown in \Figref{fig_pipeline}, has two major components: a 2D feature extractor, as well as a 3D mesh decoder that consists of feature mapping modules and mesh-conv layers. The 2D feature extractor is an hourglass that encodes the image content into features at $S$ levels of scale. Correspondingly, the 3D mesh decoder recovers the vertices in a coarse-to-fine manner at $S$ different scales. By design, the mesh decoder at level $s \in S$ leverages the 2D feature map at level $s$. In the following sections, we describe the architecture of the 2D feature extractor in \ref{2d_feature_extractor}, the pixel-aligned feature mapping module in \ref{Pixel-aligned_Feature_Mapping_Module}, the mesh decoder in \ref{mesh_decoder}, as well as training details in \ref{training}. \subsection{2D Feature Extractor} \label{2d_feature_extractor} \par In order to extract 2D features at different scales/receptive fields, we adopt a simple hourglass model with skip connections as the feature extractor. Previous works~\citep{metro,weakly,cmr} extract a global image feature vector and feed it to the decoder. Since all the vertices share the same global feature, only the geometric features relating to the overall deformation of meshes are aggregated and updated for each vertex. To enable mapping local image features to the vertices, we combine the downsampled feature $F_s$ and upsampled feature $H_s$ from the respective level $s$ of the hourglass network to obtain a fusion feature map $Q_s \in \mathbb{R}^{h_{s}*w_s*c_{s}}$ for the level $s$ of the mesh decoder.
Specifically, \begin{equation} Q_s = \mathbf{Conv1D}(\bigoplus(F_s, H_s)), \end{equation} where $\bigoplus$ denotes concatenation and $\mathbf{Conv1D}$ denotes 1D convolution. $h_s$, $w_s$, and $c_s$ represent the height, width, and number of channels of $Q_s$, respectively. \subsection{Pixel-aligned Feature Mapping Module} \label{Pixel-aligned_Feature_Mapping_Module} \par Given a 2D image feature map $Q \in \mathbb{R}^{h*w*c}$, the mapping module needs to transform it into 3D vertex features $G \in \mathbb{R}^{N*c}$ of the corresponding mesh decoder layer, where $N$ denotes the number of vertices as mentioned. For this purpose, previous methods~\citep{weakly,cmr,metro} either simply repeat the global feature from a feature extraction network to obtain the vertex features, or use a fully connected layer to map the global feature vector from $\mathbb{R}^{c}$ to $\mathbb{R}^{N*c}$ (the vertex features $G$ are obtained by reshaping the output). This mapping manner cannot properly build the relationship between the 2D and 3D domains and has difficulty in guaranteeing mesh alignment with the input image~\citep{pymaf}. \par Inspired by ~\citep{pixel2mesh,pifu}, we utilize a pixel-aligned feature mapping module to transform the feature map $Q_s$ into 3D vertex features $G_s \in \mathbb{R}^{N_s*c_s}$. Each predicted vertex $v \in \mathcal{V}_s$ is projected to a pixel $x$ in image space, as illustrated in \Figref{mesh_decoder_fig}. After that, we sample the feature map $Q_s$ to extract pixel-aligned vertex features $G_s$ using the following equation: \begin{equation} \begin{split} G_s = f(Q_s, \pi (\mathcal{M}_s)), \end{split} \end{equation} where $\pi (.)$ denotes 2D projection, and $f(.)$ is a sampling function. \par Similar to ~\citep{pixel2mesh,pymaf}, we use bilinear interpolation around each projected vertex on the feature maps to extract pixel-aligned feature vectors (a code sketch of this sampling step is given below). The 2D feature maps $\{Q_s\}$ are multi-scale (7$\times$7 $\rightarrow$ 14$\times$14 $\rightarrow$ 28$\times$28 $\rightarrow$ 56$\times$56). Lower-resolution image features have a larger receptive field, hence more global information, while higher-resolution feature maps contain more local information. The pyramid feature maps and pixel-aligned feature mapping modules can provide richer content for the vertices. \begin{figure}[t!] \centering \vspace{-0.8cm} \includegraphics[width=\textwidth,trim={0 2cm 0 2cm},clip]{figs/map_decoder2.jpg} \vspace{-0.2cm} \caption{\small Left: The architecture of the mesh decoder. Right: The data flow of the mesh decoder. 3D vertices are projected to the 2D image plane to retrieve pixel-aligned features. The GCN blocks and the attention module form a mesh-conv layer to predict offset mesh.} \label{mesh_decoder_fig} \vspace{-0.2cm} \end{figure} \subsection{Mesh Decoder} \label{mesh_decoder} In the 2D feature extractor, feature maps of different scales are kept, and in the mapping module, the connections between pixel features of these feature maps and the features of the 3D vertices in the corresponding decoding layers are established. In this section, we aim to design a mesh decoder structure that makes full use of the connections that are made possible via the non-parametric mesh representation. To be specific, we aim to 1) design a mechanism to leverage the multi-scale image features in the hierarchical decoding architecture; 2) aggregate both local features from the feature extractor and the features extracted in the previous mesh decoder layers.
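Before detailing these two points, we give the promised sketch of the pixel-aligned sampling $f(Q_s, \pi(\mathcal{M}_s))$. It is illustrative only: we assume that the projection $\pi$ has already been applied by the caller and that the vertex coordinates are normalized to $[-1, 1]$, as \texttt{grid\_sample} expects.
\begin{verbatim}
import torch.nn.functional as F

def pixel_aligned_features(Q, verts_2d):
    # Q:        (B, C, H, W) feature map of one decoder stage
    # verts_2d: (B, N, 2) projected vertex positions in [-1, 1]
    grid = verts_2d.unsqueeze(2)               # (B, N, 1, 2)
    feat = F.grid_sample(Q, grid, mode='bilinear',
                         align_corners=False)  # (B, C, N, 1)
    return feat.squeeze(-1).transpose(1, 2)    # (B, N, C)
\end{verbatim}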
For the first aim, our decoder reconstructs hand meshes in a coarse-to-fine manner along the hierarchical layers, as shown in \Figref{fig_pipeline}. Our blocks skip-connect the vertex features $G_i$ from the mapping module. In each decoding layer, based on the reconstructed mesh, image features of the corresponding scale are sampled by the feature mapping module and then concatenated to the vertex features. Rather than directly regressing the 3D coordinates, we predict the offset mesh $ \delta{M_i}$ of the mesh ${M_i}$ estimated by the last block. The coarse-to-fine mesh topologies for each block are acquired by mesh simplification~\citep{spiralnet++}. The numbers of vertices of the different meshes are 98, 195, 389, and 778. $M_1$ has the same mesh topology as MANO~\citep{mano}. The meshes at different scales are predicted by several additional mesh decoding heads. We use the strategy based on edge contraction~\citep{garland1997surface} to perform the pooling and unpooling operations. The input features are multiplied with a pre-computed transform matrix to obtain the output features. We achieve the second aim by deploying operations that can work on vertices of meshes, \emph{i.e.}, graph convolution operating on the manifold of the meshes and the attention module. In the ablation studies, we demonstrate that local features can significantly boost the performance of both operations compared with global features. Though in our approach the feature mapping and the hierarchical decoding are only applied to decoders with graph convolutions and attention modules, they can be injected into other 3D mesh decoders. In the experiment, we show that our components are also very effective for the transformer-based method~\citep{metro}. \paragraph{Graph Convolution} For the graph convolution operators, spectral convolutions and spatial convolutions have been used in mesh reconstruction. We use a spiral patch operator~\citep{spiralnet++} to process vertex features in the spatial domain, as it demonstrates superior performance in recent work. The spiral operator collects the vertex neighbors of the center $v$. The graph convolution produces the offset mesh $\delta \mathcal{M}_s$ and a feature $F_{GCN}^s$ for each vertex. The spiral convolution on input $F_{GCN}^s(v)$ can be defined as: \begin{equation} \begin{split} (F_{GCN}^s *g)_v=\sum_{l=1}^{L}g_l F_{GCN}^s(O(v)_l) \end{split} \end{equation} where $g_l$ is the convolution kernel, and $O(v)_l$ denotes the $l$-th vertex in the pre-computed ordered sequence $O(v)$. \paragraph{Self-Attention} While GCN is useful for capturing fine-grained neighborhood information, it is less efficient at extracting long-range dependencies. We inject a multi-head self-attention module~\citep{mhsa} into the GCN blocks to address this challenge. \par Given the vertex features $F_{GCN}^s$ generated by GCN, we strengthen the global interactions and get a new feature $F_{MHSA}^s$ for each vertex with the help of MHSA: \begin{equation} \begin{split} F_{MHSA}^s=MHSA(F_{GCN}^s), \end{split} \end{equation} The GCN blocks and the attention module form a mesh-conv layer, as shown in \Figref{mesh_decoder_fig}. \subsection{Training} \label{training} \par We denote a dataset as a set of tuples $\{(I, P, C, \mathcal{M})\}$, where $I$ is the input image and $\mathcal{M}$ is the hand mesh; $P$ and $C$ are the heatmaps of the 2D pose and the silhouette, respectively. Following~\citep{meshnet}, we apply the Gaussian distribution to construct the 2D pose heatmaps.
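A minimal sketch of this Gaussian target construction is given below; the value of $\sigma$ and the tensor layout are our illustrative assumptions and may differ from the exact settings used in our experiments.
\begin{verbatim}
import torch

def gaussian_heatmaps(joints_2d, height, width, sigma=2.0):
    # joints_2d: (J, 2) pixel coordinates (x, y) of the 2D joints.
    # Returns (J, height, width) maps, one Gaussian bump per joint.
    ys = torch.arange(height).view(-1, 1).float()
    xs = torch.arange(width).view(1, -1).float()
    maps = [torch.exp(-((xs - x) ** 2 + (ys - y) ** 2)
                      / (2 * sigma ** 2)) for x, y in joints_2d]
    return torch.stack(maps)
\end{verbatim}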
\paragraph{3D Supervision} One concern for non-parametric methods is the lack of kinematic and shape constraints. Adding 3D shape supervision can mitigate that. To this end, we adopt an L1 loss $L_{mesh}$ to supervise the 3D coordinates of vertices in the coarse-to-fine manner. Besides, we use the normal loss $L_{norm}$ and edge length loss $L_{edge}$ of ~\citep{pixel2mesh} for smoother reconstructed meshes in the last mesh decoding layer. So the loss for a sample in the dataset is defined as: \begin{equation} \begin{split} L_{mesh} &= \sum _{s=1} ^{S}\lambda_m^s*||\hat{\mathcal{M}_s}-\mathcal{M}_s||_1 \\ L_{edge} &= \sum _{t \in \mathcal{T}}\sum _{e \in t} || \, |\hat{e}|-|e| \, ||_1\\ L_{norm} &= \sum _{t \in \mathcal{T}}\sum _{e \in t} || \hat{e} \cdot n ||_1\\ \end{split} \end{equation} where $t$ is a triangle face from all the faces $\mathcal{T}$ of $\mathcal{M}$, $e$ denotes an edge of $t$, and $n$ the normal vector of $t$ computed from the ground truth. To ensure the performance of the reconstruction of the finest mesh, we set a weight $\lambda_m^s$ for the mesh reconstruction loss at each scale $s$. \paragraph{2D Auxiliary Supervision} For the 2D feature extractor, we add auxiliary supervision for better feature extraction. We apply the binary cross-entropy (BCE) loss to formulate both the silhouette loss $L_{sil}$ and the 2D pose loss $L_{2Dpose}$ as follows: \begin{equation} \begin{split} L_{sil} &= -(\sum _j(x^\silhouetteheatmap_j \log(\hat{x^\silhouetteheatmap_j})+(1-x^\silhouetteheatmap_j) \log(1-\hat{x^\silhouetteheatmap_j}))), x^\silhouetteheatmap \in C\\ L_{2Dpose} &= -(\sum _j(x^\poseheatmap_j \log(\hat{x^\poseheatmap_j})+(1-x^\poseheatmap_j) \log(1-\hat{x^\poseheatmap_j}))), x^\poseheatmap \in P, \end{split} \end{equation} where $x^\silhouetteheatmap_j$ and $x^\poseheatmap_j$ denote the $j$-th pixel values of the silhouette and pose heatmaps, respectively, and~ $\hat{}$ denotes the prediction. The overall loss is a weighted sum of all losses, $L_{total}=L_{mesh}+L_{edge}+\lambda _n* L_{norm}+\lambda _s*L_{sil}+ \lambda _p*L_{2Dpose}$, where $\lambda _n=0.1$, $\lambda _p=10$, $\lambda _s=2.5$ are set empirically. \section{Related Work} \paragraph{Mesh Reconstruction.} Previous methods employ pre-trained parametric hand and body models, namely MANO~\citep{mano} and SMPL~\citep{smpl}, and estimate the pose and shape coefficients of the parametric model. However, it is challenging to regress pose and shape coefficients directly from input images. Researchers therefore propose to train network models with human priors, such as skeletons~\citep{unitepeople} or segmentation maps. Some researchers have proposed to regress SMPL parameters by relying on human keypoints and contour maps of the body~\citep{pavlakos2018learning,tan2017indirect}. Similarly, \citep{omran2018neural} utilized the segmentation map of the human body as a supervision condition. A weakly supervised approach~\citep{kanazawa2018end} regresses SMPL parameters using 2D keypoint reprojection and adversarial learning. Hsiao-Yu Tung~\citep{tung2017self} proposed a self-supervised approach for regressing human parametric models. \par Recently, model-free methods~\citep{pose2mesh,meshnet,cmr} that directly regress human pose and shape from input images have received increasing attention, because they can express the nonlinear relationship between the image and the predicted 3D space.
Researchers have explored various ways to represent the human body and hand, using 3D meshes~\citep{metro,cmr,pose2mesh,graphformer,litany2018deformable,ranjan2018generating,verma2018feastnet,pixel2mesh,meshnet}, voxel spaces~\citep{bodynet}, or occupancy fields~\citep{pifu,niemeyer2020differentiable,xu2019disn,saito2020pifuhd,peng2020convolutional}. Among them, the voxel-space methods are completely non-parametric but require a lot of computing resources, and the output voxels need to be fitted to a body model to obtain the final 3D human mesh. Among the recent research methods, graph convolution neural networks (GCNs)~\citep{cmr,pose2mesh,graphformer,litany2018deformable,ranjan2018generating,verma2018feastnet,pixel2mesh,meshnet} are among the most popular, because GCNs are particularly convenient for convolution operations on mesh data. However, while GCNs are good at representing the local features of a mesh, they cannot represent well the global features of long-distance interactions between vertices and joints. Transformer-based methods~\citep{metro} use a self-attention mechanism to take full advantage of the information interaction between vertices and joints and use the global information of the human body to reconstruct more accurate vertex positions. However, neither the GCN-based nor the attention-based methods consider pixel-level semantic feature alignment. Local pixel-level semantic feature alignment can complement the global information that GCN and transformer methods focus on. \paragraph{Graph Neural Networks.} Graph deep learning generalizes neural networks to non-Euclidean domains, and we hope to apply graph convolution neural networks to learn shape-invariant features on triangular meshes. For example, spectral graph convolution neural network methods~\citep{bruna2013spectral,defferrard2016convolutional,kipf2016semi,levie2018cayleynets} perform convolution operations in the frequency domain. Local graph methods~\citep{masci2015geodesic,boscaini2016learning,monti2017geometric} based on spatial graph convolutions make deep learning on manifolds more convenient. \par In the application of mesh reconstruction, \citep{ranjan2018generating} used fast local spectral filters to learn nonlinear representations of human faces. \citep{kulon2019single} extended autoencoder networks to 3D representations of hands. Kolotouros et al.\ proposed GraphCMR~\citep{cmr} to regress 3D mesh vertices using a GCN. Pose2Mesh~\citep{pose2mesh} proposes to reconstruct a human mesh from a given human pose representation based on a cascaded GCN. \citep{simple} proposed spiral convolution to handle meshes in the spatial domain. Based on SpiralConv, Kulon et al.~\citep{weakly} introduced an automatic method to generate training data from unannotated images for 3D hand reconstruction and pose estimation. \citep{handmesh,chen2021mobrecon} propose a novel aggregation method to collect effective 2D cues and exploit high-level semantic relations for root-relative mesh recovery. Lin et al.\ proposed Graphormer~\citep{graphformer}, combining a transformer and GCN to model the global interaction between joints and mesh vertices.
\paragraph{Mesh-image alignment.} In the field of 2D image processing, most deep learning methods employ a ``fully convolutional'' network framework that maintains spatial alignment between images and outputs~\citep{Pointrend,long2015fully,tompson2014joint}. Several methods also consider alignment relationships in the 3D domain. For example, PIFu~\citep{pifu} proposed an implicit representation that locally aligns the pixels of a 2D image with the global context of their corresponding 3D objects. PyMAF~\citep{pymaf} introduced a mesh alignment feedback loop, where evidence of mesh alignment is used to correct parameters for better-aligned reconstruction results. Such alignment exploits position-sensitive, more informative features to predict the mesh. \par Existing mesh recovery works~\citep{towards,li2022interacting} rely on complex network structures to achieve mesh-image alignment. Furthermore, the initial input of their 3D decoders is a high-resolution mesh, which makes network optimization difficult; this matters for practical applications. To address these issues, we propose a compact network framework that maps 2D image pixel features to 3D mesh vertex locations. We apply a multi-scale structure to the 2D feature encoder and the 3D mesh decoder, respectively, to achieve coarse-to-fine pixel alignment at corresponding resolutions. Using multi-scale pixel-aligned features achieves better mesh-image alignment than previous methods. \begin{figure}[!htbp] \centering \vspace{-0.5cm} \includegraphics[width=\textwidth,trim={0 1cm 0 0cm},clip]{figs/pipeline4.jpg} \caption{\small \textbf{Our full pipeline.} Given a single-hand RGB image as input, our network first extracts 2D feature maps with an hourglass module. Downsampled $F_s$ and upsampled feature maps $H_s$ are aggregated to form multiscale image feature pyramids, which are mapped to vertices of meshes at different scales by our pixel-aligned mapping module and GCN blocks. Vertex positions of the meshes can then be predicted in a coarse-to-fine manner.} \label{fig_pipeline} \end{figure} \section{Experiment} \subsection{Experimental Setups} \label{sec:4.1} \par \textbf{FreiHAND}~\citep{freihand} is the most widely used benchmark for hand mesh reconstruction. It contains 130K training images with 3D and 2D annotations, and the test set has 4K images. As the annotations of the test set are not available, we submit our results to the provided online server for evaluation. \paragraph{Training Procedure.} Our network is based on the HRNet-W64~\citep{hrnet} and ResNet-50~\citep{resnet} backbones pre-trained on ImageNet~\citep{imagenet}. We train our model in an end-to-end manner, using 2D keypoints and masks as auxiliary supervision for the image feature extractors. We resize the input images to 224$\times$224. Following previous work, data augmentations including rotation, translation, and color jitter are applied during training. Our method is implemented in PyTorch and trained on an Nvidia RTX 3090 GPU with a batch size of 64 for 50 epochs. We optimize the network with Adam at a learning rate of 1e-4, decayed by a factor of 0.1 at epoch 35. \paragraph{Evaluation metrics.} We report the results of our approach using several evaluation metrics. PA-MPJPE: it first performs a rigid alignment between the prediction and the ground truth using Procrustes Analysis~\citep{PA}, and then calculates the mean per-joint position error.
PA-MPVPE: similar to PA-MPJPE, but it measures the difference between the predicted vertices and the ground truth. The unit of the PA-MPJPE/PA-MPVPE metrics is millimeters (mm). F-score: it measures the harmonic mean of precision and recall between two meshes; we report F@5mm and F@15mm, following existing works. \begin{table}[!t] \vspace{-1.2cm} \caption{\small Analysis of the self-attention and mapping module.} \label{table2} \centering \begin{tabular}{lllll} \toprule Feat. &Mapping Module &Self-attention &PA-MPJPE$\downarrow$ &PA-MPVPE$\downarrow$\\ \midrule Global &Repeat &\XSolidBrush &10.1 &10.3\\ Global &MLP &\XSolidBrush &9.1 &9.2\\ Feature maps (w/o 2D) &Pixel-aligned &\XSolidBrush &8.3 &8.4\\ Feature maps (w/ 2D) &Pixel-aligned &\XSolidBrush &7.8 &7.9\\ Global &MLP &\CheckmarkBold &8.4 &8.6\\ Feature maps &Pixel-aligned &\CheckmarkBold &7.5 &7.7\\ \bottomrule \end{tabular} \vspace{-0.8cm} \end{table} \begin{minipage}{\textwidth} \begin{minipage}[t]{0.45\textwidth} \makeatletter\def\@captype{table} \caption{\small Analysis of adding skip connections in different decoder layers. PJ and PV stand for PA-MPJPE and PA-MPVPE (mm).} \label{removefeature} \centering \begin{tabular}{llllll} \toprule $M_4$ &$M_3$ &$M_2$ &$M_1$ &PJ$\downarrow$ &PV$\downarrow$\\ \midrule \CheckmarkBold &\XSolidBrush &\XSolidBrush &\XSolidBrush &8.25 &8.31\\ \CheckmarkBold &\CheckmarkBold &\XSolidBrush &\XSolidBrush &8.06 &8.13\\ \CheckmarkBold &\CheckmarkBold &\CheckmarkBold &\XSolidBrush &7.93 &8.02\\ \CheckmarkBold &\CheckmarkBold &\CheckmarkBold &\CheckmarkBold &7.83 &7.93\\ \bottomrule \end{tabular} \end{minipage} \hspace{2em} \begin{minipage}[t]{0.45\textwidth} \makeatletter\def\@captype{table} \caption{\small Effectiveness of the coarse-to-fine design of the pixel-aligned mapping module. PJ and PV stand for PA-MPJPE and PA-MPVPE (mm).} \label{compare_other_pixel} \centering \begin{tabular}{lll} \toprule Methods &PJ$\downarrow$ &PV$\downarrow$\\ \midrule refine (once) &8.31 &8.42\\ refine (three times) &8.12 &8.22\\ \midrule \textbf{Ours} &7.83 &7.93\\ \bottomrule \end{tabular} \end{minipage} \end{minipage} \subsection{Ablation Studies} \label{ablation} \par We conduct ablation studies under various settings on FreiHAND~\citep{freihand} to investigate the key components of our model. We use ResNet-18 as the backbone and report the results in PA-MPJPE and PA-MPVPE. Table~\ref{table2} shows all the comparisons. Our final model (last row) improves over the baseline with the global feature (1st row) by 26$\%$. \paragraph{Effectiveness of Mapping Module for Graph Convolution} To establish the relationships between the 2D pixels and 3D vertices, we utilize the pixel-aligned feature mapping module. To verify its efficacy, we construct baselines based on GCNs and compare their reconstruction accuracy in Table~\ref{table2} with a variant of our method. Similar to~\citep{weakly}, we implement a baseline that directly repeats the global feature and concatenates it to the mesh vertices (1st row) and a baseline that maps the global feature to the vertex features by MLP layers (2nd row). For a fair comparison, our variant with the pixel-aligned module has the same mesh decoder architecture as these two baselines and does not add the 2D supervision (3rd row). Table~\ref{table2} shows that the pixel-aligned module improves the reconstruction performance by a large margin, about 18$\%$ and 8$\%$, respectively.
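To make the mapping module concrete, the following is a minimal PyTorch sketch of the pixel-aligned feature sampling step (an illustration of the general technique rather than our exact implementation; the tensor shapes and the normalized projection convention are assumptions):
\begin{verbatim}
import torch
import torch.nn.functional as F

def pixel_aligned_features(feature_map, vertices_2d):
    # feature_map: (B, C, H, W) image features from the 2D encoder.
    # vertices_2d: (B, V, 2) mesh vertices projected onto the image,
    #              normalized to [-1, 1] as expected by grid_sample.
    # returns:     (B, V, C), one bilinearly sampled feature per vertex.
    grid = vertices_2d.unsqueeze(2)              # (B, V, 1, 2)
    sampled = F.grid_sample(feature_map, grid,
                            mode='bilinear', align_corners=False)
    return sampled.squeeze(-1).transpose(1, 2)   # (B, V, C)
\end{verbatim}
The per-vertex features sampled in this way would then be concatenated with the vertex features entering the GCN blocks of the mesh decoder, at each scale of the coarse-to-fine hierarchy.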
\paragraph{Effectiveness of Attention and Mapping Module for Attention} Adding attention to graph convolution complements the long-range information aggregation and hence improves the network capacity. To verify this, we compare in Table~\ref{table2} a network using graph convolution only (2nd row) and a network with an attention module between the graph convolution layers (5th row). The results show that the attention mechanism improves mesh reconstruction even with only global features. Based on the same graph attention architecture, adding the mapping module further improves the performance by 0.9mm in PA-MPJPE. \paragraph{Effectiveness of 2D Auxiliary Supervision} We analyze how the quality of the feature maps affects reconstruction. We implement two models with the same architecture: one with 2D supervision from the silhouette and 2D pose (4th row) and one without (3rd row). The results show that feature maps learned with supervision help to improve the performance. As 3D mesh annotations are expensive to acquire while 2D poses are relatively cheap to label, we expect our method to benefit further from extra 2D auxiliary supervision. \paragraph{Adding Skip Connections in Different Decoder Layers} To analyze the contribution of finer local features to mesh reconstruction, we construct variants of our method by gradually removing the skip connections from the decoding layers; the results are shown in Table~\ref{removefeature}. Note that none of these variants includes the self-attention module. We observe a gradual reduction in the errors as multi-scale mesh-alignment evidence is added to more decoder layers. This shows that our skip connection design improves accuracy by leveraging multi-scale information. \paragraph{Effectiveness of Coarse-to-fine for Pixel-aligned Mapping Module} We reproduce the pipeline of \citep{towards} with our mesh decoder architecture to compare with another pixel-aligned method. Rather than regressing hand mesh vertices in a coarse-to-fine manner as we do, they refine the full mesh. In the first row, we follow their pipeline and refine the rough mesh once; for a more intuitive comparison with our method, we also refine the mesh three times. Table~\ref{compare_other_pixel} shows that our coarse-to-fine approach achieves better results in both cases. \begin{table}[!t] \vspace{-1.2cm} \caption{\small Performance comparison with the state-of-the-art methods on the FreiHAND dataset. $\downarrow$ means lower is better, $\uparrow$ means higher is better.
The unit of PA-MPJPE/PA-MPVPE is mm.} \label{table1} \begin{center} \begin{tabular}{lllll} \toprule Methods &PA-MPJPE$\downarrow$ &PA-MPVPE$\downarrow$ &F@5mm$\uparrow$ &F@15mm$\uparrow$ \\ \midrule \citep{hasson} &13.3 &13.3 &0.429 &0.907\\ \citep{freihand} &10.9 &11.0 &0.516 &0.934\\ \citep{weakly} &8.4 &8.6 &0.614 &0.966\\ Pose2Mesh\citep{pose2mesh} &7.7 &7.8 &0.674 &0.969\\ I2LMeshNet\citep{meshnet} &7.4 &7.6 &0.681 &0.973\\ \citep{towards} &7.1 &7.1 &0.706 &0.977 \\ \citep{handmesh} &6.9 &7.0 &0.715 &0.977 \\ METRO\citep{metro} &6.8 &6.7 &0.717 &0.981\\ Graphormer\citep{graphformer} &6.0 &5.9 &0.764 &0.986 \\ \midrule \textbf{Ours with METRO} &\textbf{6.1} &\textbf{6.2} &\textbf{0.757} &\textbf{0.984}\\ \textbf{Ours with GCN} &\textbf{5.9} &\textbf{6.0} &\textbf{0.766} &\textbf{0.985}\\ \bottomrule \end{tabular} \vspace{-0.8cm} \end{center} \end{table} \paragraph{The Generality of Mapping Module and Skip Connections} We verify that our pixel-aligned mapping module and skip connections are not only effective for our approach but can also improve the performance of other non-parametric models. Based on METRO~\citep{metro}, a transformer-based method, we re-implement its network with our design. We provide the details of the network architecture in the appendix due to space limitations. As Table~\ref{table1} shows, the pixel-aligned mapping module and skip connections improve METRO by a large margin. \subsection{Comparisons with State-of-the-Art} \label{sec:4.3} We follow former works~\citep{freihand,hasson,weakly,pose2mesh,meshnet,metro,graphformer} and quantitatively compare our method with state-of-the-art methods on the FreiHAND evaluation set. For the GCN variant, we re-design our 2D feature extractor following CMR~\citep{handmesh} for further performance improvement. For the Transformer variant, we design the architecture as introduced in Section~\ref{ablation}. As shown in Table~\ref{table1}, our approaches achieve state-of-the-art or highly competitive results on all metrics, which demonstrates that our designs are effective for both types of non-parametric models. \paragraph{Qualitative Results} \Figref{fig_examples} shows reconstructed meshes for several test examples of FreiHAND. On the left side, our method aligns the meshes to the input images better than the baseline with global features only. On the right side, decoded meshes at different scales are shown. Notice that the meshes from the last layers adjust both the global locations, aligning the mesh to the inputs (e.g., the meshes in the 3rd row), and the geometry of the meshes (e.g., the thickness and smoothness of the meshes). \par \Figref{fig_failure} shows three typical failure cases of our method. In the first row, the hand is severely occluded and not fully contained within the crop, so that some parts of the hand fall outside the image; here our method fails to recover a correct hand mesh. In the second row, we observe that when only a small portion of the hand is visible, our method predicts the wrong 2D pose and silhouette as well as the wrong hand mesh. In the last row, although the overall shape seems reasonable, it is difficult to obtain an accurate 3D mesh due to the heavy self-occlusion. Self-occlusion is one of the biggest challenges for 3D hand mesh reconstruction and pose estimation.
\begin{figure}[!t] \centering \vspace{-1cm} \includegraphics[width=\textwidth,trim={0 1cm 0 0.5cm},clip]{figs/examples.jpg} \caption{\small Left: comparison of the baseline with global features and our method; Right: example meshes from $M_4$ to $M_1$.} \label{fig_examples} \vspace{-0.2cm} \end{figure} \begin{figure}[!t] \centering \includegraphics[width=0.7\textwidth,trim={0 1cm 0 0cm},clip]{figs/failure_cases.jpg} \caption{\small Failure cases of our method.} \label{fig_failure} \vspace{-0.5cm} \end{figure}
\section{Introduction} Nonequilibrium dynamics in quantum field theory has become, during the last years, a very active field of research in particle physics [1-6], in cosmology [7-20], and in solid state physics \cite{Zurek:1996}. The outline of typical computational experiments is as follows: a quantum field $\psi({\bf x},t)$ is driven by a classical field degree of freedom (Higgs, inflaton, condensate) $\phi(t)$ which takes an initial value away from a local or global minimum of the classical or effective action; the time development is then studied including the back reaction of the quantum field in one-loop, Hartree or large-N approximations. The initial state of the quantum field $\psi$ is usually taken to be the vacuum state corresponding to a free field of some ``initial mass'' $m(t_0)$ or a thermal state built on such a vacuum state. In Friedmann-Robertson-Walker (FRW) cosmology \cite{Boyanovsky:1997a,Ramsey:1997,Baacke:1997c} the usual initial quantum state is chosen to be the conformal vacuum, again corresponding to the initial mass $m(t_0)$. While such choices seem very natural, they are not necessarily appropriate; nevertheless, this point has received little attention up to now. The reason why we address this question is the occurrence, in some dynamically relevant physical quantities, of singularities in the time variable which are related to the choice of initial state. We will in fact show that these singularities can be removed by more appropriate choices. The origin of these singularities can be traced back to a discontinuous switching on of the interaction with the external field $\phi(t)$. This interaction is given by a time-dependent mass term \footnote {We choose $t_0=0$ for convenience.} $m^2(t)=m^2(0)+V(t)$, where for the simplest case of a $\lambda\Phi^4$ theory \cite{Boyanovsky:1994,Baacke:1997a} \begin{equation} V(t)=\frac{\lambda}{2} \left [\phi^2(t)-\phi^2(0)\right] \; .\end{equation} In FRW cosmology the (conformal) time dependent mass term reads \cite{Ringwald:1987,Baacke:1997c} \begin{equation} \label{effm} M^2(\tau)=a^2(\tau)\left\{ m^2+\left(\xi-\frac 1 6 \right)R(\tau)+\frac \lambda 2 \frac{\varphi^2(\tau)}{a^2(\tau)} \right\}\; . \end{equation} In this case $M^2(\tau)$ and therefore $V(\tau)= M^2(\tau)-M^2(0)$ also contain the scale parameter $a(\tau)$ and the curvature scalar $R(\tau)$. A free field theory vacuum state corresponding to the mass $m(0)$ would be an appropriate equilibrium state if $V(t)$ stayed zero for all times. However, at $t=0$ the potential $V(t)$ changes in a nonanalytic way. This is unavoidable since at least the second derivative of $\phi(t)$ becomes nonzero on account of the equation of motion. As a result of these discontinuities, relevant physical quantities develop singularities in the time variable $t$ at $t=0$. In the case of $\lambda\Phi^4$ theory in Minkowski space such singularities only occur in the pressure. In FRW cosmology the problem becomes more acute. There, even the first derivative of $V(t)$ necessarily becomes nonzero at $t=0$; indeed, even with a constant external field $\phi$ the initial state could not be at equilibrium; this manifests itself in a nonvanishing first derivative of the scale parameter induced by the Friedmann equations. Furthermore, in this case both the energy and the pressure become singular; since they enter the Friedmann equations, this singular behavior also affects the dynamics.
Singularities arising from imposing initial conditions on quantum systems were first noted by St\"uckelberg \cite{Stueckelberg:1951}, who called them ``surface singularities''; they are briefly mentioned in the textbook of Bogoliubov and Shirkov \cite{Bogoliubov:1980}. The ``Casimir effect'' arising from initial conditions has been discussed by Symanzik \cite{Symanzik:1981}. In the context of nonequilibrium dynamics in FRW cosmology the occurrence of such singularities has been noted by Ringwald \cite{Ringwald:1987}. In the following we will refer to these singularities as ``initial singularities''. Imposing an initial condition at some time $t_0$ does not mean, in most applications, that one assumes the system to have come into being at just this time. Rather, $t_0$ is usually chosen as a point in time at which one can make, on some physical grounds, plausible assumptions about the state of the system. Clearly, if the system is not at equilibrium after $t_0$, it will not have been so before. Therefore, the initial state should take into account, at least in some minimal way, the previous nonequilibrium evolution of the system. Such a minimal requirement is the vanishing of initial singularities. It is the aim of this paper to specify such initial states. The plan of the paper is as follows: in section 2 we present the basic equations and formulate the problem for the case of a $\lambda\Phi^4$ theory; in section 3 we discuss an appropriate choice of the initial quantum state such that the singular behavior in the time variable is removed; the modified renormalized equations for the nonequilibrium system are given in section 4; we end with some concluding remarks in section 5. \section{Formulation of the problem} \setcounter{equation}{0} We consider a scalar $\lambda\Phi^4$ theory without spontaneous symmetry breaking. The Lagrangian density is given by \begin{equation} {\cal L}=\frac{1}{2}\partial_\mu\Phi\partial^\mu\Phi -\frac{1}{2}m^2\Phi^2- \frac{\lambda}{4!}\Phi^4. \end{equation} We split the field $\Phi$ into its expectation value $\phi$ and the quantum fluctuations $\psi$: \begin{equation} \label{erw} \Phi({\bf x},t)=\phi(t)+\psi({\bf x},t)\; , \end{equation} with \begin{equation} \phi(t)=\langle\Phi({\bf x},t)\rangle= \frac{{\rm Tr}{\Phi\rho(t)}}{{\rm Tr}\rho(t)}\; , \end{equation} where $\rho(t)$ is the density matrix of the system which satisfies the Liouville equation \begin{equation} i \frac{d\rho (t)}{dt} = [{\cal H}(t),\rho(t)] \; . \end{equation} The Lagrangian then takes the form \begin{equation} {\cal L}={\cal L}_{\rm 0}+{\cal L}_{\rm I}\; , \end{equation} with \begin{eqnarray} {\cal L}_{\rm 0}&=&\frac{1}{2} \partial_\mu\psi\partial^\mu\psi-\frac{1}{2}m^2\psi^2\nonumber\\ &&+\frac{1}{2} \partial_\mu\phi\partial^\mu\phi-\frac{1}{2}m^2\phi^2- \frac{\lambda}{4!}\phi^4 \; ,\\ {\cal L}_{\rm I}&=& \partial_\mu\psi\partial^\mu\phi-m^2\psi\phi- \frac{\lambda}{4!}\psi^4-\frac{\lambda}{6}\psi^3\phi -\frac{\lambda}{4}\psi^2\phi^2-\frac{\lambda}{6}\psi\phi^3 \; . \end{eqnarray} The equation of motion for the field $\phi(t)$ is given by \cite{Boyanovsky:1994} \begin{equation} \label{phidgl} \ddot{\phi}(t)+m^2\phi(t)+\frac{\lambda}{6}\phi^3(t) +\frac{1}{i}\frac{\lambda}{2}\phi(t) G^{++}(t,{\bf x};t,{\bf x})=0\; . \end{equation} Here $G^{++}$ is the $++$ matrix element of the exact nonequilibrium Green function \cite{Schwinger:1961,Keldysh:1964} in the background field $\phi(t)$.
For a pure initial state $|i\rangle$ it can be written as \begin{equation} -iG^{++}(t,{\bf x};t',{\bf x}')= \langle i| T\psi(t,{\bf x})\psi(t',{\bf x}')|i \rangle \; . \end{equation} If the classical field is spatially uniform, the equation of motion for the field $\psi(t,{\bf x})$ is given by \begin{equation} \left[\frac{\partial^2}{\partial t^2}- \Delta+m^2+\frac{\lambda}{2}\phi^2(t)\right] \psi(t,{\bf x}) = 0 \; . \end{equation} We introduce the notations \begin{eqnarray} \label{massoft} m^2(t)&=&m^2+\frac{\lambda}{2}\phi^2(t)\; , \\ \label{omoft} \omega_k(t)&=&\left[ {\bf k}^2+ m^2(t)\right]^{\frac{1}{2}}\; , \end{eqnarray} and \begin{equation} \omega_{k0}=\left[ {\bf k}^2+m_0^2 \right]^\frac{1}{2} \; . \end{equation} We will discuss the choice of $m_0$ below. We define the `potential' $V(t)$ as \begin{equation} V(t)=\omega_k^2(t)-\omega_{k0}^2 \; . \end{equation} We further introduce the mode functions for fixed momentum $U_{k}(t)\exp(i{\bf k}\cdot{\bf x})$ \footnote{Note that the functions $U_{k}(t)$ depend only on the absolute value of ${\bf k}$.} which satisfy the evolution equation \begin{equation} \left[\frac{\partial^2}{\partial t^2}+ \omega_k^2(t)\right] U_k(t)=0 \; ; \end{equation} we choose the initial conditions \begin{equation} U_k(0)=1 \; , \hspace{1cm} \dot U_k(0)=-i\omega_{k0} \; . \end{equation} The field $\psi$ can now be expanded as \begin{equation} \label{fieldexp} \psi(t,{\bf x})=\int\frac{d^3k}{(2\pi)^3 2\omega_{k0}}\, \left[a({\bf k})U_{k}(t)e^{i{\bf k} \cdot {\bf x}} +a^\dagger({\bf k})U^*_{k}(t)e^{-i{\bf k} \cdot {\bf x}} \right]\; , \end{equation} where the operators $a({\bf k})$ satisfy \begin{equation} [a({\bf k}),a^\dagger({\bf k}')]=(2\pi)^3 2\omega_{k0} \delta^3({\bf k} - {\bf k}') \; . \end{equation} If the initial state $|i\rangle$ is chosen as the vacuum state corresponding to the operators $a({\bf k})$, i.e., as satisfying $a({\bf k})|i\rangle =0$, we obtain the Green function $G^{++}(t,t';{\bf x}-{\bf x}')$ as \begin{equation} G^{++}(t,t';{\bf x} -{\bf x}')=\int\frac{d^3k}{(2\pi)^3 2\omega_{k0}}\, \left[ U_k(t)U^*_k(t')\theta(t-t') +U_k(t')U^*_k(t)\theta(t'-t)\right] e^{i{\bf k} ({\bf x} -{\bf x}')} \; . \end{equation} The Green function at equal space and time points then reads \begin{equation} G^{++}(t,t;{\bf 0})= i \int\frac{d^3k}{(2\pi)^3 2\omega_{k0}}\,|U_k(t)|^2 \; . \end{equation} The resulting equation of motion for the classical field $\phi(t)$ is \begin{equation} \label{eqmot} \ddot{\phi}(t)+m^2\phi(t)+\frac{\lambda}{6} \phi^3(t)+\frac{\lambda}{2}\phi(t){\cal F}(t)=0\; , \end{equation} where we have introduced the fluctuation integral \begin{equation}\label{dmsq} {\cal F}(t) \equiv \int\frac{d^3k}{(2\pi)^3 2\omega_{k0}}\, |U_k(t)|^2 \; . \end{equation} It determines the back reaction of the fluctuations on the classical field $\phi(t)$. We further consider the energy density and the pressure. The energy density is given by \begin{equation} \label{energie} {\cal E}=\frac{1}{2}\dot{\phi}^2(t)+V(\phi(t))+\frac{{\rm Tr} {\cal H}\rho(0)}{{\rm Tr} \rho(0)} \; . \end{equation} Calculating the trace over the Hamiltonian for the same initial state we obtain \begin{eqnarray} \label{E_unren} {\cal E}&=&\frac{1}{2}\dot{\phi}^2(t)+ \frac{1}{2}m^2\phi^2(t)+\frac{\lambda}{4!}\phi^4(t) \nonumber \\ &&+\int\frac{d^3k}{(2\pi)^3 2\omega_{k0}}\,\left\{\frac{1}{2}|\dot{U}_k(t)|^2 +\frac{1}{2}\omega_k^2(t)|U_k(t)|^2\right\} \; . \end{eqnarray} Using the equations of motion, it is easy to see that the time derivative of the energy density vanishes.
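For illustration, the mode equation and the fluctuation integral can be evaluated numerically; the following Python sketch (added here purely as an illustration, not taken from the original computation; the profile of $m^2(t)$ is a placeholder assumption) integrates $U_k(t)$ with the initial conditions given above:
\begin{verbatim}
import numpy as np

def m2(t, m=1.0, lam=1.0, phi0=1.0):
    # Placeholder for m^2(t) = m^2 + (lambda/2) phi^2(t); a simple
    # oscillating phi(t) is assumed purely for illustration.
    return m**2 + 0.5 * lam * (phi0 * np.cos(m * t))**2

def evolve_mode(k, t_grid):
    # RK4 integration of U'' + (k^2 + m^2(t)) U = 0 with
    # U(0) = 1 and U'(0) = -i omega_k0, where omega_k0 uses m(0).
    w0 = np.sqrt(k**2 + m2(0.0))
    y = np.array([1.0 + 0j, -1j * w0])           # (U, U')
    rhs = lambda t, y: np.array([y[1], -(k**2 + m2(t)) * y[0]])
    out = [y[0]]
    for t0, t1 in zip(t_grid[:-1], t_grid[1:]):
        h = t1 - t0
        k1 = rhs(t0, y)
        k2 = rhs(t0 + h / 2, y + h / 2 * k1)
        k3 = rhs(t0 + h / 2, y + h / 2 * k2)
        k4 = rhs(t0 + h, y + h * k3)
        y = y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        out.append(y[0])
    return np.array(out)

# |U_k(t)|^2 / (2 omega_k0) is the integrand of F(t) at momentum k.
ts = np.linspace(0.0, 10.0, 4001)
for k in (0.5, 1.0, 2.0):
    print(k, np.abs(evolve_mode(k, ts)[-1])**2)
\end{verbatim}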
The pressure is given by \begin{equation} \label{pressure} p=\dot\phi^2(t) +\int\frac{d^3k}{(2\pi)^3 2\omega_{k0}}\,\left\{|\dot U_k(t)|^2+\frac{{\bf k}^2}{3}|U_k(t)|^2 \right\}-{\cal E}\; . \end{equation} While the pressure does not enter the dynamics here, it does in FRW cosmology. The expressions for the fluctuation integral, the energy density and the pressure are divergent, and one has to discuss the renormalization of this theory. We have recently presented a fully renormalized framework for nonequilibrium dynamics \cite{Baacke:1997a}. The main technical ingredient of this analysis is the perturbative expansion of the functions $U_{k}(t)$ with respect to orders in the potential $V(t)$. We write the functions $U_{k}$ as \begin{equation} U_{k}(t)=e^{-i\omega_{k0} t}\left[1+h_{k}(t)\right] \end{equation} and expand further in orders of the potential $V(t)$ as \begin{equation} h_k(t)=\sum_{n=1}^\infty h^{(n)}_{k}(t) \; . \end{equation} We also introduce the partial sums \begin{equation} h_k^{\overline{(n)}}(t)=\sum_{l=n}^\infty h^{(l)}_{k}(t) \; , \end{equation} so that \begin{equation} h_k(t)\equiv h_k^{\overline{(1)}}(t) =h_k^{(1)}+h_k^{\overline{(2)}}(t)\; . \end{equation} The integral equation for the function $h_k(t)$ can be derived in a straightforward way from the differential equation satisfied by the functions $U_k(t)$; it reads \begin{equation} h_k(t)=\frac{i}{2\omega_{k0}}\int_0^t dt' \left(e^{2i\omega_{k0}(t-t') }-1\right) V(t')\left[1+h_k(t')\right]\; . \end{equation} We obtain \begin{equation} h_k^{(1)}(t)=\frac{i}{2\omega_{k0}}\int_0^t dt' \left(e^{2i\omega_{k0}(t-t') }-1\right)V(t')\; . \end{equation} Using integration by parts, this function can be analyzed with respect to orders in $\omega_{k0}$ via \begin{eqnarray} \nonumber h_k^{(1)}(t)&=&\frac{-i}{2\omega_{k0}}\int_0^t dt' V(t') +\sum_{l=0}^n \left(\frac{-i}{2\omega_{k0}}\right)^{l+2} \left[V^{(l)}(t)-e^{2i\omega_{k0} t} V^{(l)}(0)\right]\\ &&-\left(\frac{-i}{2\omega_{k0}}\right)^{n+2} \int_0^t dt'e^{2i\omega_{k0}(t-t')} V^{(n+1)}(t') \; , \label{hexp} \end{eqnarray} where $V^{(l)}(t)$ denotes the $l$th derivative of $V(t)$. For the energy density and the pressure we also need the expansion of $\dot h_k^{(1)}(t)$. From Eq. (\ref{hexp}) and the relation \begin{equation} \dot h_k^{(1)}=2 i \omega_{k0} h_k^{(1)}- \int _0^tdt'V(t') \end{equation} we find \begin{eqnarray} \nonumber \dot h_k^{(1)}(t)&=&\sum_{l=0}^n \left(\frac{-i}{2\omega_{k0}}\right)^{l+1} \left[V^{(l)}(t)-e^{2i\omega_{k0} t} V^{(l)}(0)\right]\\ &&-\left(\frac{-i}{2\omega_{k0}}\right)^{n+1} \int_0^t dt'e^{2i\omega_{k0}(t-t')} V^{(n+1)}(t') \; . \label{hdotexp} \end{eqnarray} In the following we will need the real and imaginary parts of this expression; we introduce the following useful notation: \begin{eqnarray} {\cal C} (f,t)&=&\int_0^t dt' f(t')\cos(2\omega_{k0} t') \; ,\\ {\cal S} (f,t)&=&\int_0^t dt' f(t')\sin(2\omega_{k0} t') \; . \end{eqnarray} We now insert the perturbative expansion into the fluctuation integral to obtain \begin{eqnarray} \nonumber {\cal F}(t)&=&\int\frac{d^3k}{(2\pi)^3 2\omega_{k0}}\,\left\{1+2\re h_k(t)+|h_k(t)|^2 \right\} \\ \nonumber &=&\int\frac{d^3k}{(2\pi)^3 2\omega_{k0}}\,\left\{1-\frac{V(t)}{2\omega_{k0}^2} +\frac{V(0)}{2\omega_{k0}^2}\cos (2\omega_{k0} t)+ \frac{\dot V(0)}{4\omega_{k0}^3}\sin(2\omega_{k0} t) +\frac{\ddot V(t)}{8\omega_{k0}^4}\right.\\&&\left.-\frac{\ddot V(0)}{8\omega_{k0}^4} \cos (2\omega_{k0} t)- \frac{1}{8\omega_{k0}^4} {\cal C} (\dddot V,t) + 2\re h_k^{\overline{(2)}}+ |h_k|^2\right\} \; .
\label{pertex} \end{eqnarray} The first two terms in the parenthesis of the second expression, i.e., $1$ and $V(t)/2\omega_{k0}^2$, lead to divergent integrals which have to be absorbed by the renormalization procedure. This has been discussed in \cite{Baacke:1997a}. There, the mass $m_0$ was chosen to be the `initial' mass $m(0)$ (see (\ref{massoft})). It was shown that the renormalization counter terms do not depend on this mass but can be chosen to contain only the renormalized mass $m$ corresponding to the perturbative ground state at $\phi=0$. With this choice of initial mass, $V(0)$ is zero and the fluctuation integral is nonsingular at $t=0$. If, on the other hand, we choose $m_0=m$, it is obvious that the divergencies are absorbed by the counter terms depending only on the perturbative mass $m$, but we are faced with an initial singularity arising from the third term via (see Appendix B) \begin{equation} \int\frac{d^3k}{(2\pi)^3 2\omega_{k0}}\, \frac{V(0)}{2\omega_{k0}^2}\cos(2 \omega_{k0} t) \simeq -\frac{V(0)}{8\pi^2}\ln (2 m_0 t) \; \ {\rm as} \;\ t \to 0 \; . \end{equation} Of course, nobody has made such an `unnatural' choice of the initial mass; this initial singularity can be avoided trivially by choosing $m_0=m(0)$. It is important to note, however, that the renormalization can be performed in a way independent of the initial condition in both cases. The difference between the two approaches lies in the initial `vacuum' state. These different initial states are related by a Bogoliubov transformation (see also Appendix A). So Bogoliubov transformations can be used to avoid initial singularities. In (\ref{pertex}) we have extended the expansion of $\re h^{(1)}_k(t)$ to display also the terms of order $\omega_{k0}^{-4}$ which depend on $\dot V(0)$ and $\ddot V(0)$. These terms do not lead to divergencies in the fluctuation integral; however, in the energy and pressure they appear multiplied by $\omega_{k0}^2$ and/or $k^2$. While the energy stays finite, the pressure behaves in a singular way via \begin{equation} p_{\rm fluct,sing} \sim \int\frac{d^3k}{(2\pi)^3 2\omega_{k0}}\, \left[-\omega_{k0}^2+\frac{k^2}{3}\right] \left\{\frac{\dot V(0)}{4\omega_{k0}^3}\sin (2\omega_{k0} t) -\frac{\ddot V(0)}{8\omega_{k0}^4}\cos (2\omega_{k0} t)\right\}\; . \end{equation} The behavior of the momentum integrals is given in Appendix B; they result in a $1/t$ singularity proportional to $\dot V(0)$ and a logarithmic one proportional to $\ddot V(0)$. Therefore, these terms have to be removed as well. \section{Removing the initial singularity} \setcounter{equation}{0} We have seen in the previous section that nonzero initial values of $V(t)$ and its derivatives lead to initial singularities. The key to dealing with these terms has already been indicated: the leading singularity can be removed by a Bogoliubov transformation from the perturbative vacuum to a vacuum corresponding to free quanta of the initial mass $m(0)$. We expect, therefore, that the other singular terms can be removed in this way as well. We define a general initial state by requiring that \begin{equation} \left[a({\bf k})-\rho_ka^\dagger({\bf k})\right]|i\rangle =0 \; . \end{equation} The Bogoliubov transformation to this state is given in Appendix A.
If the fluctuation integral, the energy and the pressure are computed by taking the trace with respect to this state, the functions $U_k(t)$ are simply replaced by \begin{equation} F_k(t)=\cosh(\gamma_k)U_k(t)+e^{i\delta_k} \sinh(\gamma_k)U_k^*(t) \; , \end{equation} where $\gamma_k$ and $\delta_k$ are defined by the relation \begin{equation} \rho_k=e^{i\delta_k}\tanh(\gamma_k)\; . \end{equation} The fluctuation integral now becomes \begin{eqnarray} {\cal F}(t)&=&\int\frac{d^3k}{(2\pi)^3 2\omega_{k0}}\, |F_k(t)|^2 \nonumber \\ &=& \int\frac{d^3k}{(2\pi)^3 2\omega_{k0}}\, \left\{\cosh (2 \gamma_k) |U_k(t)|^2+ \sinh (2\gamma_k)\re\left( e^{-i\delta_k}U^2_k(t)\right)\right\}\; . \end{eqnarray} Expanding as before we find \begin{eqnarray} \nonumber {\cal F} (t)&=&\int\frac{d^3k}{(2\pi)^3 2\omega_{k0}}\,\left\{ \cosh(2\gamma_k) \left[1-\frac{V(t)}{2\omega_{k0}^2}+\frac{V(0)}{2\omega_{k0}^2}\cos(2\omega_{k0} t) +\frac{\dot V(0)}{4\omega_{k0}^3}\sin(2\omega_{k0} t)\right.\right. \\ \nonumber&&\left.+\frac{\ddot V(t)}{8\omega_{k0}^4} -\frac{\ddot V(0)}{8\omega_{k0}^4}\cos(2\omega_{k0} t)- \frac{1}{8\omega_{k0}^4}{\cal C}(\dddot V,t) + 2\re h_k^{\overline{(2)}}+|h_k|^2 \right] \\ &&+\sinh (2\gamma_k)\cos(\delta_k)\cos(2\omega_{k0} t) \label{fluc2} \\ \nonumber &&-\sinh(2\gamma_k)\sin(\delta_k)\sin(2\omega_{k0} t) +\sinh(2\gamma_k)\re e^{-2i\omega_{k0} t-i\delta_k} \left(2h_k+h_k^2\right) \Bigg\}\; . \end{eqnarray} Let us first discuss how to get rid of the most singular term, proportional to $V(0)$. Requiring this term to be compensated by the terms proportional to $\sinh(2\gamma_k)$, we find \begin{eqnarray} \delta_k&=&0\; ,\\ \tanh(2\gamma_k)&=&-\frac{V(0)}{2\omega_{k0}^2}\; . \end{eqnarray} As explained in Appendix A, the standard Bogoliubov transformation from the perturbative vacuum with mass $m$ to the vacuum corresponding to quanta with the initial mass $m(0)$ is mediated by a function $\gamma'_k$ satisfying \begin{equation} e^{\gamma'_k}=\left(\frac{m^2+{\bf k}^2}{m^2(0)+{\bf k}^2}\right)^{1/4}\; , \end{equation} which implies \begin{equation} \tanh(2\gamma'_k)=\frac{-V(0)}{2\omega_{k0}^2+V(0)} \; .\end{equation} We see that $\gamma_k$ and $\gamma'_k$ agree asymptotically to leading order in $1/\omega_{k0}$. So requiring that the most pronounced initial singularity vanishes leads essentially to the usual choice for the initial state, namely $m_0=m(0)$ and therefore $V(0)=0$. The analysis of subleading terms in the difference between $\gamma_k$ and $\gamma'_k$ becomes somewhat cumbersome. Having convinced ourselves that the Bogoliubov transformation is the right technique for removing initial singularities, we will therefore choose $m_0=m(0)$, as is customary, and apply this technique to remove the remaining singularities. So from now on $V(0)=0$ and $\omega_{k0}=({\bf k}^2+m^2(0))^{1/2}$. Requiring that the terms proportional to $\dot V(0)$ and $\ddot V(0)$ vanish leads to the conditions \begin{eqnarray} \label{deltadef} \tan(\delta_k)&=&2\omega_{k0}\frac{\dot V(0)}{\ddot V(0)}\; , \\ \label{gammadef} \tanh (2\gamma_k)&=& \frac{1}{4\omega_{k0}^3}\left[\dot V^2(0) +\frac{\ddot V^2(0)} {4\omega_{k0}^2}\right]^{1/2} \; . \end{eqnarray} Using these functions we are now ready to formulate the renormalized equation of motion and the energy momentum tensor. \section{The renormalized equations} We have given the bare equation of motion and energy momentum tensor in section 2. The renormalization for the original initial state has been discussed in \cite{Baacke:1997a}.
We have to ensure now that the scheme used there is not spoiled by the improved initial state. The main new feature in the fluctuation integral, the energy density and the pressure is the appearance of factors $\cosh(2\gamma_k)$ and $\sinh(2\gamma_k)$. We will need their asymptotic behavior. Using Eq. (\ref{gammadef}) we have \begin{equation} \gamma_k\stackrel{k\to\infty}{\simeq}\frac{|\dot V(0)|}{8\omega_{k0}^3} \; .\end{equation} The factor $\cosh(2\gamma_k)$ is equal to $1$ for $\gamma_k=0$; we will need the difference \begin{equation} \label{cosest} \cosh(2\gamma_k)-1=2\sinh^2 (\gamma_k)\stackrel{k\to\infty}{\simeq} \frac{\dot V^2(0)}{32\omega_{k0}^6}\; . \end{equation} There are new terms proportional to $\sinh(2\gamma_k)$; this factor behaves as \begin{equation} \label{sinest} \sinh(2\gamma_k)\stackrel{k\to\infty}{\simeq} \frac{|\dot V(0)|}{4\omega_{k0}^3}\; . \end{equation} The dimensionally regularized fluctuation integral (\ref{fluc2}) takes, after cancellation of the singular integrals induced by Eqs. (\ref{deltadef}) and (\ref{gammadef}), the form \begin{eqnarray} \nonumber {\cal F}_{\rm reg} (t) &=&-\frac{m_0^2}{16\pi^2}(L_0+1) -\frac{V(t)}{16\pi^2}L_0 \\ &&+\int\frac{d^3k}{(2\pi)^3 2\omega_{k0}}\,\left\{ 2\sinh^2(\gamma_k)\left[1-\frac{V(t)}{2\omega_{k0}^2}\right]\right. \\&&+\cosh(2\gamma_k)\left[\frac{\ddot V(t)}{8\omega_{k0}^4} -\frac{1}{8\omega_{k0}^4}{\cal C}(\dddot V,t) + 2\re h_k^{\overline{(2)}}+|h_k|^2 \right] \\ \nonumber &&+\sinh(2\gamma_k)\re e^{-2i\omega_{k0} t-i\delta_k} \left(2h_k+h_k^2\right) \Bigg\}\; . \end{eqnarray} Here we have introduced the abbreviation \begin{equation} L_0=\frac{2}{\epsilon}+\ln \frac{4\pi \mu^2}{m_0^2}-\gamma \; . \end{equation} Using the estimates (\ref{cosest}) and (\ref{sinest}), and the fact that the mode function $h_k$ behaves as $\omega_{k0}^{-1}$, we see that the momentum integral is convergent. Introducing the counter term Lagrangian \begin{equation} {\cal L}_{\rm c.t.}= \frac{1}{2}\delta m^2 \Phi^2 +\frac{\delta\lambda}{4!} \Phi^4 \; , \end{equation} the fluctuation integral gets replaced, in the equation of motion (\ref{eqmot}) for $\phi(t)$, by \begin{equation} {\cal F}_{\rm fin}={\cal F}_{\rm reg}+ \frac{2\delta m^2}{\lambda} + \frac{\delta \lambda} {3\lambda}\phi^2(t)\; . \end{equation} With the standard choice \begin{eqnarray} \delta m^2 &=& \frac{\lambda m^2}{32\pi^2}(L+1)\; , \\ \delta \lambda &=&\frac{3\lambda^2}{32\pi^2}L\; , \\ L&=&\frac{2}{\epsilon}+\ln \frac{4\pi \mu^2}{m^2}-\gamma \; , \end{eqnarray} ${\cal F}_{\rm fin}$ is indeed finite. The calculation of the energy density proceeds in an analogous way. The fluctuation energy becomes, using dimensional regularization, \begin{eqnarray} {\cal E}_{\rm fluct} &=& \frac{1}{2}\int\frac{d^3k}{(2\pi)^3 2\omega_{k0}}\,\left\{|\dot F_k(t)|^2+(\omega_{k0}^2+V(t))|F_k(t)|^2\right\} \\ \nonumber &=&-\frac{m_0^4}{64\pi^2}(L_0+\frac{3}{2}) -\frac{m_0^2 V(t)}{32\pi^2}(L_0+1)-\frac{V^2(t)}{64\pi^2}L_0 \\ \nonumber &&+\frac{1}{2}\int\frac{d^3k}{(2\pi)^3 2\omega_{k0}}\, \Bigg \{ 2 \sinh^2(\gamma_k) \left[2\omega_{k0}^2+V(t)\left(1-\frac{V(t)}{2\omega_{k0}^2}\right)\right]\\ \nonumber &&+V(t)\cosh(2\gamma_k)\left[\frac{\ddot V(t)}{8\omega_{k0}^4} -\frac{{\cal C}(\dddot{V},t)}{8\omega_{k0}^4}+ 2 \re h_k^{\overline{(2)}}+|h_k|^2\right] \\ \nonumber &&+V(t)\sinh(2\gamma_k)\re e^{-2i\omega_{k0} t-i\delta_k}\left(2h_k +h_k^2\right) \\ &&+\cosh(2\gamma_k) |\dot h_k|^2 \\ \nonumber &&+\sinh(2\gamma_k)\re e^{-2i\omega_{k0} t -i\delta_k} \left[ \dot h_k^2-2i\omega_{k0} (1+h_k)\dot h_k\right]\Bigg\}\; .
\end{eqnarray} The divergent parts are cancelled by the counter terms \begin{equation} {\cal E}_{c.t.}=\delta \Lambda+\frac{1}{2}\delta m^2 \phi^2(t) +\frac{\delta\lambda}{4!}\phi^4(t) \; , \end{equation} with the `cosmological constant' counter term \begin{equation} \delta \Lambda = \frac{m^4}{64\pi^2}(L+\frac{3}{2})\; . \end{equation} Finally, we have to consider the pressure. We find for the regularized fluctuation part: \begin{eqnarray}\nonumber p_{\rm fluct}&=&-{\cal E}_{\rm fluct} -\frac{m_0^4}{96\pi^2}-\frac{V(t)}{48\pi^2}m_0^2- \frac{\ddot V(t)}{96\pi^2}(L_0+\frac{1}{3}) \\ \nonumber&& +\int\frac{d^3k}{(2\pi)^3 2\omega_{k0}}\,\Bigg\{ \sinh^2(\gamma_k)\left[ 2\omega_{k0}^2+\big(-\omega_{k0}^2+\frac{k^2}{3}\big) \left(1-\frac{V(t)}{2\omega_{k0}^2}+\frac{\ddot V(t)}{8\omega_{k0}^4}\right)\right] \\ &&+\cosh(2\gamma_k)\left[-\frac{{\cal C}(\dddot{V},t)}{8\omega_{k0}^4} +2 \re h_k^{\overline{(2)}}+|h_k|^2\right] +\cosh(2\gamma_k)|\dot h_k|^2 \\ \nonumber &&+\sinh(2\gamma_k)\re e^{-2i\omega_{k0} t-i\delta_k} \left[\big(-\omega_{k0}^2+\frac{k^2}{3}\big) \left(2 h_k +h_k^2\right)+\dot h_k^2 -2i\omega_{k0}(1+h_k)\dot h_k\right] \Bigg\}\; . \end{eqnarray} In order to cancel the divergent term proportional to $\ddot V$ one introduces a counter term for the energy momentum tensor \begin{equation} \delta T_{\mu\nu}=A(g_{\mu\nu}\partial_\alpha\partial^\alpha -\partial_\mu\partial_\nu)\Phi^2\; , \end{equation} which leads to a counter term \begin{equation} p_{c.t.}=A \frac{d^2}{dt^2}\phi^2(t)=\frac{2A}{\lambda}\ddot V(t) \end{equation} in the pressure. We choose \begin{equation} A = -\frac{\lambda}{192\pi^2}L \; .\end{equation} The remaining momentum integral is finite and nonsingular in $t$. For most of the terms this can be seen by inspection using Eqs. (\ref{cosest}), (\ref{sinest}), and the expansions (\ref{hexp}) and (\ref{hdotexp}). There are some internal cancellations, which are, however, the same as for the case $\gamma_k=0$ already discussed in \cite{Baacke:1997a}. The only new, potentially singular term is \begin{equation} \int\frac{d^3k}{(2\pi)^3 2\omega_{k0}}\,\sinh(2\gamma_k)\big(-\omega_{k0}^2+\frac{k^2}{3}\big) \re e^{-2i\omega_{k0} t}\,2 h_k\; . \end{equation} The leading singular behaviour is given by \begin{equation} \dot V(0)\int_0^tdt'V(t') \int\frac{d^3k}{(2\pi)^3 2\omega_{k0}}\,\frac{1}{4\omega_{k0}^4}\big(-\omega_{k0}^2+\frac{k^2}{3}\big) \cos (2\omega_{k0} t) \; .\end{equation} While the momentum integral behaves as $\ln t$ as $t\to 0$, the time integral behaves as $t^2$ since $V(0)=0$. So the renormalized pressure is indeed nonsingular at $t=0$. From the analysis of divergent integrals given in this section it is obvious that only the leading asymptotic behavior of $\gamma_k$ is relevant; more precisely, only the terms of order $\omega_{k0}^{-3}$ and $\omega_{k0}^{-4}$. This means that any Bogoliubov transformation whose function $\gamma_k$ has this leading asymptotic behavior is equally suitable for defining an appropriate initial state. We have formulated our modified renormalized equations for $\lambda \Phi^4$ theory in flat space. The generalization to a scalar field in a flat Friedmann-Robertson-Walker universe is straightforward, and the cancellation of singular terms in the energy density and the trace of the energy momentum tensor proceeds in the same way. \section{Conclusions} We have considered here the choice of initial states for a nonequilibrium system in quantum field theory.
Our considerations arose from the problem that logarithmic and linear singularities in the variable $t-t_0$ appear in the energy momentum tensor and affect the dynamics of FRW cosmology. We consider such singularities -- and their consequences -- as unphysical, at least if $t_0$ is just some conveniently chosen point in time within a continuous evolution of the system. Most authors, including ourselves, have chosen initial states that correspond to equilibrium states of the system. We have constructed here improved initial states for nonequilibrium systems in such a way that the appearance of initial singularities is avoided. These states are obtained from the usual `vacuum' states by a Bogoliubov transformation. The essential part of this transformation is a Bogoliubov `rotation' of the creation and annihilation operators at large momentum. The construction presented here specifies a transformation of creation and annihilation operators at all momenta. It is not unique in the sense that it may be arbitrarily modified at small momenta. This non-uniqueness is, however, nothing but the freedom of choosing an `arbitrary', pure or mixed, initial state. Our construction can be considered as formulating a minimal requirement for choosing such states, in the sense that it specifies the initial state of the high momentum quantum modes. \section*{Acknowledgements} It is a pleasure to thank H. de Vega and A. di Giacomo for discussions on the problem. \begin{appendix} \section{ Bogoliubov transformation} \setcounter{equation}{0} In this Appendix we briefly recall some basic features of the Bogoliubov transformation (see, e.g., \cite{Bogoliubov:1983}). We start with a vacuum state defined by \begin{equation} a({\bf k})|0\rangle =0 \; . \end{equation} We would like to obtain a new state $|\tilde 0\rangle$ that is annihilated by $a({\bf k})-\rho_ka^\dagger({\bf k})$, where $\rho_k$ is some complex function of $k$, i.e., we require \begin{equation} \label{newvac} \left[a({\bf k})-\rho_ka^\dagger({\bf k})\right]|\tilde 0\rangle =0 \; . \end{equation} Such a state can be obtained from $|0\rangle$ by a Bogoliubov transformation \begin{equation} |\tilde 0\rangle = \exp(Q)| 0\rangle \; . \end{equation} Using the general relations given in \cite{Bogoliubov:1983} one finds the explicit form of the operator $Q$ as \begin{equation} Q =\frac 1 2 \int\frac{d^3k}{(2\pi)^3 2\omega_{k0}}\, \gamma_k\left[e^{i\delta_k} a^\dagger({\bf k})a^\dagger(-{\bf k}) -e^{-i\delta_k}a({\bf k})a(-{\bf k})\right] \; . \end{equation} Here $\gamma_k$ and $ \delta_k$ are defined by the relation \begin{equation} \rho_k=e^{i\delta_k}\tanh \gamma_k \; , \end{equation} so that (\ref{newvac}) can also be written as \begin{equation} \tilde a({\bf k}) |\tilde0\rangle =\left [\cosh( \gamma_k) a({\bf k}) - e^{i\delta_k} \sinh (\gamma_k) a^\dagger(-{\bf k}) \right] |\tilde 0\rangle =0 \; . \end{equation} A special class of new `vacuum' states $|\tilde0\rangle$ is obtained when the new creation and annihilation operators refer to free particles with a different mass $\tilde m_0$. In the field expansion (\ref{fieldexp}) this means that the energy $\omega_{k0}=({\bf k}^2+m_0^2)^{1/2}$ is replaced by $\tilde \omega_{k0}=({\bf k}^2+\tilde m_0^2)^{1/2}$.
In this case one finds \begin{equation} \tilde a ({\bf k}) = \frac{1}{2}\left(1+\frac{\tilde\omega_{k0}}{\omega_{k0}}\right) a({\bf k}) +\frac{1}{2}\left(\frac{\tilde\omega_{k0}}{\omega_{k0}}-1\right) a^\dagger(-{\bf k}) \; , \end{equation} so that $\rho_k=(\omega_{k0}-\tilde\omega_{k0})/(\omega_{k0}+\tilde\omega_{k0})$, and therefore \begin{equation} \gamma_k=\frac 1 2 \ln \frac{\omega_{k0}}{\tilde\omega_{k0}} \; , \end{equation} while $\delta_k=0$. For $k \gg m_0,\tilde m_0$ the function $\gamma_k$ behaves as \begin{equation} \gamma_k \simeq \frac{m_0^2-\tilde m_0^2}{4 k^2}\; . \end{equation} \section{Some singular integrals} \setcounter{equation}{0} The singular behaviour in time arises from the following integrals \begin{equation} I_1(t)=\int\frac{d^3k}{(2\pi)^3 2\omega_{k0}}\, \frac{1}{\omega_{k0}^2} \cos(2\omega_{k0} t) \end{equation} and \begin{equation} I_2(t)=\int\frac{d^3k}{(2\pi)^3 2\omega_{k0}}\, \frac{1}{\omega_{k0}} \sin(2\omega_{k0} t)\; . \end{equation} The first integral can be rewritten as \begin{eqnarray} \nonumber I_1(t)&=&\frac{1}{4\pi^2} \int_m^\infty\frac{d\omega\sqrt{\omega^2-m^2}} {\omega^2}\cos(2\omega t)\\&=& \frac{1}{4\pi^2} \int _m^\infty d\omega \left(\frac{1}{\sqrt{\omega^2-m^2}} \cos(2\omega t) - \frac{m^2}{\omega^2\sqrt{\omega^2-m^2}} \cos(2\omega t)\right)\; . \end{eqnarray} The integral over the second term is nonsingular; the first term yields a Bessel function $Y_0(2 m t)$, explicitly \begin{equation} I_1(t) = -\frac{1}{8\pi}Y_0(2 m t) + \mbox{regular terms} \stackrel{t\to 0}{\simeq} -\frac{1}{4\pi^2}\ln(2mt)\; . \end{equation} The integral $I_2(t)$ is simply given by \begin{equation} I_2(t)=-\frac{1}{2}\frac{d}{dt}I_1(t) \; , \end{equation} and therefore \begin{equation} I_2(t)\stackrel{t\to 0}{\simeq} \frac{1}{8\pi^2 t}\; . \end{equation} \end{appendix}
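As a quick numerical sanity check of these asymptotics (an illustration added here, not part of the original analysis; the cutoff $w_{\max}=200/t$ is an ad hoc assumption exploiting the self-averaging of the truncated oscillatory tail):
\begin{verbatim}
import numpy as np

def I1(t, m=1.0):
    # I_1(t) = (1/4 pi^2) int_m^inf dw sqrt(w^2-m^2)/w^2 cos(2 w t),
    # truncated at w_max = 200/t; trapezoidal rule on a uniform grid.
    w = np.linspace(m, 200.0 / t, 400001)
    f = np.sqrt(w**2 - m**2) / w**2 * np.cos(2.0 * w * t)
    dw = w[1] - w[0]
    return dw * (f.sum() - 0.5 * (f[0] + f[-1])) / (4 * np.pi**2)

for t in (0.05, 0.01, 0.002):
    # The difference from -ln(2 m t)/(4 pi^2) should approach a small
    # t-independent constant as t -> 0.
    print(t, I1(t), -np.log(2.0 * t) / (4 * np.pi**2))
\end{verbatim}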
\section{Introduction}\label{Introduction} Algebraic theory of regular languages is a well\=/established field in the theory of formal languages. The basic ambition of this theory is to obtain effective characterizations of various natural classes of regular languages. The first examples of significant classes of languages which were effectively characterized by properties of syntactic monoids were the star\=/free languages by Sch{\"u}tzenberger~\cite{schutz} and the piecewise testable languages by Simon~\cite{simon-pw}. A~general framework for discovering relationships between properties of regular languages and properties of monoids was provided by Eilenberg~\cite{eilenberg}, who established a one\=/to\=/one correspondence between the so\=/called {\em varieties} of regular languages and \emph{pseudovarieties} of finite monoids. Here varieties of languages are classes closed under taking quotients, preimages under homomorphisms and Boolean operations. Thus the membership problem for a given variety of regular languages can be translated to the membership problem for the corresponding pseudovariety of finite monoids. An advantage of this approach is that pseudovarieties of monoids are exactly the classes of finite monoids which have an equational description by pseudoidentities -- see Reiterman~\cite{reiterman}. For a thorough introduction to that theory we refer to the surveys by Pin~\cite{pin-handbook} and by Straubing and Weil~\cite{straubing-weil-handbook}. Since not every natural class of languages is closed under all the mentioned operations, various generalizations of the notion of varieties of languages have been studied. One possible generalization is the notion of {\em positive varieties} of languages introduced by Pin~\cite{pin-positive} -- these classes need not be closed under complementation. Their equational characterization was given by Pin and Weil~\cite{pin-weil}. Another possibility is to weaken the closure property concerning preimages under homomorphisms -- only homomorphisms from a certain fixed class $\mathcal C$ are used. In this way, one can consider $\mathcal C$\=/varieties of regular languages, which were introduced by Straubing~\cite{straubing} and whose equational description was presented by Kunc~\cite{michal}. These two generalizations can be combined, as suggested by Pin and Straubing~\cite{pin-straubing}. In our contribution we do not use syntactic structures at all. We consider classes of automata as another natural counterpart to classes of regular languages. In fact, we deal with classes of semiautomata, which are exactly automata without a specification of initial and final states. Characterizing classes of languages by properties of minimal automata is quite natural, since one usually assumes that the input of a membership problem for a fixed class of languages is given exactly by the minimal deterministic automaton. For example, if we want to test whether an input language is piecewise testable, we do not need to compute its syntactic monoid, which could be quite large (see Brzozowski and Li~\cite{broz-j-triv}). Instead, we check a condition which must be satisfied by its minimal automaton and which was also established in~\cite{simon-pw}. This characterization was used in~\cite{stern} and~\cite{trahtman} to obtain polynomial and quadratic algorithms, respectively, for testing piecewise testability. In~\cite{dlt13-klima}, Simon's condition was reformulated and the so\=/called {\em confluent acyclic (semi)automata} were defined.
In this setting, this characterization can be viewed as an instance of an Eilenberg\=/type theorem relating varieties of languages and varieties of semiautomata. Moreover, each minimal automaton is implicitly equipped with an order in which the final states form an upward closed subset. This leads to the notion of ordered automata. Then positive $\mathcal C$\=/varieties of ordered semiautomata can be defined as classes closed under certain natural operations. We recall here the general Eilenberg\=/type theorem, namely Theorem~\ref{t:eilenberg-ordered}, which states that positive $\mathcal C$\=/varieties of ordered semiautomata correspond to positive $\mathcal C$\=/varieties of languages. Summarizing, there are three worlds:\\ (L) classes of regular languages,\\ (S) classes of finite monoids, sometimes enriched by an additional structure like ordered monoids, monoids with distinguished generators, etc.,\\ (A) classes of semiautomata, sometimes ordered semiautomata, etc. Most variants of the Eilenberg correspondence relate (L) and (S); the relationship between (A) and (S) was studied by Chaubard et al.~\cite{pin-str}, and the transitions between (L) and (A) were initiated by {\'E}sik and Ito~\cite{esik-ito}. Here we continue this last approach to establish Theorem~\ref{t:eilenberg-ordered}. In fact, this result is a combination of Theorem 5.1 of \cite{pin-straubing} (where only some hints towards a possible proof are given) and the main result of \cite{pin-str} relating worlds (S) and (A). In contrast, in the present paper one can find a self\=/contained proof which does not go through classes of monoids. The paper is structured as follows. In Sections 2 and 3 we recall the basic notions. In Sections 4 and 5 we study ordered semiautomata and some natural algebraic constructions on them. The next section is devoted to a detailed proof of Theorem~\ref{t:eilenberg-ordered}. Section 7 explains how the unordered variant of this result can be obtained. Section~8 presents several instances of Theorem~\ref{t:eilenberg-ordered} and Section~9 discusses the membership problem for $\mathcal C$\=/varieties of semiautomata given by a certain type of pseudoidentities. This paper is a technical report which precedes the short conference paper~\cite{kp-lata} on the topic. The final authenticated publication is available online at {\tt https://doi.org/10.1007/978-3-030-13435-8}. \section{Positive $\mathcal C$\=/Varieties of Languages} First of all, we recall basic definitions. Let $A^*$ be the set of all words over a finite alphabet $A$. We denote by $\lambda$ the empty word. The set $A^*$ equipped with the operation of concatenation forms a free monoid over $A$, with $\lambda$ being the neutral element. A {\it language} over the alphabet $A$ is a subset of $A^*$. Note that all languages considered in the paper are regular. For a language $L\subseteq A^*$ and a pair of words $u,v\in A^*$, we denote by $u^{-1}Lv^{-1}$ the {\it quotient} of $L$ by these words, i.e. the set $u^{-1}Lv^{-1}=\{\, w\in A^* \mid uwv\in L\, \}$. In particular, a {\em left quotient} is defined by $u^{-1} L = \{\,w\in A^* \mid uw\in L\,\}$ and a {\em right} one is defined by $L v^{-1} = \{\,w\in A^* \mid wv\in L\,\}$. For the purpose of this paper, following Straubing~\cite{straubing}, a {\em category of homomorphisms} $\mathcal C$ is a category where objects are all free monoids over non-empty finite alphabets and morphisms are certain monoid homomorphisms among them.
If the sets $A$ and $B$ are clear from the context, we write briefly $f\in \mathcal C$ instead of $f\in \mathcal C(A^*,B^*)$. This ``categorical'' definition means that $\mathcal C$ satisfies the following properties: \begin{itemize} \item For each finite alphabet $A$, the identity mapping $\id_A : A^* \rightarrow A^*$ belongs to $\mathcal C$. \item If $f: B^* \rightarrow A^*$ and $g: C^* \rightarrow B^*$ belong to $\mathcal C$, then their composition $gf : C^* \rightarrow A^*$ is also in $\mathcal C$. \end{itemize} If $f: B^* \rightarrow A^*$ is a homomorphism and $L\subseteq A^*$, then by the {\em preimage} of $L$ in the homomorphism $f$ is meant the set $f^{-1}(L) =\{\, v \in B^* \mid f(v)\in L\, \}$. \begin{definition}\label{d:positive-C-varieties} Let $\mathcal C$ be a category of homomorphisms. A {\em positive $\mathcal C$\=/variety of languages} $\mathcal{V}$ associates to every non\=/empty finite alphabet $A$ a class $\mathcal{V}(A)$ of regular languages over $A$ in such a way that \begin{itemize} \item $\mathcal{V}(A)$ is closed under unions and intersections of finite families, \item $\mathcal{V}(A)$ is closed under quotients, i.e. $$L\in \mathcal {V}(A),\ u,v\in A^*\quad \text{implies} \quad u^{-1}Lv^{-1}\in\mathcal {V}(A)\, ,$$ \item $\mathcal{V}$ is closed under preimages in morphisms of $\mathcal C$, i.e. $$f:B^*\rightarrow A^*,\, f\in \mathcal C,\, L\in \mathcal {V}(A)\quad \text{implies}\quad f^{-1}(L)\in \mathcal{V}(B)\, .$$ \end{itemize} \end{definition} Note that the first condition in Definition~\ref{d:positive-C-varieties} ensures that the languages $\emptyset$ and $A^*$ belong to $\mathcal{V}(A)$ for every alphabet $A$: $\emptyset$ is the union of the empty system and $A^*$ is the intersection of the empty system. In other words, the first condition can be equivalently formulated as $\emptyset, A^*\in \mathcal{V}(A)$ and $\mathcal{V}(A)$ is closed under binary unions and intersections. In particular, all $\mathcal{V}(A)$'s are nonempty. If $\mathcal C$ consists of all homomorphisms we get exactly the notion of the positive varieties of languages. When adding ``each $\mathcal{V}(A)$ is closed under complements'', we get exactly the notion of the $\mathcal C$\=/\emph{variety} of languages. \section{The Canonical DFA} In this section we fix basic terminology concerning finite automata. First of all, note that all considered automata in the paper are deterministic, complete, finite and over finite alphabets. Moreover, we use the term {\it semiautomaton} when the initial and final states are not explicitly given. A {\em deterministic finite automaton} (DFA) over the alphabet $A$ is a five\=/tuple $\mathcal A = (Q,A,\cdot,i,F)$, where $Q$ is a non\=/empty set of {\em states}, $\cdot : Q\times A \rightarrow Q$ is a complete {\em transition function}, $i\in Q$ is the {\em initial} state and $F\subseteq Q$ is the set of {\em final} states. The transition function can be extended to a mapping $\cdot : Q\times A^* \rightarrow Q$ by $q\cdot \lambda = q,\ q\cdot (ua)= (q\cdot u)\cdot a$, for every $q\in Q,\,u\in A^*,\,a\in A$. The automaton $\mathcal A$ {\it accepts} a word $u\in A^*$ if and only if $i\cdot u\in F$ and the language {\em recognized} by the automaton $\mathcal A$ is $\nL_{\mathcal A}=\{\,u\in A^* \mid i\cdot u\in F\,\}$. More generally, for $q\in Q$, we denote $\nL_{\mathcal A,q}=\{\,u\in A^* \mid q\cdot u\in F\,\}$. For a fixed $\mathcal A$, we denote this language simply by $\nL_q$. 
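For readers who prefer code, these definitions translate directly; the following Python sketch (an illustration, not part of the formal development) implements a complete DFA with the extended transition function and the acceptance test:
\begin{verbatim}
class DFA:
    # A complete DFA (Q, A, ., i, F); delta maps (state, letter) -> state.
    def __init__(self, states, alphabet, delta, init, final):
        self.states, self.alphabet = states, alphabet
        self.delta, self.init, self.final = delta, init, final

    def step(self, q, word):
        # extended transition function: q . lambda = q, q . (ua) = (q . u) . a
        for a in word:
            q = self.delta[(q, a)]
        return q

    def accepts(self, word):
        # a word u is accepted iff i . u is a final state
        return self.step(self.init, word) in self.final

# Example: the language of words over {a, b} with an even number of a's.
even_a = DFA({0, 1}, {'a', 'b'},
             {(0, 'a'): 1, (1, 'a'): 0, (0, 'b'): 0, (1, 'b'): 1},
             init=0, final={0})
assert even_a.accepts('abba') and not even_a.accepts('ab')
\end{verbatim}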
We recall the construction of the minimal automaton of a regular language, which was introduced by Brzozowski~\cite{broz-minimal}. Since this automaton is uniquely determined and plays a central role in our paper, we use the adjective ``canonical'' for it. \begin{definition}\label{d:canonical-dfa} The {\em canonical deterministic automaton} of a regular language $L$ is $\mathcal D_L =(D_L,A,\cdot,L,F_L)$, where $D_L =\{\, u^{-1} L \mid u\in A^*\,\}$, $q\cdot a = a^{-1}q$, for each $q\in D_L$, $a\in A$, and $F_L=\{\, q\in D_L \mid \lambda\in q\,\} $. \end{definition} A part of Brzozowski's result is the correctness of the previous definition, because one needs to show that $\mathcal D_L$ is really a finite deterministic automaton. The minimality of $\mathcal D_L$ can be obtained as a consequence of the following lemma. Since the result will be modified later in the paper, the proof of the following lemma is also presented here. \begin{lemma}[\cite{broz-minimal}]\label{l:canonical-dfa} Let $L$ be a regular language with the canonical automaton $\mathcal D_L$ and let $\mathcal A=(Q,A,\cdot,i,F)$ be an arbitrary DFA with $\nL_{\mathcal A}=L$. Then the following holds: (i) For each $q\in D_L$, we have that $\nL_{\mathcal D_L, q}=q$. (ii) For each $u\in A^*$, we have that $\nL_{\mathcal A, i\cdot u}=u^{-1}L$. (iii) The rule $\varphi : i\cdot u \mapsto u^{-1}L$, for every $u\in A^*$, correctly defines a surjective mapping from $Q'=\{\,i\cdot u\mid u\in A^*\,\}$ onto $D_L$ satisfying $\varphi((i\cdot u)\cdot a)= (\varphi(i\cdot u))\cdot a$, for every $u\in A^*$, $a\in A$. \end{lemma} \begin{proof} (i) Let $q$ be a state of $\mathcal D_L$, i.e. $q=u^{-1}L$ for some $u\in A^*$. Then $v\in \nL_{\mathcal D_L, q}$ if and only if $\lambda\in (u^{-1}L)\cdot v=(uv)^{-1}L$, which is equivalent to $uv\in L$ and also to $v\in u^{-1}L=q$. (ii) Let $u\in A^*$. Then, for every $v\in A^*$, we have the following chain of equivalent formulas: $$v\in\nL_{\mathcal A, i\cdot u},\ (i\cdot u)\cdot v\in F,\ \ uv\in L, \ \text{and} \ \ v\in u^{-1}L\, .$$ (iii) The correctness of the definition of $\varphi$ follows from (ii) and the surjectivity of $\varphi$ is clear. Moreover, $\varphi((i\cdot u)\cdot a)= \varphi(i\cdot ua)=(ua)^{-1}L =u^{-1}L\cdot a=\varphi(i\cdot u)\cdot a$. \qed\end{proof} \section{Ordered Automata}\label{sec:ordered-automata} First, we recall some basic terminology from the theory of ordered sets. By an {\it ordered set} we mean a set $M$ equipped with an {\it order} $\le$, i.e. a reflexive, antisymmetric and transitive relation. A subset $X$ is called {\it upward closed} if, for every pair of elements $x,y\in M$, the following property holds: $x\le y,\, x\in X$ implies $y\in X$. For every subset $X$, we denote by $\cl X$ the smallest upward closed subset containing the subset $X$, i.e. $\cl X=\{\,m \in M \mid \exists\, x\in X : x\le m\,\}$. In particular, for $x\in M$, we write $\cl x$ instead of $\cl\{x\}$. A mapping $f: M \rightarrow N$ between two ordered sets $(M,\le)$ and $(N,\le)$ is called {\it isotone} if, for every pair of elements $x,y\in M$, we have that $x\le y$ implies $f(x)\le f(y)$. States of the canonical automaton $\mathcal D_L$ are languages, and therefore they are ordered naturally by the set\=/theoretical inclusion. The action by each letter $a\in A$ is an isotone mapping: for each pair of states $p,q$ such that $p\subseteq q$, we have $p\cdot a= a^{-1}p \subseteq a^{-1}q=q\cdot a $.
Moreover, the set $F_L$ of all final states is an upward closed subset with respect to $\subseteq$. These observations motivate the following definition. \begin{definition}\label{d:ordered-automaton} An {\em ordered automaton} over the alphabet $A$ is a six\=/tuple $\mathcal O = (Q,A,\cdot,\le,i,F)$, where \begin{itemize} \item $(Q,A,\cdot,i,F)$ is a usual DFA; \item $\le $ is an order on the set $Q$; \item the action by every letter $a\in A$ is an isotone mapping from the ordered set $(Q,\le)$ to itself; \item $F$ is an upward closed subset of $Q$ with respect to $\le$. \end{itemize} \end{definition} The definitions of acceptance and recognition are the same as in the case of DFAs. Since a composition of isotone mappings is isotone, it follows from Definition~\ref{d:ordered-automaton} that the action by every word $u\in A^*$ is an isotone mapping from the ordered set of states into itself. Moreover, the ordered semiautomaton $\mathcal Q=(Q,A,\cdot,\leq)$ {\it accepts} the language $L\subseteq A^*$ if we can complete $\mathcal Q$ to an ordered automaton $\mathcal O = (Q,A,\cdot,\le,i,F)$ such that $L=\nL_{\mathcal O}$. The following result states that Brzozowski's construction gives the minimal ordered automaton $\mathcal O_L =(D_L,A,\cdot,\subseteq,L,F_L)$. \begin{lemma}\label{l:canonical-odfa} Let $\mathcal O = (Q,A,\cdot,\le,i,F)$ be an ordered automaton recognizing the language $L=\nL_{\mathcal O}$. Then, for states $p\le q$, we have $\nL_p\subseteq \nL_q$. Moreover, the mapping $\varphi$ from Lemma~\ref{l:canonical-dfa} is an isotone mapping onto the canonical ordered automaton $\mathcal O_L =(D_L,A,\cdot,\subseteq,L,F_L)$. \end{lemma} \begin{proof} Let $p\le q$ hold in the given ordered automaton $\mathcal O$. If $w\in\nL_p$, then $p\cdot w\in F$. Now $p\cdot w \le q\cdot w$ implies $q\cdot w \in F$ and therefore $w\in \nL_q$. Moreover, having $u,v\in A^*$ such that $p=i\cdot u,\,q=i\cdot v$, we get $\nL_{i\cdot u}\subseteq \nL_{i\cdot v}$. By Lemma~\ref{l:canonical-dfa}, it means that $u^{-1}L\subseteq v^{-1}L$, i.e. $\varphi(i\cdot u) \subseteq \varphi(i\cdot v).$ \qed\end{proof} When relating languages with algebraic structures (not our task here), the following property of the minimal/canonical ordered automaton is crucial. \begin{lemma}[Pin~{\cite[Section 3]{pin-handbook}}] The transition monoid of the minimal automaton of a regular language $L$ is isomorphic to the syntactic monoid of~$L$. Similarly, the ordered transition monoid of the minimal ordered automaton of $L$ is isomorphic to the syntactic ordered monoid of $L$. \end{lemma} The next lemma clarifies how the quotients of a language can be obtained by changing the initial and the final states appropriately. \begin{lemma}\label{l:quotients} Let $\mathcal O=(Q,A,\cdot,\leq,i,F)$ be an ordered automaton recognizing the language $L=\nL_{\mathcal O}$. Let $u,v\in A^*$. Then (i) $u^{-1}L=\nL_{\mathcal B}$ where $\mathcal B=(Q,A,\cdot,\leq,i\cdot u,F)$, (ii) $Lv^{-1}=\nL_{\mathcal C}$ where $\mathcal C=(Q,A,\cdot,\leq,i,F_v) \text{ and } F_v=\{\,q\in Q\mid q\cdot v\in F\,\}$. \end{lemma} \begin{proof} (i) It follows from Lemma~\ref{l:canonical-dfa} (ii). (ii) To show that $\mathcal C$ is an ordered automaton, we need to prove that $F_v$ is upward closed. Let $p\in F_v$ and $p\le q\in Q$. From $p\in F_v$ we have $p\cdot v\in F$ and from $p\le q$ we obtain $p\cdot v\le q\cdot v$. Since $F$ is upward closed we get $q\cdot v\in F$, which implies $q\in F_v$.
Now, for every $w\in A^*$, the following is a chain of equivalent statements: $$w\in Lv^{-1},\ wv\in L,\ (i\cdot w)\cdot v\in F,\ i\cdot w\in F_v,\ \text{and}\ w \in \nL_{\mathcal C}\,.$$ Thus we have proved the equality $Lv^{-1}=\nL_{\mathcal C}$. \qed\end{proof} The next result characterizes languages which are recognized by changing the final states in the canonical ordered automaton. \begin{lemma}\label{l:quotients-in-canonical} Let $H\subseteq D_L$ be upward closed. Then $(D_L,A,\cdot,\subseteq, L,H)$ recognizes a language which can be expressed as a finite union of finite intersections of languages of the form $Lv^{-1}$. \end{lemma} \begin{proof} For an arbitrary upward closed subset $X\subseteq D_L$, we define $\Past(X)=\{\,w\in A^* \mid L \cdot w \in X\,\}$. Since $\Past(X)=\bigcup_{p\in X}\Past(\cl p)$, it is enough to prove that, for each $p\in D_L$, the set $\Past(\cl p)$ can be expressed as a finite intersection of right quotients of the language $L$. Let $p$ be an arbitrary state in $D_L$. For each $q\in D_L$ such that $p\not \subseteq q$, we have $p=\nL_p\not\subseteq \nL_q=q$. This means that there is $v_q\in A^*$ with the property $v_q \in \nL_p \setminus \nL_q$. Equivalently, $p\cdot v_q\in F_L$ and $q\cdot v_q\not\in F_L$. Now if $w\in \Past(\cl p)$ then $L\cdot w \supseteq p$. Therefore $(L \cdot w) \cdot v_q \supseteq p\cdot v_q\in F_L$ and, since $F_L$ is upward closed, $(L\cdot w)\cdot v_q\in F_L$, from which we get $wv_q \in L$, i.e. $w\in L v_q^{-1}$. We have shown that $\Past(\cl p) \subseteq L v_q^{-1}$. Now we claim that $$\Past(\cl p)=\bigcap_{p\not\subseteq q} L v_q^{-1} \, .$$ We already saw the ``$\subseteq$''\=/part. To prove the opposite inclusion, let $w$ be an arbitrary word from $\bigcap_{p\not\subseteq q} L v_q^{-1}$. Fixing $q$ for a moment, we see that $wv_q\in L$, i.e. $L \cdot wv_q \in F_L$. In the case of $q=L\cdot w$ we would have $q\cdot v_q=(L \cdot w)\cdot v_q\in F_L$ -- a contradiction. Hence $L\cdot w\not= q$, and this holds for each $q\not\supseteq p$. Therefore $L\cdot w\supseteq p$ and we deduce that $w\in \Past(\cl p)$. \qed\end{proof} A natural question is how the minimal ordered (semi)automaton can be computed from a given automaton. \begin{proposition} There exists an algorithm which computes, for a given automaton $\mathcal A=(Q,A,\cdot, i, F)$, the minimal ordered automaton of the language $\nL_{\mathcal A}$. \end{proposition} \begin{proof} Our construction is based on Hopcroft's minimization algorithm for DFAs. We may assume that all states of $\mathcal A$ are reachable from the initial state $i$. Let $R=(Q\times F) \cup ((Q\setminus F) \times Q)$. Then we construct the relation $\overline R$ from $R$ by removing unsuitable pairs of states step by step. First, we put $R_1=R$. Then for each integer $k$, if we find $(p,q)\in R_k$ and a letter $a\in A$ such that $(p\cdot a,q\cdot a) \not\in R_k$, then we remove $(p,q)$ from the current relation $R_k$, that is, we put $R_{k+1}=R_k \setminus \{(p,q)\}$. This construction stops after, say, $m$ steps. So, $R_{m+1}=\overline R$ satisfies $(p,q)\in \overline R \implies (p\cdot a,q\cdot a) \in \overline R$, for every $p,q\in Q$ and $a\in A$. Now we observe that $(p,q)\in \overline R$ if and only if, for every $u\in A^*$, $(p\cdot u, q\cdot u) \in R$. The condition can be equivalently written as \begin{equation} \label{eq:hopcroft} (p,q)\in \overline R \ \text{ if and only if } \ (\ \forall\, u\in A^* \ :\ p\cdot u \in F\ \implies\ q\cdot u \in F\ )\, .
\end{equation} It follows that $\overline R$ is a quasiorder on $Q$ and we can consider the corresponding equivalence relation $\rho=\{\,(p,q) \mid (p,q)\in \overline R ,\ (q,p) \in \overline R\,\}$ on the set $Q$. Then the quotient set $Q/\rho =\{\,[q]_\rho \mid q\in Q\,\}$ has the structure of an automaton: the rule $[q]_\rho\cdot_\rho a =[q\cdot a]_\rho$, for each $q\in Q$ and $a\in A$, correctly defines the actions by letters, by~(\ref{eq:hopcroft}). Furthermore, the relation $\le$ on $Q/\rho$ defined by the rule $[p]_\rho \le [q]_\rho$ if and only if $(p,q)\in \overline R$ is an order on $Q/\rho$ compatible with the actions by letters. So, $\mathcal A_\rho=(Q/\rho, A, \cdot_\rho,\le, [i]_\rho, F_\rho)$, where $F_\rho=\{\,[f]_\rho \mid f\in F\,\}$, is an ordered automaton recognizing $\nL_{\mathcal A}$. Moreover, if there are two states $[p]_\rho,[q]_\rho \in Q/\rho$ such that $\nL_{\mathcal A_\rho,[p]_\rho}=\nL_{\mathcal A_\rho,[q]_\rho}$, then $(p,q)\in \rho$. Thus, the ordered automaton $\mathcal A_\rho$ is isomorphic to the minimal ordered automaton of the language $\nL_{\mathcal A}$. \qed\end{proof} Note also that the classical power\=/set construction makes from a nondeterministic automaton an ordered deterministic automaton, the order being set\=/theoretical inclusion. Thus, for the purpose of constructing the minimal ordered automaton, one may also use Brzozowski's minimization algorithm, which applies the power\=/set construction to the reverse of the given language. \section{Algebraic Constructions on Ordered Semiautomata} \label{sec:constructions} To get an Eilenberg correspondence between classes of languages and classes of semiautomata, we need an appropriate definition of a variety of semiautomata. This notion will be given in terms of closure properties with respect to certain constructions on semiautomata. Positive $\mathcal C$\=/varieties of languages are closed under quotients; therefore, by Lemma~\ref{l:quotients}, the choice of the initial state and the final states in ordered automata can be left free. If an ordered automaton $\mathcal O=(Q,A,\cdot,\le, i, F)$ is given, then we denote by $\ol{\mathcal O}$ the corresponding ordered semiautomaton $(Q,A,\cdot,\le)$. In particular, for the canonical ordered automaton $\mathcal{D}_L=(D_L,A,\cdot,\subseteq,L,F_L)$ of the language $L$, we have $\ol{\mathcal{D}_L}=(D_L,A,\cdot,\subseteq)$. Since positive $\mathcal C$\=/varieties of languages are closed under taking finite unions and intersections, we include closure under direct products of ordered semiautomata. \begin{definition}\label{d:product} Let $n\ge 1$ be a natural number. Let $\mathcal O_j=(Q_j,A,\cdot_j,\le_j)$ be an ordered semiautomaton for $j=1,\dots,n$. We define the ordered semiautomaton $\mathcal O_1\times\dots\times \mathcal O_n= (Q_1\times\dots\times Q_n,A,\cdot,\le)$ as follows: \begin{itemize} \item for each $a\in A$, we put $(q_1,\dots,q_n) \cdot a = (q_1\cdot_1 a,\dots,q_n\cdot_n a)$ and \item we have $(p_1,\dots,p_n) \le (q_1,\dots,q_n)$ if and only if, for each $j=1,\dots,n$, the inequality $p_j\le_j q_j$ is valid. \end{itemize} The ordered semiautomaton $\mathcal O_1\times\dots\times \mathcal O_n$ is called a {\em product} of the ordered semiautomata $\mathcal O_1,\dots,\mathcal O_n$. \end{definition} We would like to know which languages are recognized by a product of ordered semiautomata.
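Before answering this question in the lemma below, we illustrate Definition~\ref{d:product} concretely. The following sketch reuses the dictionary\=/based Python encoding from our earlier illustration; the representation of an ordered semiautomaton as a triple \texttt{(states, delta, leq)} is our own choice and not part of the formal development.
\begin{verbatim}
from itertools import product

def product_osa(osa1, osa2, alphabet):
    # an ordered semiautomaton is encoded as (states, delta, leq),
    # where delta maps (state, letter) to a state and leq is the
    # set of pairs (p, q) with p <= q
    (Q1, d1, leq1), (Q2, d2, leq2) = osa1, osa2
    Q = list(product(Q1, Q2))
    # componentwise action: (q1, q2) . a = (q1 .1 a, q2 .2 a)
    delta = {((q1, q2), a): (d1[(q1, a)], d2[(q2, a)])
             for (q1, q2) in Q for a in alphabet}
    # componentwise order
    leq = {((p1, p2), (q1, q2))
           for (p1, p2) in Q for (q1, q2) in Q
           if (p1, q1) in leq1 and (p2, q2) in leq2}
    return Q, delta, leq
\end{verbatim}
In the notation of the lemma below, recognizing $L_1\cap L_2$ then amounts to choosing the initial state $(i_1,i_2)$ and the set of final states $F_1\times F_2$, which is upward closed in the componentwise order.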
\begin{lemma} \label{l:product-arbitrary-subset} Let the ordered semiautomaton $\mathcal O$ be the product of the ordered semiautomata $\mathcal O_1,\dots,\mathcal O_n$. Then the following holds: (i) If, for each $j=1,\dots,n$, the language $L_j$ is recognized by $\mathcal O_j$, then both $L_1\cap\dots\cap L_n$ and $L_1\cup\dots\cup L_n$ are recognized by $\mathcal O$. (ii) If the language $L$ is recognized by $\mathcal O$, then $L$ is a finite union of finite intersections of languages recognized by $\mathcal O_1,\dots,\mathcal O_n$. \end{lemma} \begin{proof} Let $\mathcal O_j=(Q_j,A,\cdot_j,\le_j),\ j=1,\dots,n$. Denote $Q=Q_1\times\dots\times Q_n$ and $\mathcal O= (Q,A,\cdot,\le)$. (i) Let $F_1, \dots , F_n$ be the sets of final states used for the recognition of the languages $L_1, \dots , L_n$. Put $F=F_1\times \dots \times F_n$ for the intersection $L_1\cap\dots\cap L_n$ and $$F=\{\, (q_1,\dots,q_n) \mid \text{ there exists } j\in\{1,\dots ,n\} \text{ such that } q_j\in F_j\, \}$$ for the union $L_1\cup\dots\cup L_n$. It is not hard to see that, in both cases, $F$ is indeed an upward closed subset. (ii) Let $L$ be a language recognized by $(Q,A,\cdot,\le)$, i.e. let $F$ be an upward closed subset of $Q$ and $i\in Q$ be such that $L$ is recognized by $(Q,A,\cdot,\le,i,F)$. Since $F= \bigcup_{p\in F} \cl{p}$, we see that $L=\bigcup_{p\in F} L_p$, where $L_p$ is recognized by the ordered automaton $(Q,A,\cdot,\le,i,\cl{p})$. Furthermore, for such $p$, we have $p=(p_1,\dots ,p_n)$ and we can write $\cl{p}=\cl{p_1}\times \dots \times \cl{p_n}$. Let $i=(i_1,\dots , i_n)$ and let $L_{(p,j)}$ be the language recognized by the ordered automaton $(Q_j,A,\cdot_j,\le_j,i_j,\cl{p_j})$. Then one can check that $L_p=L_{(p,1)}\cap \dots \cap L_{(p,n)}$. \qed\end{proof} Also the following construction is useful. \begin{definition}\label{d:disjoint-union} Let $I=\{1,\dots , n\}$ be a non\=/empty finite set and, for each $j\in I$, let $\mathcal Q_j=(Q_j,A,\cdot_j,\le_j)$ be an ordered semiautomaton. We define the {\em disjoint union} $\mathcal Q=(Q,A,\cdot,\le)$ of the ordered semiautomata $\mathcal Q_1, \dots ,\mathcal Q_n$ in the following way: \begin{itemize} \item $Q=\{\, (q,j) \mid j\in I, q\in Q_j\, \}$, \item for each $a\in A$ and $(q,j) \in Q$, we put $ (q,j) \cdot a = (q \cdot_j a,j)$ and \item we put $(q,j) \le (p,k)$ if and only if $j=k$ and $q\le_j p$. \end{itemize} \end{definition} Clearly, $L$ is recognized by a disjoint union of ordered semiautomata if and only if it is recognized by one of them. A further useful notion is a homomorphism of ordered semiautomata. \begin{definition}\label{d:morphisms} Let $(Q,A,\cdot,\le)$ and $(P,A,\circ,\preceq)$ be ordered semiautomata and $\varphi :Q\rightarrow P$ be a mapping. Then $\varphi$ is called a {\em homomorphism} of ordered semiautomata if it is isotone and $\varphi(q\cdot a)=\varphi (q)\circ a$ for all $a\in A$, $q\in Q$. If there exists a surjective homomorphism of ordered semiautomata from $(Q,A,\cdot,\le)$ to $(P,A,\circ,\preceq)$, then we say that $(P,A,\circ,\preceq)$ is a {\em homomorphic image} of $(Q,A,\cdot,\le)$. We say that $\varphi$ is {\em backward order preserving} if, for every $p,q\in Q$, the inequality $\varphi(p)\preceq\varphi(q)$ implies $p\le q$. If the homomorphism $\varphi$ is surjective and backward order preserving, then we say that $(Q,A,\cdot,\le)$ is {\em isomorphic} to $(P,A,\circ,\preceq)$. \end{definition} In what follows, we often write simply $(P,A,\cdot,\le)$ instead of $(P,A,\circ,\preceq)$.
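The defining conditions of Definition~\ref{d:morphisms} are finitary and can be tested directly; a small sketch in the same illustrative encoding (the mapping $\varphi$ is represented by a dictionary, and all names are ours):
\begin{verbatim}
def is_homomorphism(phi, osa_src, osa_dst, alphabet):
    (Q, d, leq), (P, e, preceq) = osa_src, osa_dst
    # isotone: p <= q implies phi(p) preceq phi(q)
    isotone = all((phi[p], phi[q]) in preceq for (p, q) in leq)
    # compatible with the actions: phi(q . a) = phi(q) o a
    compatible = all(phi[d[(q, a)]] == e[(phi[q], a)]
                     for q in Q for a in alphabet)
    return isotone and compatible

def is_backward_order_preserving(phi, osa_src, osa_dst):
    (Q, _, leq), (_, _, preceq) = osa_src, osa_dst
    # phi(p) preceq phi(q) must imply p <= q
    return all((p, q) in leq for p in Q for q in Q
               if (phi[p], phi[q]) in preceq)
\end{verbatim}
A surjective $\varphi$ passing both tests witnesses an isomorphism in the sense of the definition above.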
Note that every backward order preserving mapping is injective. In the setting of the previous definition, one can prove, by induction with respect to the length of words, that for an arbitrary homomorphism $\varphi$ of semiautomata the equality $\varphi(q\cdot u)=\varphi(q) \circ u$ holds for every state $q\in Q$ and every word $u\in A^*$. \begin{lemma}\label{l:morphic-image-odfa} Let an ordered semiautomaton $(P,A,\cdot,\le)$ be a homomorphic image of an ordered semiautomaton $(Q,A,\cdot,\le)$ and $L$ be recognized by $(P,A,\cdot,\le)$. Then $L$ is also recognized by $(Q,A,\cdot,\le)$. \end{lemma} \begin{proof} If $L$ is recognized by an ordered automaton $\mathcal P=(P,A,\cdot,\le,i,F)$, with $F$ being an upward closed subset, and $\varphi$ is a surjective homomorphism of a semiautomaton $(Q,A,\cdot,\le)$ onto the semiautomaton $\ol{\mathcal P}=(P,A,\cdot,\le)$, then we can choose some $i'\in Q$ such that $\varphi(i')=i$ and we can consider $F'=\{\, q\in Q \mid \varphi(q)\in F\,\}$. Now $F'$ is an upward closed subset in $(Q,\le)$, because $\varphi$ is an isotone mapping and $F$ is upward closed. Moreover, for an arbitrary $u\in A^*$, the following is a chain of equivalent statements: $$i'\cdot u \in F',\ \varphi(i'\cdot u)\in F, \ \varphi(i')\cdot u \in F,\ i\cdot u\in F,\ \text{and}\ u\in L\, .$$ The statement of the lemma follows. \qed\end{proof} \begin{definition}\label{d:trivial-odfa} An ordered semiautomaton $(Q,A,\cdot,\le)$ is {\em trivial} if $q\cdot a=q$ for all $q\in Q$ and $a\in A$, and $\le$ is the equality relation on $Q$. In particular, for a natural number $n$, we define the ordered semiautomaton $\mathcal{T}_n(A)=(I_n, A,\cdot,=)$, where $I_n=\{1,\dots ,n\}$ and the transition function $\cdot$ is defined by the rule $j\cdot a=j$ for all $j\in I_n$ and $a\in A$. \end{definition} It follows directly from the definition that every trivial ordered semiautomaton is isomorphic to some $\mathcal{T}_n(A)$. \begin{lemma}\label{l:disjoint-via-product} The disjoint union of ordered semiautomata $\mathcal Q_j=(Q_j,A,\cdot_j,\le_j)$, $j\in \{1,\dots ,n\}$, is a homomorphic image of the product $\mathcal Q_1 \times \dots \times \mathcal Q_n\times \mathcal{T}_n(A)$. \end{lemma} \begin{proof} Clearly, the mapping defined by the rule $$\varphi: (q_1,\dots,q_n,j) \mapsto (q_j,j),\ \text{for every}\ q_1\in Q_1,\dots,q_n\in Q_n,\, j\in \{1,\dots ,n\}\, ,$$ is a surjective homomorphism of the considered semiautomata. \qed\end{proof} \begin{definition}\label{d:subautomaton} Let $(Q,A,\cdot,\le)$ be an ordered semiautomaton and $P\subseteq Q$ be a non\=/empty subset. If $p\cdot a \in P$ for every $p\in P$, $a\in A$, then $(P,A,\cdot,\le)$ is called a {\em subsemiautomaton} of $(Q,A,\cdot,\le)$. \end{definition} In the previous definition, the transition function and the order on $P$ are restrictions of the corresponding data on the set $Q$ and so they are denoted by the same symbols. Using the notions of a subsemiautomaton and a homomorphic image, we can formulate the minimality of the canonical ordered semiautomaton in a more precise way. \begin{lemma}\label{l:minimality-canonical-dfa} Let $(Q,A,\cdot,\le)$ be an ordered semiautomaton recognizing the language $L$. Then the canonical ordered semiautomaton $\ol{\mathcal{D}_L}$ is a homomorphic image of some subsemiautomaton of $(Q,A,\cdot,\le)$. \end{lemma} \begin{proof} Let $L$ be recognized by the ordered automaton $\mathcal A=(Q,A,\cdot,\le,i,F)$.
It is easy to see that the subset $Q'=\{\,i\cdot u \mid u\in A^*\,\}$ constructed in Lemma~\ref{l:canonical-dfa} forms a subsemiautomaton of $(Q,A,\cdot,\le)$. Furthermore, we defined there the mapping $\varphi : Q'\rightarrow D_L$ by the rule $\varphi(q) =u^{-1}L$, where $q=i\cdot u$. This mapping $\varphi$ is a surjective homomorphism of ordered semiautomata. \qed\end{proof} We say that an ordered semiautomaton $(Q,A,\cdot, \le)$ is {\it 1\=/generated} if there exists a state $i\in Q$ such that $Q=\{\,i\cdot u \mid u\in A^*\,\}$. \begin{lemma}\label{l:reconstruction-product} Let $(Q,A,\cdot, \le)$ be a 1\=/generated ordered semiautomaton. Then this semiautomaton is isomorphic to a subsemiautomaton of a product of the canonical ordered semiautomata of languages recognized by the ordered semiautomaton $(Q,A,\cdot, \le)$. \end{lemma} \begin{proof} Let $\mathcal A=(Q,A,\cdot, \le)$ be a 1\=/generated ordered semiautomaton, i.e. $Q=\{\,i\cdot u \mid u\in A^*\,\}$ for some $i\in Q$. For each $q\in Q$, we consider the ordered automaton $\mathcal{A}_q=(Q,A,\cdot,\le,i,\cl q)$. This automaton recognizes the language $\nL_{\mathcal{A}_q}$, which we denote by $L(q)$. Using Lemma~\ref{l:minimality-canonical-dfa}, there is a surjective homomorphism $\varphi_q : \mathcal A \rightarrow \ol{\mathcal D_{L(q)}}$ of ordered semiautomata, because $\mathcal A$ is 1\=/generated and thus $Q'=Q$ here. Assume that $\mathcal A$ has $n$ states and denote them by $q_1,\dots ,q_n$. We can consider the product of the canonical ordered semiautomata $\ol{\mathcal{D}_{L(q_k)}}=(D_{L(q_k)},A,\cdot_{q_k},\subseteq_{q_k})$ for all $k=1,\dots ,n$, i.e. $\ol{\mathcal{D}_{L(q_1)}} \times \dots \times \ol{\mathcal{D}_{L(q_n)}}$. Moreover, since we have the mapping $\varphi_q$ for each $q\in Q$, we can define a mapping $\varphi : Q \rightarrow {D}_{L(q_1)}\times \dots \times {D}_{L(q_n)}$ by the rule $\varphi(p)=(\varphi_{q_1}(p), \dots ,\varphi_{q_n}(p) )$, for $p\in Q$. To prove the statement of the lemma it is enough to show that this mapping $\varphi$ is a backward order preserving homomorphism of ordered semiautomata. Let $p\le q $ hold in $Q$. For each $r\in Q$, the homomorphism $\varphi_r$ is isotone and hence $\varphi_r(p)\subseteq_r\varphi_r(q)$. Thus we get $\varphi(p)\le\varphi(q)$ and we see that $\varphi$ is an isotone mapping. In a similar way one can check that $\varphi(p\cdot a)=\varphi(p) \cdot a$ for every $p\in Q$ and $a\in A$. These facts mean that $\varphi$ is a homomorphism of ordered semiautomata. Now assume that $p$ and $q$ are two states in $Q$ such that $\varphi(p)\le \varphi(q)$. Then we have $\varphi_p(p) \subseteq_p \varphi_p(q)$ in $\ol{\mathcal{D}_{L(p)}}$. By the definition of the mapping $\varphi_p$ given in Lemma~\ref{l:canonical-dfa}, for each $r\in Q$, we have $\varphi_p(r)= \nL_{\mathcal A_p, r}$. In particular, we can write $\nL_{\mathcal A_p, p}\subseteq \nL_{\mathcal A_p, q}$. Since $p$ is a final state in $\mathcal A_p$, we have $\lambda\in \nL_{\mathcal A_p, p}$ and therefore $\lambda\in \nL_{\mathcal A_p, q}$, which means that $q$ is a final state in $\mathcal{A}_p$ too. In other words, $q\in \cl p$, i.e. $q\ge p$. Thus the mapping $\varphi$ is backward order preserving. \qed\end{proof} \begin{lemma}\label{l:reconstruction-union} Let $\mathcal A=(Q,A,\cdot, \le)$ be an ordered semiautomaton. Then the semiautomaton $\mathcal A$ is a homomorphic image of a disjoint union of its 1\=/generated subsemiautomata.
\end{lemma} \begin{proof} For every $q\in Q$, we consider the subset of $Q$ given by $Q_q=\{\, q\cdot u \mid u\in A^*\,\}$ consisting of all states reachable from $q$. Clearly, $Q_q$ forms a 1\=/generated subsemiautomaton of $\mathcal A =(Q,A,\cdot, \le)$. We consider the disjoint union of all these semiautomata. The set of all states of this ordered semiautomaton is $P=\{\,(p,q)\mid \text{there exists } u\in A^* \text{ such that } q\cdot u =p\,\}$. We show that the mapping $\varphi :P \rightarrow Q$ given by the rule $\varphi((p,q))=p$ is a surjective homomorphism of ordered semiautomata. Indeed, for $a\in A$, we have $(p,q)\cdot a= (p\cdot a,q)$, and hence $\varphi((p,q)\cdot a)=\varphi((p\cdot a,q))=p\cdot a=\varphi((p,q))\cdot a$. Moreover, $(p,q)\le (p',q')$ in $P$ implies $q=q'$ and $p\le p'$, which means that $\varphi((p,q))\le\varphi((p',q'))$. Finally, the surjectivity follows from the fact $\{(q,q) \mid q\in Q\} \subseteq P$. \qed\end{proof} Since positive $\mathcal C$\=/varieties of languages are closed under taking preimages in morphisms from the category $\mathcal C$, we need a construction on ordered semiautomata which enables the recognition of such languages. \begin{definition}\label{d:f-subautomaton} Let $f: B^* \rightarrow A^*$ be a homomorphism and $\mathcal A=(Q,A,\cdot,\le)$ be an ordered semiautomaton. By $\mathcal A^f$ we denote the ordered semiautomaton $(Q,B,\cdot^f,\le)$ where $q\cdot^f b= q\cdot f(b)$ for every $q\in Q$ and $b\in B$. We speak about the $f${\em\=/renaming} of $\mathcal A$ and we say that $(P,B,\circ,\preceq)$ is an $f${\em\=/subsemiautomaton} of $(Q,A,\cdot,\le)$ if it is a subsemiautomaton of $\mathcal A^f$. In other words, $P\subseteq Q$, the order $\preceq$ is the restriction of $\le$, and $\circ$ is the restriction of $\cdot^f$. \end{definition} If we consider $f=\id_A : A^*\rightarrow A^*$, we can see that $(Q,A,\cdot,\le)^{\id}=(Q,A,\cdot,\le)$ and that $\id$\=/subsemiautomata of $(Q,A,\cdot,\le)$ are exactly its subsemiautomata. \begin{lemma}\label{l:f-automaton-preimage} Consider a homomorphism $ f:B^*\rightarrow A^*$ of monoids. (i) Let $L$ be a regular language which is recognized by an ordered automaton $(Q,A,\cdot,\le, i, F)$. Then the automaton $\mathcal B= (Q,B,\circ,\le, i, F)$, where $q\circ b= q\cdot f(b)$ for every $q\in Q$, $b\in B$, recognizes the language $f^{-1}(L)$. (ii) Let $\mathcal A=(Q,A,\cdot,\le)$ be an ordered semiautomaton. If $K\subseteq B^*$ is recognized by the ordered semiautomaton $\mathcal A^f$, then there exists a language $L\subseteq A^*$ recognized by $\mathcal A$ such that $K=f^{-1}(L)$. \end{lemma} \begin{proof} (i) For every $u\in B^*$, we have the following chain of equivalent formulas: $$u\in f^{-1}(L),\ f(u)\in L,\ i\cdot f(u)\in F,\ i\circ u\in F, \ \text{and} \ u\in \nL_{\mathcal B}\, .$$ (ii) If $K\subseteq B^*$ is recognized by the ordered semiautomaton $\mathcal A^f$, then there is a state $i\in Q$ and an upward closed subset $F\subseteq Q$ such that $K=\nL_{\mathcal{B}}$, where $\mathcal B=(Q,B,\cdot^f,\le,i,F)$. Now we consider $L=\nL_{\mathcal A'}$, where $\mathcal A'=(Q,A,\cdot,\le,i,F)$. The equality $K=f^{-1}(L)$ now follows from part (i). \qed\end{proof} \begin{lemma}\label{l:f-subautomaton-vs-HSP} Let $f:B^*\rightarrow A^*$ be an arbitrary homomorphism.
(i) If an ordered semiautomaton $\mathcal B=(P,A,\circ,\preceq)$ is a homomorphic image of an ordered semiautomaton $\mathcal A= (Q,A,\cdot,\le)$, then $\mathcal B^f$ is a homomorphic image of the ordered semiautomaton $\mathcal A^f$. (ii) If an ordered semiautomaton $\mathcal B= (P,A,\circ,\preceq)$ is a subsemiautomaton of an ordered semiautomaton $\mathcal A=(Q,A,\cdot,\le)$, then $\mathcal B^f$ is a subsemiautomaton of $\mathcal A^f$. (iii) If an ordered semiautomaton $\mathcal B=\mathcal A_1 \times \dots \times \mathcal A_n$ is a product of a family of ordered semiautomata $\mathcal A_1$, \dots , $\mathcal A_n$, then $\mathcal B^f=\mathcal A_1^f \times \dots \times \mathcal A_n^f$. \end{lemma} \begin{proof} (i) Let $\varphi$ be a surjective homomorphism from an ordered semiautomaton $(Q,A,\cdot,\le)$ onto $(P,A,\circ,\preceq)$. Then $\varphi$ is an isotone mapping from $(Q,\le)$ to $(P,\preceq)$. The states and the order in the semiautomaton $(Q,A,\cdot,\le)^f$ resp. $(P,A,\circ,\preceq)^f$ are unchanged and hence $\varphi$ is an isotone mapping from the ordered semiautomaton $(Q,A,\cdot,\le)^f$ onto $(P,A,\circ,\preceq)^f$. Now let $b\in B$ be an arbitrary letter and $q\in Q$ be an arbitrary state. Then $\varphi(q \cdot^f b)=\varphi(q\cdot f(b))=\varphi(q) \circ f(b)= \varphi(q)\circ^f b$. Therefore $\varphi$ is a surjective homomorphism of ordered semiautomata. (ii) This is clear. (iii) Let $\mathcal A_j=(Q_j,A,\cdot_j,\le_j)$ be an ordered semiautomaton for every $j=1,\dots , n$. Let $(P,A,\circ,\preceq)$ be the product $\mathcal A_1\times \dots \times \mathcal A_n$ and $(R,B,\diamond,\sqsubseteq)$ be the product of the ordered semiautomata $\mathcal A_1 ^f\times \dots \times \mathcal A_n^f$. Directly from the definitions we have that $P=R=Q_1\times \dots \times Q_n$ and $\preceq\ =\ \sqsubseteq$. Furthermore, for an arbitrary element $(q_1,\dots ,q_n)$ from the set $P=R$, we have $$ (q_1,\dots ,q_n) \circ^f b = (q_1,\dots ,q_n) \circ f(b) = (q_1\cdot_1 f(b), \dots , q_n\cdot_n f(b)) = (q_1\cdot_1^f b, \dots ,q_n\cdot_n^f b )\, ,$$ which is equal to $(q_1,\dots ,q_n) \diamond b$. This means that the action by each letter $b$ is defined in the ordered semiautomaton $(P,A,\circ,\preceq)^f$ in the same way as in the ordered semiautomaton $(R,B,\diamond,\sqsubseteq)$. \qed\end{proof} \section{Eilenberg Type Correspondence for Positive $\mathcal C$\=/Varieties of Ordered Semiautomata} \label{s:eilenberg} \begin{definition}\label{d:variety-automata} Let $\mathcal C$ be a category of homomorphisms. A positive $\mathcal C$\=/variety of ordered semiautomata $\mathbb{V}$ associates to every non\=/empty finite alphabet $A$ a class $\mathbb{V}(A)$ of ordered semiautomata over $A$ in such a way that \begin{itemize} \item $\mathbb{V}(A)\not = \emptyset$ is closed under disjoint unions and direct products of non\=/empty finite families, and under homomorphic images, \item $\mathbb {V}$ is closed under $f$\=/subsemiautomata for all $(f: B^* \rightarrow A^*)\in \mathcal C$. \end{itemize} \end{definition} \begin{remark}\label{r:trivial-variety} We define $\mathbb{T}(A)$ as the class of all trivial ordered semiautomata over the alphabet $A$, i.e. $\mathbb{T}(A)$ contains all semiautomata $\mathcal T_n(A)$ and all their isomorphic copies.
By Lemma~\ref{l:disjoint-via-product}, the first condition in the definition of a positive $\mathcal C$\=/variety of ordered semiautomata can be equivalently written in the following way: $\mathbb T (A) \subseteq \mathbb{V}(A)$ and $\mathbb{V}(A)$ is closed under direct products of non\=/empty finite families and homomorphic images. In particular, the class of all trivial ordered semiautomata $\mathbb T$ forms the smallest positive $\mathcal C$\=/variety of ordered semiautomata whenever the considered category $\mathcal C$ contains all isomorphisms. As mentioned in the introduction, Theorem~\ref{t:eilenberg-ordered} has already been proved in special cases. The technical difference is that \'Esik and Ito in~\cite{esik-ito} used disjoint unions of automata, while Chaubard, Pin and Straubing~\cite{pin-str} did not use this construction because they used trivial automata instead. \end{remark} Now we are ready to state the Eilenberg type correspondence for positive $\mathcal C$\=/varieties of ordered semiautomata. For each positive $\mathcal C$\=/variety of ordered semiautomata $\mathbb V$, we denote by $\alpha(\mathbb V)$ the class of regular languages given by the following formula $$(\alpha(\mathbb V)) (A)= \{\,L\subseteq A^* \mid \ol{\mathcal{D}_L}\in \mathbb V (A)\, \}\, .$$ For each positive $\mathcal C$\=/variety of regular languages $\mathcal L$, we denote by $\beta(\mathcal L)$ the positive $\mathcal C$\=/variety of ordered semiautomata generated by all ordered semiautomata $\ol{\mathcal{D}_L}$, where $L\in \mathcal L (A)$ for some alphabet $A$. The following result can be obtained as a combination of Theorem 5.1 of \cite{pin-straubing} and the main result of \cite{pin-str}. We give a direct and detailed proof here. \begin{theorem}\label{t:eilenberg-ordered} Let $\mathcal C$ be a category of homomorphisms. The mappings $\alpha$ and $\beta$ are mutually inverse isomorphisms between the lattice of all positive $\mathcal C$\=/varieties of ordered semiautomata and the lattice of all positive $\mathcal C$\=/varieties of regular languages. \end{theorem} \begin{proof} First of all, we fix a category of homomorphisms $\mathcal C$ for the whole proof. The proof will be complete when we show the following statements: \begin{enumerate} \item $\alpha$ is correctly defined, i.e. for every positive $\mathcal C$\=/variety of ordered semiautomata $\mathbb V$, the class $\alpha(\mathbb V)$ is a positive $\mathcal C$\=/variety of languages. \item $\beta$ is correctly defined, i.e. for every positive $\mathcal C$\=/variety of regular languages $\mathcal L$, the class $\beta(\mathcal L)$ is a positive $\mathcal C$\=/variety of ordered semiautomata. \item $\beta \circ \alpha = \mathrm{id}$, i.e. for each positive $\mathcal C$\=/variety of ordered semiautomata $\mathbb V$ we have $\beta (\alpha(\mathbb V)) = \mathbb V$. \item $\alpha\circ \beta =\mathrm{id}$, i.e. for each positive $\mathcal C$\=/variety of languages $\mathcal L$, we have $\alpha (\beta(\mathcal L)) = \mathcal L$. \end{enumerate} We prove these facts in separate lemmas. The exception is the second item, which follows trivially from the definition of the mapping $\beta$. Before the formulation of these lemmas we prove some technicalities.
\begin{lemma}\label{l:formula-for-alpha} For each positive $\mathcal C$\=/variety of ordered semiautomata $\mathbb V$ and an alphabet $A$ we have $$(\alpha(\mathbb V)) (A)=\{\,L\subseteq A^* \mid \exists\, \mathcal A=(Q,A,\cdot,\le, i, F) : L=\nL_{\mathcal A}\ \text{ and }\ \ol{\mathcal A} \in \mathbb V(A)\, \}\, .$$ \end{lemma} \begin{proof} The inclusion ``$\subseteq$'' is trivial, because one can take for the ordered automaton $\mathcal A$ the canonical automaton $\mathcal{D}_L$. To prove the opposite inclusion, let $L=\nL_{\mathcal A}$, where $\mathcal A=(Q,A,\cdot,\le, i, F)$ with $\ol{\mathcal{A}} \in \mathbb V(A)$. By Lemma~\ref{l:minimality-canonical-dfa} and the assumption that $\mathbb V$ is closed under taking subsemiautomata and homomorphic images, we have that $\ol{\mathcal{D}_L}\in \mathbb V(A)$. Therefore $L\in (\alpha(\mathbb V)) (A)$. \qed\end{proof} \begin{lemma} If $\mathbb V$ is a positive $\mathcal C$\=/variety of ordered semiautomata, then $\alpha(\mathbb V)$ is a positive $\mathcal C$\=/variety of regular languages. \end{lemma} \begin{proof} First, we need to prove that $(\alpha(\mathbb{V}))(A)$ is closed under taking intersections, unions and quotients. Secondly, we must show the closure property with respect to taking preimages in morphisms from the category $\mathcal C$. For each $A$, the class $(\alpha(\mathbb V)) (A)$ given by the formula from Lemma~\ref{l:formula-for-alpha} is closed under unions and intersections of finite families, since $\mathbb V(A)$ is closed under products of finite families (see Lemma~\ref{l:product-arbitrary-subset}). The class $(\alpha(\mathbb V)) (A)$ is also closed under quotients, since we can change the initial and final states freely (see Lemma~\ref{l:quotients}). Furthermore, by Lemma~\ref{l:f-automaton-preimage}, we obtain the following observation: since $\mathbb V$ is closed under taking $f$\=/subsemiautomata for each homomorphism $f$ from $\mathcal C$, the class $\alpha(\mathbb V)$ is closed under preimages in these homomorphisms. \qed\end{proof} All three constructions -- direct product, homomorphic image and subsemiautomaton -- are standard constructions of universal algebra. From the general theory (see e.g. Burris and Sankappanavar~\cite{burris1981course}) we want to use only the fact that if one needs to generate the smallest class closed with respect to all three constructions together and containing a class $\XX$, then it is enough to consider homomorphic images of subalgebras in products of algebras from $\XX$. Note that from this point of view, an alphabet $A$ is fixed, and $A$ serves as a set of unary function symbols. Then a semiautomaton over $A$ is a unary algebra. For a class of ordered semiautomata $\XX$ over a fixed alphabet $A$ we denote by \begin{itemize} \item $\OH\XX$ the class of all homomorphic images of ordered semiautomata from $\XX$, \item $\OI\XX$ the class of all isomorphic copies of ordered semiautomata from $\XX$, \item $\OS\XX$ the class of all subsemiautomata of ordered semiautomata from $\XX$, \item $\OP\XX$ the class of all products of non\=/empty finite families of ordered semiautomata from $\XX$, \item $\OD\XX$ the class of all disjoint unions of non\=/empty finite families of ordered semiautomata from $\XX$. \end{itemize} It is clear that the operators $\OH$, $\OI$ and $\OS$ are idempotent, i.e. for each class of ordered semiautomata $\XX$ we have $\OH\OH\XX=\OH\XX$ etc. Furthermore, $\OI\OP\OP=\OI\OP$ and $\OI\OD\OD=\OI\OD$.
\begin{lemma}\label{l:operators-properties} For each class $\XX$ of ordered semiautomata over a fixed alphabet $A$, we have: $$\OD\XX \subseteq \OH\OP(\XX\cup \mathbb{T}(A))$$ and $$\OP\OH\XX \subseteq \OH\OP\XX,\quad \OS\OH\XX \subseteq \OH\OS\XX, \quad \OP\OS \XX \subseteq \OS\OP\XX\, .$$ \end{lemma} \begin{proof} The first property follows from Lemma~\ref{l:disjoint-via-product}. The other properties are well\=/known facts from universal algebra, see e.g.~\cite[Chapter II, Section 9]{burris1981course} -- a modification for the ordered case is straightforward. \qed\end{proof} \begin{lemma}\label{l:hsp} For each positive $\mathcal C$\=/variety of regular languages $\mathcal L$ we have $$(\beta(\mathcal L))(A)= \OH\OS\OP(\,\{\,\ol{\mathcal{D}_L}\mid L\in \mathcal{L}(A)\,\} \cup\mathbb{T}(A)\,)\, .$$ \end{lemma} \begin{proof} For every alphabet $A$, we denote $\XX= \{\,\ol{\mathcal{D}_L} \mid L\in \mathcal{L}(A)\,\}\cup\mathbb{T}(A)$ and we denote by $\mathbb{L}(A)$ the right\=/hand side of the formula in the statement, i.e. $\mathbb{L}(A)=\OH\OS\OP\XX$. Since $\beta(\mathcal L)$ is a positive $\mathcal C$\=/variety of ordered semiautomata, we have $\mathbb{T}(A) \subseteq (\beta(\mathcal L))(A)$ by Remark~\ref{r:trivial-variety}. Therefore $\XX \subseteq (\beta(\mathcal L))(A)$ and the inclusion $\mathbb{L}(A)\subseteq (\beta(\mathcal L))(A)$ follows from the fact that $(\beta(\mathcal L))(A)$ is closed under the operators $\OH$, $\OS$ and $\OP$. To prove the opposite inclusion $(\beta(\mathcal L))(A) \subseteq \mathbb{L}(A)$, we first prove that $\mathbb L$ is a positive $\mathcal C$\=/variety of ordered semiautomata. By the first property in Lemma~\ref{l:operators-properties}, we get $$\OD(\mathbb{L}(A))=\OD\OH\OS\OP\XX\subseteq \OH\OP(\OH\OS\OP\XX)\, .$$ By the other properties of Lemma~\ref{l:operators-properties} and the idempotency of the operators we get $\OH\OP\OH\OS\OP\XX\subseteq \OH\OS\OP\XX=\mathbb{L}(A)$. Thus $\OD(\mathbb{L}(A)) \subseteq \mathbb{L}(A)$. In the same way one can prove the other inclusions $\OH(\mathbb{L}(A)) \subseteq \mathbb{L}(A)$, $\OS(\mathbb{L}(A)) \subseteq \mathbb{L}(A)$ and $\OP(\mathbb{L}(A)) \subseteq \mathbb{L}(A)$. Therefore $\mathbb{L}(A)$ is closed under all four operators $\OH$, $\OS$, $\OP$ and $\OD$. It remains to prove that $\mathbb{L}$ is closed under $f$\=/renamings. So, let $f: B^* \rightarrow A^*$ belong to $\mathcal C$. We need to show that $(Q,A,\cdot,\le)^f$ belongs to $\mathbb{L}(B)$ whenever $(Q,A,\cdot,\le)$ is from $\mathbb{L}(A)$. For an arbitrary set $\YY$ of ordered semiautomata over the alphabet $A$, we denote ${\YY}^f=\{\,(Q,A,\cdot,\le)^f \mid (Q,A,\cdot,\le)\in \YY\,\}$. Using this notation, we need to show that $(\mathbb{L}(A))^f\subseteq \mathbb{L}(B)$. At first, we show the weaker inclusion ${\XX}^f\subseteq \mathbb L(B)$. Trivially, $(\mathbb{T}(A))^f=\mathbb{T}(B)$. Now let $L$ be an arbitrary language from $\mathcal {L}(A)$. We consider $(D_L,A,\cdot,\subseteq)^f$, which is an ordered semiautomaton over $B$. By Lemma~\ref{l:f-automaton-preimage}, the set $Z$ of all regular languages which are recognized by the ordered semiautomaton $(D_L,A,\cdot,\subseteq)^f$ contains only languages of the form $f^{-1}(K)$, where $K$ is recognized by $(D_L,A,\cdot,\subseteq)$. Since $\mathcal L(A)$ is closed under unions, intersections and quotients, every such language $K$ belongs to $\mathcal L(A)$ by Lemmas~\ref{l:quotients} and~\ref{l:quotients-in-canonical}.
This means that the set $Z$ is a subset of $\mathcal L (B)$ because $\mathcal L$ is closed under preimages in the homomorphism $f$. Therefore $\mathbb L(B)$ contains all canonical ordered semiautomata of languages from $Z$. Finally, $(D_L,A,\cdot,\subseteq)^f$ can be reconstructed from these canonical ordered semiautomata of languages from $Z$: by Lemma~\ref{l:reconstruction-union}, the ordered semiautomaton $(D_L,A,\cdot,\subseteq)^f$ is a homomorphic image of a disjoint union of certain subsemiautomata which are isomorphic, by Lemma~\ref{l:reconstruction-product}, to subsemiautomata of products of the canonical ordered semiautomata of languages from $Z$. Hence $(D_L,A,\cdot,\subseteq)^f$ belongs to $\mathbb L(B)$, which is closed under homomorphic images, subsemiautomata, products and disjoint unions as we proved above. So, we have proved ${\XX}^f\subseteq \mathbb L(B)$. Now Lemma~\ref{l:f-subautomaton-vs-HSP} has the following consequences: $(\OH\YY)^f\subseteq \OH({\YY}^f)$, $(\OS\YY)^f\subseteq \OS({\YY}^f)$ and $(\OP\YY)^f\subseteq \OP({\YY}^f)$ for an arbitrary set of ordered semiautomata $\YY$ over the alphabet $A$. If we use all these properties, we get $(\mathbb L(A))^f=(\OH\OS\OP\XX)^f\subseteq \OH\OS\OP({\XX}^f)\subseteq \OH\OS\OP (\mathbb L(B))=\mathbb L(B)$ because $\mathbb{L}(B)$ is closed under all three operators. Hence $\mathbb L$ is closed under taking $f$\=/subsemiautomata and therefore $\mathbb L$ is a positive $\mathcal C$\=/variety of ordered semiautomata. The inclusion $(\beta(\mathcal L))(A)\subseteq \mathbb{L}(A)$ follows from the definition of $\beta(\mathcal L)$, which is the smallest positive $\mathcal C$\=/variety of ordered semiautomata containing $\XX$. Since the opposite inclusion has also been proved, the proof of the lemma is finished. \qed\end{proof} \begin{lemma}\label{l:eilenberg-part3} For each positive $\mathcal C$\=/variety of ordered semiautomata $\mathbb V$ we have $\beta (\alpha(\mathbb V)) = \mathbb V$. \end{lemma} \begin{proof} Let $A$ be an arbitrary alphabet. By Lemma~\ref{l:hsp}, $$(\beta (\alpha(\mathbb V))) (A)=\OH\OS\OP (\,\{\,\ol{\mathcal{D}_L}\mid L\in (\alpha(\mathbb V))(A)\,\} \cup\mathbb{T}(A)\,)\, .$$ We denote $\XX= \{\,\ol{\mathcal{D}_L}\mid L\in (\alpha(\mathbb V))(A)\,\}$. If we use the definition of the mapping $\alpha$, then we see that $\XX=\{\,(Q,A,\cdot,\le) \in \mathbb{V}(A)\mid \exists L\subseteq A^* : \ol{\mathcal{D}_L}=(Q,A,\cdot,\le)\,\}$. In particular, $\XX\subseteq \mathbb V (A)$. Since we also have $\mathbb T(A)\subseteq \mathbb V(A)$, we see that $\XX \cup\mathbb{T}(A)\subseteq \mathbb V(A)$. Hence $$(\beta (\alpha(\mathbb V))) (A)=\OH\OS\OP(\, \XX \cup\mathbb{T}(A)\,)\subseteq \mathbb V(A)$$ because $\mathbb V(A)$ is closed under taking homomorphic images, subsemiautomata and products. In the proof of Lemma~\ref{l:hsp} we already saw that, by Lemmas~\ref{l:reconstruction-product} and~\ref{l:reconstruction-union}, every ordered semiautomaton $(Q,A,\cdot,\le)$ can be reconstructed from the canonical ordered automata of languages which are recognized by $(Q,A,\cdot,\le)$. Therefore $\mathbb V(A) \subseteq \OH\OS\OP ( \XX \cup\mathbb{T}(A))$ and we have proved the equality $\mathbb V(A) = \OH\OS\OP ( \XX \cup\mathbb{T}(A))$, which means that $\beta (\alpha(\mathbb V)) = \mathbb V$. \qed\end{proof} \begin{lemma}\label{l:eilenberg-part4} Let $\mathcal L$ be a positive $\mathcal C$\=/variety of languages. Then $\alpha (\beta(\mathcal L)) = \mathcal L$.
\end{lemma} \begin{proof} We want to prove that for every $A$ the equality $(\alpha (\beta(\mathcal L))) (A) = \mathcal L (A)$ holds. Let $L\in \mathcal L(A)$ be an arbitrary language. By the definition of the mapping $\beta$, we have $\ol{\mathcal{D}_L}\in (\beta(\mathcal L)) (A)$. Therefore, by the definition of $\alpha$, we have $L\in (\alpha (\beta(\mathcal L))) (A)$ and we have proved the inclusion ``$\supseteq$''. To prove the opposite one, let $K\in (\alpha (\beta(\mathcal L))) (A)$ be an arbitrary language. Then there is an ordered automaton $\mathcal A=(Q,A,\cdot ,\le, i, F)$ such that $K=\nL_{\mathcal A}$ and $\ol{\mathcal A}\in (\beta(\mathcal L))(A) =\OH\OS\OP(\,\{\,\ol{\mathcal{D}_L}\mid L \in \mathcal{L}(A)\,\}\cup\mathbb{T}(A)\,)$. If $K$ is recognized by $\mathcal A$, where $\ol{\mathcal A}\in \OH\XX$ for the class of ordered semiautomata $\XX=\OS\OP(\,\{\,\ol{\mathcal{D}_L}\mid L\in \mathcal{L}(A)\,\}\cup\mathbb{T}(A)\,)$, then there is an ordered automaton $\mathcal B$ such that $\ol{\mathcal A}$ is a homomorphic image of $\ol{\mathcal B}\in\XX$. By Lemma~\ref{l:morphic-image-odfa}, the language $K$ is recognized by $\ol{\mathcal B}$. Thus we can assume that $\ol{\mathcal A}$ belongs to $\OS\OP(\,\{\,\ol{\mathcal{D}_L}\mid L\in \mathcal{L}(A)\,\}\cup\mathbb{T}(A)\,)$. In the same way we can also assume that $\ol{\mathcal A}$ belongs to $\OP(\,\{\,\ol{\mathcal{D}_L}\mid L\in \mathcal{L}(A)\,\} \cup\mathbb{T}(A)\,)$. By Lemma~\ref{l:product-arbitrary-subset}, we know that $K$ is a finite union of finite intersections of languages which are recognized by ordered semiautomata from the class $\{\,\ol{\mathcal{D}_L}\mid L\in \mathcal{L}(A)\,\}\cup\mathbb{T}(A)$. Furthermore, trivial semiautomata recognize only the languages $\emptyset$ and $A^*$, which belong to every $\mathcal L(A)$; hence we may consider $\{\,\ol{\mathcal{D}_L}\mid L\in \mathcal{L}(A)\,\}$ instead of $\{\,\ol{\mathcal{D}_L}\mid L\in \mathcal{L}(A)\,\}\cup\mathbb{T}(A)$ in the previous sentence. Since the canonical automaton $\mathcal{D}_L$ recognizes only finite unions of finite intersections of quotients of the language $L$ (by Lemmas~\ref{l:quotients} and~\ref{l:quotients-in-canonical}), and since $\mathcal L(A)$ is closed under taking quotients, unions and intersections, we see that $K$ belongs to $\mathcal L(A)$. \qed\end{proof} The previous lemma finishes the proof of Theorem~\ref{t:eilenberg-ordered}. \qed\end{proof} \section{$\mathcal C$\=/Varieties of Semiautomata} For an ordered semiautomaton $(Q,A,\cdot,\le)$ we define its {\it dual} $(Q,A,\cdot,\le)^{\Od}=(Q,A,\cdot, \le^{\Od})$, where $\le^{\Od}$ is the dual order to $\le$, i.e. $p\le^{\Od} q$ if and only if $q\le p$. Instead of the symbol $\le^{\Od}$ we usually use the symbol $\ge$. Trivially, the resulting structure $(Q,A,\cdot,\ge)$ is also an ordered semiautomaton. For a positive $\mathcal C$\=/variety of ordered semiautomata $\mathbb V$ we denote by $\mathbb V^{\Od}$ its dual, i.e. for every alphabet $A$ we consider $\mathbb V^{\Od}(A)=\{\,(Q,A,\cdot,\le)^{\Od}\mid (Q,A,\cdot,\le)\in\mathbb V(A)\, \}$. It is clear that $(\mathbb V^\Od)^{\Od}=\mathbb V$ and that $\mathbb V^{\Od}$ is a positive $\mathcal C$\=/variety of ordered semiautomata. We say that $\mathbb V$ is {\it selfdual} if $\mathbb V^{\Od}=\mathbb V$. In other words, $\mathbb V$ is selfdual if and only if every $\mathbb V(A)$ is closed under taking duals of its members. An alternative characterization follows.
\begin{lemma}\label{l:selfdual} Let $\mathbb V$ be a positive $\mathcal C$\=/variety of ordered semiautomata. Then $\mathbb V$ is selfdual if and only if, for each alphabet $A$, we have that: $$(Q,A,\cdot,\le)\in \mathbb V (A) \text{ implies } (Q,A,\cdot,=)\in\mathbb V(A)\, .$$ \end{lemma} \begin{proof} If $\mathbb V^{\Od}=\mathbb V$ and $(Q,A,\cdot,\le)\in\mathbb V(A)$, then we also have $(Q,A,\cdot,\ge)\in \mathbb V(A)$. Now the ordered semiautomaton $(Q,A,\cdot,=)$ is isomorphic to a subsemiautomaton of the product of the ordered semiautomata $(Q,A,\cdot,\le)$ and $(Q,A,\cdot,\ge)$, namely the subsemiautomaton with the set of states $\{\,(q,q)\mid q\in Q\,\}$. To prove the converse, it is enough to see that an arbitrary ordered semiautomaton $(Q,A,\cdot,\le)$ is a homomorphic image of the semiautomaton $(Q,A,\cdot,=)$: the identity mapping is a homomorphism of the considered ordered semiautomata. \qed\end{proof} Recall that a {\em $\mathcal C$\=/variety of regular languages} is a positive $\mathcal C$\=/variety of languages which is closed under taking complements. The canonical ordered semiautomaton of the complement of a regular language $L$ is the dual of the canonical ordered semiautomaton of $L$, i.e. $\ol{\mathcal D_{L^c}}= (\ol{\mathcal D_{L}})^\Od$. This easy observation helps to prove the following statement. \begin{proposition}\label{p:variety-as-special-positive} There is a one\=/to\=/one correspondence between $\mathcal C$\=/varieties of regular languages and selfdual positive $\mathcal C$\=/varieties of ordered semiautomata. \end{proposition} \begin{proof} The mentioned correspondence is given by the pair of mappings $\alpha$ and $\beta$ from Theorem~\ref{t:eilenberg-ordered}. For a selfdual positive $\mathcal C$\=/variety of ordered semiautomata $\mathbb V$, we know that $(\alpha(\mathbb V))(A)$ is closed under complements. This means that $\alpha(\mathbb V)$ is a $\mathcal C$\=/variety of regular languages. Therefore, it remains to show that, for an arbitrary $\mathcal C$\=/variety of regular languages $\mathcal L$, the positive $\mathcal C$\=/variety of ordered semiautomata $\beta(\mathcal L)$ is selfdual. By Lemma~\ref{l:hsp}, $(\beta(\mathcal L))(A)= \OH\OS\OP(\,\{\,\ol{\mathcal{D}_L}\mid L\in \mathcal{L}(A)\,\} \cup\mathbb{T}(A)\,)$, where the set $\{\,\ol{\mathcal{D}_L}\mid L\in \mathcal{L}(A)\,\} \cup\mathbb{T}(A)$ is selfdual. However, for every selfdual class of ordered semiautomata $\XX$, the classes $\OP\XX$, $\OS\XX$ and $\OH\XX$ are selfdual again. Hence $\beta(\mathcal L)$ is selfdual. \qed\end{proof} Since every ordered semiautomaton $(Q,A,\cdot,\le)$ is a homomorphic image of the ordered semiautomaton $(Q,A,\cdot,=)$, we can consider the notion of $\mathcal C$\=/varieties of semiautomata instead of selfdual positive $\mathcal C$\=/varieties of ordered semiautomata: $\mathcal C$\=/varieties of semiautomata are classes of semiautomata which are closed under taking $f$\=/subsemiautomata for all $f\in\mathcal C$, homomorphic images, disjoint unions and finite products. Let $\mathbb A(A)$ be the class of all ordered semiautomata over the alphabet $A$. Notice that $\mathbb A$ forms the greatest positive $\mathcal C$\=/variety of ordered semiautomata for each category $\mathcal C$.
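In our illustrative encoding from the earlier sketches, the dualization used throughout this section is a one\=/line operation, and the diagonal construction from the proof of Lemma~\ref{l:selfdual} is equally short. This is only a sketch with names of our own choosing, not part of the formal development.
\begin{verbatim}
def dual(osa):
    # (Q, A, ., <=)^d keeps the actions and reverses the order
    Q, delta, leq = osa
    return Q, delta, {(q, p) for (p, q) in leq}

def diagonal(osa):
    # the subsemiautomaton {(q, q) | q in Q} of the product of
    # an ordered semiautomaton with its dual; its order is the
    # equality relation, as in the proof of the selfdual lemma
    Q, delta, _ = osa
    d = {((q, q), a): (r, r) for ((q, a), r) in delta.items()}
    return [(q, q) for q in Q], d, {((q, q), (q, q)) for q in Q}
\end{verbatim}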
If we have a $\mathcal C$\=/variety of semiautomata $\mathbb V$, then we can consider all possible compatible orderings on these semiautomata and define the positive $\mathcal C$\=/variety of ordered semiautomata $\mathbb V^{\Oo}$ in the following way: $$\mathbb V^{\Oo}(A)=\{\, (Q,A,\cdot,\le)\in \mathbb A(A) \mid (Q,A,\cdot)\in\mathbb V(A)\, \}\, .$$ Clearly, $\mathbb V^{\Oo}$ is selfdual. Conversely, for a selfdual positive $\mathcal C$\=/variety of ordered semiautomata $\mathbb V$, we can consider $$\mathbb V^{\Ou} (A)=\{\, (Q,A,\cdot) \mid \text{ there is an order } \le \text{ such that } (Q,A,\cdot,\le) \in \mathbb V (A)\,\}\, .$$ Now the two mappings $\mathbb V \mapsto \mathbb V^{\Oo} $ and $\mathbb V \mapsto \mathbb V^{\Ou}$ are mutually inverse mappings between $\mathcal C$\=/varieties of semiautomata and selfdual positive $\mathcal C$\=/varieties of ordered semiautomata. Using this easy correspondence, we obtain the following result as a consequence of Proposition~\ref{p:variety-as-special-positive}. \begin{theorem}\label{t:variety-version} There is a one\=/to\=/one correspondence between $\mathcal C$\=/varieties of regular languages and $\mathcal C$\=/varieties of semiautomata. \end{theorem} Note that this result can also be obtained by composing the results of Pin and Straubing~\cite{pin-straubing} with those of Chaubard, Pin and Straubing~\cite{pin-str}. \section{Examples} In this section we present several instances of the Eilenberg type correspondence. Some of them are just reformulations of examples already mentioned in the existing literature. In particular, the first three subsections correspond to the pseudovarieties of aperiodic, $\mathcal R$\=/trivial and $\mathcal J$\=/trivial monoids, respectively. Also Subsection~\ref{s:ord-increasing} has a natural counterpart in the pseudovariety of ordered monoids satisfying the inequality $1\le x$. In all these cases, $\mathcal C$ is the category of all homomorphisms, denoted by $\mathcal C_{all}$. Nevertheless, we believe that these correspondences viewed from the perspective of varieties of (ordered) semiautomata are of some interest. The other four subsections work with different categories $\mathcal C$, and Subsections~\ref{subs:synchro} and~\ref{subs:inserting} bring new examples of (positive) $\mathcal C$\=/varieties of (ordered) automata. \subsection{Counter\=/Free Automata} The star free languages were characterized by Sch{\"u}tzenberger~\cite{schutz} as the languages having aperiodic syntactic monoids. Here we recall the subsequent characterization of McNaughton and Papert~\cite{McNaughton} by counter\=/free automata. \begin{definition} We say that a semiautomaton $(Q,A,\cdot)$ is {\em counter\=/free} if, for each $u\in A^*$, $q\in Q$ and $n\in\mathbb N$ such that $q\cdot u^n =q$, we have $q\cdot u = q$. \end{definition} \begin{proposition}\label{p:counter-free} The class of all counter\=/free semiautomata forms a variety of semiautomata. \end{proposition} \begin{proof} It is easy to see that disjoint unions, subsemiautomata, products and $f$\=/renamings of counter\=/free semiautomata are again counter\=/free. Let $\varphi : (Q,A,\cdot) \rightarrow (P,A,\circ)$ be a surjective homomorphism of semiautomata and let $(Q,A,\cdot)$ be counter\=/free. We prove that $(P,A,\circ)$ is also a counter\=/free semiautomaton. Take $p\in P, u\in A^*$ and $n\in\mathbb N$ such that $p\circ u^n= p$. Let $q\in Q$ be an arbitrary state such that $\varphi (q)=p$. Then, for each $j\in\mathbb N$, we have $\varphi(q\cdot u^{jn}) =p\circ u^{jn} = p$.
Since the set $\{q, q\cdot u^n, q\cdot u^{2n}, \dots \}$ is finite, there exist $k,\ell \in\mathbb N$ such that $q\cdot u^{kn} = q \cdot u^{(k+\ell)n}$. If we take $r=q \cdot u^{kn}$, then $r\cdot u^{\ell n}=r$. Since $(Q,A,\cdot)$ is counter\=/free, we get $r\cdot u=r$. Consequently, $p\circ u=\varphi(r)\circ u=\varphi(r\cdot u)=\varphi(r)=p$. \qed\end{proof} The promised link between languages and automata follows. \begin{proposition}[McNaughton, Papert~\cite{McNaughton}] Star free languages are exactly the languages recognized by counter\=/free semiautomata. \end{proposition} Note that this characterization is effective, although testing whether a regular language given by a DFA is aperiodic is even a PSPACE\=/complete problem by Cho and Huynh~\cite{cho}. \subsection{Acyclic Automata} The {\it content} $\mathrm{c}(u)$ of a word $u\in A^*$ is the set of all letters occurring in $u$. \begin{definition} We say that a semiautomaton $(Q,A,\cdot)$ is {\em acyclic} if, for every $u\in A^+$ and $q\in Q$ such that $q\cdot u =q$, we have $q\cdot a = q$ for every $a\in \mathrm{c}(u)$. \end{definition} Note that one of the conditions in Simon's characterization of piecewise testable languages is that the minimal DFA is acyclic -- see~\cite{simon-pw}. One can prove the following proposition in a similar way as in the case of counter\=/free semiautomata. \begin{proposition} The class of all acyclic semiautomata forms a variety of semiautomata. \end{proposition} According to Pin~\cite[Chapter 4, Section 3]{pin1986varieties}, a semiautomaton $(Q,A,\cdot)$ is called {\em extensive} if there exists a linear order $\, \preceq\, $ on $Q$ such that $(\,\forall\, q\in Q,\, a\in A\,) \ q\preceq q\cdot a$. Note that such an order need not be compatible with the actions by letters. One can easily show that a semiautomaton is acyclic if and only if it is extensive. We prefer to use the term acyclic, since we consider extensive actions by letters (compatible with the ordering of a semiautomaton) later in the paper. Anyway, whether a given semiautomaton is acyclic can be decided using the breadth\=/first search algorithm. \begin{proposition}[Pin \cite{pin1986varieties}] The languages over the alphabet $A$ accepted by acyclic semiautomata are exactly the disjoint unions of the languages of the form $$A_0^*a_1A_1^*a_2 A_2^* \dots A_{n-1}^*a_n A_n^*\ \text{ where }\ a_i\not\in A_{i-1}\subseteq A\,\text{ for }\ i=1,\dots,n\,.$$ \end{proposition} Note that the languages above are exactly those having $\mathcal R$\=/trivial syntactic monoids. \subsection{Acyclic Confluent Automata} In our paper~\cite{dlt13-klima} concerning piecewise testable languages, we introduced a certain condition on automata, motivated by the terminology from the theory of rewriting systems. \begin{definition} We say that a semiautomaton $(Q,A,\cdot)$ is {\em confluent} if, for each state $q\in Q$ and every pair of words $u,v\in A^*$, there is a word $w\in A^*$ such that $\mathrm{c}(w)\subseteq \mathrm{c}(uv)$ and $(q\cdot u)\cdot w=(q\cdot v)\cdot w$. \end{definition} In~\cite{dlt13-klima}, this definition was studied in the context of acyclic (semi)automata, in which case several equivalent conditions were described. One of them can be rephrased in the following way. \begin{lemma}\label{l:acyclic-confluent} Let $(Q,A,\cdot)$ be an acyclic semiautomaton. Then $(Q,A,\cdot)$ is confluent if and only if, for each $q\in Q$, $u,v\in A^*$, we have $q\cdot u\cdot (uv)^{|Q|}=q\cdot v\cdot (uv)^{|Q|}$.
\end{lemma} \begin{proof} Assume that $(Q,A,\cdot)$ is a confluent acyclic semiautomaton and let $q\in Q$, $u,v\in A^*$ be arbitrary. We consider the sequence of states $$q\cdot u,\ q\cdot u\cdot (uv),\ q\cdot u\cdot (uv)^2,\ \dots,\ q\cdot u\cdot (uv)^{|Q|}\,.$$ Since the sequence contains more than $|Q|$ members, we have $p=q\cdot u\cdot (uv)^k=q\cdot u\cdot (uv)^\ell$ for some $0\le k<\ell\le |Q|$. Since $(Q,A,\cdot)$ is acyclic, we have $p \cdot a =p$ for every $a\in \mathrm{c}(uv)$. Therefore, $p=q\cdot u\cdot (uv)^k=q\cdot u\cdot (uv)^{k+1}= \dots =q\cdot u\cdot (uv)^{|Q|}$ and we have $p\cdot w=p$ for every $w\in A^*$ such that $\mathrm{c}(w)\subseteq \mathrm{c}(uv)$. Similarly, for $r=q\cdot v\cdot (uv)^{|Q|}$, we obtain the same property $r\cdot w=r$ for the same words $w$. Taking into account that $(Q,A,\cdot)$ is confluent, we obtain the existence of a word $w$ such that $\mathrm{c}(w)\subseteq \mathrm{c}(uv)$ and $p\cdot w=r\cdot w$. Hence $p=r$ and the first implication is proved. The second implication is evident. \qed\end{proof} Using the condition from Lemma~\ref{l:acyclic-confluent}, one can prove, similarly as in Proposition~\ref{p:counter-free}, that the class of all acyclic confluent semiautomata is a variety of semiautomata. Finally, the main result from~\cite{dlt13-klima} can be formulated in the following way. It is mentioned in~\cite{dlt13-klima} that the defining condition is testable in polynomial time. \begin{proposition}[Klíma and Polák~\cite{dlt13-klima}] The variety of all acyclic confluent semiautomata corresponds to the variety of all piecewise testable languages. \end{proposition} \subsection{Ordered Automata with Extensive Actions}\label{s:ord-increasing} We say that an ordered semiautomaton $(Q,A,\cdot ,\le)$ has \emph{extensive actions} if, for every $q\in Q$ and $a\in A$, we have $q\le q\cdot a$. Clearly, the defining condition is testable in polynomial time. The transition ordered monoids of such ordered semiautomata are characterized by the inequality $1\le x$. It is known~\cite[Proposition 8.4]{pin-handbook} that the last inequality characterizes the positive variety of all finite unions of languages of the form $$A^*a_1A^*a_2A^*\dots A^*a_\ell A^* \, , \ \text{ where }\ a_1,\dots ,a_\ell\in A,\ \ell\ge 0\, .$$ Therefore we call them \emph{positive piecewise testable languages}. In this way one can obtain the following statement, which we prove directly using the theory presented in this paper. \begin{proposition} The class of all ordered semiautomata with extensive actions is a positive variety of ordered semiautomata and corresponds to the positive variety of all positive piecewise testable languages. \end{proposition} \begin{proof} It is routine to check that the class of all ordered semiautomata with extensive actions is a positive variety of ordered semiautomata. Using Theorem~\ref{t:eilenberg-ordered}, we need to show that a language $L$ is positive piecewise testable if and only if its canonical ordered semiautomaton has extensive actions. To prove that the canonical semiautomaton of a positive piecewise testable language has extensive actions, it is enough to prove this fact for languages of the form $A^*a_1A^*a_2A^*\dots A^*a_\ell A^*$ with $a_1,\dots ,a_\ell\in A,\ \ell\ge 0$. This observation follows from the description of the canonical ordered (semi)automata of a language given in Section~\ref{sec:ordered-automata}.
Indeed, for every language $K=A^*b_1A^*b_2A^*\dots A^*b_k A^*$ and every letter $b\in A$, we have $K\subseteq b^{-1}K$, because $b^{-1}K=K$ or $b^{-1}K=A^*b_2A^*\dots A^*b_k A^*$ according to whether $b\not=b_1$ or $b=b_1$. Assume that the canonical automaton $\mathcal O_L=(D_L,A,\cdot,\subseteq,L,F_L)$ of a language $L$ has extensive actions; consequently $(D_L,A,\cdot)$ is an acyclic semiautomaton. Since $F_L$ is upward closed and the actions are extensive, for every $p\in F_L$ and $a\in A$, we have $p\cdot a \in F_L$. In other words, for every $p\in F_L$, we have $\nL_p=A^*$. Moreover, by Lemma~\ref{l:canonical-dfa} we have $\nL_p=p$, so we get that $F_L$ contains just one final state $p=A^*$. Now we consider a simple path in $\mathcal O_L$ from the initial state $i=L$ to $p$ labeled by a word $u=a_1a_2\dots a_n$ with $a_k\in A$, i.e.\ $i\not= i\cdot a_1\not= i\cdot a_1a_2 \not= \dots \not= i\cdot u=p$. If we consider a word $w$ such that $w=w_0a_1w_1a_2\dots a_nw_n$, where $w_0,w_1,\dots, w_n\in A^*$, then one can easily prove by induction on $k$ that $i\cdot a_1\dots a_k \le i\cdot w_0a_1w_1a_2\dots a_kw_k$. For $k=n$, we get $p\le i\cdot w\in F_L$, thus $i\cdot w=p$. Hence we can conclude that $A^*a_1A^*a_2 \dots a_n A^* \subseteq L$. We can consider the language $K$, which is the union of such languages $A^*a_1A^*a_2 \dots a_n A^* $ for all possible simple paths in $\mathcal O_L$ from $i$ to $p$. Now $K\subseteq L$ follows from the previous argument, and $L\subseteq K$ is clear, because every $w\in L$ labels a path from $i$ to $p$, and omitting the letters which do not change the current state yields a simple path from $i$ to $p$ whose corresponding language contains $w$. \qed\end{proof} Note that the usual characterization of the class of positive piecewise testable languages is given by a forbidden pattern for DFA (see e.g.~\cite[page 531]{straubing-weil-handbook}). This pattern consists of two words $v, w\in A^*$ and two states $p$ and $q=p\cdot v$ such that $p\cdot w \in F$ and $q\cdot w\not\in F$. In view of~(\ref{eq:hopcroft}) from Section~\ref{sec:ordered-automata}, the presence of the pattern is equivalent to the existence of two states $[p]_\rho \not\le [q]_\rho$ such that $[p]_\rho \cdot_\rho v= [q]_\rho$ in the minimal automaton of the language. The membership problem for the class of positive piecewise testable languages is decidable in polynomial time -- see~\cite[Corollary 8.5]{pin-handbook} or~\cite[Theorem 2.20]{straubing-weil-handbook}. \subsection{Autonomous Automata} We recall examples from the paper~\cite{esik-ito}. We call a semiautomaton $(Q,A,\cdot)$ {\em autonomous} if for each state $q\in Q$ and every pair of letters $a,b\in A$, we have $q\cdot a = q \cdot b$. For a positive integer $d$, let $\mathbb{V}_d$ be the class of all autonomous semiautomata being disjoint unions of cycles whose lengths divide $d$. Clearly, the defining conditions are testable in linear time. \begin{proposition}[\'Esik and Ito~\cite{esik-ito}] (i) All autonomous semiautomata form a $\mathcal C_l$\=/variety of semiautomata and the corresponding $\mathcal C_l$\=/variety of languages consists of regular languages $L$ such that, for all $u,v\in A^*$, if $u\in L$ and $|u|=|v|$, then $v\in L$. (ii) The class $\mathbb{V}_d$ forms a $\mathcal C_l$\=/variety of semiautomata and the corresponding $\mathcal C_l$\=/variety of languages consists of all unions of $(A^d)^*A^i$, $i\in \{0,\dots ,d-1\}$. \end{proposition} \subsection{Synchronizing and Weakly Confluent Automata} \label{subs:synchro} Synchronizing automata are intensively studied in the literature. A semiautomaton $(Q,A,\cdot)$ is {\em synchronizing} if there is a word $w\in A^*$ such that the set $Q\cdot w$ is a one\=/element set.
We use an equivalent condition, namely, for each pair of states $p,q\in Q$, there exists a word $w\in A^*$ such that $p\cdot w=q\cdot w$ (see e.g. Volkov~\cite[Proposition 1]{volkov}). In this paper we consider classes of semiautomata which are closed under taking disjoint unions. So, we need to study disjoint unions of synchronizing semiautomata. Those automata can be equivalently characterized by the following weaker version of confluence. We say that a semiautomaton $(Q,A,\cdot)$ is {\em weakly confluent} if, for each state $q\in Q$ and every pair of words $u,v\in A^*$, there is a word $w\in A^*$ such that $(q\cdot u)\cdot w=(q\cdot v)\cdot w$. \begin{proposition}\label{l:synchronizing} A semiautomaton is weakly confluent if and only if it is a disjoint union of synchronizing semiautomata. \end{proposition} \begin{proof} It is clear that a disjoint union of synchronizing semiautomata is weakly confluent. To prove the opposite implication, assume that $(Q,A,\cdot)$ is a weakly confluent semiautomaton. We consider one connected component and an arbitrary pair $p,q$ of its states. Then there exist states $p_1, p_2, \dots , p_n$ and letters $a_1, \dots , a_{n-1}$ such that $p_1=p$, $p_n=q$ and for each $i\in\{1, \dots , n-1 \}$ we have $p_i\cdot a_i=p_{i+1}$ or $p_{i+1}\cdot a_i=p_{i}$. We claim, for each $i\in\{1,\dots , n\}$, the existence of a word $w_i$ such that $p_1\cdot w_i= p_i\cdot w_i$. This claim gives, in the case $i=n$, that $p\cdot w_n=q\cdot w_n$, which concludes the proof. In the rest of the proof we show the claim by induction on $i$. For $i=1$ one can take any word for $w_1$. Now, assume that the claim is true for $i$, i.e. there is a word $w_i$ and a state $r_1$ such that $r_1=p_1\cdot w_i= p_i\cdot w_i$. Furthermore, we denote $r_2=p_{i+1}\cdot w_i$. In the case $p_i\cdot a_i=p_{i+1}$, we denote $r_0=p_i$ and we have $r_0 \cdot w_i =r_1$ and $r_0\cdot a_iw_i=r_2$. In the case $p_{i+1}\cdot a_i=p_{i}$, we denote $r_0=p_{i+1}$ and we have $r_0\cdot a_iw_i=r_1$ and $r_0 \cdot w_i =r_2$. In both cases, since the semiautomaton is weakly confluent, there exists $u\in A^*$ such that $r_1\cdot u=r_2\cdot u$. Now for $w_{i+1}=w_iu$ we have $p_1\cdot w_{i+1}=(p_1\cdot w_i)\cdot u=r_1\cdot u=r_2\cdot u= (p_{i+1}\cdot w_i)\cdot u=p_{i+1} \cdot w_{i+1}$. \qed\end{proof} Since the synchronization property can be tested in polynomial time (see~\cite{volkov}), Proposition~\ref{l:synchronizing} implies that the weak confluence of a semiautomaton can be tested in polynomial time as well. In the next result we use the category $\mathcal C_s$ of all surjective homomorphisms. Note that $f: B^* \rightarrow A^*$ is a surjective homomorphism if and only if $A\subseteq f(B)$. \begin{proposition} The class of all weakly confluent semiautomata is a $\mathcal C_s$\=/variety of semiautomata. \end{proposition} \begin{proof} Clearly, the class of all weakly confluent semiautomata $\mathbb V$ is closed under disjoint unions, subsemiautomata and homomorphic images. We need to check that $\mathbb V$ is closed under direct products of non\=/empty finite families. Let $\mathcal Q=(Q,A,\cdot)$ and $\mathcal P=(P,A,\circ)$ be a pair of weakly confluent semiautomata. Take a state $(q,p)$ in the product $\mathcal Q\times \mathcal P$ and let $u,v\in A^*$ be words. Since $(Q,A,\cdot)$ is weakly confluent, there is $w\in A^*$ such that $q\cdot u\cdot w=q\cdot v\cdot w$. Now we consider the words $uw$ and $vw$.
Since $\mathcal P$ is weakly confluent, there is $z\in A^*$ such that $p\circ uw \circ z=p\circ vw \circ z$. Hence $(q,p)\cdot u \cdot wz = (q,p)\cdot v \cdot wz$ and we have proved that $\mathcal Q\times \mathcal P$ is weakly confluent. The general case of a direct product of a non\=/empty finite family of semiautomata can be proved in the same way. To finish the proof, assume that $f\in \mathcal C_s(B^*,A^*)$ is a surjective homomorphism. Let $\mathcal A = (Q,A,\cdot)$ be a weakly confluent semiautomaton and let $\mathcal A^f=(Q,B,\cdot^f)$ be its $f$\=/renaming. Taking $q\in Q$ and $u,v\in B^*$, we have $q\cdot^f u =q\cdot f(u)$ and $q\cdot^f v=q\cdot f(v)$. Since $\mathcal A = (Q,A,\cdot)$ is weakly confluent, there is $w\in A^*$ such that $q\cdot f(u) \cdot w= q\cdot f(v) \cdot w$. Now we can consider a preimage $w'\in B^*$ of the word $w$ under the surjective homomorphism $f$. Finally, we can conclude that $(q\cdot^f u) \cdot^f w'=(q\cdot^f v)\cdot^f w'$. \qed\end{proof} \subsection{Automata for Finite Languages} Finite languages do not form a variety, because their complements, the so\=/called {\em cofinite languages}, are not finite. Moreover, the class of all finite languages is not closed under taking preimages under all homomorphisms. However, one can restrict the category of homomorphisms to the so\=/called {\em non\=/erasing} ones: we say that a homomorphism $f: B^* \rightarrow A^*$ is {\em non\=/erasing} if $f^{-1} (\lambda)=\{\lambda\}$. The class of all non\=/erasing homomorphisms is denoted by $\mathcal{C}_{ne}$. Note that $\mathcal{C}_{ne}$\=/varieties of languages correspond to $+$\=/varieties of languages (see~\cite{straubing}). We use certain technical terminology for states of a given semiautomaton $(Q,A,\cdot)$: we say that a state $q\in Q$ {\em has a cycle} if there is a word $u\in A^+$ such that $q\cdot u=q$, and we say that the state $q$ is {\em absorbing} if for each letter $a\in A$ we have $q\cdot a = q$. \begin{definition} We call a semiautomaton $(Q,A,\cdot)$ {\em strongly acyclic} if each state which has a cycle is absorbing. \end{definition} It is evident that every strongly acyclic semiautomaton is acyclic. \begin{proposition} (i) The class of all strongly acyclic semiautomata forms a $\mathcal{C}_{ne}$\=/variety. (ii) The class of all strongly acyclic confluent semiautomata forms a $\mathcal{C}_{ne}$\=/variety. \end{proposition} \begin{proof} (i) It is easy to see that the class $\mathbb V$ of all strongly acyclic semiautomata is closed under finite products, disjoint unions and subsemiautomata. Also closure under $f$\=/renamings is clear whenever we consider a non\=/erasing homomorphism $f: B^* \rightarrow A^*$. Finally, one can prove that the class $\mathbb V$ is closed under homomorphisms in a similar way as in the case of counter\=/free semiautomata. (ii) By the first part we know that all strongly acyclic semiautomata form a $\mathcal{C}_{ne}$\=/variety. We also know that all acyclic confluent semiautomata form a variety of semiautomata, and hence they also form a $\mathcal{C}_{ne}$\=/variety of semiautomata. Therefore all strongly acyclic confluent semiautomata, as an intersection of two $\mathcal{C}_{ne}$\=/varieties, form a $\mathcal{C}_{ne}$\=/variety again. \qed\end{proof} \begin{proposition}\label{p:co-finite-languages} The $\mathcal{C}_{ne}$\=/variety of all finite and all cofinite languages corresponds to the $\mathcal{C}_{ne}$\=/variety of all strongly acyclic confluent semiautomata.
\end{proposition} \begin{proof} First, consider an arbitrary finite language $L\subseteq A^*$ and its canonical automaton $(D_L,A,\cdot,L,F_L)$. Since $L$ is finite, there is only one state in $D_L$ which has a cycle, namely the state $\emptyset$. Moreover, this state is absorbing and it is reachable from all other states, because quotients of finite languages are finite. Therefore the semiautomaton $(D_L,A,\cdot)$ is strongly acyclic and confluent at the same time. Of course, if we start with the complement of a finite language $L$, the canonical semiautomaton is the same as for $L$. Conversely, let $\mathcal A=(Q,A,\cdot)$ be a strongly acyclic confluent semiautomaton. For an arbitrary state $q\in Q$, we take some path starting in $q$ of length $|Q|$. On that path there is a state $q'$ which has a cycle, i.e. $q\cdot u=q'=q'\cdot v$ for some $u\in A^*$, $v\in A^+$. Since $\mathcal A$ is strongly acyclic, $q'$ is an absorbing state. Since $\mathcal A$ is confluent, there is at most one such absorbing state $q'$ reachable from $q$. Now we choose $i\in Q$ and $F\subseteq Q$ arbitrarily and we consider the automaton $(Q,A,\cdot,i,F)$. By the previous considerations there is just one state reachable from $i$ which has a cycle. We denote it by $f$. Note that it is an absorbing state. One can see that, for each state $q\not= f$, the set $\{\, u \mid i\cdot u =q\, \}$ is finite and therefore $\{ u \mid i\cdot u=f\}$ is the complement of a finite language. Thus, depending on whether $f\in F$, the language recognized by $(Q,A,\cdot,i,F)$ is cofinite or finite. \qed\end{proof} Naturally, one can try to describe the corresponding $\mathcal{C}_{ne}$\=/variety of languages for the $\mathcal{C}_{ne}$\=/variety of strongly acyclic semiautomata. Following Pin~\cite[Section 5.3]{pin-handbook}, we call $L\subseteq A^*$ a {\em prefix\=/testable} language if $L$ is a finite union of a finite language and languages of the form $uA^*$, with $u\in A^*$. One can prove the following statement in a similar way as for Proposition~\ref{p:co-finite-languages}. Note that one can also find a characterization via syntactic semigroups in~\cite[Section 5.3]{pin-handbook}. \begin{proposition} The $\mathcal{C}_{ne}$\=/variety of all prefix\=/testable languages corresponds to the $\mathcal{C}_{ne}$\=/variety of all strongly acyclic semiautomata. \end{proposition} The characterization from Proposition~\ref{p:co-finite-languages} can be modified for the positive $\mathcal{C}_{ne}$\=/variety of finite languages $\mathcal F$, where $\mathcal F(A)$ consists of $A^*$ and all finite languages over $A$. To make the characterizing condition more readable, for a given strongly acyclic confluent semiautomaton and its state $q$, we call the uniquely determined state $q'$, mentioned in the proof of Proposition~\ref{p:co-finite-languages}, the {\em main follower} of the state $q$. \begin{proposition} The positive $\mathcal{C}_{ne}$\=/variety of all finite languages corresponds to the positive $\mathcal{C}_{ne}$\=/variety of all strongly acyclic confluent ordered semiautomata satisfying $q'\le q$ for each state $q$ and its main follower $q'$. \end{proposition} \begin{proof} By the first paragraph of the proof of Proposition~\ref{p:co-finite-languages}, every canonical ordered automaton of a finite language satisfies the additional condition $q'\le q$ for each state $q$, because the main follower of $q$ is $\emptyset$. Similarly, in the second part of the proof, let $f$ be the main follower of $i$.
Since it is also the main follower of every state reachable from the initial state $i$, we see that $f$ is the minimal state among all states reachable from $i$. Now if $f$ is final, then all states are final, because the final states form an upward closed subset. Consequently the language accepted by the ordered automaton is $A^*$ in this case. If $f$ is not final, then the language accepted by the ordered automaton is finite. \qed\end{proof} Note that all conditions on semiautomata discussed in this subsection can be checked in polynomial time. \subsection{Automata for Languages Closed under Inserting Segments} \label{subs:inserting} We know that a regular language $L\subseteq A^*$ is positive piecewise testable if and only if, for every pair of words $u,w\in A^*$ such that $uw\in L$ and every letter $a\in A$, we have $uaw\in L$. So, we can add an arbitrary letter into each word from the language (at an arbitrary position) and the resulting word stays in the language. Now we consider an analogue where we insert into the word not just a single letter but a word of a given fixed length. The length of a word $v\in A^*$ is denoted by $|v|$ as usual. For each positive integer $n$, we consider the following property of a given regular language $L\subseteq A^*$: $$\text{ for every } u,v,w \in A^*, \text{ if } uw\in L \text{ and } |v|=n, \text{ then } uvw \in L \, . $$ We say that $L$ is closed under $n$\=/{\em insertions} whenever $L$ satisfies this property. We show that the class of all regular languages closed under $n$\=/insertions forms a positive $\mathcal C$\=/variety of languages by describing the corresponding positive $\mathcal C$\=/variety of ordered semiautomata. First, we need to describe an appropriate category of homomorphisms. Let $\mathcal C_{lm}$ be the category consisting of the so\=/called {\em length\=/multiplying} (see~\cite{straubing}) homomorphisms: $f\in \mathcal C_{lm} (B^*, A^*)$ if there exists a positive integer $k$ such that $|f(b)|=k$ for every $b\in B$. \begin{definition} Let $n$ be a positive integer and $\mathcal Q=(Q,A,\cdot,\le)$ be an ordered semiautomaton. We say that $\mathcal Q$ has $n$\=/extensive actions if, for every $q\in Q$ and $u\in A^*$ such that $|u|=n$, we have $q\le q\cdot u $. \end{definition} Note that the ordered semiautomata from Subsection~\ref{s:ord-increasing} are exactly the ordered semiautomata which have $1$\=/extensive actions. Of course, these ordered semiautomata have $n$\=/extensive actions for every $n$. More generally, if $n$ divides $m$ and an ordered semiautomaton $\mathcal Q$ has $n$\=/extensive actions, then $\mathcal Q$ has $m$\=/extensive actions. \begin{proposition} Let $n$ be a positive integer. The class of all ordered semiautomata which have $n$\=/extensive actions forms a positive $\mathcal C_{lm}$\=/variety of ordered semiautomata. The corresponding positive $\mathcal C_{lm}$\=/variety of languages consists of all regular languages closed under $n$\=/insertions. \end{proposition} \begin{proof} The first part of the statement is easy to show. To establish the second part, let $L$ be a regular language over $A$ closed under $n$\=/insertions. For $u\in A^*$, we consider the state $K=u^{-1}L$ in the canonical ordered semiautomaton of $L$. Now we show that for every $v\in A^*$ such that $|v|=n$, we have $K\subseteq v^{-1}K$. Indeed, if $w\in K=u^{-1}L$ then $uw\in L$ and since $L$ is closed under $n$\=/insertions we get $uvw\in L$. Hence $vw\in K=u^{-1}L$, which implies $w\in v^{-1}K$.
Therefore the canonical ordered semiautomaton of $L$ has $n$\=/extensive actions. Conversely, let $L$ be recognized by $\mathcal Q=(Q,A,\cdot,\le,i,F)$ with $n$\=/extensive actions. For every $u,v,w \in A^*$ such that $uw\in L$ and $|v|=n$, we can consider the state $q=i\cdot u$ in $\mathcal Q$. Since $\mathcal Q$ has $n$\=/extensive actions, we have $q\cdot v \ge q$. Hence $i\cdot uvw =q\cdot vw\ \ge q\cdot w =i\cdot uw \in F$ and, since $F$ is upward closed, we can conclude that $uvw\in L$. Thus $L$ is closed under $n$\=/insertions. \qed\end{proof} For a fixed $n$, it is decidable in polynomial time whether a given ordered semiautomaton has $n$\=/extensive actions, because the relation $q\le q\cdot u $ has to be checked only for polynomially many words $u$. \section{Membership Problem for $\mathcal C$\=/Varieties of Semiautomata} \label{s:membership} In the previous section, the membership problem for (positive) $\mathcal C$\=/varieties of semiautomata was always solved by an ad hoc argument. Here we discuss whether it is possible to give a general result in this direction. For that purpose, recall that an $\omega$\=/identity is a pair of $\omega$\=/terms, which are constructed from variables by (repeated) successive application of concatenation and the unary operation $u \mapsto u^\omega$. In a particular monoid, the interpretation of this unary operation assigns to each element $s$ its uniquely determined power which is idempotent. In the case of $\mathcal C_{all}$ consisting of all homomorphisms, we mention Theorem 2.19 from~\cite{straubing-weil-handbook} which states the following result: if the corresponding pseudovariety of monoids is defined by a finite set of $\omega$\=/identities, then the membership problem of the corresponding variety of languages is decidable by a polynomial space algorithm in the size of the input automaton. Thus, Theorem 2.19 slightly extends the case when the pseudovariety of monoids is defined by a finite set of identities. The algorithm checks the defining $\omega$\=/identities in the syntactic monoid $M_L$ of a language $L$ and uses the basic fact that $M_L$ is the transition monoid of the minimal automaton of $L$. This extension is possible, because the unary operation $(\phantom{u})^\omega$ can be effectively computed from the input automaton. We should mention that checking a fixed identity in an input semiautomaton can be done in a better way. Such an NL algorithm (folklore in the theory) guesses a pair of finite sequences of states, one for each side of a given identity $u=v$, which are visited while reading the word $u$ (respectively $v$) letter by letter. These sequences have the same first states and distinct last states. Then the algorithm checks whether, for each variable, there is a word whose action transforms all the states in the sequences in the required way at every occurrence of that variable. If, for every used variable, there is such a word, we obtain a counterexample disproving the identity $u=v$. Whichever algorithm is used, we immediately get the generalization to the case of positive varieties of languages, because checking inequalities can be done in the same manner as checking identities. However, we want to use the mentioned algorithms to obtain a corresponding result for positive $\mathcal C$\=/varieties of ordered semiautomata for the categories used in this paper. For such a result we need the following formal definition.
An $\omega$\=/inequality $u\le v$ \emph{holds in an ordered semiautomaton $\mathcal O = (Q,A,\cdot, \le)$ with respect to a category $\mathcal C$} if, for every $f\in \mathcal C (X^*,A^*)$ with $X$ being the set of variables occurring in $uv$, and for every $p\in Q$, we have $p\cdot f(u)\le p\cdot f(v)$. Here $f(u)$ is equal to $f(u')$, where $u'$ is the word obtained from $u$ by replacing all occurrences of $\omega$ by an exponent $n$ satisfying the equality $s^\omega=s^n$ for every element $s$ of the transition monoid of $\mathcal O$. \begin{theorem}\label{t:membership-problem} Let $\mathcal O = (Q,A,\cdot, \le)$ be an ordered semiautomaton, let $u\le v$ be an $\omega$\=/inequality and $\mathcal C$ be one of the categories $\mathcal C_{ne}$, $\mathcal C_{l}$, $\mathcal C_{s}$ and $\mathcal C_{lm}$. The problem whether $u\le v$ holds in $\mathcal O$ with respect to $\mathcal C$ is decidable. \end{theorem} \begin{proof} The result is a consequence of the following propositions. \begin{proposition} Let $\mathcal O = (Q,A,\cdot, \le)$ be an ordered semiautomaton, $u\le v$ be an $\omega$\=/inequality and $\mathcal C$ be one of the categories $\mathcal C_{ne}$, $\mathcal C_{l}$ and $\mathcal C_{s}$. The problem of deciding whether $u\le v$ holds in $\mathcal O$ with respect to $\mathcal C$ can be solved by a polynomial space algorithm. \end{proposition} \begin{proof} First of all, we prove the statement formally for $\mathcal C=\mathcal C_{all}$. We start with the case when the $\omega$ operation is not used. It is mentioned in Section~\ref{s:membership} that such an algorithm is folklore in the theory. Let $y_1 \ldots y_s \le z_1 \ldots z_t$ be an inequality, where $y_1, \dots, y_s, z_1, \dots, z_t$ are variables from $X$. Recall that this inequality holds in the transition ordered monoid of~$\mathcal{O}$ if and only if, for every homomorphism $f: X^* \rightarrow A^*$, the inequality of transformations $f(y_1) \circ \dots \circ f(y_s) \le f(z_1) \circ \dots \circ f(z_t)$ is satisfied. This requirement can be reformulated as the inequality $q \cdot f(y_1) \cdots f(y_s) \le q \cdot f(z_1) \cdots f(z_t)$ of states of~$\mathcal{O}$, for every state $q \in Q$ and every homomorphism $f: X^* \rightarrow A^*$. This means that the inequality is not valid if and only if there exist such $f$ and states $p_0, p_1, \dots, p_s, q_0, q_1, \dots, q_t \in Q$, with $p_0 = q_0$ and $p_s \not\le q_t$, which satisfy $p_{i-1} \cdot f(y_i) = p_i$ and $q_{j-1} \cdot f(z_j) = q_j$ for every $i \in \{1,\dots,s\}$ and $j \in \{1,\dots,t\}$. Since the numbers $s$ and $t$ are constants, one can non\=/deterministically choose all these states, and then decide whether for this choice of states the required homomorphism $f$ exists. For every variable~$x$, denote by $I_x$ the set of all $i \in \{1,\dots,s\}$ such that $y_i = x$, and by $J_x$ the set of all $j \in \{1,\dots,t\}$ such that $z_j = x$. In order to decide the existence of~$f$, one has to check whether for every variable~$x$ there exists a word $f(x) \in A^*$ such that $p_{i-1} \cdot f(x) = p_i$ for every $i \in I_x$ and $q_{j-1} \cdot f(x) = q_j$ for every $j \in J_x$. However, the existence of such a word $f(x)$ can be expressed as a condition on the product automaton of $|I_x| + |J_x|$ copies of the automaton~$\mathcal{O}$; namely, it is equivalent to the reachability of the state with components $p_i$, for $i \in I_x$, and $q_j$, for $j \in J_x$, from the state with components $p_{i-1}$, for $i \in I_x$, and $q_{j-1}$, for $j \in J_x$.
Recall that $|I_x| + |J_x|$ is a constant. Now assume that $u$ and $v$ are $\omega$\=/terms. We guess the states as in the previous simple case, but we do this inductively with respect to the structure of the $\omega$\=/terms $u$ and $v$ from the top down. In this way we obtain a more complicated system of states than the two sequences used in the case of (linear) words. To explain the inductive construction, assume that we have guessed states $p$ and $q$ for a certain $\omega$\=/subterm $w$ with the assumption that $p\cdot f(w)=q$. If $w=w_1w_2$ for $\omega$\=/terms $w_1$ and $w_2$, then we simply guess a state $r$ and assume that $p\cdot f(w_1)=r$ and $r\cdot f(w_2)=q$. The case $w=z^\omega$, with a subterm $z$, is more complicated. It is well known that, for every element $s$ in the transition monoid of the given automaton, the element $s^\omega$ is equal to $s^n$ for some $n\le |Q|$. In particular, $s^n\cdot s^n=s^n$ holds for this $n$. So, we guess $n\le |Q| $ and states $r_0,r_1,r_2,\dots, r_{2n}$ such that $r_0=p$, $r_n=r_{2n}=q$ and we assume that $r_{i-1}\cdot f(z)= r_{i}$ for every $i=1,\dots, 2n$. In this way, when we decompose all subterms, we obtain a system of states equipped with assumptions of the form $p\cdot f(x)=q$, where $p$ and $q$ are states and $x$ is a variable. Since the $\omega$\=/terms $u$ and $v$ are not part of the input, there are only constantly many steps of the algorithm decomposing the terms $u$ and $v$. Thus, at the end, the number of conditions is polynomial with respect to the size of the input automaton. (In fact, the number of conditions can be bounded by the number of all states in $Q$, which is linear.) The final part of the algorithm is the same: we just check, for each variable $x$, whether it is possible to satisfy all the conditions concerning $f(x)$ at the same time. Note that the number of conditions was constant in the case of an identity $u=v$ in the first part, which gives a logarithmic space algorithm in that case. Now we are ready to discuss the other categories, where we search for $f\in \mathcal C(X^*,A^*)$. The case $\mathcal C=\mathcal C_{ne}$ is trivial: when we test reachability in the product of a certain number of copies of $\mathcal O$, we simply look for a non\=/empty path in the graph. The case $\mathcal C=\mathcal C_{l}$ is even easier, because we test reachability in one step. From another point of view, this case is easy because there are only polynomially many homomorphisms in $\mathcal C_{l}(X^*,A^*)$ for fixed $X$ and $A$, where only $A$ is part of the input. The case $\mathcal C=\mathcal C_{s}$ is also easy. We are looking for $f\in \mathcal C(X^*,A^*)$ such that $A\subseteq f(X)$. So, we can additionally guess, for each letter $a\in A$, a variable $x\in X$ such that $f(x)=a$. We conclude with the remark that, by Savitch's theorem, nondeterministic polynomial space is equivalent to deterministic polynomial space. \qed \end{proof} \begin{proposition} Let $\mathcal O = (Q,A,\cdot, \le)$ be an ordered semiautomaton and $u\le v$ be an $\omega$\=/inequality. The problem whether $u\le v$ holds in $\mathcal O$ with respect to $\mathcal C_{lm}$ is decidable. \end{proposition} \begin{proof} We proceed as in the general case up to the point where the existence of $f(x)$ is discussed. Before deciding whether there is $f(x)\in A^*$ satisfying all conditions, we first complete the conditions in such a way that, for every $q\in Q$, a condition on $q\cdot f(x)$ is present.
This is done by guessing the missing values $q\cdot f(x)$ for all $q$ and $x$. Only now do we test whether there are words $f(x)$ satisfying the conditions, and we continue only if such words exist. Next we describe all of them. This is possible because, for every $x$, we know how $f(x)$ transforms the semiautomaton $\mathcal O$. So, the language of all potential words $f(x)$ is a regular language recognized by the transition monoid of the semiautomaton $\mathcal O$. We denote it by $L_x$. Furthermore, we are able to compute a regular expression $r_x$ describing $L_x$. We need to decide whether for each variable $x$ there is a word $w_x\in L_x$ such that all the words $w_x$ have the same length. Thus, we need to know all possible lengths of words in $L_x$. For this purpose we consider the unique literal mapping $\psi : A^*\rightarrow \{a\}^*$, $\psi(A)=\{a\}$. Clearly, the language $\psi(L_x)$ is regular, because it is described by the regular expression $\ol{r_x}$, which is obtained from $r_x$ by replacing every letter of the alphabet $A$ by the letter $a$. Moreover, there is a word $w_x\in L_x$ of length $k$ if and only if $a^k\in \psi(L_x)$. So, the existence of an integer $k$ such that $L_x\cap A^k\not=\emptyset$ for every $x$ is equivalent to $\bigcap_{x\in X} \psi(L_x) \not= \emptyset$. The latter condition is equivalent to the non\=/emptiness of the language given by the generalized regular expression $\bigcap_{x\in X} \ol{r_x}$. So, one can decide this question. \qed \end{proof} We did not discuss the complexity of this algorithm, because we do not see how to construct the regular expression $\bigcap_{x\in X} \ol{r_x}$ efficiently. \end{proof} \section{Further Remarks} Finally, we mention that one can extend the constructions of this paper in at least two natural directions. First, the theory of tree languages is a field where many fundamental ideas from the theory of deterministic automata were successfully generalized. The second direction concerns the recent notion of biautomata (see~\cite{biautomaty} and \cite{ncma13-holzer}), which is based on considering two\=/sided quotients instead of left quotients only. In both cases one can try to apply the previous constructions and consider varieties of (semi)automata. Some papers in this direction already exist~\cite{esik-ivan}.
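As a small practical complement, we sketch how two of the polynomial\=/time tests mentioned in the previous sections might be implemented. The sketch below is ours and not part of the cited results; the encoding of a semiautomaton as a transition dictionary, and all function names, are illustrative choices. The first test uses the equivalence of acyclicity and extensivity noted above; the second uses Proposition~\ref{l:synchronizing} together with the pairwise characterization of synchronizing automata from~\cite{volkov}.
\begin{verbatim}
# A sketch (ours, not from the cited papers) of two polynomial-time
# tests. A semiautomaton (Q, A, .) is encoded as a set of (hashable)
# states Q, an alphabet A, and a dict delta mapping (q, a) to q . a.
from itertools import product

def is_acyclic(Q, A, delta):
    # Acyclic iff extensive iff the digraph of non-loop transitions
    # q -> q.a (with q.a != q) has no directed cycle; checked here by
    # iterated removal of sources (Kahn's topological sorting).
    succ = {q: {delta[(q, a)] for a in A if delta[(q, a)] != q}
            for q in Q}
    indeg = {q: 0 for q in Q}
    for q in Q:
        for r in succ[q]:
            indeg[r] += 1
    stack = [q for q in Q if indeg[q] == 0]
    seen = 0
    while stack:
        q = stack.pop()
        seen += 1
        for r in succ[q]:
            indeg[r] -= 1
            if indeg[r] == 0:
                stack.append(r)
    return seen == len(Q)   # all states ordered <=> no proper cycle

def is_weakly_confluent(Q, A, delta):
    # Weakly confluent iff every connected component is synchronizing,
    # i.e. iff every pair of states in one component can be merged.
    # Mergeable pairs: backward closure from the diagonal.
    mergeable = {(q, q) for q in Q}
    changed = True
    while changed:
        changed = False
        for p, q in product(Q, Q):
            if (p, q) not in mergeable and any(
                    (delta[(p, a)], delta[(q, a)]) in mergeable
                    for a in A):
                mergeable.add((p, q))
                changed = True
    # connected components of the underlying undirected graph
    parent = {q: q for q in Q}
    def find(q):
        while parent[q] != q:
            q = parent[q]
        return q
    for q, a in product(Q, A):
        parent[find(q)] = find(delta[(q, a)])
    return all(find(p) != find(q) or (p, q) in mergeable
               for p, q in product(Q, Q))
\end{verbatim}
For example, for $Q=\{0,1\}$, $A=\{a\}$ and the transitions $0\cdot a=1$, $1\cdot a=1$, both tests return true. Both functions clearly run in time polynomial in $|Q|$ and $|A|$.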
\section{Introduction} Since the compilation of the first large samples of galaxy clusters almost 50 years ago (Zwicky 1961; Abell 1958), clusters have been used as fundamental probes of the effect of environment on the evolution of galaxies. Over this time, our understanding of this phenomenon has grown significantly, and a basic picture of the formation and evolution of cluster galaxies between 0 $< z <$ 1 has emerged. Studies of the stellar populations of cluster galaxies via the fundamental plane (e.g., van Dokkum et al. 1998; van Dokkum \& Stanford 2003; Holden et al. 2005) and the evolution of the cluster color-magnitude relation (e.g., Ellis et al. 1997; Stanford et al. 1998; Gladders et al. 1998; Blakeslee et al. 2003; Holden et al. 2004; Mei et al. 2006; Homeier et al. 2006; Tran et al. 2007) have shown that the majority of stars in cluster galaxies are formed at high redshift ($z >$ 2) and that most of the evolution thereafter is the passive aging of these stellar populations. Studies of the evolution of the near-infrared (NIR) luminosity functions (LFs) of clusters have shown that not only are the stellar populations old, but also that the bulk of the stellar mass is already assembled into massive galaxies at high redshift (e.g., De Propris et al. 1999; Toft et al. 2003; Strazullo et al. 2006; Lin et al. 2006; Muzzin et al. 2007a). Furthermore, it appears that the cluster scaling relations seen locally ($z <$ 0.1, e.g., Lin et al. 2004; Rines et al. 2004; Lin et al. 2003), such as the Halo Occupation Distribution, Mass-to-Light ratio, and the galaxy number/luminosity density profile, are already in place by at least $z \sim$ 0.5 (e.g., Muzzin et al. 2007b; Lin et al. 2006). \newline\indent These studies suggest a picture where the formation of the stars in cluster galaxies, as well as the assembly of the galaxies themselves, occurs at a higher redshift than has yet been studied in detail, and that, other than the passive aging of the stellar populations, clusters and cluster galaxies have changed relatively little since $z \sim$ 1. This picture appears to be a reasonable zeroth-order description of the evolution of cluster galaxies; however, there are still properties of the cluster population which cannot be explained within this context. In particular, there are significant changes in the morphology (Dressler et al. 1997; Postman et al. 2005; Smith et al. 2005), color (e.g., Butcher \& Oemler 1984; Rakos \& Schombert 1995; Smail et al. 1998; Ellingson et al. 2001; Margoniner et al. 2001; Loh et al. 2008) and star-formation properties (e.g., Balogh et al. 1999; Dressler et al. 1999; Poggianti et al. 1999; Dressler et al. 2004; Tran et al. 2005a; Poggianti et al. 2006, although see Kodama et al. 2004) of cluster galaxies since $z \sim$ 1. The fraction of blue, star forming galaxies increases from almost zero at $z =$ 0 to as much as 50\% at $z \sim 0.5$ (the so-called Butcher-Oemler Effect), and correspondingly, the fraction of S0 galaxies in clusters drops by a factor of 2-3, with a similar increase in the number of spiral/irregular galaxies over the same redshift range (Dressler et al. 1997). Naively, these results suggest that gas-rich, star-forming galaxies at high redshift have their star formation truncated by the cluster environment at moderate redshift and become the dominant S0 population seen locally. How such a transformation occurs, and how it avoids leaving a notable imprint on the stellar populations, is still not well-understood.
\newline\indent Citing an abundance of post-starburst (k+a) galaxies in clusters at $z \sim$ 0.4, Poggianti et al. (1999) and Dressler et al. (2004) suggested that there may be an abundance of dusty starburst galaxies in clusters at moderate redshift, and that the dusty starburst and k+a galaxies may represent the intermediate stages between regular star forming late-type galaxies and S0 galaxies (e.g., Shioya et al. 2004; Bekki \& Couch 2003). In particular, they suggested that the cluster e(a)\footnote{An e(a) galaxy is defined as a galaxy with EW([OII]) $<$ -5\AA$ $ and EW(H$\delta$) $>$ 4\AA$ $ by Dressler et al. (1999). These are emission line galaxies with a strong A star component to their spectrum suggesting a recent, possibly obscured, burst of star formation.} galaxies would be the best candidates for dusty starburst galaxies because their inferred star formation rates appear larger from H$\alpha$ emission than from [OII] emission. If the cluster environment excites a dusty starburst via harassment, tidal interaction, or ram-pressure stripping, then this may quickly deplete a star forming galaxy of its gas, transforming it first into a k+a galaxy, and then leaving it an S0. More detailed work on two $z \sim$ 0.5 clusters by Moran et al. (2005) also showed an abundance of starbursting galaxies conspicuously near the cluster virial radius, suggesting an environmental origin for their ``rejuvenation''. $ISO$ observations of relatively nearby clusters have detected significant amounts of dust-obscured star formation (e.g., Fadda et al. 2000; Duc et al. 2002; Biviano et al. 2004; Coia et al. 2005), and this has recently been confirmed at even higher redshift ($z =$ 0.2 - 0.8) by $Spitzer$ observations (Geach et al. 2006; Marcillac et al. 2007; Bai et al. 2007; Fadda et al. 2007; Saintonge et al. 2008; Dressler et al. 2008). Despite this, it is currently unclear whether there is a population of dusty starbursts sufficiently abundant to be the progenitor population of the large number of cluster k+a galaxies. \newline\indent Alternatively, there is evidence from other cluster samples that the S0 population may simply be the result of the truncation of star formation in infalling late-type galaxies via gas strangulation (e.g., Abraham et al. 1996; Balogh et al. 1999; Treu et al. 2003; Moran et al. 2006), and that no accompanying starburst occurs. Most likely, the star formation and morphology of galaxies are transformed both ``actively'' (as in a starburst triggered by merging/harassment/tidal forces) and ``passively'' (from gas strangulation or ram-pressure stripping), and the magnitude of each effect varies significantly from cluster to cluster and possibly by epoch, which may explain why studies of small numbers of clusters have found discrepant results. Interestingly, both processes can be active within massive clusters, as was demonstrated by Cortese et al. (2007), who found two interesting galaxies in Abell 1689 and Abell 2667, one of which seems to be undergoing gas strangulation and ram-pressure stripping, while the other is experiencing an induced starburst. There is evidence that galaxies in clusters that are less dynamically relaxed have larger star formation rates (e.g., Owen et al. 1999; Metevier et al. 2000; Moss \& Whittle 2000; Owen et al. 2005; Moran et al. 2005; Coia et al. 2005, and numerous others) and that the accretion of large substructures induces starbursts from harassment and tidal forces.
\newline\indent The most obvious way to understand whether dusty starbursts are important in the evolution of cluster galaxies is to observe their abundances directly in the mid-infrared (MIR). In particular, differences in the MIR LFs of the cluster and field environments can be used to determine if dusty starbursts are more common in the cluster environment. If so, it would suggest that environmental processes may be responsible for triggering these events. \newline\indent The InfraRed Array Camera (IRAC) onboard $Spitzer$ provides a unique tool for studying this problem. IRAC images in 4 bands simultaneously (3.6$\micron$, 4.5$\micron$, 5.8$\micron$, 8.0$\micron$), and this is particularly advantageous because 3.6$\micron$ and 4.5$\micron$ observations are a good proxy for the stellar mass of cluster galaxies between $0 < z < 1$, and 5.8$\micron$ and 8.0$\micron$ are sensitive to emission from warm dust (i.e., from dusty star forming regions) over the same redshift range. In particular, the Polycyclic Aromatic Hydrocarbons (PAHs) emit strong line emission at rest frame 3.3$\micron$, 6.2$\micron$, 7.7$\micron$, 8.6$\micron$, and 11.3$\micron$ (e.g., Gillett et al. 1973; Willner et al. 1977). These features, in addition to the warm dust continuum, are sensitive indicators of dusty star formation, and several studies have already shown a good correlation between 8.0$\micron$ flux and star formation rate (SFR\footnote{Although there is a direct correlation between 8$\micron$ flux and SFR, the scatter in the correlation is approximately a factor of 2 for metal rich galaxies in the local universe, and metal poor galaxies can deviate by as much as a factor of 50 (e.g., Calzetti et al. 2007). Because of the large scatter and metallicity dependence, throughout this paper we do not use the 8$\micron$ data to quantitatively measure SFRs. Instead, we use the presence of enhanced 8$\micron$ flux as a qualitative indicator of increased dusty star formation.}; e.g., Calzetti et al. 2005; Wu et al. 2005; Calzetti et al. 2007). Therefore, examining the suite of IRAC cluster LFs at redshifts 0 $< z <$ 1 shows both the evolution of the majority of the stellar mass in cluster galaxies and the evolution of dusty star formation in the same galaxies. \newline\indent The obvious approach to measuring the presence of dusty star formation in clusters is to observe a handful of ``canonical'' galaxy clusters with IRAC. However, given that determining the LF from a single cluster suffers significantly from Poisson noise and, perhaps most importantly, is not necessarily representative of the average cluster population at a given mass/epoch, a better approach is to stack large numbers of clusters in order to improve the statistical errors and avoid peculiarities associated with individual clusters. This approach requires targeted observations of numerous clusters, which is time-consuming compared to other alternatives. For example, large-area $Spitzer$ surveys such as the 50 deg$^2$ $Spitzer$ Wide-area Infrared Extragalactic Survey (SWIRE\footnote{SWIRE data are publicly available at http://swire.ipac.caltech.edu/swire/}, Lonsdale et al. 2003), the 8.5 deg$^2$ IRAC Shallow Survey (Eisenhardt et al. 2004), and the 3.8 deg$^2$ $Spitzer$ First Look Survey (FLS\footnote{The FLS data are publicly available at http://ssc.spitzer.caltech.edu/fls/}, Lacy et al. 2005) are now, or will soon be, publicly available, and these fields already contain significant amounts of optical photometry.
These wide optical-IRAC datasets can be employed to find clusters in the survey area itself using optical cluster detection methods such as the cluster red sequence (CRS) technique (Gladders \& Yee 2000, hereafter GY00) or photometric redshifts (e.g., Eisenhardt et al. 2008; Brodwin et al. 2006). Subsequently, the IRAC survey data can be used to study the LFs of clusters at a much larger range of masses and redshifts than could be reasonably followed up by $Spitzer$. Furthermore, these surveys also provide panoramic imaging of clusters out to many virial radii, something that has thus far rarely been attempted because it is time-consuming. \newline\indent Finding clusters with the CRS algorithm is relatively straightforward with the ancillary data available from these surveys. The technique exploits the fact that the cluster population is dominated by early type galaxies, and that these galaxies form a tight red sequence in color-magnitude space. If two filters which span the 4000\AA$ $ break are used to construct color-magnitude diagrams, early types are always the brightest, reddest galaxies at any redshift (e.g., GY00) and therefore provide significant contrast with the field. The CRS technique is well-tested and provides photometric redshifts accurate to $\sim$ 5\% (Gilbank et al. 2007a; Blindert et al. 2004) as well as a low false-positive rate ($<$5\%, e.g., Gilbank et al. 2007a; Blindert et al. 2004; Gladders \& Yee 2005). The method has been used for the 100 deg$^2$ Red-Sequence Cluster Survey (RCS-1, Gladders \& Yee 2005) and is also being used for the next generation, 1000 deg$^2$ RCS-2 survey (Yee et al. 2007). Variations of the red sequence method have also been used to detect clusters in the Sloan Digital Sky Survey (the ``BCGmax'' algorithm, Koester et al. 2007; Bahcall et al. 2003) as well as in the fields of X-ray surveys (e.g., Gilbank et al. 2004; Barkhouse et al. 2006). \newline\indent In this paper we combine the $Spitzer$ FLS R$_{c}$-band and 3.6$\micron$ photometry and use it to detect clusters with the CRS algorithm. Given the depth of the data, and that the R$_{c}$ - 3.6$\micron$ filter combination spans the rest-frame 4000\AA$ $ break to $z >$ 1, we are capable of detecting a richness-limited sample of clusters out to $z \sim$ 1. Using the sample of clusters discovered in the FLS, we compute the 3.6$\micron$, 4.5$\micron$, 5.8$\micron$, and 8.0$\micron$ LFs of clusters at 0.1 $< z <$ 1.0 and study the role of dusty star formation in cluster galaxy evolution. A second paper on the abundance of dusty starburst galaxies detected at 24$\micron$ in the same clusters using the FLS MIPS data is in preparation (Muzzin et al. 2008). \newline\indent The structure of this paper is as follows. In \S 2 we give a brief overview of the optical, IRAC, and spectroscopic data used in the paper. Section 3 describes the cluster-finding algorithm used to detect clusters, and \S 4 contains the FLS cluster catalogue and a basic description of its properties. In \S 5 we present the IRAC cluster LFs, and \S 6 contains a discussion of these results as well as a comparison of the cluster and field LFs. We conclude with a summary in \S 7. Throughout this paper we assume an $\Omega_{m}$ = 0.3, $\Omega_{\Lambda}$ = 0.7, H$_{0}$ = 70 km s$^{-1}$ Mpc$^{-1}$ cosmology. All magnitudes are on the Vega system.
\section{Data Set} \subsection{{\it Spitzer} IRAC Data and Photometry} The IRAC imaging data for this project were obtained as part of the publicly available {\it Spitzer} First Look Survey (FLS; see Lacy et al. 2005 for details of the data acquisition and reduction). The FLS was the first science survey program undertaken after the telescope's in-orbit checkout was completed. It covers 3.8 square degrees and has imaging in the four IRAC bandpasses (3.6$\micron$, 4.5$\micron$, 5.8$\micron$, 8.0$\micron$). The FLS is a shallow survey with a total integration time of only 60 seconds per pixel. Because IRAC images all four channels simultaneously, the total integration time is identical in each channel. The resulting 5$\sigma$ limiting flux densities are 20, 25, 100, and 100 $\mu$Jy in the 3.6$\micron$, 4.5$\micron$, 5.8$\micron$, and 8.0$\micron$ bandpasses, respectively. These flux densities correspond to Vega magnitudes of 18.0, 17.2, 15.2, and 14.6 mag, respectively. The 50\% completeness limits for the 4 channels are 18.5, 18.0, 16.0, and 15.4 mag, and hereafter we use these limits for the cluster finding algorithm (\S 3) and for computing the cluster LFs (\S 5). The data were corrected for completeness using a third-order polynomial fit to the survey completeness as a function of magnitude determined by Lacy et al. (2005). Lacy et al. compared their completeness estimates, made using artificial galaxies, to completeness estimates determined by comparing the recovery of sources in the FLS to a deeper ``verification strip''. The completeness was similar using both methods; however, in some cases the latter suggested it might be higher by $\sim$ 10-15\%. When counting galaxies we have therefore enlarged the formal uncertainties by an additional $\pm$ 20\% of the completeness correction to account for this additional uncertainty. \newline\indent Photometry for the IRAC data was performed using the SExtractor (Bertin \& Arnouts 1996) package. For each channel, four aperture magnitudes plus an isophotal magnitude are computed. The four apertures used are 3, 5, 10, and 20 IRAC pixels in diameter (3.66$''$, 6.10$''$, 12.20$''$, 24.40$''$). The aperture magnitudes are corrected for the flux lost outside the aperture due to the large diffraction limit of the telescope and the significant wings of the IRAC point spread function (PSF). The aperture corrections are computed from bright stars within the FLS field and are listed and discussed further in Lacy et al. (2005). The majority of galaxies with 3.6$\micron$ $>$ 15.0 mag are unresolved, or only slightly resolved, at the resolution of the 3.6$\micron$ bandpass, and therefore the 3 pixel aperture-corrected magnitude provides the best total magnitude. For galaxies which are extended and resolved, this small aperture is an underestimate of their total magnitude. For these galaxies, a ``best'' total magnitude is measured by estimating an optimum photometric aperture using the isophotal magnitudes. The geometric mean radius of the isophote ({\it r$_{m}$} = (A/$\pi$)$^{0.5}$, where A is the isophotal area) is compared to the radius of each of the 4 apertures used for the aperture magnitudes ({\it r$_{1}$},{\it r$_{2}$},{\it r$_{3}$},{\it r$_{4}$}). If {\it r$_{m}$} $<$ 1.1 {\it r$_{ap}$}, then that aperture magnitude (the smallest aperture satisfying the criterion) is chosen as the best total magnitude. For objects with {\it r$_{m}$} $>$ 1.1 {\it r$_{4}$}, the isophotal magnitude is used as the best total magnitude.
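The selection rule just described can be summarized in a few lines of code. The sketch below is illustrative only; the function name, the input format, and the choice of the smallest qualifying aperture are our assumptions, not the actual FLS pipeline:
\begin{verbatim}
import math

# IRAC aperture radii in arcsec (diameters 3.66", 6.10", 12.20", 24.40")
APER_RADII = [3.66 / 2, 6.10 / 2, 12.20 / 2, 24.40 / 2]

def best_total_mag(aper_mags, iso_mag, iso_area):
    """aper_mags: the four aperture-corrected magnitudes (smallest
    aperture first); iso_mag: isophotal magnitude; iso_area: isophotal
    area in arcsec^2."""
    r_m = math.sqrt(iso_area / math.pi)     # geometric mean radius
    for mag, r_ap in zip(aper_mags, APER_RADII):
        if r_m < 1.1 * r_ap:                # source fits this aperture
            return mag
    return iso_mag                          # r_m > 1.1 r_4: isophotal
\end{verbatim}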
When measuring the R$_{c}$ - 3.6$\micron$ colors, we always use the 3 pixel aperture-corrected magnitude, even for resolved galaxies (see discussion in \S 2.3). \newline\indent Object detection was performed separately in all 4 channels and these catalogues were later merged using a 1.8$''$ search radius. Tests of this matching (Lacy et al. 2005) show that this radius provides the most reliably matched catalogues. \subsection{Optical Data} \indent The ground-based Cousins R$_{c}$-band (hereafter ``R-band'') imaging used in this study was obtained as part of the FLS campaign and is also publicly available. R-band imaging covering the entire FLS IRAC and MIPS fields was obtained on the Kitt Peak 4m Mayall telescope using the MOSAIC-1 camera. MOSAIC-1 consists of eight 4096 $\times$ 2048 CCDs and has a field of view of 36 $\times$ 36 arcmin with a pixel scale of 0.258 arcseconds per pixel. Data reduction was performed using the NOAO IRAF $mscred$ package and procedures, and galaxy photometry was performed using the SExtractor (Bertin \& Arnouts 1996) package. Typical seeing for the images was $\sim$ 1.1 arcseconds and the 5$\sigma$ limiting magnitude in an aperture of 3 arcseconds is 24.7 mag. The 50\% completeness limit in the same aperture is $\sim$ 24.5 mag. A complete discussion of the data reduction, object finding, and photometry can be found in Fadda et al. (2004). For this study we performed additional photometry beyond that publicly available in order to measure fluxes in a slightly larger 3.66$''$ aperture which matches the smallest aperture of the IRAC data (D. Fadda, private communication). \newline\indent The mean absolute positional error in the astrometry of the R-band data is 0.35$''$ (Fadda et al. 2004) and the mean positional error in the astrometry of bright (faint) sources in the IRAC catalogue is 0.25$''$ (1.0$''$) (Lacy et al. 2005). Given these uncertainties, as well as the large IRAC pixel scale, the R-band catalogue was matched to the IRAC catalogue by looking for the closest object within 1.5 IRAC pixels (1.8$''$) of each IRAC detection. Tests of matching radii ranging between 0.3 and 3.0 IRAC pixels (0.37$''$ - 3.66$''$) showed that the number of matches increased rapidly using progressively larger radii up to $\sim$ 1.5 IRAC pixels, and thereafter the gain in the number of matches with increasing radius was relatively modest, suggesting that the majority of additional matches were likely to be chance associations. Given that the IRAC astrometry is calibrated using bright stars from the Two Micron All Sky Survey (2MASS, Skrutskie et al. 2006), whereas the R-band data were astrometrically calibrated using the USNO-A2.0 catalogue (Monet et al. 1998), an additional concern was the possibility of a systematic linear offset between the two astrometric systems. We attempted to iteratively correct for any systematic offset by shifting the IRAC astrometry by the median offset of all matched sources and then rematching the catalogues; however, multiple iterations could not converge to a solution significantly better than the initial 0.2$''$ offset seen between the two systems. Given that this offset is less than the quoted positional errors in the two systems, it suggests that any systematic offset between the 2MASS and USNO-A2.0 systems in the FLS field is less than the random positional errors in the R-band and IRAC data themselves. The iterative refinements increased/decreased the total number of matches by $\pm$ 0.05 - 0.3\%, depending on the iteration.
Given these small variations, and the lack of further evidence for a systematic offset between the coordinate systems, the final matched catalogue uses the original IRAC and R-band astrometry. \newline\indent In approximately 4\% of cases more than one R-band object was located within the search radius. In these cases, the object closest to the IRAC centroid was taken as the match. The space density of R-band sources is approximately 5 times higher than that of IRAC sources at these respective depths. This suggests that at most 20\% of R-band sources have an IRAC counterpart at the respective depths. \newline\indent When there are multiple R-band matches for an IRAC detection, in the majority of cases only one of the R-band detections is the counterpart of the IRAC detection, and our approach will provide correct colors. Nevertheless, a certain percentage of the multiple matches will be cases where two R-band objects, both of which have IRAC counterparts, have these counterparts blended together into a single IRAC detection due to the large IRAC PSF. Because the IRAC source is a blend of two objects, but we use only one R-band counterpart, these objects will be cataloged as brighter and redder than they truly are. However, because only 4\% of IRAC sources have multiple R-band matches, and the probability that both of those R-band sources have an IRAC counterpart is roughly 20\%$^2$ = 4\%, this suggests that only 4\% $\times$ 4\% = 0.16\% of all IRAC sources are blended sources where only one R-band galaxy has been identified as the counterpart. \newline\indent Although this estimated contamination is small, clusters have greater surface densities of galaxies than the field, and therefore it might be expected that cluster galaxies are blended more frequently than field galaxies. We measured the frequency of multiple matches for galaxies in the fields of the clusters (\S 4) and found that 6.5\% of IRAC sources had multiple R-band counterparts, making blending about 1.5 times more common in cluster fields. Even though the rate of blends is higher, it should not have a significant effect on the LFs. Even in the worst case that all 6.5\% of IRAC-detected galaxies with multiple R-band matches are blended (not just coincidentally aligned with a faint R-band galaxy in the foreground), and those blends are with a galaxy of comparable luminosity, the values of M$^{*}$ measured from the LFs would be only $\sim$ 0.05 mag brighter. Given the Schechter function shape of the LF, it is more probable that most galaxies are blended with a fainter galaxy, and therefore 0.05 mag is likely to be the upper limit of how significantly blending affects the LFs. This effect is smaller than the statistical errors in the measurement of M$^{*}$ for the LFs (\S 5) and therefore we make no attempt to correct for it, but note that our M$^{*}$ values could be systematically brighter by as much as 0.05 mag. \newline\indent The large IRAC PSF means that star-galaxy separation using these data is difficult, and therefore the classification of each matched object is determined from the R-band data using the CLASS\_STAR parameter from SExtractor. This is done using the criteria suggested in Fadda et al. (2004). All objects with R $<$ 23.5 and CLASS\_STAR $<$ 0.9 are considered galaxies. For fainter objects with R $>$ 23.5, those with CLASS\_STAR $<$ 0.85 are considered galaxies. Most stars have R - 3.6$\micron$ colors of $\sim$ 0 in the Vega system.
The R-band data are $\sim$ 5 mag deeper than the IRAC data and therefore most stars detected by IRAC should be robustly removed using this classification. \subsection{Galaxy Colors} The most important ingredient in the cluster red sequence algorithm is the measurement of accurate colors. Excess noise in the colors causes scatter in the cluster red sequence and reduces the probability that a cluster will be detected. For images with large differences in seeing, PSF shape, and pixel size, such as the R-band and 3.6$\micron$ images, measuring accurate colors can be problematic. To this end, significant effort was invested in finding the most appropriate way to measure colors with this filter combination. \newline\indent Studies of the cluster red sequence using telescopes/filters with equivalently large angular resolution differences (e.g., HST + ground-based, Holden et al. 2004; optical and low-resolution IR, Stanford et al. 1998) have typically measured colors by degrading the highest resolution images using the PSF of the lowest resolution images. This is the most accurate way to measure colors, and is feasible for a survey of several clusters; however, it is time-consuming for a survey the size of the FLS, which has more than a million sources detected in the R-band. More importantly, because there are so many more galaxies detected in the R-band than at 3.6$\micron$, degrading those images causes numerous unnecessary blends of R-band galaxies, resulting in an increased number of catastrophic color errors. Degrading the resolution also inhibits the potential for detecting distant clusters because the signal-to-noise ratio of the faintest R-band objects becomes much worse when they are smeared with a large PSF. \newline\indent The compromise is to use a fixed aperture that provides accurate colors, yet is as large as possible for the IRAC data (to reduce the need for aperture corrections) and as small as possible to reduce the excess sky noise in the R-band measurement. It is important to use the same diameter apertures for both 3.6$\micron$ and R-band so that the colors of bright resolved galaxies are measured properly. Galaxies which are small and mostly unresolved require an aperture of only 2-3 times the seeing disk to measure a correct color. In principle, colors for such galaxies can be measured correctly using different sized apertures for 3.6$\micron$ and R-band (i.e., optimized apertures). However, because measuring the color correctly for large galaxies that are resolved in both filters requires that the aperture be the same size in both filters, we use the same aperture for all galaxies. \newline\indent After experimenting with apertures ranging in diameter from one to ten IRAC pixels (1.22$''$ to 12.2$''$), we determined that the three IRAC pixel diameter aperture (3.66$''$) was the optimum aperture because it requires a relatively small aperture correction at 3.6$\micron$ ($\sim$ 10\%) yet is only slightly larger than three R-band seeing disks, resulting in only marginal excess sky noise being added to the R-band aperture magnitudes. Using this large fixed aperture means that the photometry of the faintest R-band galaxies is not optimized because much of the aperture contains sky. As a result, some potential for discovering the most distant clusters is sacrificed because the faintest red galaxies (i.e., distant red sequence galaxies) may have excessively large photometric errors.
However, most importantly, accurate colors are determined for all galaxies, and overall this approach provides much better photometry than degrading the entire survey data set. \newline\indent As an illustration of the quality of colors achievable with this approach we show the color-magnitude diagram of FLS J171648+5838.6, the richest cluster in the survey, in Figure 1. The typical intrinsic scatter of early type galaxies on the red sequence at the redshift of this cluster ($z_{spec} =$ 0.573) is $\sim$ 0.075 mag (Stanford et al. 1998; Holden et al. 2004). As a comparison we measure the intrinsic scatter for FLS J171648+5838.6 by subtracting the mean photometric error from the total scatter in quadrature. This is slightly less rigorous than the Monte-Carlo methods used by other authors, but provides a reasonable estimate of the scatter. For galaxies with 3.6$\micron$ $<$ 17 mag (3.6$\micron$ $>$ 17 mag) the observed scatter of the red sequence is 0.149 (0.225) mag, and the mean photometric color error is 0.118 (0.167) mag, resulting in an intrinsic scatter of 0.091 (0.151) mag. We note that without knowing the morphologies of the galaxies we are unable to properly separate early type galaxies from bluer disk galaxies, and therefore this measurement of the scatter is almost certainly inflated by Sa or Sb galaxies bluer than the red sequence; such galaxies are generally more prevalent at fainter magnitudes (e.g., Ellingson et al. 2001). However, even without morphological separation, the scatter in the color of red sequence galaxies is in fair agreement with the scatter in the colors of typical red sequences, and demonstrates that the 3.66$''$ aperture works well for measuring colors. \begin{figure} \epsscale{1.1} \plotone{f1.eps} \caption{\footnotesize Color-magnitude diagram within a 1 Mpc (2.5 arcmin) diameter of FLS J171648+5838.6 (cluster 44, $z_{spec}$ = 0.573), the richest cluster in the FLS. Several field galaxies with R - 3.6$\micron$ colors $>$ 5.5 have been removed for clarity. The solid line is the best red sequence model for the cluster (\S 3.1). The intrinsic scatter in the red sequence for this cluster is 0.091 mag for galaxies with 3.6$\micron$ $<$ 17 mag, and 0.151 mag for galaxies with 3.6$\micron$ $>$ 17 mag, and is comparable to the scatter in other clusters at this redshift. } \end{figure} \subsection{Keck, WIYN, \& SDSS Spectroscopic Data} A large number of spectroscopic redshifts are available for galaxies in the FLS field from several spectroscopic campaigns. A sample of 642 redshifts was obtained using the HYDRA spectrograph on the Wisconsin-Indiana-Yale-NOAO (WIYN) 3.5m telescope as part of a program to follow up radio sources in the FLS (Marleau et al. 2007). A set of 1373 redshifts in the FLS field was obtained for galaxies selected by their red R - K$_{s}$ colors using the DEIMOS spectrograph on the 10~m Keck II telescope by Choi et al. (2006). Lastly, 1296 redshifts were obtained using the Hectospec fiber spectrograph on the 6.5m MMT by Papovich et al. (2006); the primary targets of that survey were galaxies detected in the FLS MIPS 24$\micron$ imaging with R $<$ 21.5 mag. In addition to redshifts from these projects, 1192 redshifts in the FLS field are available from the Sloan Digital Sky Survey (SDSS) DR5 database (Adelman-McCarthy et al. 2007). In total there are 4503 redshifts at various positions in the FLS. Of these, 26 are likely to be cluster red sequence galaxies (see \S 3.7).
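\newline\indent As a brief aside before describing our own spectroscopy: the intrinsic-scatter estimate for FLS J171648+5838.6 above is a simple subtraction in quadrature. A minimal numerical check of the quoted values, for illustration only:
\begin{verbatim}
import math

def intrinsic_scatter(observed, mean_err):
    """Subtract the mean photometric error from the observed
    scatter in quadrature (sketch of the estimate quoted above)."""
    return math.sqrt(observed**2 - mean_err**2)

print(intrinsic_scatter(0.149, 0.118))  # ~0.091 mag (bright bin)
print(intrinsic_scatter(0.225, 0.167))  # ~0.151 mag (faint bin)
\end{verbatim}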
\subsection{Palomar Spectroscopy} In addition to the spectroscopic catalogues available, we also obtained our own longslit spectroscopy for bright red sequence galaxies in three clusters with 0.4 $< z_{phot} <$ 0.6 in the FLS using the Double-Spectrograph (Doublespec) on the 200-inch Hale telescope at Palomar Mountain (P200). We also obtained multi-object spectroscopy using the COSMIC spectrograph on the P200 for an additional three clusters with $z_{phot} <$ 0.3. These six clusters were chosen for followup because they were amongst the richest clusters in our preliminary cluster catalogues. \subsubsection{Double-Spectrograph Data} Spectroscopy of bright red sequence galaxies in clusters FLS J171241+5855.9, FLS J172122+5922.7, and FLS J171648+5838.6 (clusters 16, 38, and 44 listed in Table 1) was performed on 17, 18, and 19 August 2004 with Doublespec on the P200. The observations were made with the ``Red'' camera using the 316 l/mm grating blazed at 7150~\AA\ and a 0.5$''$ wide slit, giving a spectral resolution of 2.6~\AA\ ($\sim$ 150 km s$^{-1}$). The Doublespec longslit is $\sim$ 1.5 arcmin long and the angle of the slit on the sky can be rotated. In all three clusters we centered the slit on the brightest cluster galaxy (BCG) and then chose a rotation angle that placed at least two other bright objects (preferentially red sequence galaxies) on the slit. \newline\indent For FLS J172122+5922.7 and FLS J171648+5838.6 we obtained spectra of three objects in each field, and in FLS J171241+5855.9 we managed four. We obtained three 20 minute exposures for FLS J172122+5922.7 and FLS J171648+5838.6, which have photometric redshifts of 0.57 and 0.55, respectively, and one 20 minute exposure for FLS J171241+5855.9, which has a photometric redshift of 0.39. We also observed a spectroscopic standard, calibration lamps, dome flats, and twilight flats at the beginning of each night. Data reduction and wavelength calibration were performed using standard IRAF techniques. After 1-d spectra were extracted, 7 of the 10 objects had a signal-to-noise ratio suitable for cross-correlation. One of the spectra in FLS J171241+5855.9 has a strong emission line at 7056~\AA\ and no possible identification that puts it near the photo-{\it z} of the cluster; this object was therefore considered a field interloper. The remaining six spectra (two per cluster) showed significant absorption features typical of early type galaxies, and redshifts were obtained by cross-correlating them with an elliptical galaxy spectrum. The redshifts of the galaxies within each cluster were similar ($\Delta$$z$ $<$ 0.01) and in excellent agreement with the cluster photometric redshift. These spectroscopic redshifts are listed in the cluster catalogue (Table 1). \subsubsection{COSMIC Data} Multi-object spectroscopy of both red sequence galaxies and MIPS 24$\micron$-detected galaxies in the fields of clusters FLS J171059+5934.2, FLS J171639+5915.2, FLS J171505+5859.6, and FLS J172449+5921.3 (clusters 1, 2, 8, and 10 listed in Table 1) was performed on 26, 27, 28, 29 May 2006, and 15, 16, 17 June 2007 using the COSMIC spectrograph on the 200-inch Hale Telescope at Palomar Mountain. These observations were made with the 300 l/mm grating blazed at 5500~\AA\ with 1$''$ wide slits, giving a spectral resolution of 8~\AA\ ($\sim$ 450 km s$^{-1}$).
These data are part of a larger campaign to study cluster 24$\micron$ sources, and full details of the data reduction, calibration, and cross-correlation will be presented in a future paper (Muzzin et al. 2008, in preparation). We obtained 17, 16, 12, and 20 good-quality spectra in the fields of FLS J171059+5934.2, FLS J171639+5915.2, FLS J171505+5859.6, and FLS J172449+5921.3, respectively, and redshifts were determined using cross-correlation. Including the data from the other spectroscopic campaigns, the field of FLS J171059+5934.2 has 10 galaxies with $\overline z$ = 0.126, the field of FLS J171639+5915.2 has 7 galaxies with $\overline z$ = 0.129, the field of FLS J171505+5859.6 has 9 galaxies with $\overline z$ = 0.252, and the field of FLS J172449+5921.3 has 12 galaxies with $\overline z$ = 0.253. These redshifts are included in the cluster catalogue (Table 1). \subsection{Keck/DEIMOS Spectroscopy of FLS J172126+5856.6} Spectroscopy of the candidate cluster FLS J172126+5856.6 (cluster 93 in Table 1) was obtained with the Deep Imaging Multi-Object Spectrograph (DEIMOS, Faber et al. 2003) on the 10~m Keck~II telescope. On the night of 1 September 2005, we obtained three 1800s exposures on the same mask in non-photometric conditions with $\sim$1.3$''$ seeing. The 600ZD grating ($\lambda_{\rm blaze} = 7500$ \AA; $\Delta \lambda_{\rm FWHM} = 3.7$ \AA) and a GG455 order-blocking filter were used. The DEIMOS data were processed using a slightly modified version of the pipeline developed by the DEEP2 team at UC-Berkeley\footnote{\tt http://astro.berkeley.edu/~cooper/deep/spec2d/}. Relative flux calibration was achieved from observations of standard stars from Massey \& Gronwall (1990). \newline\indent Slits were preferentially placed on candidate red sequence galaxies, allowing a total of 10 slits on likely cluster members. Of the 10 candidate red sequence galaxies, five had sufficient S/N for determining redshifts, and four had redshifts within $\Delta z$ $<$ 0.01 of each other, with $\overline z$ = 1.045. These redshifts are included in the cluster catalogue (Table 1). \section{Cluster Finding Algorithm} The cluster finding algorithm employed in this study is essentially the CRS algorithm of Gladders \& Yee (2000, hereafter GY00; 2005) with some minor modifications. Here we outline only the major steps, and refer to those papers for a more detailed explanation of the procedures. \newline\indent The CRS algorithm is motivated by the observation that early type galaxies dominate the bright end of the cluster LF and that these galaxies always follow a tight red sequence in the color-magnitude plane. At increasing redshift the observed red sequence color becomes redder\footnote{The observed-frame color of the red sequence becomes redder with increasing redshift because of band shifting. The rest-frame color change due to passive evolution actually makes galaxies bluer at higher redshift, but is a small effect for a single-burst population formed at high redshift. Because the change in observed-frame color is dominated by the k-correction from an old stellar population, it increases monotonically with redshift and provides a good estimate of the cluster redshift.}, and because this change in color closely follows the predictions for a passively evolving stellar population, the color can be used as a robust photometric redshift estimate for a cluster. In order to apply the CRS algorithm, slices are made in the color-magnitude plane of a survey.
Galaxies are then assigned weights based on the probability that they belong to a particular slice. This probability is determined by the color and the photometric error in the color. Once color weights for each galaxy in each slice have been assigned, each galaxy is also assigned a magnitude weight. Magnitude weighting is done because bright red sequence galaxies are more likely to be members of clusters than faint ones. \newline\indent Once each galaxy is assigned a color and magnitude weight for each slice, the position of each galaxy is plotted for each slice with its respective weight. The resulting ``probability map'' for each slice is then smoothed, and peaks in these maps represent likely cluster candidates. In the following subsections we discuss in more detail the steps in our version of the algorithm. \subsection{Red Sequence Models} The first step in finding clusters with the CRS is to model the color, slope, and intercept of the cluster red sequence as a function of redshift. This was done by making simulated single-burst galaxies using all available metallicities from the Bruzual \& Charlot (2003) spectral synthesis code. The models are constructed with 50\% of the stars forming in a single burst at t = 0, and the remainder forming with an exponentially declining star formation rate of $\tau$ = 0.1 Gyr. Using a range of metallicities causes the color of each galaxy to be slightly different at $z$ = 0, with the most metal-rich galaxies being the reddest. The absolute magnitude of each galaxy with a different metallicity is normalized using the U-V, V-I, and J-K red sequences of Coma (Bower et al. 1992), assuming that a metallicity gradient with magnitude is the primary source of the slope of the red sequence. Normalizing the absolute magnitude of each galaxy this way allows us to reproduce models with the correct red sequence color and slope with redshift. \newline\indent There is increasing evidence that the slope of the red sequence is not only caused by a metallicity sequence, but is also the product of an age sequence, with the less luminous galaxies being both more metal poor and younger (e.g., Nelan et al. 2005; Gallazzi et al. 2006). Examination of spectroscopically confirmed clusters in the FLS shows that the pure metallicity sequence used in our models reproduces the red sequence slope and color quite well, and because we are only interested in a fiducial red sequence model for detecting clusters and determining photometric redshifts, no further tuning of the ages of galaxies along the sequence is done. \newline\indent Once the absolute magnitude of each model galaxy is normalized using the Coma red sequences, linear fits to the R - 3.6$\micron$ vs. 3.6$\micron$ color-magnitude relations of the model galaxies between 0.1 $< z <$ 1.6 are made. A high density of redshift models is fit so that there is significant overlap in color space (185 slices between 0.1 $< z <$ 1.6). This ensures that no clusters are missed because their colors fall between the finite number of models, and it also allows for increased precision in the photometric redshifts. \newline\indent We computed two sets of single-burst models, one with a formation redshift ({\it z}$_{f}$) = 2.8, and another with {\it z}$_{f}$ = 5.0. These two sets of models produce nearly identical observed red sequences at {\it z} $<$ 1.1, but begin to diverge at higher redshifts. There is evidence from previous studies of the fundamental plane (e.g., van Dokkum et al.
1998; van Dokkum \& Stanford 2003, and many others), the evolution of the color-magnitude diagram (Stanford et al. 1998; Holden et al. 2004), and the K-band luminosity function (De Propris et al. 1999; Lin et al. 2006; Muzzin et al. 2007a) that a {\it z}$_{f}$ $\sim$ 3 model is appropriate for cluster early types; however, the uncertainties in these studies are fairly large. There is also evidence that many of the most massive field early types formed the majority of their stars at {\it z} $>$ 5 (McCarthy et al. 2004; Glazebrook et al. 2004), so the possibility remains that {\it z}$_{f}$ = 5.0 is more appropriate. Regardless, the majority of the systems we have discovered are at {\it z} $<$ 1.1, and therefore the $z_{f}$ uncertainty does not affect the photometric redshifts of these systems. For systems at {\it z} $>$ 1.1, the redshift can be considered an upper limit; for example, a cluster with photo-{\it z} = 1.3 in the {\it z}$_{f}$ = 2.8 model would have {\it z} $\sim$ 1.2 in the {\it z}$_{f}$ = 5.0 model. \newline\indent To illustrate the depth of the survey, and the location of the red sequence models, Figure 2 shows the R - 3.6$\micron$ vs. 3.6$\micron$ color-magnitude diagram for all galaxies in the FLS with some of the {\it z}$_{f}$ = 2.8 red sequence models overlaid. The density of galaxies with M $\sim$ M$^{*}$ begins to drop off significantly for the $z >$ 1.2 red sequence models because of the depth of the R-band data (M$^{*}$(3.6$\micron$) $\sim$ 17.0 mag at $z =$ 1.2), and therefore we consider $z \sim$ 1.2 the upper limit at which we can reliably detect clusters. Remarkably, the red sequence models are well separated in color space even at $z <$ 0.5, where the R - 3.6$\micron$ filters do not span the 4000~\AA\ break. This is caused by the large wavelength separation between the bands, and by the wide 3.6$\micron$ filter, which has a strongly redshift-dependent negative k-correction. Although the k-correction in the R-band evolves slowly with redshift out to $z \sim$ 0.5, the k-correction at 3.6$\micron$ is significant, and therefore the R - 3.6$\micron$ color is still a sensitive redshift indicator. \begin{figure*} \plotone{f2.ps} \caption{\footnotesize Observed color-magnitude diagram for all galaxies in the FLS. The solid lines are fiducial red sequence models at different redshifts generated using the Bruzual \& Charlot code. The redshift of each model is labelled in the figure. The bulk of the shift in color with redshift of the models is due to bandpass shifting or ``k-correction'', not evolution in the rest-frame colors of the galaxies.} \end{figure*} \subsection{Color Weights} Once red sequence models have been made, weights based on the probability that a galaxy belongs within a color slice are computed. The typical 1$\sigma$ scatter in the local cluster color-magnitude relation is $\sim$ 0.075 mag (e.g., L\'{o}pez-Cruz et al. 2004; Bower et al. 1992). The scatter has been measured in clusters to z $\sim$ 1, where it remains remarkably consistent (e.g., Stanford et al. 1998; Gladders et al. 1998; Blakeslee et al. 2003); at higher redshift it may become somewhat larger (Holden et al. 2004).
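\newline\indent Anticipating the weighting scheme described next, one plausible reading of the GY00 color weight is the fraction of a galaxy's Gaussian color probability density falling within the $\pm$0.075 mag intrinsic width of the red sequence; a minimal sketch under that assumption (it reproduces the example weights quoted below):
\begin{verbatim}
import math

def color_weight(delta_color, color_err, rs_width=0.075):
    """Fraction of a Gaussian color PDF (offset delta_color from
    the red sequence, sigma = color_err) within +/- rs_width.
    One plausible reading of the GY00 weight; illustrative only."""
    phi = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    return (phi((rs_width - delta_color) / color_err)
            - phi((-rs_width - delta_color) / color_err))

print(color_weight(0.0, 0.010))  # ~1.00: on the sequence, tiny error
print(color_weight(0.0, 0.075))  # ~0.68: error equal to the width
\end{verbatim}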
Assuming that this relation holds to $z \sim$ 1.3, color weights (with values ranging from 0 to 1) are assigned by computing the overlapping area of a galaxy's color with the red sequence, assuming a red sequence intrinsic dispersion of 0.075 mag and assuming the galaxy's color is represented by a Gaussian centered on the measured color with a 1$\sigma$ dispersion equal to the color error (see, e.g., GY00, Figure 3 for an example). Using this method, the weight of a bright galaxy lying directly on the red sequence with a color error significantly narrower than the width of the red sequence is 1.0. The same galaxy, with a color error equal to the dispersion in the red sequence, has a weight of 0.67. Color weights are computed for all galaxies in all 185 color slices. \subsection{Magnitude Weights} In addition to the color weights, galaxies are also weighted based on their magnitude relative to a fiducial M$^{*}$ value. Cluster early types are usually the brightest, reddest galaxies at a given redshift, and therefore the brightest galaxies within a color slice are more likely to be cluster galaxies and should be given extra weight. The distribution of magnitude weights was defined as P(M) by GY00 (see their \S 4.3). We compute P(M) using the data themselves, as suggested by those authors, and when doing so we consider objects within the one-percentile highest density regime as ``cluster'' galaxies. This is a stricter cut than the ten-percentile cut used by GY00; however, the fact that IR-selected galaxies are more strongly clustered than optically-selected galaxies justifies using a more stringent cut. \subsection{Probability Maps} Once the magnitude and color weights for all galaxies in each of the individual color slices have been computed, a probability map of each slice is created. The map is a spatial galaxy density map of the survey within each redshift slice, made using pixels which are 125 kpc in physical size at the redshift of the slice. The probability flux in each pixel is determined by placing each galaxy on the pixel that corresponds to its location in the survey, weighted by the product of its color and magnitude weights. Once each slice is constructed this way, it is smoothed with the exponential kernel suggested by GY00 (their equation 3). \subsection{Noise Maps} The noise properties of the probability maps of different color/redshift slices are usually different. In particular, the maps of the highest redshift slices tend to have large noise peaks because the survey is only as deep as $\sim$ M$^{*}$ in those slices. The lower redshift probability maps have a smoother background because there are numerous M $>$ M$^{*}$ galaxies which are more evenly distributed spatially and have a low probability of belonging to any slice because of their large color errors. The higher redshift maps are shallower, and thereby lack the M $>$ M$^{*}$ galaxies which provide this smooth background. If peak finding is run on all probability maps using similar detection parameters, it produces significantly different numbers of detections in different slices; in particular, almost any noise peak in the highest redshift maps results in the detection of a ``cluster''. \newline\indent To circumvent this problem, the peak-finding parameters for each map could be tuned individually to produce a reasonable number of detections in each slice; however, the resulting cluster catalogue would clearly be biased by what is considered a ``real'' detection in a given map.
It is preferable to have a cluster catalogue which is as homogeneously selected as possible and based on a quantitative selection. Therefore ``noise'' maps are constructed and added to each probability map to homogenize their noise properties. \newline\indent The noise maps are constructed by adding fake red sequence galaxies to each pixel of the probability maps. Adding a constant background of fake galaxies does not change the noise properties of a map because it is the variance in the number of background galaxies that determines the noise. We experimented with a variety of added variances, and settled on the variance from the photometric color errors of six M$^{*}$ red sequence galaxies per pixel as providing the best results. This level of noise removes the spurious detections in the highest redshift slices, but does not add so much noise as to wash out the majority of the poorer clusters in the lower redshift slices. \newline\indent The noise in each pixel is calculated by first determining the average color error of an M$^{*}$ red sequence galaxy using the survey data. Once the average color error per slice is tabulated, the weights of six M$^{*}$ red sequence galaxies are Monte Carlo simulated for each pixel of a noise map, assuming that the colors are normally distributed around the red sequence with a dispersion equal to the mean color error. These simulated weights are then assigned to each pixel of the noise map, and each noise map is added to the appropriate cluster probability map. This approach thereby implicitly defines a ``cluster'' as an overdensity detectable above the Poisson noise from six M$^{*}$ background red sequence galaxies at any redshift. The noise+clusters maps have similar noise properties for every slice, and peak finding can be run using identical parameters for all maps. \newline\indent We note that in our simple empirical method for homogenizing the noise in the probability maps the added noise is Poissonian, not clustered like the underlying background galaxy distribution. Despite this, the noise map technique works extremely well, effectively smoothing out spurious noise spikes in the highest redshift probability maps. In principle, a more sophisticated method which includes the clustering properties of background galaxies could be implemented; however, for our purposes such an approach is unnecessary. Only the detection probability of the poorest clusters near the significance limit is affected by different choices of noise maps. Galaxies from the poorest clusters do not contribute significantly to the LFs, which are dominated by counts from more massive systems, and therefore we do not consider this issue further. \subsection{Cluster Detection} Once the combined noise-probability maps have been made, peaks are detected in each map using SExtractor. The peak finding is done differently than in GY00 in that the individual 2d slices are searched instead of merging the slices into a 3d datacube and searching for 3d peaks. It is unclear how these two methods compare; however, they are likely to be similar, and searching the slices individually permits easy visual inspection of the sources on each map, which allows us to check any problems that have occurred with peak finding or in the generation of the map. Pixels 5$\sigma$ above the background are flagged, and 25 connected pixels are required to make a detection. \newline\indent The slices are close in color space, and therefore clusters (particularly rich ones) are detected in more than one color slice.
The same cluster is identified in multiple color slices by merging the slice catalogues using a matching radius of 8 pixels (1 Mpc). Clusters found across as many as 20 color slices are connected as being the same object. The color slices are not linear in redshift, but 20 slices correspond to $\Delta$$z \sim \pm$0.06. These combined spatial and color limits for connecting clusters imply that clusters with separations $>$ 1 Mpc in transverse distance and $>$ 0.06 in redshift space can be resolved into distinct systems\footnote{The color slices are closer together at $z >$ 1, and only systems with $\Delta z$ $>$ 0.12 can be resolved at this redshift. We note that although the overall level of projections is likely to be low, because of the bunching up of the color slices the highest redshift clusters will be the most susceptible to projection effects.}. This level of sensitivity is similar to that found by Gladders \& Yee (2005) using R - z$^{\prime}$ colors to select clusters. They also demonstrated that systems at redshift spacings much less than this are likely to be associated subclumps or infalling structures related to the main body of the cluster. \subsection{Photometric Redshifts} Each cluster is assigned the photometric redshift of the color slice in which it is most strongly detected. The strength of the detection is determined by using SExtractor to perform aperture photometry of each cluster on each probability map. This provides a ``probability flux'', and the cluster is assigned to the slice in which it has the largest probability flux. \newline\indent The large number of spectroscopic redshifts available for the FLS can be used to verify the accuracy of the red sequence photometric redshifts. Examining the spectroscopic catalogue for galaxies within a 1 Mpc circle around each cluster shows that there are numerous galaxies with spectroscopic redshifts in the fields of many of the clusters. The spectroscopic targets were chosen with a variety of selection criteria (none of which preferentially select early type galaxies), and therefore the majority of galaxies with redshifts are foreground or background galaxies. We use only the spectroscopic redshifts of galaxies which have a combined magnitude and color weight $>$ 0.2 in order to preferentially select likely cluster members. This cut in weight is used because it corresponds to the typical combined magnitude and color weight of M $<$ M$^{*}$ red sequence galaxies. Once the cut is made there are 23 clusters which have at least one spectroscopic redshift for a likely cluster red sequence galaxy. In total, 26 galaxies meet this criterion, and remarkably, 24 of these have a spectroscopic redshift within 0.1 of the photometric redshift of the cluster. This illustrates the effectiveness of the red sequence color at estimating photometric redshifts, provided that the galaxy has a high probability of being a cluster early type. \newline\indent In Figure 3 we plot spectroscopic vs. photometric redshift for these 23 clusters plus the additional 6 for which we obtained our own spectroscopic redshifts ($\S$ 2.5). The straight line marks a one-to-one correlation. Large points represent clusters with more than one galaxy with a redshift consistent with being in the cluster; small points represent clusters with a single spectroscopic redshift.
Excluding the single large outlier at $z_{spec} \sim$ 0.9 (which is likely to be a bluer galaxy at high redshift based on its spectrum and IRAC colors, see $\S$6.3), the rms scatter in the cluster spectroscopic vs. photometric redshift is $\Delta$$z$ = 0.04, demonstrating that the photometric redshifts from the red sequence algorithm work extremely well. \newline\indent The accuracy of the photometric redshifts from the FLS sample is comparable to the accuracy of the RCS surveys (Yee et al. 2007; Gladders \& Yee 2005), which use R - z$^{\prime}$ color selection, even though the R - 3.6$\micron$ colors have larger photometric errors than the R - z$^{\prime}$ colors. This is likely because the model red sequence colors change much more rapidly with redshift in R - 3.6$\micron$ than in R - z$^{\prime}$ (R - 3.6$\micron$ spans 2 magnitudes between $z$ = 0.5 and $z$ = 1.0, whereas R - z$^{\prime}$ spans only 1 magnitude). The larger change in the R - 3.6$\micron$ colors with redshift means that photometric measurement errors correspond to smaller errors in photometric redshift. \begin{figure} \plotone{f3.eps} \caption{\footnotesize Photometric vs. spectroscopic redshift for clusters in the FLS field. The color of the circle corresponds to the telescope/project where the spectroscopic redshifts were obtained (see $\S$2). Large circles denote clusters with more than one spectroscopic redshift; small circles denote clusters with only one spectroscopic redshift. Excluding the one large outlier at $z_{spec} \sim$ 0.9, the rms scatter is $\Delta$z = 0.04.} \end{figure} \subsection{B$_{gc}$ Richness Parameter} The final step in the selection of the cluster sample is to cut low-richness detections from the catalogue. The false positive rate is higher for low richness systems (i.e., galaxy groups), and we prefer to restrict our analysis of the cluster LFs to a high confidence sample of massive clusters. The cluster richnesses are measured quantitatively using the B$_{gc}$ richness parameter (Longair \& Seldner 1979; for a detailed look at the application of B$_{gc}$ to measuring cluster richnesses see Yee \& L\'{o}pez-Cruz 1999). B$_{gc}$ is the amplitude of the 3-dimensional correlation function between cluster galaxies and the cluster center. B$_{gc}$ is measured within a fixed aperture (typically 500 kpc radius) and is well correlated with cluster physical parameters such as the velocity dispersion ($\sigma$), the X-ray temperature (T$_{x}$), and the radius at which the mean density of the cluster exceeds the critical density by a factor of 200 (R$_{200}$) (e.g., Yee \& L\'{o}pez-Cruz 1999; Yee \& Ellingson 2003; Gilbank et al. 2004; Muzzin et al. 2007b). \newline\indent Gladders \& Yee (2005) introduced a new form of the B$_{gc}$ parameter, counting the overdensity of red sequence galaxies within a fixed aperture rather than all galaxies, and defined this new parameter as B$_{gc,R}$. This form of richness suffers less from cosmic variance in the background because red sequence galaxies provide better contrast with the field, and therefore it is a more robust estimate of the cluster richness. We use B$_{gc,R}$ rather than B$_{gc}$ for determining the richnesses of the FLS clusters. The net number of 3.6$\micron$ red sequence galaxies with M $<$ M$^{*}$ + 1.0 (where M$^{*}$ is determined from the data itself, see $\S$5.1) is counted within a fixed aperture of 500 kpc radius.
The model red sequences from $\S$3.1 are used, and galaxies within $\pm$ 0.3 mag in color are considered to belong to the red sequence. \newline\indent Systems with B$_{gc,R}$ $<$ 200 are removed from the cluster catalogue. The B$_{gc}$-M$_{200}$ relation of Yee \& Ellingson (2003) implies that this corresponds to removing groups with M$_{200}$ $<$ 6.6 x 10$^{13}$ M$_{\odot}$, where M$_{200}$ is defined as the mass contained within R$_{200}$. Groups with masses below this typically have only $\sim$ 5-10 bright galaxies (e.g., Balogh et al. 2007), making them difficult to select robustly with the CRS algorithm. The systems that are removed by the richness cut are typically tight compact groups of 3-4 extremely bright galaxies of the same color. Although they are not rich, they have a strong probability of being detected by the CRS algorithm because of their luminosity and compactness. It is likely that the majority of these systems are bona fide low-richness galaxy groups; however, we have no way of verifying the false-positive rate for these systems. \newline\indent Before these low-richness systems are cut from the catalogue there are 134 cluster candidates between 0.1 $< z <$ 1.4 in the FLS field. Removing systems with B$_{gc,R}$ $<$ 200 leaves a total of 99 candidate clusters in the sample. \subsection{Cluster Centroids} \indent Defining a centroid for clusters can be a challenging task, yet is extremely important because properties determined within some aperture around the cluster (such as the richness or LF) can vary strongly with the choice of centroid. In many cluster studies the location of the BCG is used as the center of the cluster. This is a reasonable definition, as the BCG frequently lies at the center of the dark matter halo and X-ray emission; however, there are also many examples where it does not. Furthermore, not all clusters have an obvious BCG, particularly at higher redshift. \newline\indent Given these issues, two centroids are computed for the FLS clusters, one based on the location of the peak of the red sequence probability flux in the probability maps, and the other based on the location of the BCG within 500 kpc of this centroid. In order to avoid bright foreground galaxies, the brightest galaxy in the field with a red sequence weight $>$ 0.4 is designated as the BCG. Eye examination of the clusters shows that this criterion is effective at choosing what appears visually to be the correct galaxy; however, because it chooses only a single galaxy, this technique is still potentially susceptible to red low-redshift field interlopers. \newline\indent When computing the cluster LFs, only one of the centroids can be used. We define an ``optimum'' centroid for each cluster using the B$_{gc,R}$ parameter: B$_{gc,R}$ is computed at both centroids, and the optimum centroid is the one which produces the maximum value of B$_{gc,R}$. This approach is simplistic, but because B$_{gc}$ is the correlation amplitude between the cluster center and galaxies, the centroid which produces the largest value should be the best centroid of the galaxy population. \section{Properties of the Cluster Catalogue} \indent The final cluster catalogue of 99 clusters and groups is presented in Table 1. Where spectroscopic redshifts are available for high-probability cluster members they are listed in column 3, with the number of redshifts in parentheses. For each cluster we also compute an estimate of R$_{200}$ and M$_{200}$.
The M$_{200}$ values are estimated using the correlation between B$_{gc}$ and M$_{200}$ measured by Muzzin et al. (2007b) for 15 X-ray selected clusters at $z \sim$ 0.3 in the K-band. The K-band and 3.6$\micron$ bandpasses sample similar parts of a galaxy's spectrum at 0.1 $< z <$ 1.5, and therefore it is reasonable to assume that B$_{gc}$ values measured in these two bands will be comparable. The best-fit relation between M$_{200}$ and B$_{gc}$ is \begin{equation} Log(M_{200}) = (1.62 \pm 0.24)Log(B_{gc}) + (9.86 \pm 0.77). \end{equation} Muzzin et al. (2007b) did not measure the correlation between B$_{gc}$ and R$_{200}$ in the K-band, although Yee \& Ellingson (2003) showed a tight correlation for the same clusters using $r$-band selected B$_{gc}$. Using the Muzzin et al. (2007b) K-band data we fit the correlation between these parameters for those clusters and find that the best-fit relation is \begin{equation} Log(R_{200}) = (0.53 \pm 0.09)Log(B_{gc}) - (1.42 \pm 0.29). \end{equation} The rms scatter in the M$_{200}$ - B$_{gc}$ relation is 35\%, and in the R$_{200}$ - B$_{gc}$ relation it is 12\%. These scatters are similar to those measured between M$_{200}$ and K-band selected richness (parameterized by N$_{200}$) at $z \sim$ 0 by Lin et al. (2004). The values of M$_{200}$ and R$_{200}$ derived from these equations are listed in columns 9 and 10 of Table 1, respectively. \newline\indent We caution that these equations have been calibrated using only rich clusters, and that extrapolating to lower richness clusters such as those in the FLS may not be appropriate. The lowest richness cluster in the Muzzin et al. (2007b) sample has a richness of Log(B$_{gc}$) = 2.8, yet the majority of clusters in the FLS (70/99) have lower richnesses than this. There is evidence from both observations (e.g., Lin et al. 2004) and numerical simulations (e.g., Kravtsov et al. 2004) that the same power-law correlation between cluster galaxy counts (which are closely related to B$_{gc}$) and M$_{200}$ extends to richnesses well below our B$_{gc,R}$ $>$ 200 cut, and therefore it is probably not unreasonable to extrapolate equations (1) and (2) to lower richnesses. \newline\indent Using an indirect method to estimate M$_{200}$ and R$_{200}$ means that reliable errors in R$_{200}$ and M$_{200}$ cannot be computed for individual clusters; however, the rms scatters in the correlations are at least indicative of the average uncertainty in the parameters for the sample. Therefore, we suggest that the average errors in the M$_{200}$ and R$_{200}$ values listed in Table 1 are $\pm$ 35\% and $\pm$ 12\% respectively, but the error for a particular cluster can be several times larger or smaller. \newline\indent In Figure 4 we plot a histogram of the number of clusters as a function of redshift in the FLS. The solid histogram shows the distribution of all clusters and the dot-dashed histogram shows the distribution of clusters with M$_{200}$ $>$ 3 x 10$^{14}$ M$_{\odot}$ (B$_{gc,R} >$ 700). Similar to the predictions of numerical simulations (e.g., Haiman et al. 2003), the number of clusters peaks at $z \sim$ 0.6. Qualitatively, the distribution of clusters is also similar to that found by Gladders \& Yee (2005) in comparably sized patches; however, the cosmic variance in the number of clusters in $\sim$ 4 deg$^2$ patches is too large to make a meaningful comparison between the selection of clusters in the R - $z^{\prime}$ bandpasses versus the R - 3.6$\micron$ bandpasses.
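\newline\indent As a quick numerical illustration of equations (1) and (2), the following sketch evaluates both relations at the richness of the least-rich calibrating cluster, Log(B$_{gc}$) = 2.8 (we assume the output units of the original fits, M$_{200}$ in M$_{\odot}$ and R$_{200}$ in Mpc):
\begin{verbatim}
import math

def m200_from_bgc(bgc):
    """Equation (1): Log(M200) = 1.62 Log(Bgc) + 9.86.
    Sketch only; M200 assumed to be in solar masses."""
    return 10.0 ** (1.62 * math.log10(bgc) + 9.86)

def r200_from_bgc(bgc):
    """Equation (2): Log(R200) = 0.53 Log(Bgc) - 1.42.
    Sketch only; R200 assumed to be in Mpc."""
    return 10.0 ** (0.53 * math.log10(bgc) - 1.42)

bgc = 10.0 ** 2.8   # least-rich cluster in the calibrating sample
print("M200 ~ %.1e Msun" % m200_from_bgc(bgc))  # ~2.5e14
print("R200 ~ %.2f Mpc"  % r200_from_bgc(bgc))  # ~1.16
\end{verbatim}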
\newline\indent We plot the locations of the clusters, superposed on the 3.6$\micron$ image of the FLS field, in Figure 5 as open circles. Large and small circles represent clusters with M$_{200}$ $>$ 3 x 10$^{14}$ M$_{\odot}$ and M$_{200}$ $<$ 3 x 10$^{14}$ M$_{\odot}$, respectively, and clusters with photometric redshifts 0.1 $< z <$ 0.4, 0.4 $< z <$ 0.8, and z $>$ 0.8 are plotted as blue, green, and red circles, respectively. The clusters themselves are clearly clustered, demonstrating the need for wide-field surveys when searching for representative samples of galaxy clusters. \newline\indent We show a few examples of some of the richest cluster candidates in Figures 6 - 11. In each figure the top left panel is the R-band image, the top right the 3.6$\micron$ image, and the bottom left the 8.0$\micron$ image. All images are 1 Mpc across at the cluster redshift. The bottom right panel of each figure shows a histogram of the color distribution of galaxies with M $<$ M$^{*}$ within a 1 Mpc diameter aperture. The color of the red sequence for the photometric redshift is marked with an arrow. The dashed histogram is the mean color background in that aperture measured from the entire survey. The error bars on the dashed histogram are computed as the 1$\sigma$ variance in each bin from 200 randomly selected 1 Mpc apertures within the survey. Because galaxies are clustered, assuming the variance is Gaussian-distributed probably leads to an overestimate of the true variance (there will be large wings in the distribution due to clustering); however, it provides a first-order demonstration of the overdensity of the cluster relative to the field. \newline\indent Overall, the cluster catalogue is qualitatively similar in both its redshift and richness distributions to catalogues selected with the same technique in different bandpasses (e.g., Gladders \& Yee 2005; Gilbank et al. 2004), demonstrating that clusters can be reliably selected with the CRS method on IRAC data despite the limited spatial resolution of the instrument. \begin{figure} \plotone{f4.eps} \caption{\footnotesize Redshift distribution of clusters in the FLS. The solid histogram is for all clusters and the dot-dashed histogram is for clusters with M$_{200}$ $>$ 3 x 10$^{14}$ M$_{\odot}$.} \end{figure} \begin{figure*} \plotone{f5.ps} \caption{\footnotesize The 3.6$\micron$ image of the FLS with the positions of clusters superposed. The blue, green, and red circles denote clusters with 0.1 $< z <$ 0.4, 0.4 $< z <$ 0.8, and $z >$ 0.8 respectively. Large circles represent clusters with M$_{200}$ $>$ 3 x 10$^{14}$ M$_{\odot}$ and small circles represent clusters with M$_{200}$ $<$ 3 x 10$^{14}$ M$_{\odot}$. The size of the circles is arbitrarily chosen for clarity and is not related to the projected size of R$_{200}$ for the clusters. } \end{figure*} \begin{figure} \plotone{f6.ps} \caption{\footnotesize Multi-wavelength images of FLS J172449+5921.3 at $z_{spec}$ = 0.252 (Cluster \#10 from Table 1). The top left, top right, and bottom left panels are the R-band, IRAC 3.6$\micron$, and IRAC 8.0$\micron$ images respectively. In each image the field-of-view is 1 Mpc across at the redshift of the cluster. The solid histogram in the bottom right panel shows the color distribution of galaxies with M $<$ M$^{*}$ in the same field. The dashed histogram is the background distribution in the same aperture and the error bars show the average variance in the background. The arrow marks the color of the red sequence from the color-redshift models.
The cluster red sequence is clearly detected at many sigma above the background.} \end{figure} \begin{figure} \plotone{f7.ps} \caption{\footnotesize Same as for Figure 6, but for FLS J172122+5922.7 at $z_{phot}$ = 0.53 (Cluster \#38 from Table 1).} \end{figure} \begin{figure} \plotone{f8.ps} \caption{\footnotesize Same as for Figure 6, but for FLS J171648+5838.6 at $z_{phot}$ = 0.56 (Cluster \#44 from Table 1).} \end{figure} \begin{figure} \plotone{f9.ps} \caption{\footnotesize Same as for Figure 6, but for FLS J171420+6005.5 at $z_{phot}$ = 0.63 (Cluster \#50 from Table 1).} \end{figure} \begin{figure} \plotone{f10.ps} \caption{\footnotesize Same as for Figure 6, but for FLS J172013+5845.4 at $z_{phot}$ = 0.69 (Cluster \#56 from Table 1).} \end{figure} \begin{figure} \plotone{f11.ps} \caption{\footnotesize Same as for Figure 6, but for FLS J171508+5845.4 at $z_{phot}$ = 0.75 (Cluster \#68 from Table 1).} \end{figure} \section{Cluster Luminosity Functions} \indent In this section we measure the IRAC luminosity functions of the FLS cluster sample and use them to study the evolution of stellar mass assembly and dusty star formation/AGN activity in clusters. \subsection{The 3.6$\micron$ and 4.5$\micron$ Luminosity Functions} The luminosity of galaxies at 3.6$\micron$ and 4.5$\micron$ over the redshift range 0.1 $< z <$ 1.5 is dominated by emission from low mass stars and is fairly insensitive to ongoing star formation or dust. Consequently, the 3.6$\micron$ and 4.5$\micron$ cluster LFs provide an estimate of the stellar mass function of cluster galaxies, and their redshift evolution can constrain the mass assembly history of cluster galaxies. One concern with using these LFs as a proxy for the stellar mass function is that at $z <$ 0.5 the 3.3$\micron$ PAH feature found in strongly star forming galaxies can contaminate the stellar emission observed at 3.6$\micron$ and 4.5$\micron$; however, it is likely that such contamination will be small for cluster galaxies in this redshift range. In a study of Luminous Infrared Galaxies (LIRGs, L$_{IR}$ $>$ 10$^{11}$ L$_{\odot}$) with estimated star formation rates of $\sim$ 100 M$_{\odot}$ yr$^{-1}$, Magnelli et al. (2008) found that the excess emission in the IRAC bands due to the 3.3$\micron$ PAH feature was only $\sim$ 30\%. Given that such luminous LIRGs are fairly rare at $z <$ 0.5 (e.g., P\'{e}rez-Gonz\'{a}lez et al. 2005), and that the increase in flux is small even for strongly star forming galaxies, contamination of the 3.6$\micron$ and 4.5$\micron$ bandpasses by 3.3$\micron$ PAH emission should be negligible. \newline\indent Another concern is that in the worst cases there can be variations as large as a factor of 5-7 in the stellar mass-to-light ratio (M$_{*}$/L) of galaxies in similar bandpasses (such as the K-band, e.g., Brinchmann 1999; Bell \& de Jong 2001; Bell et al. 2003). These variations are smaller for evolved populations such as those found in clusters, and in general the luminosity of a galaxy at 3.6$\micron$ and 4.5$\micron$ is still a reasonable proxy for its stellar mass. \newline\indent Exhaustive studies of both the K-band (e.g., De Propris et al. 1999; Lin et al. 2006; Muzzin et al. 2007a) and 3.6$\micron$ \& 4.5$\micron$ (Andreon 2006; De Propris et al.
2007) LFs of cluster galaxies have shown that the evolution of M$^{*}$ in these bands is consistent with a passively evolving stellar population formed at high redshift ($z_{f} >$ 1.5), suggesting that the majority of the stellar mass in bright cluster galaxies is already assembled into massive galaxies by at least $z \sim$ 1. Here we compute the LFs in the 3.6$\micron$ and 4.5$\micron$ bands for the FLS clusters to confirm that the FLS cluster sample provides similar results, and to demonstrate that these LFs can be used to estimate the stellar contribution to the MIR cluster LFs ($\S$5.2). \newline\indent The LFs are measured by stacking clusters in redshift bins of $\Delta$z = 0.1 starting from $z = 0.1$. For each cluster, the number of galaxies within R$_{200}$ in 0.25 mag bins is tabulated, and the expected number of background galaxies within R$_{200}$ is subtracted from these counts. The background counts are determined from the entire 3.8 deg$^2$ survey area and are well constrained. Each background-subtracted cluster LF is then ``redshifted'' to the mean redshift of the bin using a differential distance modulus and a differential k-correction determined from the single-burst model ($\S$3.1). At 3.6$\micron$ and 4.5$\micron$ the k-corrections for galaxies are almost independent of spectral type (e.g., Huang et al. 2007a), and therefore using only the single-burst k-correction rather than a k-correction based on spectral type does not affect the LFs. Furthermore, the differential k-corrections and distance moduli are small (typically $<$ 0.1 mag) and do not affect the LFs in a significant way. \newline\indent The final stacked LFs are constructed by summing the individual LFs within each bin. The errors for each magnitude bin of the final LF are computed by adding the Poisson error of the total cluster counts to the Poisson error of the total background counts in quadrature. \newline\indent In Figures 12 and 13 we plot the 3.6$\micron$ and 4.5$\micron$ cluster LFs, respectively. The 3.6$\micron$ LFs are fit with a Schechter (1976) function of the form \begin{equation} \phi(M) = 0.4\,{\rm ln}(10)\,\phi^{*}\,10^{\,0.4(M^{*}-M)(1+\alpha)}\,{\rm exp}\left(-10^{\,0.4(M^{*}-M)}\right), \end{equation} where $\alpha$ is the faint-end slope, $\phi^{*}$ is the normalization, and M$^{*}$ is the ``characteristic'' magnitude, which marks the transition between the power-law behavior of the faint end and the exponential behavior of the bright end. The functions are fit using the Levenberg-Marquardt algorithm for least squares (Press et al. 1992), and errors are estimated from the fitting covariance matrix. The data are not deep enough to provide good constraints on $\alpha$, $\phi^{*}$, and M$^{*}$ simultaneously, and therefore the faint-end slopes of the LFs are fixed at $\alpha$ = -0.8. This value is similar to the $\alpha$ = -0.84 $\pm$ 0.08 measured in the K-band for clusters at $z \sim$ 0.3 by Muzzin et al. (2007a), as well as the value measured in the K-band for local clusters ($\alpha$ = -0.84 $\pm$ 0.02) by Lin et al. (2004). Although assuming a fixed value of $\alpha$ precludes measuring any evolution of the faint-end slope of the LFs with redshift, it removes the strong correlation between M$^{*}$ and $\alpha$ in the fitting and, provided the evolution in $\alpha$ is modest, it is the best way to measure the luminosity evolution of the cluster galaxies via the evolution of M$^{*}$.
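\newline\indent A minimal sketch of such a fit, using the magnitude form of equation (3) with $\alpha$ held fixed at -0.8 (the synthetic counts and starting values are hypothetical; this is not our fitting code):
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

ALPHA = -0.8  # fixed faint-end slope

def schechter_mag(M, Mstar, phistar):
    """Schechter function in magnitudes, equation (3), alpha fixed."""
    x = 10.0 ** (0.4 * (Mstar - M))
    return 0.4 * np.log(10.0) * phistar * x ** (1.0 + ALPHA) * np.exp(-x)

# Synthetic background-subtracted counts in 0.25 mag bins (illustrative).
mags = np.arange(-26.0, -22.0, 0.25)
true = schechter_mag(mags, -24.3, 30.0)
rng = np.random.default_rng(1)
counts = true + rng.normal(0.0, np.sqrt(true + 1.0))

# Least-squares fit (Levenberg-Marquardt for this unbounded problem).
popt, pcov = curve_fit(schechter_mag, mags, counts, p0=(-24.0, 10.0),
                       sigma=np.sqrt(true + 1.0))
print("M* = %.2f +/- %.2f" % (popt[0], np.sqrt(pcov[0, 0])))
\end{verbatim}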
The fitted values of M$^{*}$ and their 1$\sigma$ errors are listed in the upper left of the panels in Figure 12. \newline\indent We plot the evolution of M$^{*}$ at 3.6$\micron$ as a function of redshift in Figure 14 as filled circles. Figure 14 also shows the predicted evolution of M$^{*}$ for single-burst models with $z_{f}$ = 1.0, 1.5, 2.0, 2.8, and 5.0. These models are normalized to M$^{*}$ = -24.02 at $z =$ 0 in the K-band, the result obtained by Lin et al. (2004) for 93 local clusters. This corresponds to a normalization of M$^{*}$ = -24.32 at 3.6$\micron$, assuming a K-3.6$\micron$ color from the $z_{f}$ = 2.8 passive evolution model. The FLS values of M$^{*}$ are consistent with most of these models; the exception is the $z_{f}$ = 1.0 model, relative to which they are clearly too faint. Therefore, similar to the majority of previous studies, we conclude that the bulk of the stellar mass in bright cluster galaxies is consistent with having been both formed and assembled at $z >$ 1.5 and having passively evolved since then. As a comparison, the values measured at 3.6$\micron$ by De Propris et al. (2007) and Andreon (2006) are overplotted as open squares and open diamonds, respectively. These values are from spectroscopically confirmed samples of $\sim$ 40 clusters (the majority of which are X-ray detected clusters), and both agree well with the FLS values, demonstrating that passive evolution appears to be the ubiquitous conclusion regardless of cluster sample. \newline\indent Similar to the 3.6$\micron$ LFs, the 4.5$\micron$ LFs can be fit using a Schechter function; however, we do not perform fitting of the 4.5$\micron$ LFs. Instead, as a demonstration of the technique presented in $\S$5.2.3 and $\S$5.2.4, we use the measured 3.6$\micron$ LFs to predict the 4.5$\micron$ LFs. Unlike colors from the redder IRAC channels, the 3.6$\micron$-4.5$\micron$ colors of galaxies are nearly identical for most spectral types over the redshift range 0.1 $< z <$ 1.5. As a consequence, the 4.5$\micron$ LFs can be predicted from the 3.6$\micron$ LFs using the 3.6$\micron$-4.5$\micron$ colors from almost any stellar population model. For simplicity, we use the passive evolution model to predict the 4.5$\micron$ LFs. The inferred 4.5$\micron$ LFs are overplotted as solid lines in Figure 13. The predicted 4.5$\micron$ LFs are consistent with the measured ones, demonstrating that the 3.6$\micron$ LFs, combined with simple models for the color evolution of galaxies, can predict the LFs in other bandpasses. Furthermore, the self-consistency between the 3.6$\micron$ and 4.5$\micron$ LFs at $z$ = 0.15, where the 3.3$\micron$ PAH feature would contaminate the 3.6$\micron$ band, and at $z$ = 0.33, where it would contaminate the 4.5$\micron$ band, suggests that the primary source of the emission in these bandpasses at $z <$ 0.5 is stellar. \begin{figure*} \plotone{f12.eps} \caption{\footnotesize The 3.6$\micron$ LFs of clusters in the FLS. The solid line shows the best-fit Schechter function assuming $\alpha$ = -0.8. The redshift, the value of M$^{*}$, and the number of clusters combined to make the LF are listed in the upper left corner of each panel.} \end{figure*} \begin{figure*} \plotone{f13.eps} \caption{\footnotesize The 4.5$\micron$ LFs of clusters in the FLS in the same redshift bins as Figure 12.
The solid line is the 4.5$\micron$ LF that is predicted from the 3.6$\micron$ LF assuming that galaxies have the 3.6$\micron$ - 4.5$\micron$ colors of a passively evolving population formed at high redshift.} \end{figure*} \begin{figure} \plotone{f14.eps} \caption{\footnotesize Evolution in M$^{*}$ from the 3.6$\micron$ LFs as a function of redshift. The long-dashed, dash-dotted, dash-dot, solid, and dashed lines show models where the stars form in a single burst at $z =$ 1.0, 1.5, 2.0, 2.8, and 5.0, respectively. The filled circles are the FLS clusters, and the open diamonds and open squares are the M$^{*}$ values from the Andreon (2006) and De Propris et al. (2007) cluster samples.} \end{figure} \subsection{The 5.8$\micron$ and 8.0$\micron$ Luminosity Functions} Unlike the 3.6$\micron$ and 4.5$\micron$ bandpasses, where the luminosity of galaxies is dominated by emission from low mass stars, the luminosity of galaxies at 5.8$\micron$ and 8.0$\micron$ comes from several sources: it can have contributions from warm dust continuum, PAH emission, and low mass stars. In particular, if warm dust (heated by intense star formation or an AGN) or PAH emission is present, it typically dominates the luminosity at these wavelengths. Therefore, the 5.8$\micron$ and 8.0$\micron$ LFs can be useful probes of the amount of dusty star formation and AGN activity in clusters if the contribution from stellar emission is properly accounted for. \newline\indent The main challenge in modeling the LFs at these wavelengths is that a massive, dust-free early type galaxy produces roughly the same flux at 5.8$\micron$ and 8.0$\micron$ from pure stellar emission as a much lower mass starburst galaxy or AGN produces from PAH emission or warm dust continuum. Determining the relative abundance of each of these populations in an LF is more challenging for a statistically defined sample such as this cluster sample, where individual galaxies are not identified as field/cluster or star forming/non-star forming. Despite this challenge, we showed in $\S$5.1 that the 3.6$\micron$ LFs can be used as a diagnostic of the average stellar emission from the cluster galaxies and that, with a model for galaxy colors, they can predict the 4.5$\micron$ LFs extremely well. The 3.6$\micron$-5.8$\micron$ and 3.6$\micron$-8.0$\micron$ colors of different spectral types vary significantly more than the 3.6$\micron$-4.5$\micron$ colors; however, if these colors are modeled correctly, the same technique can be used to model the LFs in the 5.8$\micron$ and 8.0$\micron$ bandpasses and provide constraints on the number and type of star forming galaxies in clusters. \newline\indent Put another way, the 3.6$\micron$ LF effectively provides a ``stellar mass budget'' for predicting the 5.8$\micron$ and 8.0$\micron$ LFs. Subtracting this stellar mass budget at 5.8$\micron$ and 8.0$\micron$ leaves an excess which can be modeled with different populations of star forming galaxies or AGN. Unfortunately, such models are unlikely to be completely unique, in the sense that there will be a degeneracy between the {\it fraction} of star forming galaxies or AGN and the {\it intensity} of the star formation or AGN activity within those galaxies; however, we will show that using only rough empirical constraints on the fraction of star forming/non-star forming galaxies in clusters places interesting constraints on the intensity of star formation in cluster galaxies, and on the relative percentages of ``regular'' star forming galaxies and dusty starbursts.
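\newline\indent The bookkeeping behind this budget is simply a color shift of the 3.6$\micron$ LF: for a one-to-one mapping between bands, the counts in each magnitude bin carry over and only the magnitudes move. A minimal sketch (the model color value used here is hypothetical, not the Bruzual \& Charlot prediction):
\begin{verbatim}
import numpy as np

def predict_lf(mags_36, counts_36, model_color):
    """Shift a 3.6um LF to a redder band using a single model color
    (defined as m_3.6 - m_red) for the stellar population at the
    redshift of the LF bin. Sketch of the 'stellar mass budget'
    bookkeeping only."""
    return mags_36 - model_color, counts_36  # same counts, shifted mags

mags_36 = np.arange(-26.0, -22.0, 0.25)
counts_36 = np.ones_like(mags_36)            # placeholder LF
# model_color = -1.7 is a made-up 3.6um - 8.0um color for illustration
mags_80, counts_80 = predict_lf(mags_36, counts_36, model_color=-1.7)
\end{verbatim}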
\subsubsection{Measuring the 5.8$\micron$ and 8.0$\micron$ LFs} Before models of the cluster population are made, we measure the 5.8$\micron$ and 8.0$\micron$ LFs using the same stacking and background subtraction methods as for the 3.6$\micron$ and 4.5$\micron$ LFs. The LFs are plotted in Figures 15 and 16 in the same redshift bins as the 3.6$\micron$ and 4.5$\micron$ LFs. IRAC is significantly less sensitive at 5.8$\micron$ and 8.0$\micron$ than at 3.6$\micron$ and 4.5$\micron$, and therefore these LFs are much shallower. Only the bright end of the LF (roughly M $<$ M$^{*}$, assuming a dust-free, pure stellar emission early type model) can be measured with these data; however, this depth is still sufficient for a good diagnostic of the presence of luminous dusty starbursts. For example, at 0.1 $< z <$ 0.4, an M82-type starburst is roughly 3 magnitudes brighter at 8.0$\micron$ than an early type model (e.g., Huang et al. 2007a; Wilson et al. 2007, see also $\S$6.3), and therefore even a galaxy with M $\sim$ M$^{*}$ + 3 from the 3.6$\micron$ LF would be detected at 8.0$\micron$ if it were undergoing an M82-like dusty starburst. \begin{figure*} \plotone{f15.eps} \caption{\footnotesize The 5.8$\micron$ LFs of clusters in the FLS. The solid red line shows the 5.8$\micron$ LF predicted from the 3.6$\micron$ LF assuming all galaxies have the colors of the passive evolution model. The dashed green lines and dotted blue lines are the regular+quiescent and starburst+regular+quiescent models described in $\S$5.2.4, respectively; however, 5.8$\micron$ is not sensitive to PAH emission or warm dust at $z >$ 0.3, and therefore these models are not notably different from the passive evolution model.} \end{figure*} \begin{figure*} \plotone{f16.eps} \caption{\footnotesize The 8.0$\micron$ LFs of clusters in the FLS. The solid red line, dashed green line, and dotted blue line are the 8.0$\micron$ LFs predicted using the 3.6$\micron$ LF and the quiescent, regular+quiescent, and starburst+regular+quiescent models described in $\S$5.2.4. At lower redshift ($z <$ 0.4) the LFs are most similar to the predictions of the regular+quiescent model, whereas at higher redshift ($z >$ 0.4) the LFs are better described by the starburst+regular+quiescent model.} \end{figure*} \subsubsection{Contamination from AGN} \indent In order to draw conclusions from models of the MIR cluster LFs, it is important to have some constraints on the fraction of cluster MIR sources that are AGN and the fraction that are star forming galaxies. The fraction of galaxies in clusters at $z <$ 0.6 identified as AGN based on their optical spectra is low ($<$ 2\%, e.g., Dressler et al. 1985; Dressler et al. 1999), whereas the fraction of star forming galaxies can be quite large (5 - 80\%, e.g., Butcher \& Oemler 1984; Dressler et al. 1999; Ellingson et al. 2001; Poggianti et al. 2006). Therefore, it might be expected that star forming galaxies will dominate the overall number of cluster MIR sources. It is possible, however, that the AGN fraction in clusters has been underestimated because some cluster AGN are missed by optical selection. X-ray observations of moderate redshift clusters have found an additional population of cluster X-ray AGN that do not have AGN-like optical spectra (e.g., Martini et al. 2006; 2007, Eastman et al. 2007). Martini et al.
(2007) showed that this population is roughly as large as the optical AGN population, making the overall AGN fraction $\sim$ 5\% for cluster galaxies at $z \sim$ 0.2 with moderate luminosity AGNs (broad-band X-ray luminosities L$_{X}$ $>$ 10$^{41}$ erg s$^{-1}$), but only $\sim$ 1\% for those with bright AGN (L$_{X}$ $>$ 10$^{42}$ erg s$^{-1}$). If the analysis is restricted to galaxies with hard X-ray luminosities $>$ 10$^{42}$ erg s$^{-1}$, then the fraction is about an order of magnitude lower (0.1\%, Eastman et al. 2007). \newline\indent Although these studies suggest the AGN fraction in clusters is low, particularly for bright AGN, it is unclear how many of the optical and X-ray selected cluster AGN will have detectable MIR emission, and what fraction of the cluster MIR population they comprise. Previous MIR studies of clusters have detected only a few AGN in spectroscopic samples of $\sim$ 30-80 cluster MIR sources (e.g., Duc et al. 2002; Coia et al. 2005; Marcillac et al. 2007; Bai et al. 2007) suggesting that $>$90\% of cluster galaxies detected in the MIR are star forming galaxies. One way to estimate the fraction of cluster MIR-bright AGN is to use the IRAC and MIPS color-color diagrams suggested by Lacy et al. (2004) and Stern et al. (2005). Although these simple color cuts fail to identify complete samples of AGN because they only identify those that have red power-law slopes in the MIR (e.g., Cardamone et al. 2008), these are precisely the type of AGN that will be included in the 5.8$\micron$ and 8.0$\micron$ LFs and therefore the color cuts should provide a reasonable estimate of the contamination of those LFs from AGN. \newline\indent In the left panels of Figure 17 we plot the IRAC colors of all galaxies brighter than the 50\% completeness limits using the color spaces suggested by Stern et al. (2005) (top) and Lacy et al. (2004) (bottom). The dashed lines in each panel represent the portion of color space used to select AGN in the MIR by these authors. FLS galaxies which satisfy the color criteria are plotted as grey circles. The right panels of Figure 17 show the same plot for all galaxies with R $<$ R$_{200}$ for clusters at $z <$ 0.7 in the FLS (59 clusters). \newline\indent The entire FLS (left panels) can be used to estimate the surface density of MIR selected AGN in these color spaces. Subtracting this background from the cluster fields we find an excess of 26 $\pm$ 22 galaxies using the Stern et al. (2005) color cut and an excess of 30 $\pm$ 30 galaxies using the Lacy et al. (2004) color cut. Summing the background subtracted 3.6$\micron$ LFs to the same limit implies that there are 2466 total cluster galaxies in these 59 clusters, and that the overall fraction of cluster galaxies that are candidate MIR-bright AGN (to our 3.6$\micron$ detection limit) is 1$^{+1}_{-1}$\%, where all error bars have been calculated using Poisson statistics. Integrating the 5.8$\micron$ and 8.0$\micron$ LFs shows there are 869 and 959 cluster galaxies detected in these bands and that the fraction of cluster sources detected in the MIR that are candidate AGN is $\sim$ 3$^{+3}_{-3}$\%. \newline\indent Although this crude estimate is almost certainly an incomplete census of the total fraction of AGN in clusters, it is remarkably similar to the AGN fractions measured with optical spectroscopy or by X-ray selection and is consistent with the fraction of spectroscopically confirmed MIR-bright AGN seen in previous cluster MIR studies.
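The arithmetic behind these fractions is simple statistical background subtraction; the sketch below spells it out (the surface density, area, and raw counts are placeholders rather than the measured quantities):
\begin{verbatim}
import numpy as np

# Placeholder inputs; see text for the measured values.
n_in_cut    = 80    # AGN-colored sources toward the 59 cluster fields
bkg_density = 1.0   # AGN-colored sources per arcmin^2 in the full FLS
area        = 54.0  # total cluster-field area searched (arcmin^2)

n_bkg  = bkg_density * area
excess = n_in_cut - n_bkg           # candidate cluster AGN
err    = np.sqrt(n_in_cut + n_bkg)  # simple Poisson error estimate

n_members = 2466  # cluster galaxies from the summed 3.6um LFs
print(excess / n_members, err / n_members)
\end{verbatim}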
Based on the low estimated AGN fraction, and for the sake of simplicity in interpretation, we do not model an AGN component in the 5.8$\micron$ and 8.0$\micron$ LFs in this analysis. We do note that the X-ray, spectroscopic, and MIR selections clearly show that the fraction of MIR cluster sources that are AGN is {\it not} zero, and therefore some of the sources in the 5.8$\micron$ and 8.0$\micron$ LFs will certainly be AGN. \begin{figure*} \plotone{f17.eps} \caption{\footnotesize Top Left Panel: Color-color plot of all galaxies in the FLS (small dots). The dashed lines denote the region used to select AGN by Stern et al. (2005). Bottom Left Panel: Same as top left, but for the Lacy et al. (2004) color space. Right Panels: Color-color plots for galaxies at R $<$ R$_{200}$ in the fields of clusters at $z <$ 0.7 (59 clusters). The majority of these sources are foreground or background galaxies. Background subtraction based on the surface density of sources in the left panels suggests that 1$^{+1}_{-1}$\% of cluster galaxies detected at 3.6$\micron$ are AGN and that $\sim$ 3$^{+3}_{-3}$\% of cluster galaxies detected at 5.8$\micron$ and 8.0$\micron$ are AGN.} \end{figure*} \subsubsection{Modeling the 5.8$\micron$ Luminosity Function} \indent The simplest fiducial model that can be made for the MIR cluster galaxy population is to assume that the bright end of the LF is dominated by passive, dust-free, early type galaxies (i.e., the emission at 5.8$\micron$ and 8.0$\micron$ is completely stellar). Although such a model is unrealistic, it provides a baseline for predicting the amount of emission in the MIR from stellar emission, and any excess beyond this model is likely to be from dusty star formation in the cluster population. Assuming such a model, the 5.8$\micron$ LFs can be inferred from the 3.6$\micron$ LFs using the 3.6$\micron$-5.8$\micron$ colors from the Bruzual \& Charlot passive evolution model. These predicted 5.8$\micron$ LFs are overplotted on the LFs in Figure 15 as the solid red lines (Figure 15 also has additional models overplotted which are introduced in $\S$5.2.4). \newline\indent Qualitatively, the 3.6$\micron$ LFs and the passive evolution model predict the 5.8$\micron$ LFs reasonably well at all redshifts. This is perhaps not surprising because, due to k-corrections, 5.8$\micron$ is only sensitive to emission from warm dust or PAHs in star forming galaxies at $z < 0.3$ (see $\S$6.3). For galaxies at higher redshift, 5.8$\micron$ probes rest-frame wavelengths which, similar to the 3.6$\micron$ LFs, are dominated by stellar emission. As a result, any dusty star forming cluster galaxies would only be visible as a notable excess in the predicted 5.8$\micron$ LFs at $z <$ 0.3. No such excess is seen; however, the fraction of blue star forming galaxies in clusters evolves rapidly (i.e., the Butcher-Oemler Effect) and clusters at $z <$ 0.3 typically have low blue fractions and relatively few star forming galaxies (e.g., Ellingson et al. 2001; Balogh et al. 1999; Margoniner et al. 2001). This result confirms that the fraction of star forming galaxies in clusters at $z <$ 0.3 is low, and furthermore that there is no significant additional population of MIR luminous dusty star forming galaxies in clusters at these redshifts that is missing from optically-selected spectroscopic or photometric studies.
\subsubsection{Modeling the 8.0$\micron$ Luminosity Function} \indent Unlike the 5.8$\micron$ LFs, the cluster 8.0$\micron$ LFs are not consistent with the passive evolution model predictions from the 3.6$\micron$ LFs, illustrated by the solid red lines plotted in Figure 16. This model clearly underpredicts the number of galaxies in the 8.0$\micron$ LFs at all redshifts. In order to construct a more useful model for the 8.0$\micron$ LF that includes the cluster star forming population, we use the 3.6$\micron$-8.0$\micron$ colors for different types of star forming galaxies from J. Huang et al. (2008, in preparation). These authors have empirically extended the color/redshift models of Coleman et al. (1980) to 10$\micron$ using local galaxies with $ISO$ spectroscopy. Some examples of the colors from these models are presented in Wilson et al. (2007). \newline\indent Given the large number of permutations possible in the types of star forming galaxies, we are interested in as simple a model as possible which will allow for a straightforward interpretation of the data. For this analysis we divide the cluster star forming population into two populations: ``regular'' star forming cluster spirals, and dusty starburst galaxies. Huang et al. (2008) have models for both Sbc and Scd galaxies; however, the 3.6$\micron$-8.0$\micron$ colors of these models are indistinguishable, and therefore we adopt their Sbc galaxy as the model for a ``regular'' star forming cluster spiral. Huang et al. also have colors for several ``canonical'' dusty starburst galaxies such as M82, Arp220, and NGC 1068. M82 is a moderate-strength dusty starburst, has no AGN component, and is classified as a luminous infrared galaxy (LIRG). By contrast, Arp220 and NGC 1068 are powerful dusty starbursts with AGN components. The IR luminosity of Arp220 is dominated by star formation from a major merger, while the IR luminosity of NGC 1068 is dominated by a powerful AGN (although both galaxies have AGN and starburst components). Both are classified as ultra-luminous infrared galaxies (ULIRGs). Given that the majority of distant clusters studied thus far in the MIR have shown a significant population of LIRGs but no population of ULIRGs (e.g., Coia et al. 2005; Geach et al. 2006; Marcillac et al. 2007), we assume that any cluster dusty starbursts will have colors similar to M82, rather than Arp220 or NGC 1068. In general, replacing M82 as the model for cluster dusty starbursts with either Arp220 or NGC 1068 requires a smaller fraction of dusty starbursts since they are more luminous. \newline\indent In order to ascertain the dominant mode of star formation present in the cluster population we can construct simple models for the 8.0$\micron$ LFs from the 3.6$\micron$ LFs using various combinations of these populations. The purpose of the models is not to perfectly reproduce the cluster 8.0$\micron$ LFs (this requires a much more detailed knowledge of the populations in each cluster than can be obtained by statistical background subtraction), but to demonstrate how the 8.0$\micron$ LFs should appear given different proportions of these populations and thereby estimate the importance of each population's contribution to the 8.0$\micron$ LFs. Hereafter we refer to the Sbc model as ``regular'', the M82 model as ``dusty starburst'', and the Bruzual \& Charlot passive evolution model as ``quiescent''.
\newline\indent Beyond assuming that all cluster galaxies are quiescent, which clearly underpredicts the 8.0$\micron$ LFs, the next simplest model that can be made is to assume some fraction of the cluster galaxies are ``regular'' star forming galaxies (hereafter we refer to this model as regular+quiescent). In order to make such a model, we require an approximation of the relative proportions of star forming and quiescent galaxies in clusters as a function of redshift and luminosity. The best spectroscopically-classified data at these redshifts come from the MORPHS (Dressler et al. 1999; Poggianti et al. 1999) and CNOC1 (Balogh et al. 1999; Ellingson et al. 2001) projects. Unfortunately, the number of cluster spectra per d$z$ is relatively small in these samples and they cover only a modest range in redshift (0.2 $< z <$ 0.5) and depth in terms of the cluster M$^{*}$. \newline\indent Although spectroscopic classification would be the most reliable, the lack of data motivates the use of cluster blue fractions (f$_{b}$) as a function of redshift as a model for the relative fractions of star forming/non-star forming galaxies. Blue fractions for reasonably large samples of clusters at different redshifts have been calculated and it is fairly straightforward to measure them as a function of magnitude within these clusters. In particular, using f$_{b}$ as an estimate of the star forming fraction should predict the number of blue star forming galaxies (i.e., those with colors similar to the Sbc model). If a population of red, dust-obscured starburst galaxies exists in clusters they should be evident in the 8.0$\micron$ LFs as an excess of galaxies beyond the regular+quiescent model. \newline\indent For f$_{b}$ as a function of redshift we use the data of Ellingson et al. (2001) from the CNOC1 clusters which span the redshift range $z$ = 0.2 to $z =$ 0.4, and for clusters at $z >$ 0.4 we use the data on RCS-1 clusters from Loh et al. (2008). Rough f$_{b}$ values for both these samples were recomputed using only galaxies with M $<$ M$^{*}$ (D. Gilbank, private communication), because this matches the depth of the 5.8$\micron$ and 8.0$\micron$ LFs. These f$_{b}$ values as a function of redshift are listed in Table 2. \newline\indent The scatter in cluster f$_{b}$ values at a given redshift is large, and therefore different studies find different mean values depending on the sample. The values we have adopted are consistent with the majority of work in the field (e.g., Butcher \& Oemler 1984; Smail et al. 1998; Margoniner et al. 2001; Andreon et al. 2004), although we have measured them using a brighter luminosity cut. Of course, the best way to infer the f$_{b}$ of the FLS clusters would be to measure it from the clusters themselves; however, we do not have the proper filter coverage at $z <$ 0.5 to make this measurement properly nor a large enough sample to make a measurement that would be statistically different from the adopted values. \newline\indent The cluster f$_{b}$ is also a function of limiting magnitude (e.g., Ellingson et al. 2001), and without incorporating some variation in f$_{b}$ as a function of magnitude, all of the model LFs consistently overpredict the number of bright galaxies in the 8.0$\micron$ LFs, and underpredict the number of faint ones. In order to estimate the variation of f$_{b}$ as a function of magnitude we use the spectrally-typed LFs of Muzzin et al. (2007a).
They measured the K-band LF for cluster galaxies defined spectroscopically as either star forming or quiescent. Comparing those LFs (their Figure 13), and assuming all star forming galaxies are blue and all quiescent galaxies are red, results in f$_{b}$ values of 0.19, 0.35, and 0.52 for galaxies with M $<$ M$^{*}$, M$^{*}$ $<$ M $<$ M$^{*}$ + 1, and M$^{*}$ +1 $<$ M $<$ M$^{*}$ + 2 respectively in clusters at $z \sim$ 0.3. Comparing these values shows that f$_{b}$ is 1.8 times larger at M$^{*}$ $<$ M $<$ M$^{*}$ + 1 than at M $<$ M$^{*}$, and is 2.7 times larger at M$^{*}$ +1 $<$ M $<$ M$^{*}$ + 2 than at M $<$ M$^{*}$. We therefore adopt an f$_{b}$ that varies with magnitude with the following conditions: For galaxies with M $<$ M$^{*}$ in the 3.6$\micron$ LF we use the f$_{b}$ values from Table 2. For galaxies with M$^{*}$ $<$ M $<$ M$^{*}$ + 1, we assume that f$_{b}$ is twice as large as the values in Table 2, and for galaxies with M$^{*}$ + 1 $<$ M $<$ M$^{*}$ + 2 we assume that f$_{b}$ is three times as large as the values in Table 2. In cases where this causes f$_{b}$ $>$ 1.0, it is set equal to 1.0. \newline\indent Combining the f$_{b}$ as a function of redshift and magnitude with the 3.6$\micron$ LFs, assuming all ``blue'' galaxies have the color of the Huang et al. Sbc galaxy and all ``red'' galaxies have the color of the passive evolution model, results in the models that are plotted as green dashed lines in Figures 15 and 16. Comparing the data to these models shows that this simple model using only regular+quiescent galaxies predicts the cluster 8.0$\micron$ LFs fairly well. In particular, the z = 0.15, 0.25, and 0.33 LFs are well described by this model. For the higher redshift LFs this model is clearly better than the purely quiescent model; however, it still does not account for the entire 8.0$\micron$ population. \newline\indent Most importantly, the regular+quiescent model shows that out to $z \sim$ 0.65, where 8.0$\micron$ still probes rest-frame dust emission, there is no significant population of bright (M $<$ M$^{*}$) galaxies in clusters that cannot reasonably be accounted for by ``regular'' star forming cluster spirals. This is significant because it suggests that whatever processes are responsible for transforming the morphology and spectral-type of bright cluster galaxies over the same redshift range do not involve an ultra-luminous dusty starburst phase such as those caused by major mergers of gas-rich galaxies (i.e., ``wet'' mergers). We note that there appears to be an overdensity of very bright galaxies in the $z =$ 0.82 LF that cannot be accounted for by the regular+quiescent model, and this suggests the possibility of an onset of luminous starbursts (possibly from mergers) or AGN activity in bright galaxies at higher redshift. \newline\indent Although the regular+quiescent model predicts the bright end of the 8.0$\micron$ LFs well at all redshifts, and the entire 8$\micron$ LF at lower redshift, it fails to account for all of the LFs. In particular, this model seems to underpredict the number of fainter galaxies in the 8.0$\micron$ LFs for clusters at $z >$ 0.4. This suggests a third component to the cluster 8.0$\micron$ population, possibly a red, dusty starburst population which is not accounted for by the cluster f$_{b}$. Such a population was suggested by Wolf et al.
(2005) who found that the SEDs of roughly 30\% of the red sequence galaxies in the Abell 901/902 supercluster ($z$ = 0.17) were better described by dusty templates rather than a dust-free, old stellar population. In order to explore this possibility, we construct a new model with the same values of f$_{b}$ as a function of magnitude and redshift as for the regular+quiescent model, but this time we assume that some of the red quiescent galaxies are instead M82-like dusty starbursts. M82 has optical-IR colors that are similar to quiescent galaxies (see Huang et al. 2008 and $\S$6.3) so it is reasonable to assume that any M82-like dusty starbursts would be part of the population of red cluster galaxies rather than the blue cluster galaxies. \newline\indent If we assume that the dusty starburst population is a constant fraction of the red cluster galaxies, this would result in a varying ratio of dusty starburst to regular star forming galaxies in clusters as a function of redshift. In particular, clusters at low redshift will have the highest fraction of dusty starburst galaxies (because the f$_{b}$ is low and the red fraction is high). The LFs above have already suggested that there is no need for a dusty starburst population at low redshift, so modeling the dusty starbursts as a fixed fraction of the red galaxies seems inappropriate. Instead, a better way to model the population is to assume that the cluster f$_{b}$ is a tracer of the total star formation in the cluster and that the ratio of dusty starburst to regular star forming galaxies is a constant. Given this assumption we can predict the fraction of dusty starbursts directly from the cluster f$_{b}$. This fraction of dusty starbursts is then removed from the fraction of red quiescent galaxies and a model for the LFs can be made. Hereafter we refer to this model as starburst+regular+quiescent. The fractions of the cluster galaxy populations in terms of f$_{b}$ are defined using the equations, \begin{equation} f_{dsb} = f_{b} \times f_{dsb/reg}, \end{equation} \begin{equation} f_{q} = 1 - f_{b} - f_{dsb}, \end{equation} where f$_{dsb}$ is the fraction of dusty starburst galaxies, f$_{dsb/reg}$ is the assumed ratio of dusty starburst to regular star forming galaxies, and f$_{q}$ is the fraction of quiescent galaxies. In cases where f$_{dsb}$ + f$_{b}$ $>$ 1 we set f$_{dsb}$ = 1 - f$_{b}$ and f$_{q}$ = 0. \newline\indent As yet there are no good observational constraints on the parameter f$_{dsb/reg}$. Therefore, as a first-order fiducial value we assume that f$_{dsb/reg}$ = 0.5. In general, we find that allowing a range of values between 0.3 - 1.0 provides models that are fairly similar. More importantly, the differences in models that use f$_{dsb/reg}$ between 0.3 - 1.0 are much smaller than the difference between any of those models and the regular+quiescent model. Therefore, the interpretation of the data using these models will not depend strongly on the assumed value of f$_{dsb/reg}$. The starburst+regular+quiescent model with f$_{dsb/reg}$ = 0.5 is overplotted on Figures 15 and 16 as the dotted blue line. \newline\indent This starburst+regular+quiescent model overpredicts the number of bright galaxies in the $z <$ 0.4 8.0$\micron$ LFs but it is better at describing the LFs at $z >$ 0.4 than the regular+quiescent or purely quiescent models. This suggests that there is a population of dusty starbursts in clusters at $z >$ 0.4 that does not exist at $z <$ 0.4, and that these starbursts are consistent with being of an M82-type.
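The bookkeeping of $\S$5.2.4, including the magnitude scaling of f$_{b}$ and the cap at f$_{b}$ = 1.0, can be collected into a single routine; the sketch below is schematic (the Table 2 lookup and the exact binning are simplified):
\begin{verbatim}
def population_fractions(fb_table2, m_minus_mstar, f_dsb_reg=0.5):
    """Blue/regular, dusty starburst, and quiescent fractions
    for one magnitude bin of one cluster redshift bin.

    fb_table2     : blue fraction at M < M* for this redshift
    m_minus_mstar : bin magnitude relative to M*
    f_dsb_reg     : assumed dusty starburst-to-regular ratio
    """
    # f_b rises toward fainter magnitudes (Muzzin et al. 2007a):
    # 1x at M < M*, 2x at M* < M < M*+1, 3x at M*+1 < M < M*+2.
    scale = 1.0 if m_minus_mstar < 0 else (2.0 if m_minus_mstar < 1 else 3.0)
    f_b = min(1.0, fb_table2 * scale)

    f_dsb = f_b * f_dsb_reg          # first equation above
    if f_b + f_dsb > 1.0:            # cap pathological cases
        f_dsb = 1.0 - f_b
    f_q = 1.0 - f_b - f_dsb          # second equation above
    return f_b, f_dsb, f_q
\end{verbatim}
Each component is then shifted by the appropriate model color and the pieces are summed to form the predicted 5.8$\micron$ or 8.0$\micron$ LF.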
We discuss this population in more detail in $\S$6.1. \section{Discussion} \subsection{Evidence for a Change in Star Formation Properties of Cluster Galaxies?} \indent In order to better illustrate the differences in the model populations described above, we subtract the quiescent model from the 5.8$\micron$ and 8.0$\micron$ LFs between 0.15 $< z <$ 0.65 and plot the residuals in Figures 18 and 19. The residuals from the regular+quiescent and starburst+regular+quiescent models from $\S$5.2.4 are also plotted in Figures 18 and 19. The solid vertical lines in the plots represent the magnitude of M$^{*}$ inferred from the 3.6$\micron$ LF assuming the passive evolution model, and give some indication of the depth of the LFs. If we compare the data to the models and take the results at face value, the comparison suggests that the intensity of star formation in clusters is evolving with redshift and that it can be classified into three types. The first type of star formation is ``weak'' and best describes the lowest redshift clusters ($z <$ 0.15), which are consistent with the colors of an almost exclusively quiescent population in all IRAC bandpasses. This result is consistent with numerous studies of nearby clusters using spectroscopy which show few star forming galaxies (e.g., Dressler et al. 1985, Popesso et al. 2007; Christlein \& Zabludoff 2005; Rines et al. 2005). \newline\indent Between 0.2 $< z <$ 0.5 the 8.0$\micron$ LFs are no longer well-described by the purely quiescent model and the regular+quiescent model is the best model. This shows that the majority of star formation in clusters at this epoch is primarily confined to galaxies that have MIR colors similar to local late-type star forming galaxies (i.e., the Sbc model). This has direct implications for the SFRs of these galaxies because Wu et al. (2005) showed that the dust-obscured SFR of galaxies is proportional to their 8.0$\micron$ flux. Although other authors have demonstrated that there are caveats when using the 8.0$\micron$ flux to infer SFRs (e.g., the scatter can be as high as a factor of 20-30, Dale et al. 2005), this still implies that the average SFR or the average SFR per unit stellar mass (the average specific star formation rate, SSFR) of star forming cluster galaxies at 0.2 $< z <$ 0.5 is similar to that in the local universe (because they have 3.6$\micron$-8.0$\micron$ colors similar to local Sbc galaxies). This second mode of star formation in clusters is roughly what would be considered ``regular'' star formation for galaxies in the local universe. \newline\indent At $z >$ 0.5 the starburst+regular+quiescent model becomes the best description of the LFs. Again, assuming that 8.0$\micron$ flux is an indicator of SFR, the M82 starburst model is approximately a factor of 2.5 brighter at 8.0$\micron$ than the regular Sbc model for the same 3.6$\micron$ flux. Given that our model suggests that regular star forming galaxies make up $\sim$ 30-40\% of the cluster population at this redshift and M82 galaxies make up $\sim$ 15-20\%, this implies that not only is the abundance of star forming galaxies in clusters higher at higher redshift (i.e., the Butcher-Oemler Effect), but also the average SSFR of cluster galaxies is approximately a factor of 1.5 higher at $z >$ 0.5 than it is at $z <$ 0.5. This increase in SSFR suggests a third mode of star formation in cluster galaxies that could be considered a ``burst'' mode, at least relative to local star formation rates.
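To make the factor of 1.5 explicit: if roughly two thirds of the star forming cluster galaxies are ``regular'' and one third are M82-like starbursts that are a factor of 2.5 brighter at 8.0$\micron$ for the same 3.6$\micron$ flux, then the implied average is \begin{equation*} \langle {\rm SSFR} \rangle \propto \tfrac{2}{3}(1.0) + \tfrac{1}{3}(2.5) \approx 1.5, \end{equation*} relative to a purely ``regular'' star forming population. This is a back-of-the-envelope reading of the model fractions quoted above rather than a formal fit.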
Interestingly, this increase in the SSFR of cluster galaxies at higher redshift is consistent with field studies of the universal star formation density ($\rho_{*}$) which show an increase of roughly a factor of 2-5 between $z = 0.2$ and $z = 0.5$ (e.g., Lilly et al. 1996; Madau et al. 1996; Wilson et al. 2002; Schiminovich et al. 2005; Le Floc'h et al. 2005). It suggests that the increasing fraction of dusty starbursts in the cluster population could be interpreted as the result of an increase in the universal SSFR of galaxies with redshift and the constant accretion of these galaxies into clusters, and is not necessarily because starbursts are triggered by the cluster environment. Furthermore, these galaxies might only be considered ``starbursts'' relative to the mean SSFR locally, whereas at higher redshift their higher SSFR is simply typical of galaxies at that redshift. We compare the cluster 5.8$\micron$ and 8.0$\micron$ LFs to the field LFs in $\S$6.2 and discuss this further in that section. \newline\indent It is interesting that the cluster star forming population transitions from being best described by regular star forming galaxies to regular and dusty starburst galaxies around a redshift of $z \sim $ 0.4. This is notable because of the discrepant abundances of k+a and a+k post-starburst galaxies found in clusters by the MORPHS (Dressler et al. 1999) and CNOC1 (Balogh et al. 1999) projects. Dressler et al. (1999) found that approximately 18\% of cluster galaxy spectra could be classified as k+a galaxies based on the equivalent width of the H$\delta$ line, whereas Balogh et al. (1999) found that only 2\% of the cluster population could be classified this way. These results obviously lead to very different interpretations of the role of starbursts in the evolution of cluster galaxies. In particular, Dressler et al. found that the number of k+a galaxies was an order of magnitude higher in clusters than the coeval field, suggesting a cluster-related process to the creation of these galaxies, while Balogh et al. found roughly equal numbers, suggesting no environmental role. \newline\indent Although both Dressler et al. (2004) and Balogh et al. (1999) have pointed out that the different methods of data analysis may be partly responsible for such discrepant results, this study suggests that the slightly different redshift ranges of the MORPHS and CNOC1 samples may also play some role. Excluding the two highest redshift clusters in the CNOC1 sample (MS 0451-03 and MS 0016+16, both at $z \sim$ 0.55), the mean redshift of the other 14/16 (88\%) clusters in the sample is $z = 0.28$. By contrast, the mean redshift of the MORPHS sample is $z = 0.46$. Our 8.0$\micron$ cluster LFs seem to indicate that $z \sim $ 0.4 represents a transition redshift above which the dominant mode of star formation in clusters is better described as starburst, as opposed to regular. Given that once star formation ceases, the typical lifetime of the A star component of a starburst galaxy's spectrum is $\sim$ 1.5 Gyr, and that the lookback time between $z =$ 0.46 and $z =$ 0.28 is also 1.5 Gyr, it is possible that both the dusty starbursts and the k+a galaxies that are in clusters at $z =$ 0.46 may have evolved to quiescent ``k''-type galaxies by $z \sim$ 0.28, provided that the dusty star formation is immediately truncated. This would be consistent with the change in the 8.0$\micron$ LFs around this redshift and may explain why the MORPHS and CNOC1 samples show different abundances of post-starburst galaxies.
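The 1.5 Gyr interval quoted above is straightforward to verify for a standard cosmology; a quick check (assuming a flat $\Lambda$CDM with H$_{0}$ = 70 km s$^{-1}$ Mpc$^{-1}$ and $\Omega_{m}$ = 0.3, which may differ slightly from the cosmology adopted elsewhere in this paper):
\begin{verbatim}
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=70, Om0=0.3)

# Lookback time between the mean MORPHS (z = 0.46) and
# CNOC1 (z = 0.28) sample redshifts.
dt = cosmo.lookback_time(0.46) - cosmo.lookback_time(0.28)
print(dt)  # ~1.5 Gyr, comparable to the k+a lifetime
\end{verbatim}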
Furthermore, 1.5 Gyr prior to $z =$ 0.46 is $z \sim$ 0.65. Our $z =$ 0.65 cluster LF has the largest abundance of dusty starburst galaxies, and if a significant fraction of these had their star formation truncated, these would be logical progenitors to the large population of k+a galaxies seen at $z =$ 0.46 by Dressler et al. (1999). \newline\indent Our results, which show an increase in the strength of the dominant mode of star formation in cluster galaxies (from weak to normal to starburst), as well as an overall increase in the abundance of dusty star forming galaxies, are also consistent with MIR observations of other clusters in these redshift ranges. In particular, Coia et al. (2005), Geach et al. (2006), Marcillac et al. (2007), and Bai et al. (2007) have all shown that clusters at higher redshifts have significantly more MIR sources than clusters at lower redshift, and that these sources are typically brighter than the sources in lower-redshift clusters. Taken at face value, our results and their results show the equivalent of a Butcher-Oemler Effect in the MIR where both the fraction and SSFR of star forming galaxies are increasing with increasing redshift. Whether this increase is caused by the increase in the universal SFR with redshift, and the constant infall of such galaxies into the cluster environment, or by the triggering of starbursts by the high-redshift cluster environment is still uncertain. We investigate this point further in $\S$6.2 by comparing the cluster and field IRAC LFs. \begin{figure*} \plotone{f18.eps} \caption{\footnotesize Residuals of the cluster 5.8$\micron$ LFs once the predictions from the 3.6$\micron$ LFs and the passive evolution model have been subtracted. The solid red line shows the passive evolution model, the dashed green line shows the regular+quiescent model and the dotted blue line shows the starburst+regular+quiescent model. The solid vertical line represents the location of M$^{*}$ from the 3.6$\micron$ LFs assuming the 3.6$\micron$ - 5.8$\micron$ color of the passive evolution model.} \end{figure*} \begin{figure*} \plotone{f19.eps} \caption{\footnotesize Same as Figure 18 but for the 8.0$\micron$ LFs.} \end{figure*} \subsection{Is the Cluster Population Different From the Field Population?} The most obvious way to understand if the cluster environment is responsible for triggering starburst events is to directly compare the field and cluster 5.8$\micron$ or 8.0$\micron$ LFs and look for an excess of galaxies in the cluster LFs. For this comparison we use the field LFs measured by Babbedge et al. (2006, hereafter B06). Their LFs are determined using photometric redshifts of $\sim$ 100 000 galaxies from a 6.5 deg$^2$ patch of the SWIRE survey. The field LFs are measured in 5 redshift bins, and we compare the cluster LFs to the three bins which overlap the redshift range of the clusters (0.0 $< z <$ 0.25, 0.25 $< z <$ 0.50, and 0.5 $< z <$ 1.0). The corresponding cluster LFs used for comparison are the $z = 0.15$, $z =$ 0.33, and $z = $ 0.65 LFs respectively. \newline\indent The B06 field LFs are determined using total luminosities, not apparent magnitudes as used for the cluster LFs. Converting the units of the cluster LFs to total luminosities requires distance moduli and full k-corrections. In $\S$5.2.4 we showed that the cluster LFs can be well-described using three basic populations of galaxies: quiescent, regular star forming, and dusty starburst. We use the models of these three spectral types for the k-corrections.
The k-corrections for the quiescent galaxies are taken from the single-burst model and the k-corrections for the regular and dusty starburst galaxies are taken from the Huang et al. (2008) Sbc and M82 models respectively. Each LF is statistically k-corrected using the relative proportions of the galaxies which best described the LFs in $\S$5.2.4. The apparent LF for each redshift is divided into the three components by the fraction of galaxies of each type, and these are individually k-corrected and shifted by the distance modulus. These LFs are then summed to provide the total cluster LF in terms of absolute luminosities in units of $\nu$L$_{\nu}$/L$_{\odot}$. \newline\indent The cluster LFs are normalized by the number of galaxies per virial volume, whereas the field LFs are normalized by their actual number density per Mpc$^{3}$. The cluster normalization can be put in the same units as the field LFs by dividing by the virial volume; however, this does not provide a fair comparison because clusters have much higher volume densities of galaxies than the field. The most useful way to compare the cluster and field LFs is on a per unit stellar mass basis. We do not have stellar mass functions for either the field or cluster; however, we can again assume that the 3.6$\micron$ luminosity is roughly a proxy for stellar mass and renormalize the LFs to a common normalization so that they reproduce the same $\phi^{*}$ in the Schechter function fits. The renormalized 3.6$\micron$, 4.5$\micron$, 5.8$\micron$, and 8.0$\micron$ cluster LFs are plotted in Figures 20, 21, 22, and 23, respectively, as the filled red circles. The field LFs are overplotted as blue squares. \newline\indent Figures 20 and 21 show that the overall shapes of the cluster and field 3.6$\micron$ and 4.5$\micron$ LFs are similar at all redshifts. There is a slight, though not statistically significant, excess in the number of the brightest galaxies in the cluster LFs; however, these are likely to be giant elliptical galaxies which are common in clusters and typically do not follow the distribution of the Schechter function. Other than the giant ellipticals, the shapes of the 3.6$\micron$ and 4.5$\micron$ cluster and field LFs are similar, which shows that the distribution of galaxies as a function of stellar mass is nearly identical in these environments. This result is consistent with K-band studies which have shown only small differences in M$^{*}$ ($<$ 0.2 mag) between these environments (e.g., Balogh et al. 2001; Lin et al. 2004; Rines et al. 2004; Muzzin et al. 2007a). \newline\indent Conversely, there are significant differences in the 5.8$\micron$ and 8.0$\micron$ LFs of the cluster and field. Both the 5.8$\micron$ and 8.0$\micron$ LFs follow a sequence where the cluster LF is more abundant in MIR galaxies at $z =$ 0.65, particularly moderate-luminosity galaxies, and thereafter the abundance of MIR galaxies in clusters declines relative to the field with decreasing redshift. At $z =$ 0.33, the cluster is slightly deficient in both 5.8$\micron$ and 8.0$\micron$ galaxies relative to the field, reduced by a factor of $\sim$ 2 for galaxies with $\nu$L$_{\nu}$ = 5 x 10$^9$ - 5 x 10$^{10}$ L$_{\odot}$. At $z =$ 0.15, the cluster LF is significantly depleted compared to the field, reduced by a factor of $\sim$ 5 for galaxies with $\nu$L$_{\nu}$ = 5 x 10$^8$ - 5 x 10$^{10}$ L$_{\odot}$.
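The per-type conversion described at the start of this subsection reduces to straightforward bookkeeping; a schematic version (the k-correction and solar magnitude inputs are stand-ins for the single-burst, Sbc, and M82 model values, and the luminosity conversion is only approximate):
\begin{verbatim}
import numpy as np

def to_luminosity_lf(mags, counts, fractions, kcorr, dist_mod, m_sun):
    """Statistically k-correct an apparent-magnitude cluster LF.

    mags      : apparent magnitude bin centers
    counts    : background-subtracted counts per bin
    fractions : e.g. {"quiescent": 0.6, "regular": 0.3,
                      "starburst": 0.1} at this redshift
    kcorr     : k-correction (mag) per population at this redshift
    dist_mod  : distance modulus at the cluster redshift
    m_sun     : absolute magnitude of the Sun in this band
    """
    lum, n = [], []
    for pop, frac in fractions.items():
        m_abs = mags - dist_mod - kcorr[pop]          # absolute mags
        lum.append(10.0 ** (-0.4 * (m_abs - m_sun)))  # ~ nuL_nu/L_sun
        n.append(counts * frac)
    # Each population is shifted separately, then recombined.
    return np.concatenate(lum), np.concatenate(n)
\end{verbatim}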
This trend in the cluster-to-field ratio not only indicates that the environment of dusty star forming galaxies affects their evolution, but that the environmental effects seem to evolve with redshift. At $z =$ 0.15 dusty star forming galaxies are more frequently found in the lower density field environment, whereas at $z =$ 0.65 they are found more frequently in the higher density cluster environment. \newline\indent Our results are similar to those from recent studies by Elbaz et al. (2007) and Cooper et al. (2008) that have shown that the mean star formation rate of field galaxies in higher density environments increases faster than that of galaxies in low density environments with increasing redshift. This differential increase leads to a remarkable reversal in the slope of the $<$SFR$>$ of galaxies as a function of density at $z \sim$ 1 as compared to $z \sim$ 0. Field galaxies in high density environments at $z \sim$ 1 actually have higher $<$SFR$>$ than those in low density environments. Although those studies compare the $<$SFR$>$ of galaxies at a range of densities within the field and do not use clusters per se, our comparison between the 8$\micron$ LFs of the cluster and field environments seems to at least qualitatively suggest a similar trend. \newline\indent It is not entirely obvious why starbursts should prefer the cluster environment over the field environment at high ($z >$ 0.5) redshift and then avoid it at lower redshift ($z <$ 0.5). We suggest that starbursts could preferentially be triggered during the initial formation and collapse of the cluster, and be quenched thereafter by the high-density environment. If this interpretation is correct, it is likely that the parameter most responsible for the change in star formation properties relative to the field is the degree of virialization of the clusters. \newline\indent Clusters that are unrelaxed, or in the process of collapsing, have two properties that would permit increased numbers of dusty starbursts. Firstly, before virialization, the cluster gas has not yet been shock-heated to its maximum temperature. This hot intracluster gas has long been considered the primary cause of the quenching of star formation in cluster galaxies because it prevents the cooling of gas in the outer halo of a galaxy, thereby ``strangling'' star formation. Depending on the density/temperature threshold required for quenching, it is possible that starbursts that would normally be quenched in virialized clusters at lower redshifts may survive longer in unvirialized clusters at high redshift. Secondly, the velocity dispersions in unrelaxed systems are lower and therefore mergers and harassment should be more common at higher redshift (e.g., Tran et al. 2005b). It is plausible that this more dynamically ``active'' environment preferentially triggers star formation. The combination of more dusty starbursts triggered through harassment and mergers, as well as a weaker quenching process, may be the reason for more dusty starbursts in clusters relative to the field at higher redshift. Once a cluster becomes virialized the interactions between galaxies should become less frequent and the quenching of star formation by the hot cluster gas will be more efficient. In such a scenario the relative abundance of dusty starbursts in clusters should decrease relative to the field.
\newline\indent If our interpretation is correct, we might expect different results from the 8.0$\micron$ LFs of X-ray selected samples of clusters (i.e., those which require a hot virialized cluster gas component) compared to red sequence selected samples, which, assuming the early type population is formed prior to cluster collapse, do not require that clusters are fully virialized. \begin{figure} \plotone{f20.eps} \caption{\footnotesize Comparison between the cluster and field 3.6$\micron$ LFs at different redshifts. The field LFs are plotted as open blue squares and the cluster LFs are plotted as filled red circles. The cluster LFs are renormalized so that the values of $\phi^{*}$ from the Schechter function fits ($\S$5.1) match the $\phi^{*}$ values from the Schechter function fits in B06. } \end{figure} \begin{figure} \plotone{f21.eps} \caption{\footnotesize Same as Figure 20 but for the 4.5$\micron$ LFs.} \end{figure} \begin{figure} \plotone{f22.eps} \caption{\footnotesize Same as Figure 20 but for the 5.8$\micron$ LFs.} \end{figure} \begin{figure} \plotone{f23.eps} \caption{\footnotesize Same as Figure 20 but for the 8.0$\micron$ LFs. } \end{figure} \subsection{Are the Color Models Correct?} \indent The main conclusions from the cluster LFs presented in this paper depend on interpreting color models that have been primarily calibrated or determined using nearby galaxies. If these models are not applicable at higher redshift then this could cause incorrect conclusions to be drawn from the LFs. Using the spectroscopic redshifts we can examine the colors of confirmed cluster galaxies as a function of redshift to check if the models are reasonable. \newline\indent There are 55 spectroscopic redshifts available for cluster galaxies (see $\S$2.4 \& $\S$2.5). Using the spectra we can classify these galaxies into two basic types, star forming and non-star forming. For the Hectospec, SDSS, and WIYN spectroscopy the best-fitting cross-correlation template is used for the classification. For the remaining galaxies the classification is made by examining the spectra by eye for any evidence of the [OII], [OIII], or H$\alpha$ emission lines. Galaxies with any of these emission lines are classified as star forming, and those without are classified as non-star forming. Although this is a crude approach to classifying galaxies, we are only interested in a rough classification and taking a more quantitative approach, such as measuring EWs, is unnecessary. Furthermore, in all cases the cluster galaxies had spectra that were typical of either normal star forming (several emission lines including [OII] and H$\alpha$) or quiescent galaxies (strong H and K lines and a 4000\AA$ $ break), and classification was straightforward. There were no hybrid objects associated with clusters except two AGN from the Hectospec data. \newline\indent In Figure 24 we plot several of the colors of these galaxies as a function of redshift. Star forming galaxies are plotted as purple points and non-star forming galaxies are plotted as red points. The Bruzual \& Charlot single-burst model is overplotted as the solid line, and the Huang et al. (2008) Sbc and M82 models are overplotted as the dotted and dash dotted lines, respectively. In general, the non-star forming galaxies follow the single-burst model well at all redshifts. There are a handful of non-star forming galaxies which appear to have some excess 8.0$\micron$ emission, and this may be from either low-level star formation or a low-luminosity AGN.
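The by-eye classification used above is essentially a check for emission features at the expected observed wavelengths; a toy version of the test is sketched below (the line windows and detection threshold are illustrative choices, not the procedure actually applied to the spectra):
\begin{verbatim}
import numpy as np

# Rest wavelengths (Angstroms) of the diagnostic emission lines.
LINES = {"OII": 3727.0, "OIII": 5007.0, "Halpha": 6563.0}

def is_star_forming(wave, flux, continuum, z, snr_min=3.0):
    """Flag a spectrum as star forming if any diagnostic line
    is detected in emission above the local continuum."""
    noise = np.std(flux - continuum)
    for rest in LINES.values():
        obs = rest * (1.0 + z)
        win = (wave > obs - 10.0) & (wave < obs + 10.0)
        if win.any() and (flux[win] - continuum[win]).max() > snr_min * noise:
            return True
    return False
\end{verbatim}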
\newline\indent There are fewer star forming than non-star forming galaxies in the sample; however, their colors follow the Sbc and M82 models quite well. At 8.0$\micron$, where the colors of the Sbc and M82 models are most different from the single-burst model, it is clear that galaxies with emission lines have colors similar to those models, whereas those without tend to follow the single-burst model. Half of the star forming galaxies in Figure 24 (8/16) come from our spectroscopy of FLS J172449+5921.3 (cluster \#10, $z =$ 0.252). These galaxies were selected for spectroscopy because they were detected at 24$\micron$. Interestingly, most of these galaxies (7/8) have a 3.6$\micron$ - 8.0$\micron$ color similar to the Sbc model, yet they show a wide range in R - 3.6$\micron$ color. A few have an R - 3.6$\micron$ color bluer than the red sequence, typical of Sbc galaxies, whereas others have an R - 3.6$\micron$ color redder than the red sequence. This illustrates that there are both ``red'' and ``blue'' dusty star forming galaxies in clusters, and that our approach of modeling the 5.8$\micron$ and 8.0$\micron$ LFs with populations of both is reasonable. Furthermore, the fact that these are some of the brightest MIR sources in the cluster field, and that most have colors similar to the Sbc model, rather than the M82 model, is consistent with our conclusion that the 8.0$\micron$ LF at this redshift is best modeled using the regular+quiescent model, with no need for a luminous dusty starburst component. We defer a more detailed discussion of the spectroscopy, including quantitative measurements of star formation from line widths, to a future paper (Muzzin et al. 2008, in preparation). \newline\indent Overall, Figure 24 demonstrates that the galaxy templates used to model the cluster LFs agree well with the colors of spectroscopically confirmed cluster galaxies, and that they are reasonable descriptions of star forming and non-star forming galaxies between 0 $< z <$ 1. \begin{figure*} \plotone{f24.eps} \caption{\footnotesize Plot of optical - IRAC or IRAC - IRAC colors of galaxies as a function of redshift. The red, purple, and green points are spectroscopic cluster members classified as non-star forming, star forming, and AGN respectively. The solid, dotted, and dash-dotted lines are the model colors from the passive evolution model, the Sbc model, and the M82 model respectively.} \end{figure*} \subsection{Systematic Uncertainties} The data presented in this paper support a self-consistent model of the evolution of stellar mass assembly and dusty star formation in clusters; however, there are several details of this analysis that have not been discussed and could potentially result in inappropriate conclusions being drawn from the data. Although it is difficult to quantify what effect, if any, these details will have on the interpretation of the data, we believe it is important to at least note these issues here. \newline\indent One worthwhile concern is the sample of clusters used in the analysis. Although this sample is much larger than the mere handful of clusters that have been studied in the MIR thus far, it is still of modest size and subject to cosmic variance. In particular, given that the clusters come from only 3.8 deg$^2$, it is unclear whether the higher redshift clusters in the sample are truly the progenitors of the lower redshift clusters.
Unfortunately, a cosmologically significant sample of clusters covering of order 100 deg$^2$ or more is likely needed to avoid biases that might result from cosmic variance in the sample. \newline\indent Another potential problem is that there are many more low richness clusters in the sample than high richness clusters, simply because of the nature of the cluster mass function. Any effects that depend on cluster mass will clearly be missed by combining these samples. This could be important because processes that could quench star formation (e.g., ram-pressure stripping, gas strangulation) or incite starbursts (tidal effects, harassment) will likely depend on cluster mass. Using a much larger sample which can be separated by both mass and redshift would be invaluable for studying this issue further. \newline\indent Perhaps the most important concern is that there is a degeneracy between the intensity of star formation in clusters and the fraction of star forming galaxies. We showed in $\S$6.3 that the color models used for the cluster galaxies reproduce the colors of cluster galaxies with spectroscopic redshifts very well; however, even though these colors are correct, the models of the 5.8$\micron$ and 8.0$\micron$ LFs still depend on the assumed f$_{b}$ as a function of magnitude and redshift for the clusters. If the f$_{b}$ values are overestimated and need to be reduced, then a larger fraction of dusty starburst galaxies than we have assumed will be required to correctly model the cluster 5.8$\micron$ and 8.0$\micron$ LFs. Likewise, if the f$_{b}$ is underestimated, fewer dusty starbursts will be required. The assumed f$_{b}$ values are consistent with most previous studies; however, ideally, if more data were available, the f$_{b}$ would be calculated from the clusters themselves, which would avoid this degeneracy. \newline\indent Lastly, it is worth mentioning that much of the excess seen in the 8.0$\micron$ LFs is near the limiting magnitude of the survey. Problems with the background estimation could artificially inflate these values. It is unlikely that this is the case because if the excess of galaxies near the faint limit of the survey were due to an undersubtraction of the background, it should also be seen in the lower redshift LFs, which it is not. Furthermore, undersubtraction of the background should be even more prevalent in the lower redshift LFs because clusters have much larger angular sizes and therefore more total area from which to undersubtract the background. It is unlikely that this is a problem; however, deeper data would be useful in ensuring there are no errors due to completeness near the survey limit. \section{Conclusions} We have presented a catalogue of 99 candidate clusters and groups at 0.1 $< z_{phot} <$ 1.3 discovered in the $Spitzer$ First Look Survey using the cluster red sequence technique. Using spectroscopic redshifts from FLS followup campaigns and our own spectroscopic followup of clusters, we have shown that the R - 3.6$\micron$ color of the cluster red sequence is an accurate photometric redshift estimator at the $\Delta$z = 0.04 level at $z <$ 1.0. Furthermore, we demonstrated that the properties of the FLS cluster catalogue are similar to previous cluster surveys such as the RCS-1. Using this cluster sample we studied the evolution of the cluster 3.6$\micron$, 4.5$\micron$, 5.8$\micron$, and 8.0$\micron$ LFs.
The main results from these LFs can be summarized as follows: \begin{itemize} \item In agreement with previous work, the evolution of the 3.6$\micron$ and 4.5$\micron$ LFs between 0.1 $< z <$ 1.0 is consistent with a passively evolving population of galaxies formed in a single burst at $z >$ 1.5. Given that the 3.6$\micron$ and 4.5$\micron$ bandpasses are reasonable proxies for stellar mass, this suggests that the majority of stellar mass in clusters is already assembled into massive galaxies by $z \sim$ 1. \item The MIR color cuts used to select AGN by Lacy et al. (2004) and Stern et al. (2005) suggest that the fraction of cluster galaxies that host MIR-bright AGN at $z <$ 0.7 is low. We estimate that the AGN fraction of cluster galaxies detected at 3.6$\micron$ is 1$^{+1}_{-1}$\%. AGN are a larger, but still modest component of the 5.8$\micron$ and 8.0$\micron$ cluster population, approximately 3$^{+3}_{-3}$\% of these galaxies. \item The cluster 5.8$\micron$ and 8.0$\micron$ LFs do not look similar to the 3.6$\micron$ and 4.5$\micron$ LFs, and this is due to the presence of the cluster star forming galaxies. Star forming galaxies are much brighter in these bandpasses than early type galaxies and their varying fractions with redshift cause deviations from the shape of the 3.6$\micron$ and 4.5$\micron$ LFs. The 5.8$\micron$ and 8.0$\micron$ LFs are well-described using different fractions of three basic types of galaxies: quiescent, regular star forming, and dusty starburst, by assuming that the fractions of the latter two are proportional to the cluster f$_{b}$. \item The 8.0$\micron$ cluster LFs suggest that both the frequency and SSFR of star forming cluster galaxies are increasing with increasing redshift. In particular, it appears that when compared to star forming galaxies in the local universe, the intensity of star formation in clusters evolves from ``weak'' to ``regular'' to ``starburst'' with increasing redshift. Qualitatively, this evolution mimics the evolution in the universal star formation density with redshift, suggesting that this evolution is at least in part caused by the accretion of star forming galaxies into the cluster environment. \item Comparing the 3.6$\micron$ and 4.5$\micron$ cluster and field LFs with similar normalization shows that the LFs in these environments are similar, with evidence for a small excess in the brightest galaxies in clusters, likely caused by the cluster giant ellipticals. In agreement with previous K-band studies this suggests that the distribution of galaxies as a function of stellar mass in both environments is roughly equivalent. \item There is a significant differential evolution in the cluster and field 5.8$\micron$ and 8.0$\micron$ LFs with redshift. At $z =$ 0.65 the cluster is more abundant in 8.0$\micron$ galaxies than the field; however, thereafter the relative number of 5.8$\micron$ and 8.0$\micron$ galaxies declines in clusters with decreasing redshift and by $z =$ 0.15 the cluster is underdense in these sources by roughly a factor of 5. This differential evolution could be explained if starbursts are preferentially triggered during the early formation stages of the cluster but then preferentially quenched thereafter by the high density environment. \end{itemize} \indent A well-sampled spectroscopic study of several high-redshift clusters with MIR data would be extremely valuable for verifying our interpretation of the IRAC cluster LFs because it is always difficult to draw incontrovertible conclusions from LFs alone.
Still, the cluster LFs do show a strong increase in the number of 5.8$\micron$ and 8.0$\micron$ sources in clusters with increasing redshift which must almost certainly be attributed to increased amounts of dusty star formation in higher redshift clusters. \newline\indent One of the strengths of this analysis is that it is based on a relatively large sample of galaxy clusters. It has become clear from the handful of clusters studied thus far by $ISO$ and $Spitzer$ that the MIR properties of cluster galaxies can be quite different from cluster to cluster. They may depend on dynamical state, mass, f$_{b}$, or other parameters (e.g., Coia et al. 2005; Geach et al. 2006). The advantage of using many clusters is that it provides a metric of how the ``average'' cluster is evolving as a function of redshift. Detailed studies of individual clusters with significant ancillary data will pave the way to a better understanding of the physics behind the evolution of dusty star formation in cluster galaxies; however, large statistical studies such as this one will indicate whether the clusters studied in future work are representative of the cluster population as a whole, or are potentially rare, biased clusters with unusual properties caused by an ongoing merger or some other event. \newline\indent It is worth noting that although the quality of the LFs provided by the 99 clusters in the FLS is good, these LFs would still benefit from a larger statistical sample. In particular, a larger sample would allow for the separation of clusters by other properties such as mass or morphology, and for understanding whether these properties play a role in shaping the MIR cluster galaxy population. We are currently working on a survey to detect clusters in the much larger SWIRE survey: the Spitzer Adaptation of the Red sequence Cluster Survey (SpARCS). This project has 13 times more area than the FLS and is a factor of 2 deeper in integration time in the IRAC bands. The analysis of that sample should provide a significant improvement in the quality of the cluster LFs. \acknowledgements We thank the anonymous referee whose comments improved this manuscript significantly. We would like to thank David Gilbank, Thomas Babbedge, Roberto De Propris, and Stefano Andreon for graciously making their data available to us. We thank David Gilbank for useful conversations which helped improve the clarity of this analysis. We also thank Dario Fadda for recomputing the FLS R-band photometry using different apertures. A.M. acknowledges support from the $Spitzer$ Visiting Graduate Student Program during which much of this work was completed. A.M. also acknowledges support from the Natural Sciences and Engineering Research Council (NSERC) in the form of PGS-A and PGSD2 fellowships. The work of H.K.C.Y. is supported by grants from the Canada Research Chair Program, NSERC, and the University of Toronto. This work is based in part on observations made with the Spitzer Space Telescope, which is operated by the Jet Propulsion Laboratory, California Institute of Technology under a contract with NASA.
\section{Introduction} Deep Convolutional Neural Networks (CNN) have achieved tremendous success among many computer vision tasks~\cite{girshick2015fast,krizhevsky2012imagenet,simonyan2014very}. However, a hand-crafted network structure tailored to one task may perform poorly on another task. Therefore, it usually requires an extensive amount of human effort to design an appropriate network structure for a certain task. Recently, there has been emerging research~\cite{assunccao2018using,baker2016designing,bergstra2013making,istrate2018tapas,mendoza2016towards,zoph2017learning} on automatically searching neural network structures for image recognition tasks. In this paper, we focus on optimizing the evolution-based algorithms~\cite{liu2017hierarchical,miikkulainen2017evolving,stanley2002evolving,xie2017genetic} for searching networks from scratch with poorly-initialized networks, such as a network with one global pooling layer, and with few constraints forced during the evolution~\cite{real2017large}, as we assume no prior knowledge about the task domain. Existing work along this line of research suffers from either prohibitive computational cost or unsatisfactory performance compared with hand-crafted network structures. In~\cite{real2017large}, it costs more than $256$ hours on $250$ GPUs to search neural network structures, which is not affordable for general users. In~\cite{xie2017genetic}, the final network structure learned by their genetic approach achieves about $77\%$ test accuracy on CIFAR-10, even though better performance of 92.9\% could be obtained after fine-tuning certain parameters and modifying some structures of the discovered network. In~\cite{li2018evoltuion}, the authors aim to achieve better performance with reduced computational cost through a proposed aggressive selection strategy in the genetic approach, together with additional mutation operations that restore the diversity reduced by that selection strategy. In their work, they reduce the computational cost dramatically from more than $64,000$ GPU hours (GPUH) to a few hundred GPUH. However, their approach still sacrifices performance, for example, $90.5\%$ test accuracy compared to the $94.6\%$ test accuracy from~\cite{real2017large} on the CIFAR-10 dataset. Inspired by a few key concepts in ecological systems, in this paper we try to improve the genetic approach to achieve better test performance compared to~\cite{real2017large}, or performance competitive with hand-crafted network structures~\cite{he2016deep}, under limited computation cost~\cite{li2018evoltuion} but without utilizing pre-designed architectures~\cite{liu2017progressive,liu2017hierarchical,liu2018darts,zoph2017learning}. Inspired by primary and secondary succession in ecological systems~\cite{sahney2008recovery}, we enforce a poorly initialized population of neural network structures to rapidly evolve to a population containing network structures with dramatically improved performance. After the first stage of primary succession, we perform fine-grained search for better networks in a population during the secondary succession stage. During the succession stages, we also introduce an accelerated extinction algorithm to improve the search efficiency. In our approach, we apply the mimicry~\cite{greeney2012feeding} concept to help inferior networks learn behavior from superior networks to obtain better performance.
In addition, we introduce gene duplication to further utilize the novel blocks of layers that appear in the discovered network structure. The contribution of this paper is four-fold and can be summarized as follows: \begin{itemize} \item We propose an efficient genetic approach to search for neural network structures from scratch, starting with poorly initialized networks and without limiting the search space. Our approach greatly reduces the computation cost compared to other genetic approaches in which neural network structures are searched from scratch. This is different from some recent works~\cite{elsken2017simple,liu2018darts,pham2018efficient} that significantly restrict the search space. \item We incorporate the primary and secondary succession concepts from ecological systems into our genetic framework to search for optimal network structures under limited computation cost. \item We explore the mimicry concept from ecological systems to help find better networks during the evolution, and use the gene duplication concept to utilize the discovered beneficial structures. \item Experimental results show that the obtained neural network structures achieve better performance than existing genetic-based approaches and competitive performance with hand-crafted network structures. \end{itemize} \section{Related Work} There is growing interest in the automatic search of neural network architectures from scratch. Methods based on reinforcement learning (RL) show promising results in obtaining networks with performance similar to or better than human-designed architectures~\cite{baker2016designing,zhong2018practical,zoph2016neural}. Zoph \textit{et al}. propose searching in cells, including a normal cell and a reduction cell, where the final architecture is built by stacking the cells~\cite{zoph2017learning}. The idea of cell-based search is widely adopted in many studies~\cite{cai2018efficient,cai2018path,liu2017progressive,liu2018darts,pham2018efficient,zhong2018blockqnn}. In order to reduce the high computational cost, efforts have been made to avoid training all networks from scratch during the search process~\cite{baker2017accelerating,bender2018understanding,brock2017smash,cai2018efficient,domhan2015speeding,elsken2017simple,klein2016learning,zhong2017practical}. However, these works require strict hand-designed constraints to reduce computation cost, and comparison with them is not the focus of this paper. On the other hand, a few studies~\cite{real2017large,suganuma2017genetic,xie2017genetic} have emerged that target network search using evolutionary approaches. In order to compare the RL-based and evolutionary approaches fairly, Real~\textit{et al.}~\cite{real2018regularized} conduct a study in which the RL and evolutionary approaches are performed in the same search space. Experiments show that the evolutionary approach converges faster than RL. Therefore, in this paper we focus on \textit{genetic-based approaches} for searching for optimal neural network structures. Suganuma \textit{et al.} propose network search based on Cartesian genetic programming~\cite{harding2008evolution}; however, a pre-defined grid with fixed rows and columns is used, as the network has to fit in the grid~\cite{suganuma2017genetic}. 
The studies whose search space is most similar to ours are~\cite{li2018evoltuion,real2017large,xie2017genetic}, where the network search starts from poorly initialized networks and uses few constraints during the evolution. Since this paper focuses on achieving better performance with limited computational cost through a genetic approach, we highlight the differences between our work and these studies~\cite{li2018evoltuion,real2017large,xie2017genetic} from two aspects: reducing computation cost and improving performance. In~\cite{real2017large}, the authors encode each individual network structure as a graph into DNA and define several different mutation operations, such as IDENTITY and RESET-WEIGHTS, to apply to each \textit{parent} network to generate \textit{children} networks. The essence of this genetic approach is that a large amount of computation is used to search a huge space of neural network structures. Specifically, the entire search procedure costs more than $256$ hours with $250$ GPUs to achieve $94.6\%$ test accuracy with the learned network structure on the CIFAR-10 dataset, which is not affordable for general users. Due to this prohibitive computation cost, the authors of~\cite{xie2017genetic} impose restrictions on the neural network search space. In their work, they learn only one block of network structure and stack the learned block a certain number of times in a designed routine to obtain the best network structure. Through this mechanism, the computation cost is reduced to several hundred GPU hours; however, the test performance of the obtained network structure is not satisfactory: the found network achieves $77\%$ test accuracy on CIFAR-10, even though fine-tuning parameters and modifying certain structures of the learned network could lead to a test accuracy of 92.9\%. In~\cite{li2018evoltuion}, the authors aim to obtain better performance from an automatically learned network structure with limited computation cost in the course of evolution, which had not been brought up previously. Instead of restricting the search space to reduce computational cost~\cite{dufourq2017eden,xie2017genetic}, they propose an aggressive selection strategy to eliminate weak neural network structures at an early stage. However, this aggressive selection strategy may decrease the diversity on which genetic approaches rely to improve performance. To remedy this issue, they define more mutation operations, such as add\_fully\_connected or add\_pooling. Finally, they reduce the computation cost dramatically to 72 GPUH on CIFAR-10. However, there is still a performance loss in their approach; for example, on the CIFAR-10 dataset, the test accuracy of the found network is about $4\%$ lower than that of~\cite{real2017large}. At the end of this section, we highlight that our work is in line with~\cite{li2018evoltuion}. Inspired by ecological concepts, we propose the Ecologically-Inspired GENetic approach (EIGEN) for neural network structure search, which evolves the networks through rapid succession and explores mimicry and gene duplication along the evolution. 
\section{Approach} Our genetic approach for searching for optimal neural network structures follows the standard procedure: i) initialize the population of the first generation with simple network structures; ii) evaluate the \textit{fitness} score of each neural network structure (the fitness score is any measurement defined by the user for their purpose, such as validation accuracy, number of parameters in the network structure, number of FLOPs in the inference stage, and so on); iii) apply a selection strategy to decide the surviving network structures based on the fitness scores; iv) apply mutation operations to the surviving \textit{parent} network structures to create the \textit{children} networks for the next generation. The last three steps are repeated until the fitness scores converge. Note that in our genetic approach, an \textit{individual} is denoted by an acyclic graph with each node representing a certain layer, such as a \textit{convolution}, \textit{pooling} or \textit{concatenation} layer. A \textit{child} network can be generated from a \textit{parent} network through a mutation procedure. A \textit{population} includes a fixed number of networks in each generation, which is set to 10 in our experiments. For details on using a genetic approach to search for neural network structures, we refer the readers to~\cite{li2018evoltuion}. In the following, we apply the ecological concepts of succession, extinction, mimicry and gene duplication to the genetic approach for an accelerated search of neural network structures. \subsection{Evolution under Rapid Succession} Our inspiration comes from the fact that in an ecological system, the population is dominated by diversified fast-growing individuals during the primary succession, while in the secondary succession, the population is dominated by more competitive individuals~\cite{sahney2008recovery}. Therefore, we treat all the networks in each generation of the evolution process as a \textit{population} and focus on evolving the population instead of a single network~\cite{real2017large}. With this treatment, we propose a two-stage \textit{rapid succession} for accelerated evolution, analogous to the ecological succession. The proposed rapid succession includes a primary succession, which starts with a community consisting of a group of poorly initialized individuals, each containing only one global pooling layer, and a secondary succession, which starts after the primary succession. In the primary succession, a large search space is explored to allow the community to grow at a fast speed, while a relatively small search space is used in the secondary succession for fine-grained search. To describe how the search space is explored, we define the \textit{mutation step-size} $m$ as the maximum number of mutation iterations between the parent and its children. The actual number of mutation steps for each child is uniformly chosen from $[1,m]$. In the primary succession, in order to have diversified fast-growing individuals, a large mutation step-size is used in each generation so that the mutated children can be significantly different from each other and from their parent. Since we only go through the training procedure after all mutation steps are finished, the computation cost of each generation does not increase with a larger step-size. In the secondary succession, we adopt a relatively small mutation step-size to perform a fine-grained search for network structures. 
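To make this sampling scheme concrete, the following minimal Python sketch (not part of a released implementation; the \texttt{copy} method on the DNA representation is hypothetical) shows one round of child generation, with the mutation operations enumerated next modeled as callables that map one DNA to a mutated copy:
\begin{verbatim}
import random

def mutate(parent, m, operations):
    # Draw the actual number of mutation steps uniformly from [1, m],
    # where m is the mutation step-size defined above.
    child = parent.copy()
    for _ in range(random.randint(1, m)):
        # Each step applies one randomly chosen mutation operation.
        child = random.choice(operations)(child)
    return child
\end{verbatim}
A large $m$ is used during the primary succession and a small $m$ during the secondary succession; since training happens only once per child, the cost of a generation is unaffected by $m$.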
Each mutation step is randomly selected from the following nine operations: \begin{itemize} \item INSERT-CONVOLUTION: A convolutional layer is randomly inserted into the network. The inserted convolutional layer has a default setting with a kernel size of 3$\times$3, 32 channels, and a stride of 1. The convolutional layer is followed by batch normalization~\cite{ioffe2015batch} and Rectified Linear Units~\cite{krizhevsky2012imagenet}. \item INSERT-CONCATENATION: A concatenation layer is randomly inserted into the network where two bottom layers share the same size of feature maps. \item INSERT-POOLING: A pooling layer is randomly inserted into the network with a kernel size of 2$\times$2 and a stride of 2. \item REMOVE-CONVOLUTION: The operation randomly removes a convolutional layer. \item REMOVE-CONCATENATION: The operation randomly removes a concatenation layer. \item REMOVE-POOLING: The operation randomly removes a pooling layer. \item ALTER-NUMBER-OF-CHANNELS, ALTER-STRIDE, ALTER-FILTER-SIZE: These three operations modify the hyper-parameters of a convolutional layer. The number of channels is randomly selected from the list $\left \{ 16, 32, 48, 64, 96 \right \}$; the stride is randomly selected from the list $\left \{ 1, 2 \right \}$; and the filter size is randomly selected from $\left \{ 1\times1, 3\times3\right \}$. \end{itemize} During the succession, we adopt the idea from previous work~\cite{li2018evoltuion} that only the best individual of the previous generation survives. However, instead of evaluating the population in each generation after all the training iterations, it is more efficient to extinguish the individuals that are likely to fail at early iterations, especially during the primary succession, where the diversity of the population leads to erratic performance. Based on the assumption that a better network should have a better fitness score at earlier training stages, we design our extinction algorithm as follows. To facilitate the presentation, we denote $n$ as the population size in each generation, $T_1$ and $T_2$ as the landmark iterations, $f_{g,i,T_1}$ and $f_{g,i,T_2}$ as the fitness scores (validation accuracy in our work) of the $i^{th}$ network in the $g^{th}$ generation after training for $T_1$ and $T_2$ iterations, and $v_{g, T_1}$ and $v_{g, T_2}$ as the thresholds for eliminating weaker networks at $T_1$ and $T_2$ iterations in the $g^{th}$ generation. In the $g^{th}$ generation, we have fitness scores for all networks $\mathcal{F}_{g,T_1} = \{f_{g,i,T_1}, i = 1, \cdots, n\}$ and $\mathcal{F}_{g,T_2} =\{ f_{g,i,T_2}, i = 1, \cdots, \hat{n}\}$ after training for $T_1$ and $T_2$ iterations, respectively. Note that $\hat{n}$ can be less than $n$ since weaker networks are eliminated after $T_1$ iterations. The thresholds $v_{g, T_1}$ and $v_{g, T_2}$ are updated in the $g^{th}$ generation as \begin{equation} v_{g, T_1} = \max \Big(S(\mathcal{F}_{g, T_1})_p, \hspace{0.1cm} v_{g-1, T_1}\Big) \label{eq:T1} \end{equation} and \begin{equation} v_{g, T_2} = \max \Big( S(\mathcal{F}_{g, T_2})_q, \hspace{0.1cm} v_{g-1, T_2}\Big) \label{eq:T2} \end{equation} where $S(\cdot)$ is a sorting operator in decreasing order on a list of values, the subscripts $p$ and $q$ denote the $p^{th}$ and $q^{th}$ values after the sorting operation, and $p$ and $q$ are hyper-parameters. 
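In code, Eq.~\ref{eq:T1} and Eq.~\ref{eq:T2} amount to keeping a running maximum of the $k^{th}$ best fitness score seen so far, with $k=p$ at $T_1$ iterations and $k=q$ at $T_2$ iterations. A small Python sketch (assuming 1-indexed $p$ and $q$) is:
\begin{verbatim}
def update_threshold(scores, k, prev_threshold):
    # Sort fitness scores in decreasing order and take the k-th best.
    kth_best = sorted(scores, reverse=True)[k - 1]
    # The threshold never decreases across generations.
    return max(kth_best, prev_threshold)
\end{verbatim}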
For each generation, we perform the following steps until the fitness scores converge: (i) train the population for $T_1$ iterations and extinguish the individuals with fitness scores less than $v_{g, T_1}$; (ii) train the remaining population for $T_2$ iterations and extinguish the individuals with fitness scores less than $v_{g, T_2}$; (iii) train the surviving individuals further until convergence and choose the best one as the parent of the next generation. The details of the extinction algorithm are described in Algorithm~\ref{alg:aae}. \begin{algorithm}[t] \caption{Algorithm for Extinction} \label{alg:aae} \begin{algorithmic}[1] \STATE \textbf{Input:} $T_1$, $T_2$, $v_{0,T_1}$, $v_{0,T_2}$, $p$, $q$ \FOR{$g = 1, \cdots, G$} \STATE Obtain $\mathcal{F}_{g, T_1} = \{ f_{g,i,T_1},i=1, ...,n\}, n=10$ by training all individuals for $T_1$ iterations\\ \STATE Update $v_{g, T_1}$ based on Eq.~\ref{eq:T1} \\ \STATE Extinguish the individuals with fitness values less than $v_{g, T_1}$\\ \STATE Obtain $\mathcal{F}_{g, T_2} = \{ f_{g,i,T_2},i=1, ...,\hat{n}\}$ by training the remaining individuals for $T_2$ iterations\\ \STATE Update $v_{g, T_2}$ based on Eq.~\ref{eq:T2} \\ \STATE Extinguish the individuals with fitness values less than $v_{g, T_2}$\\ \STATE Train the remaining individuals for $T_3$ iterations and select the best one as the parent \ENDFOR \end{algorithmic} \end{algorithm} \begin{figure}[t] \begin{center} \includegraphics[width=1\columnwidth]{duplication.png} \caption{Example of duplication. The image on the left shows the structure discovered after the rapid succession, where each block includes a number of layers with the same size of feature maps. The images in the middle and on the right are two examples of duplication, in which Block 2 undergoes different combinations to create new architectures.} \label{duplication_exp} \end{center} \end{figure} \begin{table*} \begin{center} \begin{tabular}{C{5cm} | C{2cm} C{2cm} C{2cm} C{3cm} } \hline Model & PARAMS. & C10+ & C100+ & Comp Cost \\ \hline\hline MAXOUT~\cite{goodfellow2013maxout} & - &90.7\% & 61.4\% & - \\ \hline Network In Network~\cite{lin2013network} & - & 91.2\% & 64.3\%& - \\ \hline ALL-CNN~\cite{springenberg2014striving} & 1.3 M & 92.8\% & 66.3\% & -\\ \hline DEEPLY SUPERVISED~\cite{lee2015deeply} & - & 92.0\% & 65.4\% & -\\ \hline HIGHWAY~\cite{srivastava2015highway} & 2.3 M & 92.3\% & 67.6\% & - \\ \hline RESNET~\cite{he2016deep}& 1.7 M & 93.4\% & 72.8\%& - \\ \hline DENSENET ($k = 40, l = 100$)~\cite{huang2017densely} & 25.6 M & 96.5\% & 82.8\%& - \\ \hline Teacher Network & 17.2 M & 96.0\% & 82.0\% & - \\\hline\hline EDEN~\cite{dufourq2017eden}&0.2 M&74.5\% &- &- \\ \hline Genetic CNN~\cite{xie2017genetic} & - & 92.9\% & 71.0\% & 408 GPUH \\ \hline LS-Evolution~\cite{real2017large}& 5.4 M& 94.6\% &- & 64,000 GPUH \\ LS-Evolution~\cite{real2017large}& 40.4 M& -& 77.0\% & $>$ 65,536 GPUH \\ \hline AG-Evolution~\cite{li2018evoltuion} & - & 90.5\%&- & 72 GPUH\\ AG-Evolution~\cite{li2018evoltuion} & - & -& 66.9\%& 136 GPUH\\ \hline\hline EIGEN & 2.6 M & \textbf{94.6}\% & - & 48 GPUH\\ \hline EIGEN & 11.8 M & - & \textbf{78.1}\% & 120 GPUH\\ \hline \end{tabular} \end{center} \caption{Comparison with hand-designed architectures and automatically discovered architectures using genetic algorithms. The C10+ and C100+ columns indicate the test accuracy achieved on the data-augmented CIFAR-10 and CIFAR-100 datasets, respectively. The PARAMS. column indicates the number of parameters in the discovered network.} \label{compare_resutls} \end{table*} 
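For completeness, one generation of Algorithm~\ref{alg:aae} can be sketched in Python as follows, reusing \texttt{update\_threshold} from the previous sketch; the \texttt{train} and \texttt{evaluate} helpers are hypothetical stand-ins for the actual training and validation code:
\begin{verbatim}
def run_generation(population, v1, v2, T1, T2, T3, p=5, q=2):
    # Stage 1: train briefly, then extinguish against threshold v1.
    for net in population:
        train(net, iterations=T1)
    scores = [evaluate(net) for net in population]
    v1 = update_threshold(scores, p, v1)
    population = [n for n, s in zip(population, scores) if s >= v1]
    # Stage 2: train the survivors longer, extinguish against v2.
    for net in population:
        train(net, iterations=T2)
    scores = [evaluate(net) for net in population]
    v2 = update_threshold(scores, q, v2)
    population = [n for n, s in zip(population, scores) if s >= v2]
    # Stage 3: finish training and keep the best network as the parent.
    for net in population:
        train(net, iterations=T3)
    parent = max(population, key=evaluate)
    return parent, v1, v2
\end{verbatim}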
\subsection{Mimicry} In biological evolution, mimicry is a phenomenon in which one species learns behaviors from another species. For example, moth caterpillars learn to imitate the body movements of a snake so that they can scare off predators that are usually prey items for snakes~\cite{greeney2012feeding}. The analogy with mimicry suggests that, in designing neural network structures during the evolution, we can force inferior networks to adopt (learn) the behaviors of superior networks, such as the statistics of their feature maps~\cite{romero2014fitnets,yim2017gift} or their logits~\cite{bucilu2006model,hinton2015distilling}. In our approach, we force the inferior networks to learn the behavior of a superior network by generating a similar distribution of logits during the evolution procedure; we choose logits because learning the distribution of logits of the superior network gives more freedom to the inferior network structure than learning the statistics of feature maps. This is in fact the knowledge distillation proposed in~\cite{hinton2015distilling}. More specifically, for a given training image $\mathbf{x}$ with one-hot class label $\mathbf{y}$, we define $\mathbf{t}$ as the logits predicted by the pre-trained superior network, and $\mathbf{s}$ as the logits predicted by the inferior network. We use the following loss $\mathcal{L}_K$ to encode the prediction discrepancy between the inferior and superior networks, as well as the difference between the inferior network's predictions and the ground-truth annotations, during the evolution: \begin{equation} \begin{split} \mathcal{L}_K= (1 - \alpha )\mathcal{L}_C(\mathbf{y}, \mathcal{H}(\mathbf{s})) + \alpha T^2\mathcal{L}_C\left (\mathcal{H}\left (\frac{\mathbf{t}}{T} \right ),\mathcal{H} \left (\frac{\mathbf{s}}{T}\right ) \right ) \label{distillation} \end{split} \end{equation} where $\mathcal{H}(\cdot)$ is the \textit{softmax} function and $\mathcal{L}_C$ is the cross-entropy of two input probability vectors such that \begin{equation} \mathcal{L}_C(\mathbf{y}, \mathcal{H}(\mathbf{s})) = -\sum_{k}\mathbf{y}_k \log \mathcal{H}(\mathbf{s}_k) , \end{equation} $\alpha$ is the ratio controlling the two loss terms, and $T$ is a temperature hyper-parameter. We adopt the terminology of knowledge distillation~\cite{hinton2015distilling}, where the student network and the teacher network correspond to the inferior network and the superior network, respectively. We fix $T$ as a constant. While the target of neural network search is to find the optimal architecture, mimicry is particularly useful when we want to find a small network for applications where the inference computation cost is limited. 
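As a concrete illustration, Eq.~\ref{distillation} can be sketched in a few lines of NumPy (a simplified single-example version with the settings $\alpha=0.9$ and $T=5$ used in our experiments as defaults; the actual training uses TensorFlow on mini-batches):
\begin{verbatim}
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())  # subtract max for numerical stability
    return e / e.sum()

def mimicry_loss(y, s, t, alpha=0.9, T=5.0):
    # Hard term: cross-entropy between the ground truth y and H(s).
    hard = -np.sum(y * np.log(softmax(s)))
    # Soft term: cross-entropy between the temperature-softened
    # teacher distribution H(t/T) and the student distribution H(s/T),
    # scaled by T^2 as in the loss above.
    soft = -np.sum(softmax(t / T) * np.log(softmax(s / T)))
    return (1 - alpha) * hard + alpha * T**2 * soft
\end{verbatim}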
\subsection{Gene Duplication} During the primary succession, the rapid change of network architectures leads to novel beneficial structures encoded in the DNA~\cite{real2017large} that do not appear in previous hand-designed networks. To further leverage these automatically discovered structures, we propose an additional mutation operation named \textit{duplication} to simulate the process of gene duplication, since gene duplication has been shown to be an important mechanism for obtaining new genes and can lead to evolutionary innovation~\cite{zhang2003evolution}. In our implementation, we treat the encoded DNA as a combination of blocks. For each layer with an activation map of size $N \times D \times W \times H$, where $N, D, W, H$ denote the batch size, depth, width and height, respectively, a block includes the layers whose activation maps have the same $W$ and $H$. As shown in Figure~\ref{duplication_exp}, the optimal structure discovered by the rapid succession can mutate into different networks by combining the blocks in several ways through duplication. We duplicate an entire block instead of a single layer because the block contains the beneficial structures discovered automatically, while simple layer copying is already an operation in the succession. 
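A minimal sketch of this operation, under the assumption that the DNA is encoded as a Python list of blocks (each block itself a list of layer descriptions at the same feature-map resolution), is:
\begin{verbatim}
import copy
import random

def duplicate_block(dna):
    # dna: list of blocks, where each block is a list of layers that
    # share the same feature-map width W and height H.
    child = copy.deepcopy(dna)
    i = random.randrange(len(child))
    # Re-insert a copy of the chosen block next to the original, so
    # the discovered structure is reused as a whole.
    child.insert(i + 1, copy.deepcopy(child[i]))
    return child
\end{verbatim}
Different placements and combinations of the duplicated block yield variants such as those illustrated in Figure~\ref{duplication_exp}.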
\begin{figure*} \centering \begin{subfigure}{1\textwidth} \centering \includegraphics[width=0.95\linewidth]{cifar_10_network} \caption{The network architecture discovered by the proposed method on the CIFAR-10 dataset, which includes convolutional layers, concatenation layers and a global pooling layer.} \label{cifar_10_network_s} \end{subfigure} \\ \begin{subfigure}{1\textwidth} \centering \includegraphics[width=0.95\linewidth]{cifar_10_network_block} \caption{The detailed architecture of the BLOCK shown in (a).} \label{cifar_10_network_b} \end{subfigure} \caption{Discovered neural network structure for the CIFAR-10 dataset.} \label{cifar_10_network} \end{figure*} \begin{figure} \centering \begin{subfigure}{0.5\textwidth} \centering \includegraphics[width=1\linewidth]{step_size} \caption{Primary succession using mutation step-sizes of 1, 2, 10 and 100.} \label{step_size_s} \end{subfigure}\\ \begin{subfigure}{0.5\textwidth} \centering \includegraphics[width=1\linewidth]{step_size_L} \caption{Primary succession using mutation step-sizes of 50, 100 and 200.} \label{step_size_l} \end{subfigure} \caption{The effect of different mutation step-sizes for the primary succession on CIFAR-10. The solid lines show the average test accuracy, over five experiments, of the individuals with the highest accuracy in each generation. The shaded area around each line has a width of one standard deviation $\pm \sigma$. In general, the larger the mutation step-size, the faster the convergence of the fitness score.} \label{mutation_exp} \end{figure} \section{Experimental Results and Analysis} In this section, we report the experimental results of using EIGEN for the structure search of neural networks. We first describe the experiment setup, including dataset preprocessing and training strategy, in Subsection~\ref{ss:experiment-setup} and show the comparison results in Subsection~\ref{ss:experiment-results}. Following that, we analyze the experimental results in Subsection~\ref{ss:rapid-succession} with regard to each component of our approach. \subsection{Experiment Setup}\label{ss:experiment-setup} \paragraph{Datasets.} The experiments are conducted on two benchmark datasets: CIFAR-10~\cite{krizhevsky2009learning} and CIFAR-100~\cite{krizhevsky2009learning}. The CIFAR-10 dataset contains 10 classes with 50,000 training images and 10,000 test images. The images have a size of 32$\times$32. Data augmentation is applied via Global Contrast Normalization (GCN) and ZCA whitening~\cite{goodfellow2013maxout}. The CIFAR-100 dataset is similar to CIFAR-10 except that it includes 100 classes. \paragraph{Training Strategy and Details.} During the training process, we use mini-batch Stochastic Gradient Descent (SGD) to train each individual network with a batch size of 128, momentum of 0.9, and weight decay of 0.0005. Each network is trained for a maximum of 25,000 iterations. The initial learning rate is 0.1 and is reduced to 0.01 and 0.001 at 15,000 and 20,000 iterations, respectively. The parameters in Algorithm~\ref{alg:aae} are set to $T_1 = 5,000$, $T_2 = 15,000$, $T_3 = 5,000$, $p = 5$, and $q = 2$. For the mimicry, we set $T$ to 5 and $\alpha$ to 0.9 in Eq.~\ref{distillation}. The teacher network is an ensemble of four Wide-DenseNets ($k=60, l=40$)~\cite{huang2017densely}. The fitness score is the accuracy on a validation set. The primary succession ends when the fitness score saturates, and then the secondary succession starts. The entire evolution procedure is terminated when the fitness score converges. Training is conducted with \textit{TensorFlow}~\cite{abadi2016tensorflow}. We directly adopt the hyper-parameters developed on the CIFAR-10 dataset for the CIFAR-100 dataset. The experiments are run on a machine with one Intel Xeon E5-2680 v4 2.40GHz CPU and one Nvidia Tesla P100 GPU. \subsection{Comparison Results}\label{ss:experiment-results} The experimental results shown in Table~\ref{compare_resutls} show that the networks found by the proposed approach are competitive with hand-designed networks. Compared with other evolution-based algorithms, we achieve the best results at the minimum computational cost. For example, we obtain results similar to~\cite{real2017large} on the two benchmark datasets, but our approach is about 1,000 times faster. Also, the networks found by our approach on the two datasets have less than half the parameters of those found by LS-Evolution~\cite{real2017large}. We show the network architecture discovered by our proposed method on the CIFAR-10 dataset in Figure~\ref{cifar_10_network}, where Figure~\ref{cifar_10_network_s} shows the entire network and Figure~\ref{cifar_10_network_b} shows the detailed architecture of the BLOCK in Figure~\ref{cifar_10_network_s}. \subsection{Analysis}\label{ss:rapid-succession} \paragraph{Effect of Primary Succession.} We show the results of different mutation step-sizes for the primary succession in Figure~\ref{mutation_exp}. The solid lines show the average test accuracy of the best networks over five experiments, and the shaded area represents the standard deviation $\sigma$ in each generation over the five experiments. A larger mutation step-size, such as 100, leads to faster convergence of the fitness score compared with smaller mutation step-sizes, as shown in Figure~\ref{step_size_s}. However, no further improvement is observed with an overly large mutation step-size, such as 200, as shown in Figure~\ref{step_size_l}. \paragraph{Effect of Secondary Succession.} We further analyze the effect of the secondary succession during the evolution process. After the primary succession, we utilize the secondary succession to search for networks in a smaller search space. We adopt a small mutation step-size for fine-grained search based on the surviving network from the previous generation. Figure~\ref{two_datasets} shows example evolutions on CIFAR-10 and CIFAR-100 during the rapid succession. We use mutation step-sizes of 100 and 10 for the primary succession and the secondary succession, respectively. The blue line in the plots shows the performance of the best individual in each generation. The gray dots show the number of parameters of the population in each generation, and the red line indicates where the primary succession ends. 
The accuracies for the secondary succession on the two datasets, shown in Table~\ref{secondary_resutls}, demonstrate that a small mutation step-size is helpful for finding better architectures in the rapid succession. \begin{figure} \centering \begin{subfigure}{0.5\textwidth} \centering \includegraphics[width=1\linewidth]{cifar10_size_acc} \caption{Experiment on the CIFAR-10 dataset.} \label{cifar_10} \end{subfigure}\\ \begin{subfigure}{0.5\textwidth} \centering \includegraphics[width=1\linewidth]{cifar100_size_acc} \caption{Experiment on the CIFAR-100 dataset.} \label{cifar_100} \end{subfigure} \caption{The progress of the rapid succession on CIFAR-10 (a) and CIFAR-100 (b). The blue line is the test performance of the best individual in each generation. The gray dots show the number of parameters of the individuals in each generation. The red line denotes the generation where the primary succession ends.} \label{two_datasets} \end{figure} \paragraph{Analysis of Mimicry.}\label{ss:kd} In order to analyze the effect of mimicry, we consider the situation where only the primary and secondary successions are applied during the evolution, with both duplication and mimicry disabled; we denote this method as \textbf{EIGEN w/o mimicry and duplication}. We compare it with the approach where mimicry is enabled, denoted as \textbf{EIGEN w/o duplication}. The comparison between \textbf{EIGEN w/o mimicry and duplication} and \textbf{EIGEN w/o duplication} in Table~\ref{distill_resutls} demonstrates the effectiveness of mimicry during the rapid succession. \begin{table} \centering \begin{tabular}{C{3.2cm} | C{0.96cm} C{0.96cm}} \hline Succession&C10+&C100+ \\ \hline\hline Primary Succession&93.3\%&74.7\% \\ \hline Secondary Succession&93.7\%&76.9\% \\ \hline \end{tabular} \caption{The results of the secondary succession during the evolution. After the primary succession, a smaller mutation step-size is adopted to search for better network architectures. The accuracy on both CIFAR-10 and CIFAR-100 is improved.} \label{secondary_resutls} \end{table} \begin{table} \centering \begin{tabular}{C{3cm} | C{1cm} C{1cm}} \hline Method & C10+ & C100+ \\ \hline\hline EIGEN w/o mimicry and duplication& 92.4\% & 74.8\%\\ \hline EIGEN w/o duplication& 93.7\% & 76.9\% \\ \hline \end{tabular} \caption{Analysis of mimicry during the rapid succession.} \label{distill_resutls} \end{table} \paragraph{Effect of Gene Duplication.} After the rapid succession, the duplication operation is applied to leverage the automatically discovered structures. To analyze the effect of gene duplication, we denote the approach without duplication as \textbf{EIGEN w/o duplication} and show the results on CIFAR-10 and CIFAR-100 in Table~\ref{duplication_resutls}. Although duplication introduces more parameters into the networks, the beneficial structures contained in the blocks contribute to the network performance through duplication. \begin{table}[h] \begin{center} \centerline{ \begin{tabular}{C{3.33cm} | C{2.1cm} C{2.3cm} } \hline Method & C10+ (PARAMS.) & C100+ (PARAMS.) \\ \hline\hline EIGEN w/o duplication & 93.7\% (1.2 M) & 76.9\% (6.1 M) \\ EIGEN & 94.6\% (2.6 M) & 78.1\% (11.8 M) \\ \hline \end{tabular} } \end{center} \caption{Analysis of the gene duplication operation on CIFAR-10 and CIFAR-100. 
The performance on both datasets is improved, with more parameters in the networks discovered through gene duplication.} \label{duplication_resutls} \end{table} Furthermore, we analyze the effect of mimicry on the network after gene duplication. We denote the best network found by our approach as the \textbf{EIGEN network}. Using mimicry to train the network from scratch (\textbf{EIGEN network w mimicry}) yields improvements of 1.3\% and 4.2\% on CIFAR-10 and CIFAR-100, respectively, compared with training from scratch without mimicry (\textbf{EIGEN network w/o mimicry}). \begin{table}[h] \begin{center} \centerline{ \begin{tabular}{C{4.4cm} | C{1.5cm} C{1.5cm}} \hline Method & C10+ & C100+ \\ \hline\hline EIGEN network w/o mimicry & 93.3\% &73.9\% \\ EIGEN network w mimicry & 94.6\% & 78.1\%\\ \hline \end{tabular} } \end{center} \caption{Analysis of mimicry after the gene duplication. } \label{distill_resutls_2} \end{table} \section{Discussion and Conclusions} In this paper, we propose an Ecologically-Inspired GENetic Approach (EIGEN) that searches for neural network architectures automatically from scratch, starting with poorly initialized networks, such as a network with one global pooling layer, and imposing few constraints during the search process. Our search space follows the work in~\cite{li2018evoltuion,real2017large}, and we introduce rapid succession, mimicry and gene duplication in our approach to make the search more efficient and effective. The rapid succession and mimicry can evolve a population of networks into an optimal status under limited computational resources. With the help of gene duplication, the performance of the found network can be boosted without sacrificing any computational cost. The experimental results show that the proposed approach achieves competitive results on CIFAR-10 and CIFAR-100 at a dramatically reduced computational cost compared with other genetic-based algorithms. Admittedly, compared with other neural network search algorithms~\cite{liu2018darts,pham2018efficient} that aim to search for networks under limited computational resources, our work has a slightly higher error rate. However, our genetic algorithm requires little prior domain knowledge from human experts and is more ``completely automatic'' than other semi-automatic neural network search approaches~\cite{liu2018darts,pham2018efficient}, which require more advanced initialization, carefully designed cell-based structures and many more training iterations after the search process. Such a comparison, although unfair, still indicates that more exploration is needed in future studies to improve the efficiency of genetic-based approaches for searching neural networks from scratch. {\small \bibliographystyle{ieee_full}}
In this paper, we focus on optimizing the evolution-based algorithms~\cite{liu2017hierarchical,miikkulainen2017evolving,stanley2002evolving,xie2017genetic} for searching networks from scratch with poorly-initialized networks, such as a network with one global pooling layer, and with few constraints forced during the evolution~\cite{real2017large} as we assume no prior knowledge about the task domain. Existing work along this line of research suffers from either prohibitive computational cost or unsatisfied performance compared with hand-crafted network structures. In~\cite{real2017large}, it costs more than $256$ hours on $250$ GPU for searching neural network structures, which is not affordable for general users. In~\cite{xie2017genetic}, the final learned network structure by their genetic approach achieves about $77\%$ test accuracy on CIFAR-10, even though better performance as 92.9\% could be obtained after fine-tuning certain parameters and modifying some structures on the discovered network. In~\cite{li2018evoltuion}, they firstly aim to achieve better performance with the reduced computational cost by the proposed aggressive selection strategy in genetic approach and more mutations operation to increase diversity which is decreased by the proposed selection strategy. In their work, they reduce computational cost dramatically from more than $64,000$ GPU hours (GPUH) to few hundreds GPUH. However, their approach still suffers performance sacrifice, for example, $90.5\%$ test accuracy compared to $94.6\%$ test accuracy from~\cite{real2017large} on CIFAR-10 dataset. Inspired by a few key concepts in ecological system, in this paper, we try to improve the genetic approach to achieve better test performance compared to~\cite{real2017large} or competitive performance to hand-crafted network structures~\cite{he2016deep} under limited computation cost~\cite{li2018evoltuion}, but without utilizing pre-designed architectures~\cite{liu2017progressive,liu2017hierarchical,liu2018darts,zoph2017learning}. Inspired by primary, secondary succession from ecological system~\cite{sahney2008recovery}, we enforce a poorly initialized population of neural network structures to rapidly evolve to a population containing network structures with dramatically improved performance. After the first stage of primary succession, we perform fine-grained search for better networks in a population during the secondary succession stage. During the succession stages, we also introduce an accelerated extinction algorithm to improve the search efficiency. In our approach, we apply the mimicry~\cite{greeney2012feeding} concept to help inferior networks learn the behavior from superior networks to obtain the better performance. In addition, we also introduce the gene duplication to further utilize the novel block of layers that appear in the discovered network structure. The contribution of this paper is four-fold and can be summarized as follows: \begin{itemize} \item We proposed an efficient genetic approach to search neural network structure from scratch with poorly initialized network and without limiting the searching space. Our approach can greatly reduce the computation cost compared to other genetic approaches, where neural network structures are searched from scratch. This is different from some recent works~\cite{elsken2017simple,liu2018darts,pham2018efficient} that significantly restricts the search space. 
\item We incorporate primary and secondary succession concepts from ecological system into our genetic framework to search for optimal network structures under limited computation cost. \item We explore the mimicry concept from ecological system to help search better networks during the evolution and use the gene duplication concept to utilize the discovered beneficial structures. \item Experimental results show that the obtained neural network structures achieves better performance compared with existing genetic-based approaches and competitive performance with the hand-crafted network structures. \end{itemize} \section{Related Work} There is growing interest on automatic searching of neural network architectures from scratch. Methods based on reinforcement learning (RL) show promising results on obtaining the networks with the performance similar or better than human designed architectures~\cite{baker2016designing,zhong2018practical,zoph2016neural}. Zoph \textit{et al}. propose to searching in cells, including a normal cell and a reduction cell, where the final architecture is based on stacking the cells~\cite{zoph2017learning}. The idea of cell based searching is widely adopted in many studies~\cite{cai2018efficient,cai2018path,liu2017progressive,liu2018darts,pham2018efficient,zhong2018blockqnn}. In order to reduce high computational cost, efforts have been done to avoid training all networks during the searching process from scratch~\cite{baker2017accelerating,bender2018understanding,brock2017smash,cai2018efficient,domhan2015speeding,elsken2017simple,klein2016learning,zhong2017practical}. However, these works require strict hand-designed constraints to reduce computation cost, and comparison with them are not the focus of this paper. On the other hand, there emerges a few studies~\cite{real2017large,suganuma2017genetic,xie2017genetic} targeting on network searching using evolutionary approaches. In order to have a fair comparison with the RL and evolutionary based approaches, Real~\textit{et al.}~\cite{real2018regularized} conduct the study where the RL and evolutionary approaches are performed under the same searching space. Experiments show the evolutionary approach converges faster than RL. Therefore, in this paper we focus on the \textit{genetic-based approaches} for searching optimal neural network structures. Suganuma \textit{et al.} propose the network searching based on Cartesian genetic programming~\cite{harding2008evolution}. However, a pre-defined grid with the fixed row and column is used as the network has to fit in the grid~\cite{suganuma2017genetic} . The studies that have the searching space similar to us are introduced in ~\cite{li2018evoltuion,real2017large,xie2017genetic}, where the network searching starts from poorly-initialized networks and uses few constraints during the evolution. Since in this paper we focus on achieving better performance with limited computational cost through a genetic approach, we will highlight the differences between our work with the similar studies~\cite{li2018evoltuion,real2017large,xie2017genetic} in the following from two aspects: reducing computation cost and improving performance. In~\cite{real2017large}, the authors encode each individual network structure as a graph into DNA and define several different mutation operations such as IDENTITY and RESET-WEIGHTS to apply to each \textit{parent} network to generate \textit{children} networks. 
The essential part of this genetic approach is that they utilize a large amount of computation to search the optimal neural network structures in a huge searching space. Specifically, the entire searching procedure costs more than $256$ hours with $250$ GPUs to achieve $94.6\%$ test accuracy from the learned network structure on CIFAR-10 dataset, which is not affordable for general users. Due to prohibitive computation cost, in~\cite{xie2017genetic} the authors impose restriction on the neural network searching space. In their work, they only learn one block of network structure and stack the learned block by certain times in a designed routine to obtain the best network structure. Through this mechanism, the computation cost is reduced to several hundreds GPU hours, however, the test performance of the obtained network structure is not satisfactory, for example, the found network achieves $77\%$ test accuracy on CIFAR-10, even though fine-tuning parameters and modifying certain structures on the learned network structure could lead to the test accuracy as 92.9\%. In~\cite{li2018evoltuion}, they aim to achieve better performance from automatically learned network structure with limited computation cost in the course of evolution, which is not brought up previously. Different from restricting the search space to reduce computational cost~\cite{dufourq2017eden,xie2017genetic}, they propose the aggressive selection strategy to eliminate the weak neural network structures in the early stage. However, this aggressive selection strategy may decrease the diversity which is the nature of genetic approach to improve performance. In order to remedy this issue, they define more mutation operations such as add\_fully\_connected or add\_pooling. Finally, they reduce computation cost dramatically to 72 GPUH on CIFAR-10. However, there is still performance loss in their approach. For example, on CIFAR-10 dataset, the test accuracy of the found network is about $4\%$ lower than~\cite{real2017large}. At the end of this section, we highlight that our work is in the line of ~\cite{li2018evoltuion}. Inspired from ecological concepts, we propose the Ecologically-Inspired GENetic approach (EIGEN) for neural network structure search by evolving the networks through rapid succession, and explore the mimicry and gene duplication along the evolution. \section{Approach} Our genetic approach for searching the optimal neural network structures follows the standard procedures: i) initialize population in the first generation with simple network structures; ii) evaluate the \textit{fitness} score of each neural network structure (fitness score is the measurement defined by users for their purpose such as validation accuracy, number of parameters in network structure, number of FLOP in inference stage, and so on); iii) apply a selection strategy to decide the surviving network structures based on the fitness scores; iv) apply mutation operations on the survived \textit{parent} network structures to create the \textit{children} networks for next generation. The last three steps are repeated until the convergence of the fitness scores. Note that in our genetic approach, the \textit{individual} is denoted by an acyclic graph with each node representing a certain layer such as \textit{convolution}, \textit{pooling} and \textit{concatenation} layer. A \textit{children} network can be generated from a \textit{parent} network through a mutation procedure. 
A \textit{population} includes a fixed number of networks in each generation, which is set as 10 in our experiments. For details of using genetic approach to search neural network structures, we refer the readers to~\cite{li2018evoltuion}. In the following, we apply the ecological concepts of succession, extinction, mimicry and gene duplication to the genetic approach for an accelerated search of neural network structures. \subsection{Evolution under Rapid Succession} Our inspiration comes from the fact that in an ecological system, the population is dominated by diversified fast-growing individuals during the primary succession, while in the secondary succession, the population is dominated by more competitive individuals~\cite{sahney2008recovery}. Therefore, we treat all the networks during each generation of the evolution process as a \textit{population} and focus on evolving the population instead of on a single network~\cite{real2017large}. With this treatment, we propose a two-stage \textit{rapid succession} for accelerated evolution, analogous to the ecological succession. The proposed rapid succession includes a primary succession, where it starts with a community consisting of a group of poorly initialized individuals which only contains one global pooling layer, and a secondary succession which starts after the primary succession. In the primary succession, a large search space is explored to allow the community grow at a fast speed, and a relatively small search space is used in the secondary succession for fine-grained search. In order to depict how the search space is explored, we define \textit{mutation step-size} $m$ as the maximum mutation iterations between the parent and children. The actual mutation step for each child is uniformly chosen from $[1,m]$. In the primary succession, in order to have diversified fast-growing individuals, a large mutation step-size is used in each generation so the mutated children could be significantly different from each other and from their parent. Since we only go through the training procedure after finishing the entire mutation steps, the computation cost for each generation will not increase with the larger step-size. In the secondary succession, we adopt a relative small mutation step-size to perform a fine-grained search for network structures. Each mutation step is randomly selected from the nine following operations including: \begin{itemize} \item INSERT-CONVOLUTION: A convolutional layer is randomly inserted into the network. The inserted convolutional layer has a default setting with kernel size as 3$\times$3, number of channels as 32, and stride as 1. The convolutional layer is followed by batch normalization~\cite{ioffe2015batch} and Rectified Linear Units~\cite{krizhevsky2012imagenet}. \item INSERT-CONCATENATION: A concatenation layer is randomly inserted into the network where two bottom layers share the same size of feature maps. \item INSERT-POOLING: A pooling layer is randomly inserted into the network with kernel size as 2$\times$2 and stride as 2. \item REMOVE-CONVOLUTION: The operation randomly remove a convolutional layer. \item REMOVE-CONCATENATION: The operation randomly remove a concatenation layer. \item REMOVE-POOLING: The operation randomly remove a pooling layer. \item ALTER-NUMBER-OF-CHANNELS, ALTER-STRIDE, ALTER-FILTER-SIZE: The three operations modify the hyper-parameters in the convolutional layer. 
The number of channels is randomly selected from a list of $\left \{ 16, 32, 48, 64, 96 \right \}$; the stride is randomly selected from a list of $\left \{ 1, 2 \right \}$; and the filter size is randomly selected from $\left \{ 1\times1, 3\times3\right \}$. \end{itemize} During the succession, we employ the idea from previous work~\cite{li2018evoltuion} that only the best individual in the previous generation will survive. However, instead of evaluating the population in each generation after all the training iterations, it is more efficient to extinguish the individuals that may possibly fail at early iterations, especially during the primary succession where the diversity in the population leads to erratic performances. Based on the assumption that a better network should have better fitness score at earlier training stages, we design our extinction algorithm as follows. To facilitate the presentation, we denote $n$ as the population size in each generation, $T_1$ and $T_2$ as the landmark iterations, $f_{g,i,T_1}$ and $f_{g,i,T_2}$ as fitness scores (validation accuracy used in our work) of the $i^{th}$ network in the $g^{th}$ generation after training $T_1$ and $T_2$ iterations, $v_{g, T_1}$ and $v_{g, T_2}$ as threshold to eliminate weaker networks at $T_1$ and $T_2$ iterations in the $g^{th}$ generation. In the $g^{th}$ generation, we have fitness scores for all networks $\mathcal{F}_{g,T_1} = \{f_{g,i,T_1}, i = 1, \cdots, n\}$ and $\mathcal{F}_{g,T_2} =\{ f_{g,i,T_2}, i = 1, \cdots, \hat{n}\}$ after training $T_1$ and $T_2$ iterations, respectively. Note that $\hat{n}$ can be less than $n$ since weaker networks are eliminated after $T_1$ iterations. The thresholds $v_{g, T_1}$ and $v_{g, T_2}$ are updated at $g^{th}$ iteration as \begin{equation} v_{g, T_1} = \text{max} \Big(S(\mathcal{F}_{g, T_1})_p, \hspace{0.1cm} v_{g-1, T_1}\Big) \label{eq:T1} \end{equation} and \begin{equation} v_{g, T_2} = \text{max} \Big( S(\mathcal{F}_{g, T_2})_q, \hspace{0.1cm} v_{g-1, T_2}\Big) \label{eq:T2} \end{equation} where $S(.)$ is a sorting operator in decreasing order on a list of values and the subscripts $p$ and $q$ represents $p^{th}$ and $q^{th}$ value after the sorting operation, $p$ and $q$ are the hyper-parameters. For each generation, we perform the following steps until the convergence of the fitness scores: (i) train the population for $T_1$ iterations, extinguish the individuals with fitness scores less than $ v_{g, T_1}$; (ii) train the remaining population for $T_2$ iterations, and distinguish the population with fitness scores less than $v_{g, T_2}$; (iii) the survived individuals are further trained till convergence and the best one is chosen as the parent for next generation. The details for the extinction algorithm are described in Algorithm~\ref{alg:aae}. 
\begin{algorithm}[t] \caption{Algorithm for Extinction} \label{alg:aae} \begin{algorithmic}[1] \STATE \textbf{Input:} $T_1$, $T_2$, $v_{0,T_1}$, $v_{0,T_2}$, $p$, $q$ \FOR{$g = 1 \cdots, G$} \STATE Obtain $\mathcal{F}_{g, T_1} = \{ f_{g,i,T_1},i=1, ...,n\}, n=10$ by training all individuals for $T_1$ iterations\\ \STATE Update $v_{g, T_1}$ based on Eq.~\ref{eq:T1} \\ \STATE Extinguish the individuals with fitness value less than $v_{g, T_1}$\\ \STATE Obtain $\mathcal{F}_{g, T_2} = \{ f_{g,i,T_1},i=1, ...,\hat{n}\}$ by training the remain individuals for $T_2$ iterations\\ \STATE Update $v_{g, T_2}$ based on Eq.~\ref{eq:T2} \\ \STATE Extinguish the individuals with fitness value less than $v_{g, T_2}$\\ \STATE Train the remain individuals for $T_3$ iterations and select the best one as parent \ENDFOR \end{algorithmic} \end{algorithm} \begin{figure}[t] \begin{center} \includegraphics[width=1\columnwidth]{duplication.png} \caption{Example of duplication. The image on the left shows the structure discovered after the rapid succession, where each block includes a number of layers with the same size of feature maps. The image in the middle and right are two examples of the duplication that the Block 2 undergoes different combination to create new architectures.} \label{duplication_exp} \end{center} \end{figure} \begin{table*} \begin{center} \begin{tabular}{C{5cm} | C{2cm} C{2cm} C{2cm} C{3cm} } \hline Model & PARAMS. & C10+ & C100+ & Comp Cost \\ \hline\hline MAXOUT~\cite{goodfellow2013maxout} & - &90.7\% & 61.4\% & - \\ \hline Network In Network~\cite{lin2013network} & - & 91.2\% & 64.3\%& - \\ \hline ALL-CNN~\cite{springenberg2014striving} & 1.3 M & 92.8\% & 66.3\% & -\\ \hline DEEPLY SUPERVISED~\cite{lee2015deeply} & - & 92.0\% & 65.4\% & -\\ \hline HIGHWAY~\cite{srivastava2015highway} & 2.3 M & 92.3\% & 67.6\% & - \\ \hline RESNET~\cite{he2016deep}& 1.7 M & 93.4\% & 72.8\%& - \\ \hline DENSENET ($k = 40, l = 100$)~\cite{huang2017densely} & 25.6 M & 96.5\% & 82.8\%& - \\ \hline Teacher Network & 17.2 M & 96.0\% & 82.0\% & - \\\hline\hline EDEN~\cite{dufourq2017eden}&0.2 M&74.5\% &- &- \\ \hline Genetic CNN~\cite{xie2017genetic} & - & 92.9\% & 71.0\% & 408 GPUH \\ \hline LS-Evolution~\cite{real2017large}& 5.4 M& 94.6\% &- & 64,000 GPUH \\ LS-Evolution~\cite{real2017large}& 40.4 M& -& 77.0\% & $>$ 65,536 GPUH \\ \hline AG-Evolution~\cite{li2018evoltuion} & - & 90.5\%&- & 72 GPUH\\ AG-Evolution~\cite{li2018evoltuion} & - & -& 66.9\%& 136 GPUH\\ \hline\hline EIGEN & 2.6 M & \textbf{94.6}\% & - & 48 GPUH\\ \hline EIGEN & 11.8 M & - & \textbf{78.1}\% & 120 GPUH\\ \hline \end{tabular} \end{center} \caption{Comparison with hand-designed architectures and automatically discovered architectures using genetic algorithms. The C10+ and C100+ columns indicate the test accuracy achieved on data-augmented CIFAR-10 and CIFAR-100 datasets, respectively. The PARAMS. column indicates the number of parameters in the discovered network.} \label{compare_resutls} \end{table*} \subsection{Mimicry} In biology evolution, mimicry is a phenomenon that one species learn behaviours from another species. For example, moth caterpillars learn to imitate body movements of a snake so that they could scare off predators that are usually prey items for snakes~\cite{greeney2012feeding}. 
The analogy with mimicry signifies that we could force inferior networks to adopt (learn) the behaviors, such as statistics of feature maps~\cite{romero2014fitnets,yim2017gift} or logits~\cite{bucilu2006model,hinton2015distilling}, from superior networks in designing neural network structure during the evolution. In our approach, we force the inferior networks to learn the behavior of a superior network by generating similar distribution of logits in the evolution procedure. Since learning the distribution of logits from the superior network gives more freedom for inferior network structure, compared to learning statistics of feature maps. This is in fact the knowledge distillation proposed in~\cite{hinton2015distilling}. More specifically, for the given training image $\mathbf{x}$ with one-hot class label $\mathbf{y}$, we define $\mathbf{t}$ as the logits predicted from the pre-trained superior network, and $\mathbf{s}$ as the logits predicted by the inferior network. We use the following defined $\mathcal{L}_K$ as the loss function to encode the prediction discrepancy between inferior and superior networks as well as the difference between inferior networks prediction and ground truth annotations during the evolution: \begin{equation} \begin{split} \mathcal{L}_K= (1 - \alpha )\mathcal{L}_C(\mathbf{y}, \mathcal{H}(\mathbf{s})) + \alpha T^2\mathcal{L}_C\left (\mathcal{H}\left (\frac{\mathbf{s}}{T} \right ),\mathcal{H} \left (\frac{\mathbf{t}}{T}\right ) \right ) \label{distillation} \end{split} \end{equation} where $\mathcal{H}(.)$ is the \textit{softmax} function, $\mathcal{L}_C$ is the cross-entropy of two input probability vectors such that \begin{equation} \mathcal{L}_C(\mathbf{y}, \mathcal{H}(\mathbf{s})) = -\sum_{k}\mathbf{y}_k \mathrm{log} \mathcal{H}(\mathbf{s}_k) , \end{equation} $\alpha$ is the ratio controlling two loss terms and $T$ is a hyper-parameter. We adopt the terms from knowledge distillation~\cite{hinton2015distilling} where student network and teacher network represent the inferior network and superior network, respectively. We fix $T$ as a constant. While the target of neural network search is to find the optimal architecture, mimicry is particularly useful when we want to find a small network for applications where inference computation cost is limited. \subsection{Gene Duplication} During the primary succession, the rapid changing of network architectures leads to the novel beneficial structures decoded in DNA~\cite{real2017large} that are not shown in the previous hand-designed networks. To further leverage the automatically discovered structures, we propose an additional mutation operation named \textit{duplication} to simulate the process of gene duplication since it has been proved as an important mechanism for obtaining new genes and could lead to evolutionary innovation~\cite{zhang2003evolution}. In our implementation, we treat the encoded DNA as a combination of blocks. For each layer with the activation map defined as $N \times D \times W \times H$, where $N, D, W, H$ denote the batch size, depth, width and height, respectively, the block includes the layers with activation map that have the same $W$ and $H$. As shown in Figure~\ref{duplication_exp}, the optimal structure discovered from the rapid succession could mutate into different networks by combining the blocks in several ways through the duplication. 
\subsection{Gene Duplication} During the primary succession, the rapid change of network architectures leads to novel beneficial structures encoded in the DNA~\cite{real2017large} that do not appear in previous hand-designed networks. To further leverage the automatically discovered structures, we propose an additional mutation operation named \textit{duplication} to simulate the process of gene duplication, since the latter has been shown to be an important mechanism for obtaining new genes and can lead to evolutionary innovation~\cite{zhang2003evolution}. In our implementation, we treat the encoded DNA as a combination of blocks. For each layer with the activation map defined as $N \times D \times W \times H$, where $N, D, W, H$ denote the batch size, depth, width and height, respectively, a block groups the layers whose activation maps have the same $W$ and $H$. As shown in Figure~\ref{duplication_exp}, the optimal structure discovered by the rapid succession can mutate into different networks by combining the blocks in several ways through duplication. We duplicate an entire block instead of a single layer because the block contains the beneficial structures discovered automatically, whereas simple layer copying is already an operation in the succession. \begin{figure*} \centering \begin{subfigure}{1\textwidth} \centering \includegraphics[width=0.95\linewidth]{cifar_10_network} \caption{The network architecture discovered by the proposed method on the CIFAR-10 dataset, which includes convolutional layers, concatenation layers and a global pooling layer.} \label{cifar_10_network_s} \end{subfigure} \\ \begin{subfigure}{1\textwidth} \centering \includegraphics[width=0.95\linewidth]{cifar_10_network_block} \caption{The detailed architecture of the BLOCK shown in (a).} \label{cifar_10_network_b} \end{subfigure} \caption{Discovered neural network structure for the CIFAR-10 dataset.} \label{cifar_10_network} \end{figure*} \begin{figure} \centering \begin{subfigure}{0.5\textwidth} \centering \includegraphics[width=1\linewidth]{step_size} \caption{Primary succession using mutation step-sizes 1, 2, 10 and 100.} \label{step_size_s} \end{subfigure}\\ \begin{subfigure}{0.5\textwidth} \centering \includegraphics[width=1\linewidth]{step_size_L} \caption{Primary succession using mutation step-sizes 50, 100 and 200.} \label{step_size_l} \end{subfigure} \caption{The effect of different mutation step-sizes for the primary succession on CIFAR-10. The solid lines show the average test accuracy over five experiments of the individuals with the highest accuracy in each generation. The shaded area around each line has a width of one standard deviation $\pm \sigma$. In general, the larger the mutation step-size, the faster the fitness score converges.} \label{mutation_exp} \end{figure} \section{Experimental Results and Analysis} In this section, we report the experimental results of using EIGEN for the structure search of neural networks. We first describe the experiment setup, including dataset preprocessing and training strategy, in Subsection~\ref{ss:experiment-setup}, and show the comparison results in Subsection~\ref{ss:experiment-results}. Following that, we analyze the experimental results in Subsection~\ref{ss:rapid-succession} with regard to each component of our approach. \subsection{Experiment Setup}\label{ss:experiment-setup} \paragraph{Datasets.} The experiments are conducted on two benchmark datasets, CIFAR-10~\cite{krizhevsky2009learning} and CIFAR-100~\cite{krizhevsky2009learning}. The CIFAR-10 dataset contains 10 classes with 50,000 training images and 10,000 test images. The images are of size 32$\times$32. Data augmentation is applied with Global Contrast Normalization (GCN) and ZCA whitening~\cite{goodfellow2013maxout}. The CIFAR-100 dataset is similar to CIFAR-10 except that it includes 100 classes. \paragraph{Training Strategy and Details.} During the training process, we use mini-batch Stochastic Gradient Descent (SGD) to train each individual network with a batch size of 128, momentum of 0.9, and weight decay of 0.0005. Each network is trained for a maximum of 25,000 iterations. The initial learning rate is 0.1 and is set to 0.01 and 0.001 at 15,000 and 20,000 iterations, respectively. The parameters in Algorithm~\ref{alg:aae} are set to $T_1 = 5{,}000$, $T_2 = 15{,}000$, $T_3 = 5{,}000$, $p = 5$, and $q = 2$. For the mimicry, we set $T$ to 5 and $\alpha$ to 0.9 in Eq.~\ref{distillation}. The teacher network is an ensemble of four Wide-DenseNet ($k=60, l=40$)~\cite{huang2017densely}. The fitness score is the validation accuracy on a held-out validation set. The primary succession ends when the fitness score saturates, and then the secondary succession starts. The entire evolution procedure is terminated when the fitness score converges. Training is conducted with \textit{TensorFlow}~\cite{abadi2016tensorflow}. We directly adopt the hyper-parameters developed on the CIFAR-10 dataset for the CIFAR-100 dataset. The experiments are run on a machine with one Intel Xeon E5-2680 v4 2.40GHz CPU and one Nvidia Tesla P100 GPU.
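For reference, the learning-rate schedule described above amounts to the following piecewise-constant function of the iteration counter; this is a sketch of the configuration only, the actual training loop being implemented in TensorFlow.
\begin{verbatim}
def learning_rate(iteration):
    # 0.1 initially, 0.01 after 15,000 iterations, 0.001 after 20,000,
    # out of at most 25,000 iterations per individual network.
    if iteration < 15000:
        return 0.1
    elif iteration < 20000:
        return 0.01
    return 0.001
\end{verbatim}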
\subsection{Comparison Results}\label{ss:experiment-results} The experimental results in Table~\ref{compare_resutls} show that the proposed approach is competitive with hand-designed networks. Compared with the evolution-based algorithms, we achieve the best results with the minimum computational cost. For example, we obtain results similar to those of~\cite{real2017large} on the two benchmark datasets, but our approach is 1,000 times faster. Also, the number of parameters of the networks found by our approach on the two datasets is more than two times smaller than for LS-Evolution~\cite{real2017large}. We show the network architecture discovered by our proposed method on the CIFAR-10 dataset in Figure~\ref{cifar_10_network}, where Figure~\ref{cifar_10_network_s} shows the entire network and Figure~\ref{cifar_10_network_b} presents the detailed architecture of the BLOCK in Figure~\ref{cifar_10_network_s}. \subsection{Analysis}\label{ss:rapid-succession} \paragraph{Effect of Primary Succession.} We show the results for different mutation step-sizes for the primary succession in Figure~\ref{mutation_exp}. The solid lines show the average test accuracy of the best networks over five experiments and the shaded area represents the standard deviation $\sigma$ in each generation among the five experiments. A larger mutation step-size, such as 100, leads to faster convergence of the fitness score than a smaller one, as shown in Figure~\ref{step_size_s}. However, no further improvement is observed when the mutation step-size becomes too large, such as 200, as shown in Figure~\ref{step_size_l}. \paragraph{Effect of Secondary Succession.} We further analyze the effect of the secondary succession during the evolution process. After the primary succession, we utilize the secondary succession to search for networks in a smaller search space. We adopt a small mutation step-size for fine-grained search based on the surviving network from the previous generation. Figure~\ref{two_datasets} shows an example of the evolution on CIFAR-10 and CIFAR-100 during the rapid succession, with mutation step-sizes 100 and 10 for the primary succession and the secondary succession, respectively. The blue line in the plots shows the performance of the best individual in each generation. The gray dots show the number of parameters of the population in each generation, and the red line indicates where the primary succession ends. The accuracy on the two datasets for the secondary succession shown in Table~\ref{secondary_resutls} demonstrates that a small mutation step-size is helpful for finding better architectures in the rapid succession.
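The interplay between the two phases can be summarized by the following sketch, in which \texttt{mutate} and \texttt{evaluate} are hypothetical placeholders for the actual mutation operator on the DNA encoding and for the fitness evaluation (training plus validation); the step-sizes 100 and 10 are the ones used above.
\begin{verbatim}
import random

def mutate(dna, step_size):
    # Hypothetical placeholder: apply `step_size` random mutations.
    return dna + [random.random() for _ in range(step_size)]

def evaluate(dna):
    # Hypothetical placeholder for training + validation accuracy.
    return random.random()

def rapid_succession(parent, pop_size=10, patience=3):
    best = evaluate(parent)
    for step_size in (100, 10):   # primary, then secondary succession
        stall = 0
        while stall < patience:   # stop when the fitness score saturates
            children = [mutate(parent, step_size) for _ in range(pop_size)]
            fits = [evaluate(child) for child in children]
            if max(fits) > best:
                best = max(fits)
                parent = children[fits.index(best)]
                stall = 0
            else:
                stall += 1
    return parent, best

parent, best = rapid_succession([0.0])
\end{verbatim}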
\begin{figure} \centering \begin{subfigure}{0.5\textwidth} \centering \includegraphics[width=1\linewidth]{cifar10_size_acc} \caption{Experiment on the CIFAR-10 dataset.} \label{cifar_10} \end{subfigure}\\ \begin{subfigure}{0.5\textwidth} \centering \includegraphics[width=1\linewidth]{cifar100_size_acc} \caption{Experiment on the CIFAR-100 dataset.} \label{cifar_100} \end{subfigure} \caption{The progress of the rapid succession on CIFAR-10 (a) and CIFAR-100 (b). The blue line is the test performance of the best individual in each generation. The gray dots show the number of parameters of the individuals in each generation. The red line denotes the generation where the primary succession ends.} \label{two_datasets} \end{figure} \paragraph{Analysis on Mimicry.}\label{ss:kd} In order to analyze the effect of mimicry, we consider the situation where only the primary and secondary successions are applied during the evolution, i.e., both duplication and mimicry are disabled. We denote this method as \textbf{EIGEN w/o mimicry and duplication}. We compare \textbf{EIGEN w/o mimicry and duplication} with the approach where mimicry is enabled, which we denote as \textbf{EIGEN w/o duplication}. The comparison between \textbf{EIGEN w/o mimicry and duplication} and \textbf{EIGEN w/o duplication} in Table~\ref{distill_resutls} demonstrates the effectiveness of mimicry during the rapid succession. \begin{table} \centering \begin{tabular}{C{3.2cm} | C{0.96cm} C{0.96cm}} \hline Succession&C10+&C100+ \\ \hline\hline Primary Succession&93.3\%&74.7\% \\ \hline Secondary Succession&93.7\%&76.9\% \\ \hline \end{tabular} \caption{The results of the secondary succession during the evolution. After the primary succession, a smaller mutation step-size is adopted to search for better network architectures. The accuracy on both CIFAR-10 and CIFAR-100 is improved.} \label{secondary_resutls} \end{table} \begin{table} \centering \begin{tabular}{C{3cm} | C{1cm} C{1cm}} \hline Method & C10+ & C100+ \\ \hline\hline EIGEN w/o mimicry and duplication& 92.4\% & 74.8\%\\ \hline EIGEN w/o duplication& 93.7\% & 76.9\% \\ \hline \end{tabular} \caption{Analysis of mimicry during the rapid succession.} \label{distill_resutls} \end{table} \paragraph{Effect of Gene Duplication.} After the rapid succession, the duplication operation is applied to leverage the automatically discovered structures. To analyze the effect of gene duplication, we denote the approach without duplication as \textbf{EIGEN w/o duplication} and show the results on CIFAR-10 and CIFAR-100 in Table~\ref{duplication_resutls}. Although duplication introduces more parameters into the networks, the beneficial structures contained in the blocks indeed contribute to the network performance through duplication. \begin{table}[h] \begin{center} \centerline{ \begin{tabular}{C{3.33cm} | C{2.1cm} C{2.3cm} } \hline Method & C10+ (PARAMS.) & C100+ (PARAMS.) \\ \hline\hline EIGEN w/o duplication & 93.7\% (1.2 M) & 76.9\% (6.1 M) \\ EIGEN & 94.6\% (2.6 M) & 78.1\% (11.8 M) \\ \hline \end{tabular} } \end{center} \caption{Analysis of the gene duplication operation on CIFAR-10 and CIFAR-100. The performance on the two datasets is improved by the networks discovered from gene duplication, at the cost of more parameters.} \label{duplication_resutls} \end{table}
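The duplication operation itself is straightforward once the DNA is viewed as a list of blocks. A minimal sketch, with a simplified block encoding and only one of the several combination schemes illustrated in Figure~\ref{duplication_exp}, is:
\begin{verbatim}
def duplication_candidates(blocks):
    # Blocks group consecutive layers whose activation maps share the
    # same spatial size W x H. Each candidate repeats one block; the
    # actual operation also considers other block combinations.
    candidates = []
    for i, block in enumerate(blocks):
        candidates.append(blocks[:i + 1] + [block] + blocks[i + 1:])
    return candidates

print(duplication_candidates(["block1", "block2", "block3"]))
# one candidate is ['block1', 'block2', 'block2', 'block3']
\end{verbatim}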
Furthermore, we analyze the effect of mimicry on the network after the gene duplication. We denote the best network found by our approach as the \textbf{EIGEN network}. By utilizing mimicry to train the network from scratch, which gives \textbf{EIGEN network w/ mimicry}, the network obtains improvements of 1.3\% and 4.2\% on CIFAR-10 and CIFAR-100, respectively, compared with the network trained from scratch without mimicry, which is \textbf{EIGEN network w/o mimicry}. \begin{table}[h] \begin{center} \centerline{ \begin{tabular}{C{4.4cm} | C{1.5cm} C{1.5cm}} \hline Method & C10+ & C100+ \\ \hline\hline EIGEN network w/o mimicry & 93.3\% &73.9\% \\ EIGEN network w/ mimicry & 94.6\% & 78.1\%\\ \hline \end{tabular} } \end{center} \caption{Analysis of mimicry after the gene duplication.} \label{distill_resutls_2} \end{table} \section{Discussion and Conclusions} In this paper, we propose an Ecologically-Inspired GENetic Approach (EIGEN) for searching neural network architectures automatically from scratch, starting from poorly initialized networks, such as a network with only one global pooling layer, and with few constraints imposed during the search. Our search space follows the work in~\cite{li2018evoltuion,real2017large}, and we introduce rapid succession, mimicry and gene duplication in our approach to make the search more efficient and effective. The rapid succession and mimicry evolve a population of networks to an optimal status under limited computational resources. With the help of gene duplication, the performance of the discovered network can be boosted without any additional searching cost. The experimental results show that the proposed approach achieves competitive results on CIFAR-10 and CIFAR-100 at a dramatically reduced computational cost compared with other genetic-based algorithms. Admittedly, compared with other neural network search algorithms~\cite{liu2018darts,pham2018efficient}, which aim to search for networks under limited computational resources, our work has a slightly higher error rate. But our genetic algorithm requires little prior domain knowledge from human experts, and is more ``fully automatic'' compared with other semi-automatic neural network search approaches~\cite{liu2018darts,pham2018efficient}, which require more advanced initialization, carefully designed cell-based structures and many more training iterations after the search process. Such a comparison, although not entirely fair, still indicates that more exploration is needed in future work to improve the efficiency of genetic-based approaches that search neural networks from scratch. {\small \bibliographystyle{ieee_full}}
\section{Introduction} Moderate and large deviations of the sum of independent and identically distributed (i.i.d.) real-valued random variables have been investigated since the beginning of the 20th century. Khinchin \cite{Kinchin29} in 1929 was the first to give a result on large deviations of the sum of i.i.d.\ Bernoulli distributed random variables. In 1933, Smirnov \cite{Smirnov33} improved this result and in 1938 Cramér \cite{Cramer38} gave a generalization to sums of i.i.d.\ random variables satisfying the eponymous Cramér's condition, which requires the Laplace transform of the common distribution of the random variables to be finite in a neighborhood of zero. Cramér's result was extended by Feller \cite{Feller43} to sequences of non identically distributed bounded random variables. A strengthening of Feller's result was given by Petrov in \cite{Petrov54,petrov2008large} for non identically distributed random variables. When Cramér's condition does not hold, an early result is due to Linnik \cite{Linnik61} in 1961 and concerns polynomial-tailed random variables. The case where the tail decreases faster than all power functions (but not fast enough for Cramér's condition to be satisfied) was considered by Petrov \cite{Petrov54} and by S.V.\ Nagaev \cite{Nagaev62}. In \cite{Nagaev69-1,Nagaev69-2}, A.V.\ Nagaev studied the case where the common distribution of the i.i.d.\ random variables is absolutely continuous with respect to the Lebesgue measure, with density $p(t)\sim e^{-\abs{t}^{1-\epsilon}}$ as $\abs{t}$ tends to infinity, with $\epsilon \in (0,1)$. He distinguished five exact-asymptotics results corresponding to five types of deviation speeds. In \cite{borovkov2000large, borovkov2008asymptotic}, Borovkov investigated exact asymptotics of the deviation probabilities for random variables with semiexponential distribution, also called Weibull-like distribution, i.e.\ with a tail of the form $e^{-t^{1-\epsilon}L(t)}$, where $\epsilon \in (0,1)$ and $L$ is a suitably slowly varying function at infinity. In \cite{FATP2020}, the authors consider the following setting. Let $\epsilon \in (0,1)$ and let $X$ be a real-valued random variable with a density $p$ with respect to the Lebesgue measure satisfying \begin{equation} \label{eq:behav_X} p(x)\sim e^{-x^{1-\epsilon}},\quad \mathrm{as} \quad x\to +\infty, \end{equation} and \begin{equation} \label{hyp1} \exists \gamma \in \intervalleof{0}{1} \quad \rho \mathrel{\mathop:}= \mathbb{E}[|X|^{2+\gamma}] < \infty . \end{equation} For all $n \in \ensnombre{N}^*$, let $X_1$, $X_2$, ..., $X_n$ be i.i.d.\ copies of $X$ and set $S_n=X_1+\dots+X_n$ and $P_n(x)=\mathbb{P}(S_n\geqslant x)$. According to the asymptotics of $x_n$, three logarithmic asymptotic ranges then appear. In the sequel, the notation $x_n \gg y_n$ (resp.\ $x_n \ll y_n$, $x_n \preccurlyeq y_n$, and $x_n =\Theta(y_n)$) means that $y_n/x_n\to 0$ (resp.\ $x_n/y_n\to 0$, $\limsup \abs{x_n/y_n}<\infty$, and $y_n \preccurlyeq x_n \preccurlyeq y_n$) as $n\to \infty$. \begin{description} \item[Maximal jump range] \cite[Theorem 1]{FATP2020} When $x_n \gg n^{1/(1+\epsilon)}$, \[ \log P_n(x_n)\sim \log\mathbb{P}(\max(X_1,\ldots,X_n)\geqslant x_n). \] \item[Gaussian range] \cite[Theorem 2]{FATP2020} When $x_n \ll n^{1/(1+\epsilon)}$, \[ \log P_n(x_n)\sim \log(1-\phi(n^{-1/2} x_n)), \] $\phi$ being the cumulative distribution function of the standard Gaussian law.
\item[Transition] \cite[Theorem 3]{FATP2020} The case $x_n = \Theta(n^{1/(1+\epsilon)})$ appears to be an interpolation between the Gaussian range and the maximal jump one. \end{description} In the present paper, we exhibit the same asymptotic behaviour for triangular arrays of random variables $(\rva[i]{Y}{n})_{1 \leqslant i \leqslant N_n}$ satisfying the following weaker assumption: there exists $q>0$ such that, if $y_n \to \infty$, \begin{align}\label{ass:queue} \log\mathbb{P}(\rva{Y}{n} \geqslant y_n) \sim -q y_n^{1-\varepsilon} , \end{align} together with similar assumptions on the moments. \medskip The first main contribution of this paper is the generalization of \cite{FATP2020,Nagaev69-1,Nagaev69-2} to triangular arrays. Such a setting appears naturally in some combinatorial problems, such as those presented in \cite{Janson01a}, including hashing with linear probing. Since the eighties, laws of large numbers have been established for triangular arrays (see, e.g., \cite{gut1992complete,GUT199249,hu1989strong}). Lindeberg's condition is standard for the central limit theorem to hold for triangular arrays (see, e.g., \cite[Theorem 27.2]{billingsley2013convergence}). For triangular arrays of light-tailed random variables, the Gärtner-Ellis theorem provides moderate and large deviation results. Deviations for sums of heavy-tailed i.i.d.\ random variables have been studied by several authors (e.g., \cite{borovkov2000large, borovkov2008asymptotic, FATP2020, Linnik61, Nagaev69-1, Nagaev69-2, Nagaev79,Petrov54}) and a good survey can be found in \cite{Mikosch98}. Here, we focus on the particular case of semiexponential tails (treated in \cite{borovkov2000large, borovkov2008asymptotic,FATP2020, Nagaev69-1,Nagaev69-2} for sums of i.i.d.\ random variables), generalizing the results to triangular arrays. See \cite{ATP2020hashing} for an application to hashing with linear probing. Another contribution is the fact that the random variables are not assumed to be absolutely continuous, unlike in \cite{FATP2020,Nagaev69-1,Nagaev69-2}. Assumption \eqref{ass:queue} is analogous to that of \cite{borovkov2000large, borovkov2008asymptotic}, but there the transition at $x_n = \Theta(n^{1/(1+\epsilon)})$ is not considered. Hence, to the best of our knowledge, Theorem \ref{thm:nagaev_weak_array_mob_eps_intermediate} is the first explicit large deviation result at the transition. The paper is organized as follows. In Section \ref{sec:main}, we state the main results, the proofs of which can be found in Section \ref{sec:proofs}. In Section \ref{sec:assump}, we discuss the assumptions. Section \ref{sec:baby} is devoted to the study of the model of a truncated random variable, which is a natural model of triangular array. This kind of model appears in many proofs of large deviations: when one wants to deal with a random variable whose Laplace transform is not finite, a classical approach consists in truncating the random variable and letting the truncation level go to infinity. In this model, we exhibit various rate functions, in particular nonconvex rate functions. \section{Main results} \label{sec:main} For all $n \geqslant 1$, let $\rva{Y}{n}$ be a centered real-valued random variable, let $N_n$ be a natural number, and let $\left(\rva[i]{Y}{n}\right)_{1 \leqslant i \leqslant N_n}$ be a family of i.i.d.\ random variables distributed as $\rva{Y}{n}$. Define, for all $k \in \intervallentff{1}{N_n}$, \[ \rva[k]{T}{n}\mathrel{\mathop:}=\sum_{i=1}^{k} \rva[i]{Y}{n}.
\] To lighten notation, let $\rva{T}{n} \mathrel{\mathop:}= \rva[N_n]{T}{n}$. \begin{thm}[Maximal jump range] \label{thm:nagaev_weak_array_mob_eps} Let $\varepsilon \in \intervalleoo{0}{1}$, $q > 0$, and $\alpha >(1+\varepsilon)^{-1}$. Assume that: \begin{description} \item[(H1)\label{hyp:tails}] for all $N_n^{\alpha \varepsilon} \preccurlyeq y_n \preccurlyeq N_n^\alpha$, $\log \mathbb{P}(\rva{Y}{n} \geqslant y_n) \sim -q y_n^{1-\varepsilon}$; \item[(H2)\label{hyp:mt2_weak_array_mob_v2}] $\mathbb{E}[\rva{Y}{n}^2] = o(N_n^{\alpha(1+\varepsilon)-1})$. \end{description} Then, for all $y \geqslant 0$, \begin{align*} \lim_{n \to \infty} \frac{1}{N_n^{\alpha(1-\varepsilon)}} \log \mathbb{P}(\rva{T}{n} \geqslant N_n^{\alpha} y) = - q y^{1-\varepsilon} . \end{align*} \end{thm} As in \cite{Nagaev69-1, Nagaev69-2,FATP2020}, the proof of Theorem \ref{thm:nagaev_weak_array_mob_eps} immediately adapts to show that, if $x_n \gg N_n^{(1+\varepsilon)^{-1}}$, if, for all $x_n^{\varepsilon} \leqslant y_n \leqslant x_n(1+\delta)$ for some $\delta>0$, $\log\mathbb{P}(\rva{Y}{n} \geqslant y_n)\sim -q y_n^{1-\varepsilon}$, and if $\Var(\rva{Y}{n}) = o(x_n^{1+\varepsilon}/N_n)$, then \begin{align*} \log \mathbb{P}( \rva{T}{n} \geqslant x_n ) \sim - q x_n^{1-\varepsilon}. \end{align*} In this paper (see also Theorems \ref{thm:nagaev_weak_array_mob_eps2} and \ref{thm:nagaev_weak_array_mob_eps_intermediate}), we have chosen to express the deviations in terms of powers of $N_n$, as is now standard in large deviation theory. \medskip In addition, the proof of Theorem \ref{thm:nagaev_weak_array_mob_eps} immediately adapts to show that, if $L$ is a slowly varying function such that, for all $N_n^{\alpha \varepsilon}/L(N_n^\alpha) \preccurlyeq y_n \preccurlyeq N_n^\alpha$, $\log \mathbb{P}(\rva{Y}{n} \geqslant y_n) \sim -L(y_n) y_n^{1-\varepsilon}$, and if assumption \ref{hyp:mt2_weak_array_mob_v2} holds, then, for all $y \geqslant 0$, \begin{align*} \lim_{n \to \infty} \frac{1}{L(N_n^\alpha) N_n^{\alpha(1-\varepsilon)}} \log \mathbb{P}(\rva{T}{n} \geqslant N_n^{\alpha} y) = - y^{1-\varepsilon}. \end{align*} The same is true for Theorem \ref{thm:nagaev_weak_array_mob_eps2} below, whereas Theorem \ref{thm:nagaev_weak_array_mob_eps_intermediate} below requires additional assumptions on $L$ to take into account the regularly varying tail assumption. \medskip Moreover, if an assumption analogous to \ref{hyp:tails} is also satisfied for the left tail of $\rva{Y}{n}$, then $\rva{T}{n}$ satisfies a large deviation principle at speed $N_n^{\alpha(1-\varepsilon)}$ with rate function $y \mapsto q\abs{y}^{1-\varepsilon}$ (the same remark applies to Theorems \ref{thm:nagaev_weak_array_mob_eps2} and \ref{thm:nagaev_weak_array_mob_eps_intermediate}). \begin{thm}[Gaussian range] \label{thm:nagaev_weak_array_mob_eps2} Let $\varepsilon \in \intervalleoo{0}{1}$, $q > 0$, and $1/2 < \alpha < (1+\varepsilon)^{-1}$. Suppose that \ref{hyp:tails} holds together with: \begin{description} \item[(H2')\label{hyp:var_inf}] $ \mathbb{E}[\rva{Y}{n}^2]\to \sigma^2; $ \item[(H2+)\label{hyp:mt3_inf}] there exists $\gamma \in \intervalleof{0}{1}$ such that $\mathbb{E}[\abs{\rva{Y}{n}}^{2+\gamma}]=o(N_n^{\gamma(1-\alpha)})$. \end{description} Then, for all $y \geqslant 0$, \[ \lim_{n \to \infty} \frac{1}{N_n^{2\alpha-1} }\log \mathbb{P}( \rva{T}{n} \geqslant N_n^{\alpha} y ) = - \frac{y^2}{2\sigma^2}.
\] \end{thm} \begin{thm}[Transition] \label{thm:nagaev_weak_array_mob_eps_intermediate} Let $\varepsilon \in \intervalleoo{0}{1}$, $q > 0$, and $\alpha = (1+\varepsilon)^{-1}$. Suppose that \ref{hyp:tails}, \ref{hyp:var_inf}, and \ref{hyp:mt3_inf} hold. Then, for all $y \geqslant 0$, \begin{equation} \label{eq:I_def} \lim_{n \to \infty} \frac{1}{N_n^{(1-\varepsilon)/(1+\varepsilon)} }\log \mathbb{P}( \rva{T}{n} \geqslant N_n^{1/(1+\varepsilon)} y ) = - \inf_{0\leqslant \theta \leqslant 1} \bigl\{ q\theta^{1-\varepsilon} y^{1-\varepsilon}+\frac{(1-\theta)^2y^2}{2\sigma^2} \bigr\} \mathrel{=}: - I(y) . \end{equation} \end{thm} Let us make the rate function $I$ more explicit. Let $f(\theta)=q\theta^{1-\varepsilon} y^{1-\varepsilon}+(1-\theta)^2y^2/(2\sigma^2)$. An easy computation shows that, if $y \leqslant y_0\mathrel{\mathop:}= ((1-\varepsilon^2)(1+1/\varepsilon)^\varepsilon q\sigma^2)^{1/(1+\varepsilon)}$, then $f$ is increasing and its minimum $y^2/(2\sigma^2)$ is attained at $\theta=0$. If $y>y_0$, $f$ has two local minima, at $0$ and at $\theta(y)$, the latter being the greater of the two roots in $\intervalleff{0}{1}$ of $f'(\theta)=0$, an equation equivalent to \begin{align} \label{eq:nag_6} (1-\theta)\theta^{\varepsilon}=\frac{(1-\varepsilon)q\sigma^2}{y^{1+\varepsilon}}. \end{align} If $y_0< y\leqslant y_1\mathrel{\mathop:}= (1+\varepsilon)\left({q\sigma^2}/{(2\varepsilon)^{\varepsilon}}\right)^{\frac{1}{1+\varepsilon}}$, then $f(\theta(y))\geqslant f(0)$; and if $y>y_1$, then $f(\theta(y))< f(0)$. As a consequence, for all $y\geqslant 0$, \[ I(y) = \begin{cases} \frac{y^2}{2\sigma^2} & \text{if $y\leqslant y_1$,}\\ q\theta(y)^{1-\varepsilon} y^{1-\varepsilon}+\frac{(1-\theta(y))^2y^2}{2\sigma^2} & \text{if $y> y_1$} . \end{cases} \]
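The rate function $I$ and the thresholds $y_0$ and $y_1$ are easy to evaluate numerically. The following Python sketch, for the illustrative values $q=1$, $\sigma^2=1$, $\varepsilon=1/2$ (so that $y_0 \approx 1.19$ and $y_1 = 1.5$), minimizes $f$ on a grid and recovers the two branches of the display above.
\begin{verbatim}
import numpy as np

q, sigma2, eps = 1.0, 1.0, 0.5

def I(y, grid=100001):
    # I(y) = inf_{0 <= theta <= 1} { q theta^(1-eps) y^(1-eps)
    #                                + (1-theta)^2 y^2 / (2 sigma2) }
    theta = np.linspace(0.0, 1.0, grid)
    f = q * theta**(1 - eps) * y**(1 - eps) \
        + (1 - theta)**2 * y**2 / (2 * sigma2)
    return f.min()

y0 = ((1 - eps**2) * (1 + 1/eps)**eps * q * sigma2)**(1 / (1 + eps))
y1 = (1 + eps) * (q * sigma2 / (2 * eps)**eps)**(1 / (1 + eps))
print(y0, y1)               # y0 ~ 1.19 < y1 = 1.5
print(I(1.0), 1.0**2 / 2)   # y <= y1: I(y) = y^2/(2 sigma2)
print(I(2.0), 2.0**2 / 2)   # y > y1:  I(y) < y^2/(2 sigma2)
\end{verbatim}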
\section{Proofs} \label{sec:proofs} \subsection{Proof of Theorem \ref{thm:nagaev_weak_array_mob_eps} (Maximal jump regime)} Let us fix $y > 0$; the result for $y=0$ follows by monotonicity. First, we decompose \begin{align} \mathbb{P}( \rva{T}{n} \geqslant N_n^{\alpha} y ) & = \mathbb{P}( \rva{T}{n} \geqslant N_n^{\alpha} y,\ \forall i \in \intervallentff{1}{N_n} \quad \rva[i]{Y}{n} < N_n^{\alpha} y) \nonumber \\ & \hspace{3cm} + \mathbb{P}( \rva{T}{n} \geqslant N_n^{\alpha} y,\ \exists i \in \intervallentff{1}{N_n} \quad \rva[i]{Y}{n} \geqslant N_n^{\alpha} y) \nonumber \\ & \mathrel{=}: P_{n,0} + R_{n,0} \label{eq:decomp} . \end{align} \begin{lem} \label{lem2_weak_array_mob_v2} Under \ref{hyp:tails} and \ref{hyp:mt2_weak_array_mob_v2}, for $\alpha > 1/2$ and $y > 0$, \[ \lim_{n \to \infty} \frac{1}{ N_n^{\alpha(1-\varepsilon)} } \log R_{n,0} = -q y^{1-\varepsilon} . \] \end{lem} \begin{proof}[Proof of Lemma \ref{lem2_weak_array_mob_v2}] Using \ref{hyp:tails}, \begin{align} \limsup_{n \to \infty} \frac{1}{ N_n^{\alpha(1-\varepsilon)} } \log R_{n,0} & \leqslant \lim_{n \to \infty} \frac{1}{ N_n^{\alpha(1-\varepsilon)} } \log (N_n \mathbb{P}(\rva{Y}{n} \geqslant N_n^{\alpha} y)) = - qy^{1-\varepsilon} . \label{eq:R0_maj} \end{align} Let us prove the converse inequality. Let $\delta > 0$. We have \begin{align*} R_{n,0} & \geqslant \mathbb{P}\left( \rva{T}{n} \geqslant N_n^{\alpha} y,\ \rva[1]{Y}{n} \geqslant N_n^{\alpha} y\right) \geqslant \mathbb{P}\left( \rva[N_n-1]{T}{n} \geqslant -N_n^\alpha\delta\right) \mathbb{P}(\rva{Y}{n} \geqslant N_n^{\alpha} (y +\delta)). \end{align*} By Chebyshev's inequality, observe that \begin{align*} \mathbb{P}( \rva[N_n-1]{T}{n} \geqslant -{N_n}^{\alpha}\delta ) & \geqslant 1-\frac{\Var(\rva{Y}{n})}{N_n^{2\alpha-1}\delta^2} \to 1 , \end{align*} using \ref{hyp:mt2_weak_array_mob_v2}. Finally, by \ref{hyp:tails}, one gets \begin{align*} \liminf_{n \to \infty} \frac{1}{N_n^{\alpha(1-\varepsilon)}} \log R_{n,0} & \geqslant \lim_{n \to \infty} \frac{1}{N_n^{\alpha(1-\varepsilon)}} \log \mathbb{P}(\rva{Y}{n} \geqslant N_n^{\alpha}(y + \delta)) = - q (y + \delta)^{1-\varepsilon} . \end{align*} We conclude by letting $\delta \to 0$. \end{proof} To complete the proof of Theorem \ref{thm:nagaev_weak_array_mob_eps}, it remains to prove that, for $\alpha > (1+\varepsilon)^{-1}$, \begin{equation} \label{lem1_weak_array_mob_v2} \limsup_{n \to \infty} \frac{1}{ N_n^{\alpha(1-\varepsilon)} }\log P_{n,0} \leqslant -q y^{1-\varepsilon} , \end{equation} and to apply the principle of the largest term (\emph{see, e.g.}, \cite[Lemma 1.2.15]{DZ98}). Let $q' \in \intervalleoo{0}{q}$. Using the fact that $\mathbbm{1}_{x \geqslant 0} \leqslant e^x$, we get \[ P_{n,0} \leqslant e^{-q' (N_n^{\alpha} y)^{1-\varepsilon}} \mathbb{E}\left[ e^{\frac{q'}{ (N_n^{\alpha} y)^{\varepsilon}} \rva{Y}{n}} \mathbbm{1}_{\rva{Y}{n} < N_n^{\alpha} y} \right]^{N_n} . \] If we prove that \[ \mathbb{E}\left[ e^{\frac{q'}{ (N_n^{\alpha} y)^{\varepsilon}} \rva{Y}{n}} \mathbbm{1}_{\rva{Y}{n} < N_n^{\alpha} y} \right] \leqslant 1 + o ( N_n^{\alpha(1-\varepsilon)-1} ), \] then \[ \log P_{n,0} \leqslant -q' (N_n^{\alpha} y)^{1-\varepsilon} + o(N_n^{\alpha(1-\varepsilon)}) \] and the conclusion follows by letting $q' \to q$. Write \begin{align*} \mathbb{E} &\left[ e^{\frac{q'}{(N_n^{\alpha} y)^{\varepsilon}} \rva{Y}{n}} \mathbbm{1}_{\rva{Y}{n} < N_n^{\alpha} y} \right] = \mathbb{E} \left[ e^{\frac{q'}{ (N_n^{\alpha} y)^{\varepsilon}} \rva{Y}{n}} \mathbbm{1}_{\rva{Y}{n} < (N_n^{\alpha} y)^{\varepsilon}} \right] + \mathbb{E} \left[ e^{\frac{q'}{ (N_n^{\alpha} y)^{\varepsilon}} \rva{Y}{n}} \mathbbm{1}_{(N_n^{\alpha} y)^{\varepsilon} \leqslant\rva{Y}{n} < N_n^{\alpha} y} \right]. \end{align*} First, by a Taylor expansion and \ref{hyp:mt2_weak_array_mob_v2}, we get \begin{align*} \mathbb{E} \left[ e^{\frac{q'}{ (N_n^{\alpha} y)^{\varepsilon}} \rva{Y}{n}} \mathbbm{1}_{\rva{Y}{n} < (N_n^{\alpha} y)^{\varepsilon}} \right] & \leqslant \mathbb{E} \left[ \Bigl(1 + \frac{q'}{(N_n^{\alpha} y)^{\varepsilon}}\rva{Y}{n} + \frac{(q')^2 e^{q'}}{2 (N_n^{\alpha} y)^{2\varepsilon}}\rva{Y}{n}^2\Bigr) \mathbbm{1}_{\rva{Y}{n} < (N_n^{\alpha} y)^{\varepsilon}}\right]\\ & \leqslant 1 + \frac{(q')^2 e^{q'}}{2}\cdot \frac{\mathbb{E}[\rva{Y}{n}^2]}{(N_n^{\alpha} y)^{2\varepsilon}} \\ & = 1 + o (N_n^{\alpha(1-\varepsilon)-1}) . \end{align*} To bound the second expectation from above, we need the following simple consequence of \ref{hyp:tails}. \begin{lem} \label{lem:tail} Under \ref{hyp:tails}, for all $y > 0$, \[ \forall q' < q \quad \exists n_0 \quad \forall n \geqslant n_0 \quad \forall u \in \intervalleff{(N_n^\alpha y)^\varepsilon}{N_n^\alpha y} \quad \log\mathbb{P}(\rva{Y}{n}\geqslant u)\leqslant -q' u^{1-\varepsilon} .
\] \end{lem} \begin{proof}[Proof of Lemma \ref{lem:tail}] By contraposition, if the conclusion of the lemma were false, we could construct a sequence $(u_n)_{n \geqslant 1}$ such that, for all $n \geqslant 1$, $u_n \in \intervalleff{(N_n^\alpha y)^\varepsilon}{N_n^\alpha y}$ and $\log \mathbb{P}(\rva{Y}{n}\geqslant u_n) > -q' u_n^{1-\varepsilon}$, whence \ref{hyp:tails} would not be satisfied. \end{proof} Now, integrating by parts, we get \begin{align*} \mathbb{E} &\left[ e^{\frac{q'}{ (N_n^{\alpha} y)^{\varepsilon}} \rva{Y}{n}} \mathbbm{1}_{(N_n^{\alpha} y)^{\varepsilon} \leqslant\rva{Y}{n} < N_n^{\alpha} y} \right] = \int_{(N_n^{\alpha} y)^{\varepsilon}}^{N_n^{\alpha} y} e^{\frac{q'}{ (N_n^{\alpha} y)^{\varepsilon}} u} \mathbb{P}(\rva{Y}{n} \in du)\\ & = - \Big[ e^{\frac{q'}{ (N_n^{\alpha} y)^{\varepsilon}} u} \mathbb{P}(\rva{Y}{n} \geqslant u) \Big]_{ (N_n^{\alpha} y)^{\varepsilon}}^{ N_n^{\alpha} y} + \frac{q'}{ (N_n^{\alpha} y)^{\varepsilon}} \int_{ (N_n^{\alpha} y)^{\varepsilon}}^{ N_n^{\alpha} y} e^{\frac{q'}{ (N_n^{\alpha} y)^{\varepsilon}}u} \mathbb{P}(\rva{Y}{n} \geqslant u) du \\ & \leqslant e^{q'} \mathbb{P}(\rva{Y}{n} \geqslant (N_n^{\alpha} y)^{\varepsilon}) + \frac{q'}{(N_n^{\alpha} y)^{\varepsilon}} \int_{(N_n^{\alpha} y)^{\varepsilon}}^{ N_n^{\alpha} y} e^{\frac{q'}{(N_n^{\alpha} y)^{\varepsilon}} u-q''u^{1-\varepsilon}}du \\ & \leqslant (1 + q'(N_n^{\alpha} y)^{1-\varepsilon}) e^{q'-q''(N_n^{\alpha} y)^{\varepsilon(1-\varepsilon)}} \\ & = o(N_n^{\alpha(1-\varepsilon)-1}) \end{align*} for $n$ large enough, using \ref{hyp:tails} and Lemma \ref{lem:tail} with $q''\in \intervalleoo{q'}{q}$, together with the supremum of $u \mapsto q'(N_n^{\alpha} y)^{-\varepsilon} u-q''u^{1-\varepsilon}$ over $\intervalleff{(N_n^{\alpha} y)^{\varepsilon}}{N_n^{\alpha} y}$. The proof of Theorem \ref{thm:nagaev_weak_array_mob_eps} is now complete. \subsection{Proof of Theorem \ref{thm:nagaev_weak_array_mob_eps2} (Gaussian regime)} Let us fix $y > 0$; the result for $y=0$ follows by monotonicity. For all $m \in \intervallentff{0}{N_n}$, we define \begin{align*} \Pi_{l,m}(x) & = \mathbb{P}\Big( \rva[l]{T}{n} \geqslant x, \, \forall i \in \intervallentff{1}{l-m} \quad \rva[i]{Y}{n}< (N_n^{\alpha} y)^{\varepsilon},\\ & \hspace{2cm} \forall i \in \intervallentff{l-m+1}{l} \quad (N_n^{\alpha} y)^{\varepsilon}\leqslant \rva[i]{Y}{n} < N_n^{\alpha} y \Big) , \end{align*} and we denote $\Pi_{N_n,m}(N_n^{\alpha} y)$ by $\Pi_{n,m}$, so that, setting $P_n \mathrel{\mathop:}= \mathbb{P}(\rva{T}{n} \geqslant N_n^{\alpha} y)$, \begin{align}\label{eq:decomp2} P_{n} & = P_{n,0}+R_{n,0} = \sum_{m=0}^{N_n} \binom{N_n}{m} \Pi_{n,m}+R_{n,0} . \end{align} By Lemma \ref{lem2_weak_array_mob_v2} and the fact that, for $\alpha < (1+\varepsilon)^{-1}$, $\alpha(1-\varepsilon) > 2\alpha-1$, we get \begin{align} \label{eq:gauss_Rn0} \lim_{n \to \infty} \frac{1}{ N_n^{2\alpha-1} } \log R_{n,0} = -\infty . \end{align} \begin{lem} \label{lem:pi0} Under \ref{hyp:tails}, \ref{hyp:var_inf}, and \ref{hyp:mt3_inf}, for $1/2 < \alpha \leqslant (1+\varepsilon)^{-1}$ and $y > 0$, \begin{align}\label{eq:pi0} \lim_{n \to \infty} \frac{1}{N_n^{2\alpha-1}} \log \Pi_{n,0} = - \frac{y^2}{2\sigma^2} . \end{align} \end{lem} \begin{proof}[Proof of Lemma \ref{lem:pi0}] For all $n\geqslant 1$, we introduce the random variable $\rva{Y^<}{n}$ distributed as $\mathcal{L}(\rva{Y}{n}\ |\ \rva{Y}{n} < (N_n^{\alpha}y)^{\varepsilon})$. Let $\rva{T^<}{n}=\sum_{i=1}^{N_n} \rva[i]{Y^<}{n}$, where the $\rva[i]{Y^<}{n}$ are independent random variables distributed as $\rva{Y^<}{n}$.
Then \begin{align*} \Pi_{n,0}=\mathbb{P}(\rva{T^<}{n} \geqslant N_n^\alpha y) \mathbb{P}(\rva{Y}{n}<(N_n^\alpha y)^{\varepsilon})^{N_n} . \end{align*} On the one hand, $\mathbb{P}(\rva{Y}{n}<(N_n^\alpha y)^{\varepsilon})^{N_n} \to 1$ by \ref{hyp:tails}. On the other hand, in order to apply the unilateral version of the Gärtner-Ellis theorem (\emph{see} \cite{PS75}, and \cite{FATP2020} for a modern formulation), we compute, for $u\geqslant 0$, \[ \Lambda_n(u)=N_n^{2(1-\alpha)} \log \mathbb{E} \left[e^{\frac{u}{N_n^{1-\alpha}}\rva{Y^<}{n}}\right]. \] Now, there exists a constant $c>0$ such that, for all $t \leqslant uy^\varepsilon$, $\abs{e^t-(1+t+t^2/2)}\leqslant c\abs{t}^{2+\gamma}$, whence \begin{align}\label{eq:taylor2} \abs{e^{\frac{u}{N_n^{1-\alpha}}\rva{Y^<}{n}} - 1 - \frac{u}{N_n^{1-\alpha}}\rva{Y^<}{n} - \frac{u^2}{2N_n^{2(1-\alpha)}}(\rva{Y^<}{n})^2} \leqslant \frac{c u^{2+\gamma}}{N_n^{(2+\gamma)(1-\alpha)}}\abs{\rva{Y^<}{n}}^{2+\gamma} , \end{align} by the definition of $\rva{Y^<}{n}$ and $\alpha(1+\varepsilon)\leqslant 1$. Now, \begin{align} \left| \mathbb{E} \left[e^{\frac{u}{N_n^{1-\alpha}}\rva{Y^<}{n}} \right] - e^{\frac{u^2 \sigma^2}{2 N_n^{2(1-\alpha)}}} \right| & \leqslant \left| \mathbb{E} \left[e^{\frac{u}{N_n^{1-\alpha}}\rva{Y^<}{n}} \right] - \mathbb{E} \left[ 1 + \frac{u}{N_n^{1-\alpha}}\rva{Y^<}{n} + \frac{u^2}{2N_n^{2(1-\alpha)}}(\rva{Y^<}{n})^2 \right] \right| \nonumber \\ & \hspace{1cm} + \left| \mathbb{E} \left[1 + \frac{u}{N_n^{1-\alpha}}\rva{Y^<}{n} + \frac{u^2}{2N_n^{2(1-\alpha)}}(\rva{Y^<}{n})^2 \right] - e^{\frac{u^2 \sigma^2}{2 N_n^{2(1-\alpha)}}} \right|. \label{eq:gaussian_57} \end{align} The first term of \eqref{eq:gaussian_57} is bounded above by \begin{align} \frac{c u^{2+\gamma}}{N_n^{(2+\gamma)(1-\alpha)}}\mathbb{E}[\abs{\rva{Y^<}{n}}^{2+\gamma}] = o(N_n^{-2(1-\alpha)}), \label{eq:gaussian_2} \end{align} by assumptions \ref{hyp:tails} and \ref{hyp:mt3_inf}, and an integration by parts. Using a Taylor expansion of order $2$ of the exponential function, the second term of \eqref{eq:gaussian_57} is equal to \begin{align} \left| \frac{u}{N_n^{1-\alpha}} \mathbb{E}[\rva{Y^<}{n}] + \frac{u^2}{2N_n^{2(1-\alpha)}} (\mathbb{E}[(\rva{Y^<}{n})^2]-\sigma^2) + o(N_n^{-2(1-\alpha)}) \right| . \end{align} By \ref{hyp:tails} and the fact that $\mathbb{E}[\rva{Y}{n}]=0$, $\mathbb{E}[\rva{Y^<}{n}]$ is exponentially decreasing, whence $\mathbb{E}[\rva{Y^<}{n}]=o(1/N_n^{1-\alpha})$; similarly, by \ref{hyp:tails}, \ref{hyp:var_inf}, and \ref{hyp:mt3_inf}, $\mathbb{E}[(\rva{Y^<}{n})^2]\to \sigma^2$; hence, we get \begin{align*} \Lambda_n(u) = \frac{u^2\sigma^2}{2} + o(1) , \end{align*} and the proof of Lemma \ref{lem:pi0} is complete. \end{proof} Theorem \ref{thm:nagaev_weak_array_mob_eps2} stems from \eqref{eq:gauss_Rn0}, \eqref{eq:pi0}, and the fact that, for $1/2 < \alpha < (1+\varepsilon)^{-1}$, \begin{equation} \label{eq:pi_sum} \limsup_{n\to \infty} \frac{1}{N_n^{2\alpha-1}} \log \sum_{m=1}^{N_n} \binom{N_n}{m} \Pi_{n,m} \leqslant - \frac{y^2}{2\sigma^2} , \end{equation} the proof of which is given now. We adapt the proof of \cite[Lemma 5]{Nagaev69-2} and focus on the logarithmic scale. Let $m_n=\lceil 2 y^{(1-\varepsilon)^2} N_n^{\alpha (1-\varepsilon)^2}\rceil$. In particular, for all $m > m_n$, \begin{align}\label{eq:m_n} m(N_n^{\alpha}y)^{\varepsilon(1-\varepsilon)}\geqslant \frac{m(N_n^{\alpha}y)^{\varepsilon(1-\varepsilon)}}{2} + (N_n^{\alpha}y)^{1-\varepsilon}.
\end{align} \begin{lem} \label{lem:pinm_gaussian} Under \ref{hyp:tails}, for $\alpha > 1/2$, $y > 0$, and $q' < q$, \[ \limsup_{n \to \infty} \frac{1}{N_n^{\alpha(1-\varepsilon)}} \log \sum_{m=m_n+1}^{N_n} \binom{N_n}{m} \Pi_{n,m} \leqslant -q'y^{1-\varepsilon} . \] \end{lem} \begin{proof} For $n$ large enough, using Lemma \ref{lem:tail} and inequality \eqref{eq:m_n}, \begin{align*} \sum_{m=m_n+1}^{N_n} \binom{N_n}{m}\Pi_{n,m} &\leqslant \sum_{m=m_n+1}^{N_n} \binom{N_n}{m}\mathbb{P}(\forall i\in \intervallentff{1}{m} \quad \rva[i]{Y}{n}\geqslant (N_n^{\alpha}y)^{\varepsilon}) \\ &\leqslant \sum_{m=m_n+1}^{N_n} \binom{N_n}{m}e^{-mq' (N_n^{\alpha}y)^{\varepsilon(1-\varepsilon)}} \\ &\leqslant e^{-q'(N_n^{\alpha}y)^{1-\varepsilon}} \sum_{m=m_n+1}^{N_n} \binom{N_n}{m}e^{-mq' (N_n^{\alpha}y)^{\varepsilon(1-\varepsilon)}/2} \\ &\leqslant e^{-q'(N_n^{\alpha}y)^{1-\varepsilon}} \Big(1+e^{-q' (N_n^{\alpha}y)^{\varepsilon(1-\varepsilon)}/2}\Big)^{N_n} . \end{align*} \end{proof} Hence, as $\alpha (1-\varepsilon) > 2\alpha-1$, we conclude that \begin{align}\label{eq:pinmgrand} \lim_{n\to \infty} \frac{1}{N_n^{2\alpha-1}}\log \sum_{m=m_n+1}^{N_n} \binom{N_n}{m} \Pi_{n,m} =-\infty. \end{align} Now, for $m\in\intervallentff{1}{m_n}$, let us bound $\Pi_{n,m}$ from above. Let us define \[ f_m(u_1,\dots,u_m) \mathrel{\mathop:}= \Pi_{N_n-m,0}\biggl( N_n^\alpha y-\sum_{i=1}^m u_i \biggr) , \] which is nondecreasing in each variable. For $q''<q'<q$ and $n$ large enough, \begin{align*} \Pi_{n,m} & = \mathbb{P}(\rva{T}{n}\geqslant N_n^\alpha y \, , \, (N_n^\alpha y)^{\varepsilon}\leqslant \rva[1]{Y}{n},\dots,\rva[m]{Y}{n} < N_n^\alpha y \,,\, \rva[m+1]{Y}{n},\dots,\rva[N_n]{Y}{n}< (N_n^\alpha y)^{\varepsilon}) \\ &= \int_{[(N_n^\alpha y)^{\varepsilon},N_n^\alpha y]^m} f_m(u_1,\dots,u_m)\, d \mathbb{P}_{\rva{Y}{n}}(u_1)\cdots d\mathbb{P}_{\rva{Y}{n}}(u_m) \\ &= \sum_{(k_1,\dots,k_m)} \int_{\prod_i \intervalleof{k_i-1}{k_i}} f_m(u_1,\dots,u_m)\, d \mathbb{P}_{\rva{Y}{n}}(u_1)\cdots d\mathbb{P}_{\rva{Y}{n}}(u_m) \\ &\leqslant \sum_{(k_1,\dots,k_m)} f_m(k_1,\dots,k_m) \prod_i \mathbb{P}(\rva{Y}{n}\in \intervalleof{k_i-1}{k_i})\\ &\leqslant \sum_{(k_1,\dots,k_m)} f_m(k_1,\dots,k_m) e^{-q'\sum_{i=1}^m (k_i-1)^{1-\varepsilon}} \\ &\leqslant \int_{[(N_n^\alpha y)^{\varepsilon},N_n^\alpha y+2]^m} f_m(u_1,\dots,u_m)\, e^{-q''\sum_{i=1}^m u_i^{1-\varepsilon}}\,du_1\cdots du_m \\ &= I_{1,m}+I_{2,m} , \end{align*} where, for $j\in\{1,2\}$, \begin{align}\label{def:Im} I_{j,m} & \mathrel{\mathop:}= \int_{A_{j,m}} f_m(u_1,\dots,u_m)\, e^{-q'' s_m(u_1, \dots, u_m)}\,du_1\cdots du_m \end{align} with \begin{align*} A_{1,m}&\mathrel{\mathop:}= \enstq{(u_1,\dots,u_m)\in\intervalleff{(N_n^\alpha y)^{\varepsilon}}{N_n^\alpha y+2}^m}{\sum_{i=1}^m u_i \geqslant N_n^\alpha y},\\ A_{2,m}&\mathrel{\mathop:}=\enstq{(u_1,\dots,u_m)\in\intervalleff{(N_n^\alpha y)^{\varepsilon}}{N_n^\alpha y+2}^m}{\sum_{i=1}^m u_i < N_n^\alpha y}, \end{align*} and \[ s_m(u_1, \dots, u_m) \mathrel{\mathop:}= \sum_{i=1}^m u_i^{1-\varepsilon} . \] \begin{lem} \label{lem:I1m_gaussian} For $\alpha > 1/2$ and $y > 0$, \begin{align*} \limsup_{n \to \infty} \frac{1}{N_n^{\alpha(1-\varepsilon)}} \log \sum_{m=1}^{m_n} \binom{N_n}{m} I_{1,m} \leqslant -q'' y^{1-\varepsilon} . \end{align*} \end{lem} \begin{proof} Since $s_m$ is concave, $s_m$ reaches its minimum on $A_{1,m}$ at the points with all coordinates equal to $(N_n^\alpha y)^{\varepsilon}$ except one equal to $N_n^\alpha y-(N_n^\alpha y)^{\varepsilon}(m-1)$.
Moreover, using the fact that $f_m(u_1,\dots,u_m) \leqslant 1$ in \eqref{def:Im}, it follows that, for $n$ large enough and for all $m \in \{ 1, \dots, m_n \}$, \begin{align*} I_{1,m} & \leqslant (N_n^\alpha y)^m e^{-q''(m-1)(N_n^\alpha y)^{\varepsilon(1-\varepsilon)}-q''(N_n^\alpha y-(m-1)(N_n^\alpha y)^{\varepsilon})^{1-\varepsilon}}\\ & \leqslant(N_n^\alpha y)^m e^{-q''(N_n^\alpha y)^{1-\varepsilon}}e^{-q''(m-1)((N_n^\alpha y)^{\varepsilon(1-\varepsilon)}-1)}. \end{align*} Finally, \begin{align*} \sum_{m=1}^{m_n} \binom{N_n}{m} I_{1,m} & \leqslant e^{-q''(N_n^\alpha y)^{1-\varepsilon}} \sum_{m=1}^{m_n} \binom{N_n}{m} (N_n^\alpha y)^m e^{-q''(m-1)((N_n^\alpha y)^{\varepsilon(1-\varepsilon)}-1)} \\ & \leqslant e^{-q''(N_n^\alpha y)^{1-\varepsilon}} N_n^{1+\alpha} y \sum_{m=1}^{m_n} \left(N_n^{1+\alpha} y e^{-q''((N_n^\alpha y)^{\varepsilon(1-\varepsilon)}-1)}\right)^{m-1} , \end{align*} and the conclusion follows, since the latter sum is bounded. \end{proof} As $\alpha(1-\varepsilon) > 2\alpha-1$, we conclude that \begin{equation} \label{eq:majgausssumI1m} \lim_{n\to \infty} \frac{1}{N_n^{2\alpha-1}}\log \sum_{m=1}^{m_n} \binom{N_n}{m}I_{1,m}=-\infty. \end{equation} \begin{lem} \label{lem:I2m} Under \ref{hyp:var_inf} and \ref{hyp:mt3_inf}, for $1/2 < \alpha \leqslant (1+\varepsilon)^{-1}$, $y > 0$, $n$ large enough, and $m \in \{ 1, \dots, m_n \}$, \[ I_{2,m} \leqslant (N_n^\alpha y)^m e^{- q''(m-1)(N_n^\alpha y)^{\varepsilon(1-\varepsilon)}} \exp \biggl( \sup_{m(N_n^\alpha y)^{\varepsilon} \leqslant u < N_n^\alpha y} \phi_m(u) \biggr) \] where \[ \phi_m(u) \mathrel{\mathop:}= -\frac{(N_n^\alpha y-u)^2}{2\sigma^2(N_n-m)(1+c_n)} - q'' (u-(m-1)(N_n^\alpha y)^{\varepsilon})^{1-\varepsilon} . \] \end{lem} \begin{proof} Here, we use Chebyshev's exponential inequality to control $f_m(u_1,\dots,u_m)=\Pi_{N_n-m,0}(N_n^\alpha y - u_1 - \dots - u_m)$ in $I_{2,m}$. For all $l \in \ensnombre{N}^*$, for all $x \in \ensnombre{R}$, and for all $\lambda \geqslant 0$, \begin{align*} \Pi_{l,0}(x) &\leqslant \exp\left\{-\lambda x +l\log \mathbb{E} \left[e^{\lambda \rva{Y}{n}} \mathbbm{1}_{\rva{Y}{n} \leqslant (N_n^\alpha y)^{\varepsilon}}\right] \right\}. \end{align*} Let $M > y/\sigma^2$. There exists $c > 0$ such that, for all $s \leqslant M y^\varepsilon$, we have $e^s \leqslant 1 + s + s^2/2+c|s|^{2+\gamma}$. Hence, as soon as $\lambda \leqslant M/N_n^{1-\alpha} \leqslant M/N_n^{\alpha \varepsilon}$, \begin{align} \mathbb{E} \left[ e^{\lambda \rva{Y}{n}} \mathbbm{1}_{\rva{Y}{n} \leqslant (N_n^\alpha y)^\varepsilon} \right] & \leqslant 1 + \frac{\lambda^2}{2} \mathbb{E}[\rva{Y}{n}^2] + c \lambda^{2+\gamma} \mathbb{E}[\abs{\rva{Y}{n}}^{2+\gamma}] \leqslant 1 + \frac{\lambda^2\sigma^2}{2}(1+c_n) , \label{eq:cheb_expo} \end{align} where \[ c_n \mathrel{\mathop:}= \frac{\mathbb{E}[\rva{Y}{n}^2] - \sigma^2}{\sigma^2} + \frac{2 c M^\gamma}{\sigma^2} N_n^{-\gamma(1-\alpha)} \mathbb{E}[\abs{\rva{Y}{n}}^{2+\gamma}] = o(1) , \] by \ref{hyp:var_inf} and \ref{hyp:mt3_inf}. Thus, for $\lambda \leqslant M/N_n^{1-\alpha}$, \[ \Pi_{N_n-m,0}(N_n^\alpha y - u) \leqslant \exp\biggl( -\lambda(N_n^\alpha y - u) + (N_n-m) \frac{\lambda^2 \sigma^2}{2} (1+c_n) \biggr) . \] For $n$ large enough and $m \in \{ 1, \dots, m_n \}$, the infimum in $\lambda$ of the last expression is attained at \[ \lambda^* \mathrel{\mathop:}= \frac{N_n^\alpha y-u}{(N_n-m)\sigma^2(1+c_n)} \leqslant \frac{M}{N_n^{1-\alpha}} , \] and is equal to $-(N_n^\alpha y-u)^2/(2\sigma^2(N_n-m)(1+c_n))$.
So, for $n$ large enough, \begin{equation} \label{majPin-mgauss} \Pi_{N_n-m,0}(N_n^\alpha y-u) \leqslant \exp\biggl(-\frac{(N_n^\alpha y-u)^2}{2\sigma^2 (N_n-m)(1+c_n)} \biggr) . \end{equation} Since $s_m$ is concave, $s_m$ reaches its minimum on \[ A_{2,m,u} \mathrel{\mathop:}= \enstq{(u_1,\dots,u_m)\in [(N_n^\alpha y)^{\varepsilon},N_n^\alpha y+2]^m}{\sum_{i=1}^m u_i=u} \] at the points with all coordinates equal to $(N_n^\alpha y)^{\varepsilon}$ except one equal to $u-(m-1)(N_n^\alpha y)^{\varepsilon}$, whence, for $n$ large enough and $m \in \{ 1, \dots, m_n \}$, \begin{align*} I_{2,m} & \leqslant (N_n^\alpha y)^m \sup_{m(N_n^\alpha y)^{\varepsilon}\leqslant u < N_n^\alpha y} \exp \biggl( - \frac{(N_n^\alpha y-u)^2}{2\sigma^2 (N_n-m)(1+c_n)} - q'' (u-(m-1)(N_n^\alpha y)^{\varepsilon})^{1-\varepsilon} \\ & \hspace{9cm} -q''(m-1)(N_n^\alpha y)^{\varepsilon(1-\varepsilon)} \biggr) \\ & \leqslant (N_n^\alpha y)^m e^{- q''(m-1)(N_n^\alpha y)^{\varepsilon(1-\varepsilon)}} \exp \biggl( \sup_{m(N_n^\alpha y)^{\varepsilon} \leqslant u < N_n^\alpha y} \phi_m(u) \biggr) . \end{align*} \end{proof} Now, for $1/2 < \alpha < (1+\varepsilon)^{-1}$, $n$ large enough, and $m \in \{ 1, \dots, m_n \}$, the function $\phi_m$ is decreasing on $\intervallefo{m(N_n^\alpha y)^{\varepsilon}}{N_n^\alpha y}$. So, \begin{align*} I_{2,m} & \leqslant (N_n^\alpha y)^m \exp\left(-q''m(N_n^\alpha y)^{\varepsilon(1-\varepsilon)}-\frac{(N_n^\alpha y-m_n(N_n^\alpha y)^{\varepsilon})^2}{2N_n\sigma^2(1+c_n)} \right). \end{align*} It follows that \[ \sum_{m=1}^{m_n} \binom{N_n}{m} I_{2,m} \leqslant e^{-\frac{(N_n^\alpha y-m_n(N_n^\alpha y)^{\varepsilon})^2}{2N_n\sigma^2(1+c_n)}} \sum_{m=1}^{m_n} \left(N_n^{1+\alpha} y e^{-q''(N_n^\alpha y)^{\varepsilon(1-\varepsilon)}}\right)^m. \] Since the latter sum is bounded, we get \begin{align} \label{eq:majgausssumI2m} \limsup_{n\to \infty} \frac{1}{N_n^{2\alpha-1}}\log \sum_{m=1}^{m_n} \binom{N_n}{m}I_{2,m} &\leqslant -\frac{y^2}{2\sigma^2} \end{align} since $m_n(N_n^\alpha y)^{\varepsilon}=o(N_n^\alpha)$ and $c_n=o(1)$. By \eqref{eq:pinmgrand}, \eqref{eq:majgausssumI1m}, and \eqref{eq:majgausssumI2m}, we get the required result. \begin{rem} Notice that, using the contraction principle, one can show that, for all fixed $m$, \begin{align*} \limsup_{n\to\infty} \frac{1}{N_n^{2\alpha-1}}\log \Pi_{n,m} = -\frac{y^2}{2\sigma^2}. \end{align*} \end{rem} \subsection{Proof of Theorem \ref{thm:nagaev_weak_array_mob_eps_intermediate} (Transition)} Here, we assume \ref{hyp:tails}, \ref{hyp:var_inf}, and \ref{hyp:mt3_inf}, and we deal with the case $\alpha = (1+\varepsilon)^{-1}$, so that $\alpha(1-\varepsilon) = 2\alpha-1 = (1-\varepsilon)/(1+\varepsilon)$. Let us fix $y > 0$; the result for $y=0$ follows by monotonicity. We still consider the decomposition \eqref{eq:decomp2}. By Lemmas \ref{lem2_weak_array_mob_v2} and \ref{lem:pi0}, and the very definition of $I$ in \eqref{eq:I_def}, we have \[ \lim_{n \to \infty} \frac{1}{N_n^{(1-\varepsilon)/(1+\varepsilon)}} \log R_{n,0} = -q y^{1-\varepsilon} \leqslant -I(y) \] and \[ \lim_{n \to \infty} \frac{1}{N_n^{(1-\varepsilon)/(1+\varepsilon)}} \log \Pi_{n,0} = -\frac{y^2}{2\sigma^2} \leqslant -I(y) .
\] To complete the proof of Theorem \ref{thm:nagaev_weak_array_mob_eps_intermediate}, it remains to prove that \begin{align} & \liminf_{n\to\infty} \frac{1}{N_n^{(1-\varepsilon)/(1+\varepsilon)}} \log \Pi_{n,1} \geqslant -I(y) \label{eq:pi1} \\ & \limsup_{n\to\infty} \frac{1}{N_n^{(1-\varepsilon)/(1+\varepsilon)}} \log \sum_{m=1}^{N_n}\Pi_{n,m} \leqslant -I(y) \label{eq:pi_sum_intermediate} \end{align} and to apply the principle of the largest term. \begin{proof}[Proof of \eqref{eq:pi1}] For all $t \in \intervalleoo{0}{1}$, \begin{align*} \frac{1}{N_n^{(1-\varepsilon)/(1+\varepsilon)}} \log \Pi_{n,1} & \geqslant \frac{1}{N_n^{(1-\varepsilon)/(1+\varepsilon)}} \log \mathbb{P}\bigl( \rva[N_n-1]{T}{n} \geqslant N_n^\alpha ty,\ \forall i \in \intervallentff{1}{N_n-1} \quad \rva[i]{Y}{n} < (N_n^\alpha y)^\varepsilon \bigr) \\ & \hspace{2cm} + \frac{1}{N_n^{(1-\varepsilon)/(1+\varepsilon)}} \log \mathbb{P}\bigl(N_n^\alpha (1-t)y \leqslant \rva[N_n]{Y}{n} < N_n^\alpha y \bigr) \\ & \xrightarrow[n \to \infty]{} - \frac{t^2y^2}{2\sigma^2} - q(1-t)^{1-\varepsilon} y^{1-\varepsilon} , \end{align*} by Lemma \ref{lem:pi0} and by \ref{hyp:tails}. Optimizing in $t \in \intervalleoo{0}{1}$ provides the conclusion. \end{proof} \begin{proof}[Proof of \eqref{eq:pi_sum_intermediate}] We follow the same lines as in the proof of \eqref{eq:pi_sum}. By Lemma \ref{lem:pinm_gaussian}, letting $q'\to q$, we get \begin{align} \label{eq:pinmgrand_intermediate} \limsup_{n\to \infty} \frac{1}{N_n^{(1-\varepsilon)/(1+\varepsilon)}} \log \sum_{m=m_n+1}^{N_n} \binom{N_n}{m} \Pi_{n,m} \leqslant -q y^{1-\varepsilon} \leqslant -I(y) . \end{align} Let $\eta = 1-q''/q \in \intervalleoo{0}{1}$. By Lemma \ref{lem:I1m_gaussian}, \begin{align} \label{eq:majgausssumI1m_intermediate} \limsup_{n\to \infty} \frac{1}{N_n^{(1-\varepsilon)/(1+\varepsilon)}} \log \sum_{m=1}^{m_n} \binom{N_n}{m} I_{1,m} \leqslant - q'' y^{1-\varepsilon} \leqslant - (1-\eta) I(y) . \end{align} Now, recall that Lemma \ref{lem:I2m} provides $I_{2,m} \leqslant (N_n^\alpha y)^m e^{- q''(m-1)(N_n^\alpha y)^{\varepsilon(1-\varepsilon)}} e^{M_n}$, for $n$ large enough, where \begin{align*} M_n & = \sup_{m(N_n^\alpha y)^{\varepsilon} \leqslant u < N_n^\alpha y} \biggl( -\frac{(N_n^\alpha y-u)^2}{2\sigma^2(N_n-m)(1+c_n)} - q'' (u-(m-1)(N_n^\alpha y)^{\varepsilon})^{1-\varepsilon} \biggr) \\ & \leqslant N_n^{(1-\varepsilon)/(1+\varepsilon)} \sup_{m(N_n^\alpha y)^{-1+\varepsilon} \leqslant \theta < 1} \biggl( -\frac{(1-\theta)^2 y^2}{2\sigma^2 (1+c_n)} - q'' \theta^{1-\varepsilon} y^{1-\varepsilon} \biggl(1 - \frac{(m-1)(N_n^\alpha y)^{-1+\varepsilon}}{\theta} \biggr)^{1-\varepsilon} \biggr). \end{align*} For $n$ large enough, for all $m \in \{ 1, \dots, m_n \}$, \begin{align*} \inf_{m(N_n^\alpha y)^{-1+\varepsilon} \leqslant \theta < \eta} \left\{ \frac{(1-\theta)^2 y^2}{2\sigma^2(1+c_n)} + q'' \theta^{1-\varepsilon} y^{1-\varepsilon} \left( 1 -\frac{(m-1)(N_n^\alpha y)^{-1+\varepsilon}}{\theta} \right)^{1-\varepsilon} \right\} & \geqslant \frac{(1-\eta)^2 y^2}{2\sigma^2} (1-\eta) \\ & \geqslant (1-\eta)^3 I(y) \end{align*} and \begin{align*} & \inf_{\eta \leqslant \theta < 1} \left\{ \frac{(1-\theta)^2 y^2}{2\sigma^2(1+c_n)} + q'' \theta^{1-\varepsilon} y^{1-\varepsilon} \left( 1 -\frac{(m-1)(N_n^\alpha y)^{-1+\varepsilon}}{\theta} \right)^{1-\varepsilon} \right\} \\ \geqslant & \inf_{\eta \leqslant \theta < 1} \left\{ \frac{(1-\theta)^2 y^2}{2\sigma^2} + q'' \theta^{1-\varepsilon} y^{1-\varepsilon} \right\} (1-\eta) \\ \geqslant & \mathop{} (1-\eta)^2 I(y) .
\end{align*} So $M_n \leqslant - N_n^{(1-\varepsilon)/(1+\varepsilon)} (1-\eta)^3 I(y)$ and \begin{align} & \limsup_{n \to \infty} \frac{1}{N_n^{(1-\varepsilon)/(1+\varepsilon)}} \log \sum_{m=1}^{m_n} \binom{N_n}{m} I_{2,m} \nonumber \\ \leqslant & - (1-\eta)^3 I(y) + \limsup_{n \to \infty} \frac{1}{N_n^{(1-\varepsilon)/(1+\varepsilon)}} \log \sum_{m=1}^{m_n} \binom{N_n}{m} \bigl( N_n^\alpha y e^{- q''(N_n^\alpha y)^{\varepsilon(1-\varepsilon)}} \bigr)^m \nonumber \\ = & - (1-\eta)^3 I(y) . \label{eq:I2m_transition} \end{align} Finally, \eqref{eq:pinmgrand_intermediate}, \eqref{eq:majgausssumI1m_intermediate}, and \eqref{eq:I2m_transition} imply \[ \limsup_{n\to\infty} \frac{1}{N_n^{(1-\varepsilon)/(1+\varepsilon)}} \log \sum_{m=1}^{N_n}\Pi_{n,m} \leqslant - (1-\eta)^3 I(y) , \] and \eqref{eq:pi_sum_intermediate} follows, letting $q'' \to q$, i.e.\ $\eta \to 0$. \end{proof} \begin{rem} Notice that, using the contraction principle, one can show that, for all fixed $m$, \begin{align*} \limsup_{n\to\infty} \frac{1}{N_n^{(1-\varepsilon)/(1+\varepsilon)}}\log \Pi_{n,m} = -I(y). \end{align*} \end{rem} \section{About the assumptions}\label{sec:assump} Looking into the proof of Theorem \ref{thm:nagaev_weak_array_mob_eps}, one can see that assumption \ref{hyp:tails} can be weakened: one may only assume the two following conditions. \begin{thm} The conclusion of Theorem \ref{thm:nagaev_weak_array_mob_eps} holds under \ref{hyp:mt2_weak_array_mob_v2} and: \begin{description} \item[(H1a)\label{hyp:h1a}] for all $y_n = \Theta(N_n^\alpha)$, $\log \mathbb{P}(\rva{Y}{n} \geqslant y_n) \sim -q y_n^{1-\varepsilon}$; \item[(H1b)\label{hyp:h1b}] for all $N_n^{\alpha \varepsilon} \preccurlyeq y_n \preccurlyeq N_n^\alpha$, $\limsup y_n^{-(1-\varepsilon)} \log \mathbb{P}(\rva{Y}{n} \geqslant y_n) \leqslant -q$. \end{description} \end{thm} \begin{lem} \ref{hyp:h1a} is equivalent to: \begin{description} \item[(H1a')\label{hyp:h1a'}] for all $y > 0$, $\log \mathbb{P}(\rva{Y}{n} \geqslant N_n^\alpha y) \sim - q (N_n^\alpha y)^{1-\varepsilon}$. \end{description} \end{lem} \begin{proof} If $ N_n^\alpha c_1\leqslant y_n \leqslant N_n^\alpha c_2$, then \[ -qc_2^{1-\varepsilon} \leqslant \liminf_{n \to \infty} N_n^{-\alpha(1-\varepsilon)} \log \mathbb{P}(\rva{Y}{n} \geqslant y_n) \leqslant \limsup_{n \to \infty} N_n^{-\alpha(1-\varepsilon)} \log \mathbb{P}(\rva{Y}{n} \geqslant y_n) \leqslant -q c_1^{1-\varepsilon} . \] First extract a convergent subsequence; then, again extract a subsequence such that $N_n^{-\alpha} y_n$ is convergent, and use \ref{hyp:h1a} to show that $N_n^{-\alpha(1-\varepsilon)} \log \mathbb{P}(\rva{Y}{n} \geqslant y_n)$ is convergent. \end{proof} \medskip \begin{lem} \ref{hyp:h1b} is equivalent to the conclusion of Lemma \ref{lem:tail}: \begin{description} \item[(H1b')\label{hyp:h1b'}] $\forall y > 0 \quad \forall q' < q \quad \exists n_0 \quad \forall n \geqslant n_0 \quad \forall u \in \intervalleff{(N_n^\alpha y)^\varepsilon}{N_n^\alpha y} \quad \log\mathbb{P}(\rva{Y}{n}\geqslant u)\leqslant -q' u^{1-\varepsilon}$. \end{description} \end{lem} \begin{proof} See the proof of Lemma \ref{lem:tail}. \end{proof} \begin{thm} The conclusion of Theorem \ref{thm:nagaev_weak_array_mob_eps} holds under assumptions \ref{hyp:h1a}, \ref{hyp:h1b}, and $\mathbb{E}[\abs{\rva{Y}{n}}^{2+\gamma}]/\mathbb{E}[\abs{\rva{Y}{n}}^{2}]^{1+\gamma/2}=o(N_n^{\gamma/2})$. \end{thm} \begin{proof} The only modification in the proof is the lower bound of $R_{n,0}$: \[ R_{n,0} \geqslant \mathbb{P}(\rva[N_n-1]{T}{n} \geqslant 0) \mathbb{P}(\rva{Y}{n} \geqslant N_n^{\alpha} y) .
\] Now Lyapunov's theorem \cite[Theorem 27.3]{billingsley2013convergence} applies and provides $\mathbb{P}(\rva[N_n-1]{T}{n} \geqslant 0) \to 1/2$. \end{proof} As for Theorem \ref{thm:nagaev_weak_array_mob_eps2}, assumption \ref{hyp:tails} can be weakened: one may only assume \ref{hyp:h1b}, or even the following weaker assumption. \begin{thm} The conclusion of Theorem \ref{thm:nagaev_weak_array_mob_eps2} holds under \ref{hyp:var_inf}, \ref{hyp:mt3_inf}, and: \begin{description} \item[(H1c)\label{hyp:h1c}] $\forall y > 0 \quad \exists q > 0 \quad \exists n_0 \quad \forall n \geqslant n_0 \quad \forall u \in \intervalleff{(N_n^\alpha y)^\varepsilon}{N_n^\alpha y} \quad \log \mathbb{P}(\rva{Y}{n} \geqslant u) \leqslant - q u^{1-\varepsilon}$. \end{description} \end{thm} Finally, in Theorem \ref{thm:nagaev_weak_array_mob_eps_intermediate}, assumption \ref{hyp:tails} can be weakened: one may only assume \ref{hyp:h1a} and \ref{hyp:h1b}. \section{Application: truncated random variable}\label{sec:baby} Let us consider a centered real-valued random variable $Y$ admitting a finite moment of order $2+\gamma$ for some $\gamma>0$. Set $\sigma^2 \mathrel{\mathop:}= \mathbb{E}[Y^2]$. Now, let $\beta > 0$ and $c > 0$. For all $n \geqslant 1$, let us introduce the truncated random variable $\rva{Y}{n}$ defined by $\mathcal{L}(\rva{Y}{n}) = \mathcal{L}(Y\ |\ Y < N_n^\beta c)$. Such truncated random variables naturally appear in proofs of large deviation results. \medskip If $Y$ has a light-tailed distribution, i.e.\ $\Lambda_Y(\lambda)\mathrel{\mathop:}= \log \mathbb{E}[e^{\lambda Y}]<\infty$ for some $\lambda>0$, then (the unilateral version of) the Gärtner-Ellis theorem applies: \begin{itemize} \item if $\alpha\in \intervalleoo{1/2}{1}$, then \[ \lim_{n\to \infty} \frac{1}{N_n^{2\alpha-1}} \log \mathbb{P}(\rva{T}{n} \geqslant N_n^\alpha y) = -\frac{y^2}{2\sigma^2}; \] \item if $\alpha=1$, then \[ \lim_{n\to \infty} \frac{1}{N_n} \log \mathbb{P}(\rva{T}{n} \geqslant N_n^\alpha y) = -\Lambda_Y^*(y)\mathrel{\mathop:}= -\sup_{\lambda\geqslant 0}\{\lambda y-\Lambda_Y(\lambda)\}. \] \end{itemize} Note that we recover the same asymptotics as for the non-truncated random variable $Y$. In other words, the truncation does not impact the deviation behaviour. \medskip Now we consider the case where $\log \mathbb{P}(Y \geqslant y) \sim -q y^{1-\varepsilon}$ for some $q>0$ and $\varepsilon \in \intervalleoo{0}{1}$. In this case, the Gärtner-Ellis theorem does not apply, since the rate functions are not all convex as usual (as can be seen in Figures \ref{fig:fonction_taux_1} to \ref{fig:fonction_taux_3}). Observe that, as soon as $y_n \to \infty$, \[ \limsup_{n \to \infty} \frac{1}{y_n^{1-\varepsilon}} \log \mathbb{P}(\rva{Y}{n} \geqslant y_n) = \limsup_{n\to \infty} \frac{1}{y_n^{1-\varepsilon}} \left(\log \mathbb{P} (y_n \leqslant Y < N_n^\beta c)-\log \mathbb{P} (Y < N_n^\beta c)\right) \leqslant -q , \] so \ref{hyp:h1b} is satisfied. If, moreover, $y_n \leqslant N_n^\beta c'$ with $c' < c$, then $\log \mathbb{P}(\rva{Y}{n} \geqslant y_n) \sim -q y_n^{1-\varepsilon}$, so \ref{hyp:tails} is satisfied for $\alpha < \beta$. In addition, $\mathbb{E}[\rva{Y}{n}]-\mathbb{E}[Y]$, $\mathbb{E}[\rva{Y}{n}^2]-\mathbb{E}[Y^2]$ and $\mathbb{E}[\rva{Y}{n}^{2+\gamma}]-\mathbb{E}[Y^{2+\gamma}]$ are exponentially decreasing to zero. Therefore, our theorems directly apply for $\alpha < \max(\beta, (1+\varepsilon)^{-1})$, and even for $\alpha = (1+\varepsilon)^{-1} < \beta$.
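These tail properties of the truncated model are easy to check numerically. For illustration only, assume the exact Weibull-like tail $\mathbb{P}(Y \geqslant y)=e^{-qy^{1-\varepsilon}}$ for $y\geqslant 0$ (the centering of $Y$ is ignored here, since only the tail computation is illustrated); the conditional tail of $\rva{Y}{n}$ is then available in closed form, and its logarithm can be compared with $-q y_n^{1-\varepsilon}$ along a sequence $y_n \leqslant N_n^\beta c'$ with $c'<c$.
\begin{verbatim}
import numpy as np

q, eps, beta, c = 1.0, 0.5, 0.8, 1.0   # illustrative parameters

def log_tail_truncated(y, N):
    # log P(Y_n >= y) with L(Y_n) = L(Y | Y < N^beta c), for y < N^beta c,
    # under the illustrative tail P(Y >= y) = exp(-q y^(1-eps)).
    cut = np.exp(-q * (N**beta * c)**(1 - eps))
    return np.log((np.exp(-q * y**(1 - eps)) - cut) / (1.0 - cut))

for N in (1e2, 1e4, 1e6):
    y = 0.5 * N**beta * c   # y_n <= N^beta c' with c' = c/2 < c
    print(log_tail_truncated(y, N) / (-q * y**(1 - eps)))   # -> 1
\end{verbatim}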
For $\alpha \geqslant \max(\beta, (1+\varepsilon)^{-1})$, the proofs easily adapt to cover all cases. To present the results, we separate the three cases $\beta > (1+\varepsilon)^{-1}$, $\beta < (1+\varepsilon)^{-1}$ and $\beta = (1+\varepsilon)^{-1}$ and provide a synthetic diagram at the end of the section (page \pageref{graph:alpha_beta}) and the graphs of the exhibited rate functions (pages \pageref{fig:fonction_taux_1} and \pageref{fig:fonction_taux_3}). \subsection{Case \texorpdfstring{$\beta > (1+\varepsilon)^{-1}$}{f}} \paragraph{Gaussian range} When $\alpha < (1+\varepsilon)^{-1}$, Theorem \ref{thm:nagaev_weak_array_mob_eps2} applies and, for all $y \geqslant 0$, \[ \lim_{n \to \infty} \frac{1}{N_n^{2\alpha-1}} \log \mathbb{P}(\rva{T}{n} \geqslant N_n^{\alpha} y) = - \frac{y^2}{2\sigma^2} . \] \paragraph{Transition 1} When $\alpha = (1+\varepsilon)^{-1}$, Theorem \ref{thm:nagaev_weak_array_mob_eps_intermediate} applies and, for all $y \geqslant 0$, \[ \lim_{n \to \infty} \frac{1}{N_n^{(1-\varepsilon)/(1+\varepsilon)}} \log \mathbb{P}(\rva{T}{n} \geqslant N_n^{\alpha} y) = - I_1(y) \mathrel{\mathop:}= - I(y)= - \inf_{0\leqslant \theta \leqslant 1} \bigl\{ q\theta^{1-\varepsilon} y^{1-\varepsilon}+\frac{(1-\theta)^2y^2}{2\sigma^2} \bigr\} . \] \paragraph{Maximal jump range} When $(1+\varepsilon)^{-1} < \alpha < \beta$, Theorem \ref{thm:nagaev_weak_array_mob_eps} applies and, for all $y \geqslant 0$, \[ \lim_{n \to \infty} \frac{1}{N_n^{\alpha(1-\varepsilon)}} \log \mathbb{P}(\rva{T}{n} \geqslant N_n^{\alpha} y) = - q y^{1-\varepsilon} . \] \paragraph{Transition 2} When $\alpha = \beta$, for all $y \geqslant 0$, \[ \lim_{n \to \infty} \frac{1}{N_n^{\alpha(1-\varepsilon)}} \log \mathbb{P}(\rva{T}{n} \geqslant N_n^{\alpha} y)= - I_2(y) \mathrel{\mathop:}= - q\left(\floor{y/c} c^{1-\varepsilon} + (y - \floor{y/c} c)^{1-\varepsilon} \right). \] Here, as in all cases where $\alpha \geqslant \beta$, we adapt the definitions \eqref{eq:decomp} and \eqref{eq:decomp2} as: \begin{equation} \label{eq:decomp3} \mathbb{P}( \rva{T}{n} \geqslant N_n^{\alpha} y ) = \mathbb{P}( \rva{T}{n} \geqslant N_n^{\alpha} y,\ \forall i \in \intervallentff{1}{N_n} \quad \rva[i]{Y}{n} < N_n^{\beta} c) \mathrel{=}: P_{n,0} \end{equation} ($R_{n,0} = 0$) and, for all $m \in \intervallentff{0}{N_n}$, \begin{align} \Pi_{n,m} & = \mathbb{P}\Big( \rva{T}{n} \geqslant N_n^{\alpha} y, \, \forall i \in \intervallentff{1}{N_n-m} \quad \rva[i]{Y}{n}< (N_n^{\beta}c)^\varepsilon, \nonumber\\ & \hspace{2cm} \forall i \in \intervallentff{N_n-m+1}{N_n} \quad (N_n^{\beta}c)^\varepsilon \leqslant \rva[i]{Y}{n} < N_n^{\beta} c \Big) . \label{eq:decomp4} \end{align} For all $t > 0$, \begin{align*} \Pi_{n,0}&= \mathbb{P}(\rva{T}{n} \geqslant N_n^\alpha y,\ \forall i \in \intervallentff{1}{N_n} \quad \rva[i]{Y}{n} < (N_n^{\alpha}c)^\varepsilon)\\ & \leqslant e^{-tyN_n^{\alpha(1-\varepsilon)}} \mathbb{E}\bigl[ e^{t N_n^{-\alpha\varepsilon} \rva{Y}{n}} \mathbbm{1}_{\rva{Y}{n} < (N_n^{\alpha}c)^\varepsilon} \bigr]^{N_n} \\ & = e^{-tyN_n^{\alpha(1-\varepsilon)}(1+o(1))} , \end{align*} (see the proof of Theorem \ref{thm:nagaev_weak_array_mob_eps}), whence Lemma \ref{lem:pi0} with $\mathcal{L}(\rva{Y^<}{n}) = \mathcal{L}(\rva{Y}{n}\ |\ \rva{Y}{n} < N_n^\alpha c)$ updates into \[ \frac{1}{N_n^{\alpha(1-\varepsilon)}} \log \Pi_{n,0} \xrightarrow[n \to \infty]{} -\infty .
\] So, by the contraction principle, for all fixed $m \geqslant 0$, \[ \frac{1}{N_n^{\alpha(1-\varepsilon)}} \log \Pi_{n,m} \xrightarrow[n \to \infty]{} \begin{cases} -\infty & \text{if $m \leqslant y/c-1$} \\ -q\left(\floor{y/c} c^{1-\varepsilon} + (y - \floor{y/c} c)^{1-\varepsilon} \right) & \text{otherwise,} \end{cases} \] which provides a lower bound for the sum of the $\Pi_{n,m}$'s. To obtain an upper bound, let us introduce $m_n=\lceil N_n^{\alpha (1-\varepsilon)^2} 2k\rceil$ where $k=\floor{y/c} c^{1-\varepsilon} + (y - \floor{y/c} c)^{1-\varepsilon}$. Lemma \ref{lem:pinm_gaussian} remains unchanged while Lemmas \ref{lem:I1m_gaussian} and \ref{lem:I2m} require adjustments. The integration domains defining $I_{1,m}$ and $I_{2,m}$ become \begin{align*} A_{1,m}&=\enstq{(u_1,\dots,u_m)\in\intervalleff{(N_n^{\alpha}c)^\varepsilon}{N_n^\alpha c+2}^m}{\sum_{i=1}^m u_i \geqslant N_n^\alpha y},\\ A_{2,m}&=\enstq{(u_1,\dots,u_m)\in\intervalleff{(N_n^{\alpha}c)^\varepsilon}{N_n^\alpha c+2}^m}{\sum_{i=1}^m u_i < N_n^\alpha y}. \end{align*} Further, the concave function $s_m$ attains its minimum at points with all coordinates equal to $(N_n^{\alpha}c)^\varepsilon$ except $\lfloor y/c_n \rfloor$ coordinates equal to $N_n^\alpha c_n$ and one coordinate equal to $N_n^\alpha\left( y-\lfloor y/c_n \rfloor c_n\right) -(N_n^{\alpha}c)^\varepsilon \left(m-1-\lfloor y/c_n \rfloor \right)$ with $c_n=c+2N_n^{-\alpha}$. Then following the same lines as in the proof of Lemmas \ref{lem:I1m_gaussian} and \ref{lem:I2m}, we get, for $j\in\{1,2\}$, \begin{align*} &\lim_{n\to \infty} \frac{1}{N_n^{\alpha(1-\varepsilon)}}\log \sum_{m=1}^{m_n} \binom{N_n}{m}I_{j,m}=-q\left(\floor{y/c} c^{1-\varepsilon} + (y - \floor{y/c} c)^{1-\varepsilon} \right). \end{align*} \paragraph{Truncated maximal jump range} When $\beta < \alpha < \beta + 1$ and $y \geqslant 0$, or $\alpha = \beta+1$ and $y < c$, the proof of Theorem \ref{thm:nagaev_weak_array_mob_eps} adapts and provides \[ \lim_{n \to \infty} \frac{1}{N_n^{\alpha-\beta\varepsilon}} \log \mathbb{P}(\rva{T}{n} \geqslant N_n^{\alpha} y) = - q y c^{-\varepsilon} . \] As in the previous case, we use the decomposition given by \eqref{eq:decomp3} and \eqref{eq:decomp4}. To upper bound $P_{n,0}$, we write \[ P_{n,0} \leqslant e^{- qy c^{-\varepsilon} N_n^{\alpha-\beta\varepsilon}} \mathbb{E}\left[ e^{y c^{-\varepsilon} N_n^{-\beta\varepsilon} \rva{Y}{n}} \bigl(\mathbbm{1}_{\rva{Y}{n} < (N_n^\beta c)^\varepsilon} + \mathbbm{1}_{(N_n^\beta c)^\varepsilon \leqslant \rva{Y}{n} < N_n^\beta c}\bigr) \right]^{N_n} \] and follow the same lines as in the proof of Theorem \ref{thm:nagaev_weak_array_mob_eps}. To lower bound $P_{n,0}$, we write, for $c' < c$, \begin{align*} \log P_{n,0} & \geqslant \log \mathbb{P}(\forall i \in \llbracket 1, \lceil N_n^{\alpha-\beta} y/c' \rceil \rrbracket \quad \rva[i]{Y}{n} \geqslant N_n^\beta c') \\ & \sim - N_n^{\alpha-\beta} y (c')^{-1} q (N_n^\beta c' )^{1-\varepsilon} \\ & = - N_n^{\alpha-\beta\varepsilon} q y (c')^{-\varepsilon} , \end{align*} and we recover the upper bound by letting $c' \to c$. \paragraph{Trivial case} When $\alpha=\beta+1$ and $y \geqslant c$, or $\alpha > \beta+1$, we obviously have $\mathbb{P}(\rva{T}{n} \geqslant N_n^\alpha y) = 0$. \subsection{Case \texorpdfstring{$\beta < (1+\varepsilon)^{-1}$}{f}} Here, Theorem \ref{thm:nagaev_weak_array_mob_eps2} applies for $\alpha < (1+\varepsilon)^{-1}$. The notable fact is that the Gaussian range is extended: it now extends up to $\alpha < 1-\beta\varepsilon$.
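For concreteness, the rate functions $I_1$ (Transition 1) and $I_2$ (Transition 2) exhibited above are easy to evaluate numerically. Here is a minimal Python sketch, using the parameter values of the figures below ($q=1$, $\sigma^2=2$, $\varepsilon=1/2$, $c=1$); the function names and grid discretization are ours.
\begin{verbatim}
# Numerical sketch of the transition rate functions:
# I1(y) = inf_{0<=theta<=1} { q (theta y)^(1-eps) + ((1-theta) y)^2/(2 sigma^2) }
# I2(y) = q ( floor(y/c) c^(1-eps) + (y - floor(y/c) c)^(1-eps) )
import math

q, sigma2, eps, c = 1.0, 2.0, 0.5, 1.0

def I1(y, grid=10**5):
    f = lambda t: q * (t * y) ** (1 - eps) + ((1 - t) * y) ** 2 / (2 * sigma2)
    return min(f(j / grid) for j in range(grid + 1))

def I2(y):
    k = math.floor(y / c)
    return q * (k * c ** (1 - eps) + (y - k * c) ** (1 - eps))

for y in [0.5, 1.0, 2.0, 4.0]:
    print(y, I1(y), I2(y))
# small y: I1(y) ~ y^2/(2 sigma^2); large y: I1(y) ~ q y^(1-eps)
\end{verbatim}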
\paragraph{Gaussian range} When $\alpha < 1-\beta\varepsilon$, the proof of Theorem \ref{thm:nagaev_weak_array_mob_eps2} adapts and, for all $y \geqslant 0$, \[ \lim_{n \to \infty} \frac{1}{N_n^{2\alpha-1}} \log \mathbb{P}(\rva{T}{n} \geqslant N_n^{\alpha} y) = - \frac{y^2}{2\sigma^2} . \] As we said, the result for $\alpha < (1+\varepsilon)^{-1}$ is a consequence of Theorem \ref{thm:nagaev_weak_array_mob_eps2}. Now, suppose $\alpha \geqslant (1+\varepsilon)^{-1} > \beta$. We use the decomposition given by \eqref{eq:decomp3} and \eqref{eq:decomp4}. Lemma \ref{lem:pi0} works for $\alpha < 1-\beta\varepsilon$, with $\mathcal{L}(\rva{Y^<}{n}) = \mathcal{L}(\rva{Y}{n}\ |\ \rva{Y}{n} < (N_n^\beta c)^\varepsilon)$. Then, we choose $m_n = \lceil N_n^{\alpha-2\beta\varepsilon+\beta\varepsilon^2} 2 y c^{-\varepsilon} \rceil$. We obtain the equivalent of Lemma \ref{lem:pinm_gaussian}: \[ \limsup_{n \to \infty} \frac{1}{N_n^{\alpha-\beta\varepsilon}} \log \sum_{m=m_n+1}^{N_n} \binom{N_n}{m} \Pi_{n,m} \leqslant - q' y c^{-\varepsilon} . \] with $N_n^{\alpha-\beta\varepsilon} \gg N_n^{2\alpha-1}$. Finally, Lemmas \ref{lem:I1m_gaussian} and \ref{lem:I2m} adapt as well, with \begin{align*} A_{1,m}&=\enstq{(u_1,\dots,u_m)\in\intervalleff{(N_n^\beta c)^{\varepsilon}}{N_n^\beta c+2}^m}{\sum_{i=1}^m u_i \geqslant N_n^{\alpha} y },\\ A_{2,m}&=\enstq{(u_1,\dots,u_m)\in\intervalleff{(N_n^\beta c)^{\varepsilon}}{N_n^\beta c+2}^m}{\sum_{i=1}^m u_i < N_n^{\alpha} y} . \end{align*} \paragraph{Transition 3} When $\alpha = 1-\beta\varepsilon$, the proof of Theorem \ref{thm:nagaev_weak_array_mob_eps_intermediate} adapts and, for all $y \geqslant 0$, \begin{align*} \lim_{n \to \infty} \frac{1}{N_n^{1-2\beta\varepsilon}} \log \mathbb{P}(\rva{T}{n} \geqslant N_n^{1-\beta\varepsilon} y) = -I_3(y) &\mathrel{\mathop:}= - \inf_{0 \leqslant t \leqslant 1} \Bigl\{ q(1-t) y c^{-\varepsilon} + \frac{t^2 y^2}{2\sigma^2} \Bigr\} \\ & = - \begin{cases} \frac{y^2}{2\sigma^2} & \text{if $y \leqslant y_3$} \\ \frac{qy}{c^\varepsilon} - \frac{q^2\sigma^2}{2c^{2\varepsilon}} & \text{if $y > y_3$} \end{cases} \end{align*} with $y_3\mathrel{\mathop:}= q\sigma^2 c^{-\varepsilon}$. \paragraph{Truncated maximal jump range} When $1-\beta\varepsilon < \alpha < 1+\beta$ and $y \geqslant 0$, or $\alpha = 1+\beta$ and $y < c$, as before, the proof of Theorem \ref{thm:nagaev_weak_array_mob_eps} adapts and \[ \lim_{n \to \infty} \frac{1}{N_n^{\alpha-\beta\varepsilon}} \log \mathbb{P}(\rva{T}{n} \geqslant N_n^{\alpha} y) = - q y c^{-\varepsilon} . \] \paragraph{Trivial case} When $\alpha=\beta+1$ and $y \geqslant c$, or $\alpha > \beta+1$, we obviously have $\mathbb{P}(\rva{T}{n} \geqslant N_n^\alpha y) = 0$. \subsection{Case \texorpdfstring{$\beta = (1+\varepsilon)^{-1}$}{f}} \paragraph{Gaussian range} When $\alpha < (1+\varepsilon)^{-1} = \beta$, Theorem \ref{thm:nagaev_weak_array_mob_eps2} applies and, for all $y \geqslant 0$, \[ \lim_{n \to \infty} \frac{1}{N_n^{2\alpha-1}} \log \mathbb{P}(\rva{T}{n} \geqslant N_n^{\alpha} y) = - \frac{y^2}{2\sigma^2} . \] \paragraph{Transition $\mathbf{T_0}$} As in Section \ref{sec:main} after the statement of Theorem \ref{thm:nagaev_weak_array_mob_eps_intermediate}, we define $\theta(y)$ and $y_1$ for the function $f(\theta)=q\theta^{1-\varepsilon} y^{1-\varepsilon}+{(1-\theta)^2y^2/}{(2\sigma^2)}$. 
Define $\tilde{\theta}(y) \mathrel{\mathop:}= \mathbbm{1}_{y \geqslant y_1} \theta(y)$ and notice that $\tilde{\theta}$ is increasing on $\intervallefo{y_1}{\infty}$ (and $\tilde{\theta}(y) \to 1$ as $y \to \infty$). Set $c_0 \mathrel{\mathop:}= \tilde{\theta}(y_1)y_1 = (2\varepsilon q \sigma^2)^{1/(1+\varepsilon)}$. \hspace{0.5cm} \textbullet{} When $\alpha = (1+\varepsilon)^{-1} = \beta$ and $c \leqslant c_0$, then \[ \lim_{n \to \infty} \frac{1}{N_n^{(1-\varepsilon)/(1+\varepsilon)}} \log \mathbb{P}(\rva{T}{n} \geqslant N_n^{\alpha} y) = - q k_{0,1}(y) c^{1-\varepsilon} - \frac{(y - k_{0,1}(y)c)^2}{2 \sigma^2} \mathrel{=}: -I_{0,1}(y) \] where \[ k_{0,1}(y) \mathrel{\mathop:}= \max\left( \floor{\frac{y-y_{0,1}(c)}{c}} + 1, 0 \right) \quad \text{and} \quad y_{0,1}(c) \mathrel{\mathop:}= \frac{c}{2} + q \sigma^2 c^{-\varepsilon} \] ($y_{0,1}(c)$ is the unique solution in $y$ of $y^2-(y-c)^2 = 2\sigma^2qc^{1-\varepsilon}$). \hspace{0.5cm} \textbullet{} When $\alpha = (1+\varepsilon)^{-1} = \beta$ and $c \geqslant c_0$, then \[ \lim_{n \to \infty} \frac{1}{N_n^{(1-\varepsilon)/(1+\varepsilon)}} \log \mathbb{P}(\rva{T}{n} \geqslant N_n^{\alpha} y) = - q k_{0,2}(y) c^{1-\varepsilon} - I(y - k_{0,2}(y)c) \mathrel{=}: -I_{0,2}(y) \] where \[ k_{0,2}(y) \mathrel{\mathop:}= \max\left( \floor{\frac{y-y_{0,2}(c)}{c}} + 1, 0 \right) \quad \text{and} \quad y_{0,2}(c) \mathrel{\mathop:}= c + (1-\varepsilon)q\sigma^2c^{-\varepsilon} \] ($y_{0,2}(c)$ is the unique solution in $y$ of $\tilde{\theta}(y)y = c$). \medskip Remark: For all $c < c_0$, $y_{0,1}(c) > y_1$: so the Gaussian range in the non-truncated case (which stops at $y_1$) is extended. Moreover, $y_{0,1}(c_0) = y_1 = y_{0,2}(c_0)$, and, for $c = c_0$, $I_{0,1} = I_{0,2}$ (since $I_1(y) = y^2/(2\sigma^2)$ for $y \leqslant y_1$). \paragraph{Truncated maximal jump range} When $(1+\varepsilon)^{-1} = \beta < \alpha < \beta+1$ and $y \geqslant 0$, or $\alpha = 1+\beta$ and $y < c$, as before, the proof of Theorem \ref{thm:nagaev_weak_array_mob_eps} adapts and \[ \lim_{n \to \infty} \frac{1}{N_n^{\alpha-\beta\varepsilon}} \log \mathbb{P}(\rva{T}{n} \geqslant N_n^{\alpha} y) = - q y c^{-\varepsilon} . \] \paragraph{Trivial case} When $\alpha=\beta+1$ and $y \geqslant c$, or $\alpha > \beta+1$, we obviously have $\mathbb{P}(\rva{T}{n} \geqslant N_n^\alpha y) = 0$. \begin{center} \begin{figure} \includegraphics[scale=0.29]{fonction_taux_1.png} \caption{\label{fig:fonction_taux_1} Representation of the rate functions. Here, $q=1$, $\sigma^2=2$, $\varepsilon =1/2$, and $c=1$. Left - Gaussian range. The typical event corresponds to the case where all the random variables are small but their sum has a Gaussian contribution. Center - Maximal jump range. The typical event corresponds to the case where one random variable contributes to the total sum ($N_n^\alpha y$), regardless of the others. We recover the random variable's tail. Right - Truncated maximal jump range. The typical event corresponds to the case where $N_n^{\alpha-\beta}y/c$ variables take the saturation value $N_n^\beta c$, regardless of the others.} \end{figure} \end{center} \begin{center} \begin{figure} \includegraphics[scale=0.29]{fonction_taux_2.png} \caption{\label{fig:fonction_taux_2} Representation of the rate functions. Here, $q=1$, $\sigma^2=2$, $\varepsilon =1/2$, and $c=1$. Left - Transition 1. The typical event corresponds to the case where one random variable is large ($N_n^\alpha \theta(y) y$) and the sum of the others has a Gaussian contribution (two competing terms).
Center - Transition 2. The typical event corresponds to the case where $\floor{y/c}$ random variables take the saturation value $N_n^\beta c$ and one further variable completes the total sum. Right - Transition 3. The typical event corresponds to the case where some random variables (a number of order $N_n^{1-\beta(1+\varepsilon)}$) take the saturation value $N_n^\beta c$, and the sum of the others has a Gaussian contribution (two competing terms).} \end{figure} \end{center} \begin{center} \begin{figure} \includegraphics[scale=0.29]{fonction_taux_3.png} \caption{\label{fig:fonction_taux_3} Representation of the rate functions. Here, $q=1$, $\sigma^2=2$, $\varepsilon =1/2$, and $c=1$. Left - Transition $T_0$ for $c \leqslant c_0$. The typical event corresponds to the case where $k_{0,1}(y)$ variables take the saturation value $N_n^\beta c$, and the sum of the others has a Gaussian contribution. Right - Transition $T_0$ for $c \geqslant c_0$. The typical event corresponds to the case where $k_{0,2}(y)$ variables take the saturation value $N_n^\beta c$, one is also large ($N_n^\beta \theta(y-k_{0,2}(y)c) (y-k_{0,2}(y)c)$) and the sum of the others has a Gaussian contribution.} \end{figure} \end{center} \begin{center} \begin{figure} \begin{tikzpicture}[scale=1.5] \fill[color=gray](0,2)--(9,2)--(9,8/3)--(8/3,8/3)--(0,4)--cycle; \fill[color=gray!50](9,8/3)--(9,9)--(8/3,8/3)--cycle; \fill[color=gray!0](9,9)--(8/3,8/3)--(0,4)--(2,9)--cycle; \pattern[pattern=north west hatch, hatch distance=3mm, hatch thickness=.5pt] (0,4)--(0,9)--(5,9)--cycle; \node[draw,fill,color=white,text width=1.2cm,text height=0.7cm,rotate=45]at(1.4,7.2){}; \draw[line width=2pt] (8/3,8/3)--(9,9); \draw[line width=2pt] (0,4)--(8/3,8/3)--(9,8/3); \draw[line width=2pt] (0,4)--(5,9); \draw[line width=1pt,dashed] (0,8/3)--(8/3,8/3); \draw[line width=1pt,dashed] (8/3,2)--(8/3,8/3); \draw[line width=2pt,->](0,2)--(9.2,2); \draw[line width=2pt,->](0,2)--(0.0,9.2); \node(a)at(9,1.5){$\beta$}; \node(aa)at(0,1.7){$0$}; \node(b)at(-0.5,9){$\alpha$}; \node(bb)at(-0.5,2){\footnotesize{$1/2$}}; \node(d)at(8/3+0.2,1.5){\footnotesize{$(1+\varepsilon)^{-1}$}}; \node(e)at(-0.8,8/3){\footnotesize{$(1+\varepsilon)^{-1}$}}; \node(f)at(-0.5,4){$1$}; \node[rotate=45]at(8.5,8.2){\footnotesize{$\alpha=\beta$}}; \node[rotate=45]at(4.5,8.2){\footnotesize{$\alpha=\beta+1$}}; \node at(8.2,2.5){\footnotesize{$\alpha=(1+\varepsilon)^{-1}$}}; \node[rotate=-27]at(1.2,3.2){\footnotesize{$\alpha=1-\beta\varepsilon$}}; \node(g)at(1.4,2.30){\footnotesize{Gaussian}}; \node(gg)at(6,2.30){\footnotesize{$y^2/(2\sigma^2)$}}; \node(h)[rotate=45]at(1.6,7.1){\footnotesize{$\infty$}}; \node(hh)[rotate=45]at(1.3,7.3){\footnotesize{Trivial}}; \node(i)[rotate=45]at(6.8,4.5){\footnotesize{$qy^{1-\varepsilon}$}}; \node(ii)[rotate=45]at(6.6,4.8){\footnotesize{Maximal Jump}}; \node(f)[rotate=45]at(4.8,6.5){\footnotesize{$qyc^{-\varepsilon}$}}; \node(ff)[rotate=45]at(4.5,6.6){\footnotesize{Truncated Maximal Jump}}; \draw[line width=1pt,->](8/3,3.4)--(8/3,2.8); \node(f)at(8/3,3.7){$T_{0}$}; \node(g)at(5.2,8/3+0.2){\footnotesize{Transition 1 \hspace{1cm} $I_1(y)$}}; \node(h)[rotate=45]at(5.5,5.8){\footnotesize{Transition 2 \hspace{0.5cm} $I_2(y)$}}; \node(j)[rotate=-27]at(1.4,3.5){\footnotesize{Transition 3} \hspace{0.2cm} $I_3(y)$}; \end{tikzpicture} \caption{\label{graph:alpha_beta} Rate function transition diagram.} \end{figure} \end{center} \newpage \bibliographystyle{abbrv}
\section{Introduction} \label{sec:introduction} The Cachazo-He-Yuan (CHY) formulae provide remarkable tree-level expressions for scattering amplitudes in theories of massless particles, written as an integral over marked points on the Riemann sphere. The integral localises as a sum over the solutions to the scattering equations \cite{Cachazo:2013hca}. This formalism generalizes earlier work of Roiban, Spradlin and Volovich \cite{Roiban:2004yf} based on Witten's twistor string theory \cite{Witten:2003nn}. The CHY formulae themselves originate in ambitwistor string theory~\cite{Mason:2013sva}: this provided a loop-level formulation~\cite{Adamo:2013tsa,Ohmori:2015sha} giving new formulae at genus one (torus)~\cite{Adamo:2013tsa,Casali:2014hfa} and two~\cite{Adamo:2015hoa} for type II supergravities in 10 dimensions. In~\cite{Geyer:2015bja,Geyer:2015jch}, we showed how the torus formulae reduce to formulae on a nodal Riemann sphere, by means of integration by parts in the moduli space of the torus. The node carries the loop momentum. We proposed that an analogous reduction was possible at any genus, leading to a new formalism that could become a practical tool in the computation of scattering amplitudes. In the one-loop case, our explicit analysis provided a proof that the formulae from ambitwistor strings reproduce the correct answer. Furthermore, on the nodal Riemann sphere, the formalism is more flexible than on the torus, and the formulae could be extended to a variety of theories with or without supersymmetry. An alternative approach to the one-loop scattering equations was pursued in \cite{Cardona:2016bpi,Cardona:2016wcr}. However, one loop is not such a stringent test of the framework, as many difficulties arise only at higher loops. The Feynman tree theorem, for example, shows how to construct one-loop integrands from tree formulae, if massive legs are allowed, and massive legs had already been considered in this context \cite{Naculich:2014naa}; an example of our formulae has been reproduced following such an approach \cite{Cachazo:2015aol}. However, the situation is more difficult at higher loops despite recent progress inspired by the tree theorem \cite{Baadsgaard:2015twa}. In \cite{Geyer:2015bja}, we gave a brief sketch as to how the loop-level scattering equations are obtained by reduction to the nodal Riemann sphere. In this Letter, we give a precise formulation at two loops. To fix the details of the reduction to the sphere, we use a factorization argument that leads to new off-shell scattering equations. An alternative approach \cite{Feng:2016nrf} applies higher-dimensional tree-level rules for the integration of the scattering equations to give diagrams for a scalar theory; however, our aim here is to give a framework that yields loop integrands on a nodal Riemann sphere for complete amplitudes. With this, we adapt genus-two supergravity integrands (type II, $d=10$) to a doubly nodal sphere, leading to the correct integrand for the four-point amplitude in maximal supergravity. We then conjecture an adjustment that gives instead a super-Yang-Mills integrand. These are checked both by factorization and numerically. Non-supersymmetric integrands require certain degenerate solutions to the scattering equations (on which the supersymmetric integrands vanish). We characterize these degenerate solutions here, but leave the subtler non-supersymmetric integrands for the future. 
\section{From higher genus to the sphere} \label{sec:twoloopequations} The higher genus scattering equations were formulated in the ambitwistor-string framework on Riemann surfaces $\Sigma_g$ of genus $g$~\cite{Adamo:2013tsa,Ohmori:2015sha}, in terms of a meromorphic 1-form $P^\mu$, $\mu=1,\ldots,d$ (the momentum of the string) that solves \begin{equation} \bar\partial P=\sum_{i=1}^n k_i \,\delta^2(z-z_i) \,d z\wedge d\bar z, \label{eq:X-path-int} \end{equation} where $z_i$ are $n$ marked points on $\Sigma_g$. The solution is written as \begin{equation} P= \sum_{i=1}^n k_i\, \omega_{z_i,z_0}^{(g)}(z) + \sum_{r=1}^g\ell_r \omega_r^{(g)}\,, \label{eq:P-loop} \end{equation} where $\omega_r^{(g)}$, $r=1,\ldots,g$ span a basis of holomorphic 1-forms on $\Sigma_g$ dual to a choice of a-cycles $a_r$, and $\omega_{z_i,z_0}^{(g)}(z)$ are meromorphic differentials with simple poles of residues $\pm1$ at $z_i$ and $z_0$ and vanishing $a$-cycle integrals. The dependence on the auxiliary point $z_0$ drops out by momentum conservation. The $\ell_r\in \mathbb C^d$ parametrize the zero-modes of $P$ and will play the role of the loop momenta. The genus-$g$ scattering equations are a minimal set of conditions on the $z_i$ and moduli of $\Sigma_g$ required for $P^2$ to vanish globally. They include $n$ conditions \begin{equation} k_i\cdot P(z_i)=0 \, , \qquad i=1,\ldots , n\label{eq:P-res} \end{equation} that set the residues of the simple poles of $P^2$ at the $z_i$ to zero. These fix the locations of $z_i$ on the surface. Once these are imposed, the quadratic differential $P^2$ is holomorphic, so it has $3g-3$ further degrees of freedom, corresponding to the moduli of the surface (its shape). We therefore impose another $3g-3$ scattering equations to reach $P^2=0$. This can be done by writing \begin{equation} P^2= \sum_{r,s=1}^g u_{rs}\omega_r^{(g)}\omega^{(g)}_s\,, \label{Psquare} \end{equation} where the $u_{rs}$ only depend on the moduli of $\Sigma_g$ and on the kinematics, and setting an independent $3g-3$ subset of the $u_{rs}$ to zero. In total, the scattering equations localise the full moduli space integral to a discrete set of points. For $g=2,3$, there are precisely $3g-3$ $u_{rs}$'s, thus we simply set them all to zero and the ambitwistor-string loop integrand reads \begin{equation} \mathcal M_n^{(g)}=\int_{\mathcal{M}_{g,n}} \mathcal I \;\mathrm d^{3g-3} \mu \prod_{r\leq s} \bar{\delta}(u_{rs}) \prod_{i=1}^n \mathrm d z_i \, \bar{\delta}(k_i\cdot P(z_i))\label{eq:higher-genus} \,, \end{equation} where $\mathcal{I}$ is a correlator depending on the theory and the holomorphic $\delta$ functions are $2\pi i \bar\delta(f(z))= \bar\partial({1}/f)$. We stress that this is a formula for the loop integrand, and the $n$-point $g$-loop amplitude is $\int d^D\ell_1\cdots d^D\ell_g\, {\mathcal M_n^{(g)}}$. To reduce this expression to one on nodal Riemann spheres, the heuristic described in \cite{Geyer:2015bja}, based on the explicit genus-one calculation, was to integrate by parts (or use residue theorems) in the moduli integral $g$ times. This relaxes the delta functions $\bar\partial (1/u_{rr})$ to give measure factors $\prod_r 1/u_{rr}$, with the integration by parts yielding residues at the boundary of moduli space where all the chosen $a$-cycles contract to give double points, leaving a Riemann sphere with $g$ pairs of double points.
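As an aside, it may help to see the residue conditions \eqref{eq:P-res} at work in the simplest, genus-zero setting, where $P = \sum_i k_i\, \mathrm{d}z/(z-z_i)$ and \eqref{eq:P-res} become the tree-level scattering equations. A minimal Python sketch for $n=4$ follows; the gauge fixing $(z_1,z_2,z_4)=(0,1,\infty)$ is the standard one, and the Mandelstam values are illustrative numbers of ours.
\begin{verbatim}
# Genus-zero sketch of (eq:P-res): for n = 4, after fixing
# (z1, z2, z4) = (0, 1, oo), the single remaining scattering
# equation has the closed-form solution z3 = -s13/s12.
s12, s13 = 3.0, -1.2          # illustrative; s14 = -s12 - s13
z1, z2 = 0.0, 1.0
z3 = -s13 / s12               # candidate solution (z4 -> infinity)

# E_1 = sum_{j != 1} k1.kj / (z1 - zj), with k_i.k_j = s_ij/2;
# the j = 4 term drops out as z4 -> infinity
E1 = (s12 / 2) / (z1 - z2) + (s13 / 2) / (z1 - z3)
print(E1)                     # ~ 0 up to floating point
\end{verbatim}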
After this degeneration, $2g-3$ moduli remain, which can be identified with the moduli of $2g$ points on the Riemann sphere, corresponding to the $g$ nodes, modulo M\"obius transformations. These moduli are fixed by $2g-3$ remaining scattering equations. On the nodal Riemann sphere $\Sigma$, the basis of 1-forms descending from the $\omega_r^{(g)}$'s dual to the pinched a-cycles, given by the pairs of double points $\sigma_{r^\pm}$, is \begin{equation} \omega_r=\frac{(\sigma_{r^+}-\sigma_{r^-}) d\sigma}{(\sigma-\sigma_{r^+})(\sigma-\sigma_{r^-})}\,, \quad r=1,\ldots,g \label{eq:omega-sphere} \end{equation} so now \begin{equation} P=d\sigma\sum_{i=1}^n \frac{k_i}{\sigma-\sigma_i} +\sum_{r=1}^g \ell_r\omega_r\, .\label{eq:P-sphere} \end{equation} From \eqref{Psquare} the coefficient of the double poles at $\sigma_{r^\pm}$ identifies $u_{rr}$ as $u_{rr}=\ell_r^2$. Thus the measure factor becomes $\prod_r 1/\ell_r^2$. Furthermore, the quadratic differential \begin{equation} \mathcal{S}_g=P^2-\sum_{r=1}^g \ell_r^2 \omega_r^2\label{eq:S-def} \end{equation} now only has simple poles at the $\sigma_i$ and $\sigma_{r^\pm}$. The $n+2g$ off-shell scattering equations were then proposed in \cite{Geyer:2015bja} to be $\mathrm{Res}_{\sigma_A}\mathcal{S}_g=0$. There are three relations between these equations so that only $n+2g-3$ of them need to be imposed to enforce $\mathcal{S}_g=0$. The three relations follow from the vanishing of the sum of residues of $\mathcal{S}_g$ multiplied by three independent tangent vectors to the sphere. There is an ambiguity at two loops and higher, however. We could equally well have defined \begin{equation} \widetilde{\mathcal{S}}_g=P^2-\sum_{r=1}^g \ell_r^2 \omega_r^2 + \sum_{r<s} a_{rs} \omega_r\omega_s\label{eq:S-ambig} \,, \end{equation} where the $a_{rs}$ are linear combinations of the $u_{rs}$. We will see that $a_{rs}=\alpha(u_{rr}+u_{ss})=\alpha (\ell^2_r+\ell_s^2)$ is a better choice where $\alpha=\pm 1$ at two loops. This does not change the heuristic argument as it corresponds to replacing the original $u_{rs}=0$ scattering equations for the moduli by nondegenerate linear combinations thereof. The choice $\alpha=1$ (or equivalently $-1$) at two loops will be forced upon us by requiring correct factorisation channels. Thus our formula on the nodal Riemann sphere is \begin{equation} {\mathcal M}_n^{(g)}=\frac{1}{\prod_{r=1}^g \ell_r^2}\int_{\mathcal{M}_{0,n+2g}} \hspace{-.2cm} \mathcal I_0 \frac{d^{n+2g}\sigma}{\mathrm{vol}\, SL(2,\mathbb{C})}\prod_{A=1}^{n+2g}\hspace{-.1cm}{}^\prime\, \bar\delta(E_A) \,, \label{eq:nodal-final} \end{equation} where $E_A = \mathrm{Res}_{\sigma_A}\widetilde{\mathcal{S}}_g$, with the index $A$ spanning the $n$ marked points and the $2g$ double points. The delta functions enforce the off-shell scattering equations~\footnote{with the $'$ on the product denoting the omission of three of the delta functions in line with the $SL(2,\mathbb{C})$ quotient and the three relations between the $E_A$}. In the first instance, $\mathcal{I}_0$ will be taken to be the nodal limit of the higher-genus worldsheet correlator from the ambitwistor string for type II supergravity in $d=10$ (together with a cross ratio motivated by factorization). This can be extended to theories for which no higher-genus expression is known, as we will demonstrate explicitly for super-Yang-Mills theory. \section{The 2-loop scattering equations} We now take $g=2$ with $\sigma_{1^\pm}$ and $\sigma_{2^\pm}$ the double points corresponding to $\ell_1$ and $\ell_2$.
The two-loop scattering equations are the vanishing of the residues of \begin{equation} \mathcal{S}:=P^2- \ell_1^2 \omega_1^2 -\ell_2^2 \omega_2^2+ \alpha(\ell^2_1+\ell_2^2) \omega_1\omega_2\, . \label{eq:S2} \end{equation} We adopt the shorthand notation $(AB)=\sigma_{A}-\sigma_{B}$. The scattering equations $2E_A= \mathrm{Res}_{\sigma_A} \mathcal{S}$ are then given by \begin{align}\label{eq:SE-2loop} \pm E_{1^\pm} & = \frac{1}{2}\,\frac{L\,(2^+2^-)}{(1^\pm2^+)(1^\pm2^-)} + \sum_{i} \frac{\ell_1\cdot k_i}{(1^\pm i)} \,, \nonumber\\ \pm E_{2^\pm} & = \frac{1}{2} \, \frac{L\,(1^+1^-)}{(2^\pm 1^+)(2^\pm 1^-)} + \sum_{i} \frac{\ell_2\cdot k_i}{(2^\pm i)} \,,\\ E_i & = \frac{k_i\cdot\ell_1(1^+1^-)}{(i 1^+)(i 1^-)} + \frac{k_i\cdot\ell_2 (2^+2^-)}{(i2^+)(i2^-)} + \sum_{j\neq i} \frac{k_i\cdot k_j}{(ij)}\,,\nonumber \end{align} where $L=\alpha(\ell_1^2+\ell_2^2)+2\ell_1\cdot\ell_2$. In particular, for $\alpha=\pm 1$, $L=\pm(\ell_1\pm\ell_2)^2$. The equations are not independent, since there are three linear relations between them, \begin{equation} \sum_A E_A=0 \,, \quad \sum_A \sigma_A E_A=0 \,, \quad \sum_A \sigma_A^2 E_A=0 \,. \end{equation} We will see that $\alpha=\pm1$ follows from the correct factorisation. \subsection{Poles and factorization} Factorization channels of the integrand are related by the scattering equations to the boundary of the moduli space of the Riemann surface, where a subset of the marked points coalesce. Conformally, these configurations are equivalent to keeping the marked points at a finite distance, but pinching them off on another sphere, connected to the original one at the coalescence point $\sigma_I$. When $\Sigma$ degenerates in this way, the scattering equations force a kinematic configuration where an intermediate momentum goes on-shell \cite{Dolan:2013isa}, corresponding to a potential pole in the integrand. The pole can thus be calculated as $\sigma_A\rightarrow \sigma_I$ for $A\in I$ from \begin{equation}\label{eq:SE-poles} \sum_{A\in I} (\sigma_A-\sigma_I)E_A=0\,. \end{equation} Note however that whether this singularity is realized in a specific theory depends on the integrand $\mathcal{I}_0$. Let $K_I=\sum_{i\in I} k_i$ (with external particles only). The location of the singularities in terms of the external and loop momenta can then be characterized as follows: \begin{itemize} \item When $\sigma_{1^{\pm}},\sigma_{2^{\pm}}\notin I$, \eqref{eq:SE-poles} simply gives $K_I^2=0$, the standard factorization channel as for tree amplitudes, where a pole can appear in some intermediate propagator in massless scattering. \item When $\sigma_{1^\pm}\in I$, but $\sigma_{2^{\pm}}\notin I$, we find the (potential) pole $2\ell_1\cdot K_I \pm K_I^2$ as at one loop \cite{Geyer:2015bja}, where such poles arise from certain partial fraction relations and shifts in the loop momenta, and coincide with the `Q-cut' poles \cite{Baadsgaard:2015twa}. \item The crucial new configuration at two loops is given by $\sigma_{1^+}, \sigma_{2^\pm}\in I$, corresponding to the condition \begin{equation} L + 2(\ell_1\pm \ell_2)\cdot K_I + K_I^2=0 \,, \end{equation} with $L$ as above. As detailed in \cite{Baadsgaard:2015twa}, the partial fraction identities and shifts always give a quadratic propagator of the form $(\ell_1\pm \ell_2+K_I)^2$ at two loops; see also Appendix~\ref{sec:shift-integr-plan}. Therefore, requiring the correct behaviour under factorisation determines $\pm=+$ and $\alpha=1$, or $\pm=-$ and $\alpha=-1$.
While both options are fully equivalent up to reparametrisation of the loop momenta, we will choose the former for the rest of the paper. For $\sigma_{1^+}, \sigma_{2^+}\in I$, this choice leads to a potential pole at $(\ell_1+ \ell_2+K_I)^2$. However, for $\sigma_{1^+}, \sigma_{2^-}\in I$, we are left with an unphysical potential pole at $(\ell_1+ \ell_2)^2+2(\ell_1-\ell_2)\cdot K_I +K_I^2=0$. The requirement that this pole is absent from the final answer will give important restrictions on the integrand $\mathcal{I}_0$. \item Let us briefly comment on the only other new scenario at two loops: having both $\sigma_{1^\pm}\in I$. Since their contributions cancel in \eqref{eq:SE-poles}, this just leads to $K_I^2=0$, although now associated with two one-loop diagrams joined by an on-shell propagator. \end{itemize} The criterion for an integrand $\mathcal{I}_0$ to give a simple kinematic pole in the final formula at one of these potential singularities is that $\mathcal{I}_0$ should have a pole of order $2|I|-2$ as the marked points coalesce, as described in detail in \S4.1 of \cite{Geyer:2015jch}. If the pole has lower degree, the final formula will not have a factorisation pole in this channel. This gives an important criterion for determining the precise forms of possible integrands $\mathcal{I}_0$. \subsection{Degenerate and regular solutions} The off-shell scattering equations $\pm E_{r^\pm}(\sigma_{r^\pm})=0$ associated with the node $r$ have the same functional form, when seen as functions of $\sigma=\sigma_{r^+}$ and $\sigma_{r^-}$ respectively. We distinguish between `regular solutions', when $\sigma_{r^{\pm}}$ are different roots of ${E}_{r^\pm}(\sigma)=0$, and `degenerate' solutions with $\sigma_{r^{+}}=\sigma_{r^-}$ the same root, for generic momenta. These degenerate solutions can be summarized by the factorization diagrams given in figure \ref{fig:degen}, and can be understood as forward limits of the ($d+g$ dimensional) tree-level scattering equations; see Appendix \ref{sec:degen-sols} for details and \cite{He:2015yua,Cachazo:2015aol} for a discussion at one loop. \begin{figure}[ht] \centering \input{degen-pics.tex} \caption{Different possible worldsheet degenerations.} \label{fig:degen} \end{figure} However, not all solutions of the $d+2$ dimensional scattering equations survive in the forward limit. Consider a degeneration parameter $\tau$ which vanishes in the forward limit. While there are degenerate solutions with $\sigma_{r^+r^-}\sim\tau^2$, the zero-locus of~\eqref{eq:SE-2loop} excludes them, and thus the two-loop integrands localise on the degenerate solutions with $\sigma_{r^+r^-}\sim\tau$ and on \begin{equation} \label{eq:nb-sols-2loops} N_{\text{reg}}=(n+1)!-4n!+4(n-1)!+6(n-3)! \end{equation} regular solutions (with $\sigma_{r^+r^-}\sim1$). Moreover, we shall see that the supersymmetric integrands only receive contributions from the regular solutions. As an important consequence, unphysical poles in the form of Gram determinants arising from double roots in $E_{r^\pm}(\sigma)$ are absent for supersymmetric theories: as discussed in \cite{Cachazo:2015aol} and appendix \ref{sec:degen-sols}, these poles can be localised on the degenerate solutions. For non-supersymmetric theories, however, degenerate solutions may contribute, and one must check that contributions with unphysical poles vanish upon loop integration, as detailed in \cite{Cachazo:2015aol} at one loop.
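As a numerical consistency check on \eqref{eq:SE-2loop}, the three linear relations among the $E_A$ can be verified directly for random kinematics. The following Python sketch does this for $n=4$ with the choice $\alpha=1$, so that $L=(\ell_1+\ell_2)^2$; all numerical values (external momenta, loop momenta, marked points) are illustrative placeholders of ours.
\begin{verbatim}
# Check sum_A sigma_A^m E_A = 0 (m = 0,1,2) for the equations (eq:SE-2loop).
import random
from math import sin, cos
random.seed(1)

def dot(p, q):                      # mostly-minus Minkowski product
    return p[0]*q[0] - sum(a*b for a, b in zip(p[1:], q[1:]))

E0, th = 1.0, 0.7                   # massless 2 -> 2 external kinematics
k = [[E0, 0, 0, E0], [E0, 0, 0, -E0],
     [-E0, E0*sin(th), 0, E0*cos(th)], [-E0, -E0*sin(th), 0, -E0*cos(th)]]
l1 = [random.uniform(-1, 1) for _ in range(4)]
l2 = [random.uniform(-1, 1) for _ in range(4)]
L = dot(l1, l1) + dot(l2, l2) + 2*dot(l1, l2)     # alpha = +1: L = (l1+l2)^2

sig = {a: random.uniform(-2, 2) for a in ['1+', '1-', '2+', '2-', 0, 1, 2, 3]}

def E(a):
    p = sig[a]
    if isinstance(a, str):          # node points 1+, 1-, 2+, 2-
        r, sgn = a[0], (1 if a[1] == '+' else -1)
        l, o = (l1, '2') if r == '1' else (l2, '1')
        val = 0.5*L*(sig[o+'+'] - sig[o+'-']) \
              / ((p - sig[o+'+'])*(p - sig[o+'-'])) \
              + sum(dot(l, k[i])/(p - sig[i]) for i in range(4))
        return sgn*val
    return dot(k[a], l1)*(sig['1+']-sig['1-'])/((p-sig['1+'])*(p-sig['1-'])) \
         + dot(k[a], l2)*(sig['2+']-sig['2-'])/((p-sig['2+'])*(p-sig['2-'])) \
         + sum(dot(k[a], k[j])/(p - sig[j]) for j in range(4) if j != a)

for m in range(3):
    print(sum(sig[a]**m * E(a) for a in sig))     # each ~ 1e-15
\end{verbatim}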
\section{Supersymmetric two-loop amplitudes} \label{sec:genus-two} We now consider explicit expressions at four points for maximal supergravity and super-Yang-Mills. These expressions are examples of \eqref{eq:nodal-final} for $g=2$, $n=4$. The representation of loop integrands that arises can be connected to a standard Feynman-like representation, after use of partial fractions and shifts in the loop momenta as in \cite{Geyer:2015bja} at one-loop; see Appendix~\ref{sec:shift-integr-plan}. \subsection{Four-point supergravity integrand} We use the integrand that arises directly from the degeneration of the genus-two (ambitwistor) string \cite{Adamo:2015hoa,D'Hoker:2002gw,Berkovits:2005df,Berkovits:2005ng}. Define \begin{equation} \Delta_{i,j}=\omega_1 (\sigma_i) \,\omega_2(\sigma_j)-\omega_1 (\sigma_j)\, \omega_2(\sigma_i) \label{eq:Delta-def} \end{equation} and \begin{equation} {\hat{\mathcal Y}} = \frac{(k_1-k_2)\cdot (k_3-k_4)\, \Delta_{1,2}\Delta_{3,4} + \text{cyc(234)}}{3(1^+2^+)(1^+2^-)(1^-2^+)(1^-2^-)}, \label{eq:Yhat-def} \end{equation} where $\text{cyc(234)}$ is a sum over cyclic permutations. Our prescription for the four-point supergravity integrand is \begin{equation} \label{eq:4ptsugraSE} {\mathcal I}_0^{\text{SUGRA}} = ({\mathcal{K}}\tilde {\mathcal{K}}) \;{\hat{\mathcal Y}^2}\; \frac{(1^+2^-)(1^-2^+)}{(1^+1^-)(2^+2^-)}, \end{equation} where ${\mathcal{K}}\tilde {\mathcal{K}}$ is the standard kinematical supersymmetry prefactor ${\mathcal{K}}=(k_1\cdot k_2)(k_2\cdot k_3)A_{\text{tree}}^{\text{SYM}}(1,2,3,4)$ and ${\tilde{ \mathcal{K}}}=(k_1\cdot k_2)(k_2\cdot k_3){\tilde A}_{\text{tree}}^{\text{SYM}}(1,2,3,4)$, so that the supergravity states in the scattering are the direct product of super-Yang-Mills untilded ({\it left}) and tilded ({\it right}) states. The cross-ratio is inserted by hand to remove poles in the unphysical factorization channels discussed in the previous section. This is an important aspect of our prescription. We set the relative sign between $\ell_1$ and $\ell_2$ to be $+$, consistently with the $(\ell_1+\ell_2)^2$ factors in the scattering equations. This choice implies that the degenerations of the worldsheet at $(1^+2^+)\rightarrow 0$ or $(1^-2^-)\rightarrow 0$ occur when $(\ell_1+\ell_2\pm K_I)^2\rightarrow 0$, where $K_I$ is a partial sum of the external momenta. These physical poles can be realized in the formula. However, the numerator in the cross ratio suppresses unphysical poles of the type $(\ell_1+\ell_2)^2\pm 2(\ell_1-\ell_2)\cdot K_I+K_I^2 \rightarrow 0$, which might have arisen when $(1^+2^-)\rightarrow 0$ or $(1^-2^+)\rightarrow 0$. We have evaluated our formula numerically and checked that it matches the known result for this amplitude \cite{Bern:1998ug}, \begin{align} \label{eq:2loopsugraint} \frac{\mathcal{M}_4^{(2)}}{{\mathcal{K}}\tilde {\mathcal{K}}} =(k_1\cdot k_2)^2 \big[& I^{\text{planar}}_{12,34} + I^{\text{planar}}_{34,21} + I^{\text{non-planar}}_{1,2,34} \nonumber \\ & + I^{\text{non-planar}}_{3,4,21} \big] + \text{cyc(234)}. \end{align} The planar and non-planar double-box integrands are written down in the ``shifted'' representation in Appendix~\ref{sec:shift-integr-plan}. It is also possible to check the factorization of this formula explicitly. 
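The ingredients of \eqref{eq:Delta-def}--\eqref{eq:4ptsugraSE} are straightforward to set up numerically. The Python sketch below evaluates the $\Delta_{i,j}$, the cross-ratio, and the structure of $\hat{\mathcal Y}$ at sample marked points (not a solution of the scattering equations), and checks the two-by-two-minor (Pl\"ucker) identity obeyed by the $\Delta_{i,j}$; the kinematic dot products are placeholders of ours.
\begin{verbatim}
# Sketch of Delta_{i,j}, Yhat and the cross-ratio at sample points.
import random
random.seed(2)

sig = {i: random.uniform(-2, 2)
       for i in [1, 2, 3, 4, '1+', '1-', '2+', '2-']}

def w(r, z):            # omega_r / dsigma, r = 1, 2
    p, m = sig[str(r) + '+'], sig[str(r) + '-']
    return (p - m) / ((z - p) * (z - m))

def Delta(i, j):
    return w(1, sig[i])*w(2, sig[j]) - w(1, sig[j])*w(2, sig[i])

# Pluecker identity for the 2x2 minors Delta_{i,j}:
print(Delta(1,2)*Delta(3,4) - Delta(1,3)*Delta(2,4) + Delta(1,4)*Delta(2,3))

t1, t2, t3 = 1.0, -0.5, -0.5   # placeholders for (k1-k2).(k3-k4) + cyc(234)
den = 3*(sig['1+']-sig['2+'])*(sig['1+']-sig['2-']) \
       *(sig['1-']-sig['2+'])*(sig['1-']-sig['2-'])
Yhat = (t1*Delta(1,2)*Delta(3,4) + t2*Delta(1,3)*Delta(4,2)
        + t3*Delta(1,4)*Delta(2,3)) / den
xratio = (sig['1+']-sig['2-'])*(sig['1-']-sig['2+']) \
       / ((sig['1+']-sig['1-'])*(sig['2+']-sig['2-']))
print(Yhat, xratio)
\end{verbatim}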
\subsection{Four-point super-Yang-Mills integrand} There is no fully well-defined ambitwistor model that would give a first-principles derivation of a super-Yang-Mills integrand; however, the tree and one-loop results motivated us to postulate the following expression \begin{equation} \label{eq:4ptsymSE} {\mathcal I}_0^{\text{SYM}} = {\mathcal{K}} \;{\hat{\mathcal Y}}\; {\mathcal I}_{\text{PT}(2)}, \end{equation} where ${\mathcal{K}}$ is again the standard kinematical supersymmetry prefactor. The new and crucial ingredient is the extension to two loops of the Parke-Taylor factor, ${\mathcal I}_{\text{PT}(2)}$. In~\cite{Geyer:2015bja}, we presented the analogous object at one loop. We will comment later on the general form, and first focus on the explicit formula for the four-point two-loop case, including the non-planar (NP) contributions, \begin{align} \label{eq:PT2loop} {\mathcal I}_{\text{PT}(2)} =& \big[N_c^2 \,C^{\text{P}}_{1234}+ C^{\text{NP},1}_{1234}\big] \big[ \text{tr}(1234)+\text{tr}(4321)\big] \nonumber \\ & + N_c\,C^{\text{NP},2}_{12,34} \,\text{tr}(12)\text{tr}(34) + \text{cyc(234)}, \end{align} where $N_c$ is the number of colours and $\text{tr}(12\cdots)\equiv\text{tr}(T^{a_1}T^{a_2}\cdots)$ denote the colour traces. We have, for the planar part \begin{align} C^{\text{P}}_{1234} =& c^{\text{P}}_{1234}(1^+,1^-,2^+,2^-) +c^{\text{P}}_{1234}(1^-,1^+,2^-,2^+) \nonumber \\ & + c^{\text{P}}_{1234}(2^+,2^-,1^+,1^-) +c^{\text{P}}_{1234}(2^-,2^+,1^-,1^+), \nonumber \end{align} with \begin{align} c^{\text{P}}_{1234}(a,b,c,d) &= \frac{1}{(abdc1234)} + \frac{1}{(ab1dc234)} \nonumber \\ &\hspace{-.2cm} + \frac{1/2}{(ab12dc34)} + \frac{1}{(acdb1234)} + \text{cyc(1234)} \nonumber, \end{align} where $(123\cdots m)$ stands for $(12)(23)\cdots (m1)$. For the double-trace contribution, we have \begin{align} C^{\text{NP},2}_{12,34} =& c^{\text{NP}}_{12,34}(1^+,1^-,2^+,2^-) +c^{\text{NP}}_{12,34}(1^-,1^+,2^-,2^+) \nonumber \\ & + c^{\text{NP}}_{12,34}(2^+,2^-,1^+,1^-) +c^{\text{NP}}_{12,34}(2^-,2^+,1^-,1^+), \nonumber \end{align} with \begin{align} c^{\text{NP}}_{12,34}(a,b,c,d) =& \frac{1/2}{(ac12db34)} + \frac{2}{(acd12b34)} \nonumber \\ & + \frac{1}{(a1cd2b34)} + \text{perm(12,34)} \nonumber, \end{align} where $\text{perm(12,34)}$ denotes the eight permutations $(1\leftrightarrow2)$, $(3\leftrightarrow4)$ and $(12)\leftrightarrow(34)$. The remaining contribution is determined by the ones already given, as seen in~\cite{Naculich:2011ep}, \begin{equation} C^{\text{NP},1}_{1234} = 2 \big[C^{\text{P}}_{1234}+C^{\text{P}}_{1342}+C^{\text{P}}_{1423}\big]-C^{\text{NP},2}_{13,24}. \end{equation} We checked numerically that our proposal matches the known result of~\cite{Bern:1997nh}. For instance, using the colour decomposition of eq.~\eqref{eq:PT2loop}, we get for the planar part \begin{align} \label{eq:2loopsymint} \frac{\mathcal{M}^{(2)\text{P}}_{1234}}{{\mathcal{K}}} =k_1\cdot k_2\; I^{\text{planar}}_{12,34} +k_4\cdot k_1\; I^{\text{planar}}_{41,23}. \end{align} The two-loop Parke-Taylor formula ${\mathcal I}_{\text{PT}(2)}$ is non-trivial, and may seem hard to extend to higher multiplicity or loop order. We propose, however, that in general it can be computed from the correlator of a current algebra on the Riemann sphere, which was our procedure at four points.
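As a concrete illustration of the cyclic notation entering these formulae, the Python sketch below evaluates the product $(a_1 a_2 \cdots a_m) = (a_1 a_2)(a_2 a_3)\cdots(a_m a_1)$ and the planar factor $c^{\text{P}}_{1234}$, including its $\text{cyc(1234)}$ images, at sample marked-point values (illustrative, not a solution of the scattering equations).
\begin{verbatim}
def cyc_prod(labels, sig):     # (a1 a2 ... am) = (a1 a2)(a2 a3)...(am a1)
    prod = 1.0
    for x, y in zip(labels, labels[1:] + labels[:1]):
        prod *= sig[x] - sig[y]
    return prod

def cP(a, b, c, d, sig):       # c^P_{1234}(a,b,c,d), including + cyc(1234)
    tot, ext = 0.0, [1, 2, 3, 4]
    for s in range(4):
        e1, e2, e3, e4 = ext[s:] + ext[:s]
        tot += 1.0/cyc_prod([a, b, d, c, e1, e2, e3, e4], sig) \
             + 1.0/cyc_prod([a, b, e1, d, c, e2, e3, e4], sig) \
             + 0.5/cyc_prod([a, b, e1, e2, d, c, e3, e4], sig) \
             + 1.0/cyc_prod([a, c, d, b, e1, e2, e3, e4], sig)
    return tot

sig = {'1+': 0.3, '1-': -1.1, '2+': 2.0, '2-': 0.9,
       1: -0.4, 2: 1.4, 3: -2.2, 4: 3.1}
CP = cP('1+', '1-', '2+', '2-', sig) + cP('1-', '1+', '2-', '2+', sig) \
   + cP('2+', '2-', '1+', '1-', sig) + cP('2-', '2+', '1-', '1+', sig)
print(CP)
\end{verbatim}
We now return to the current-algebra origin of these Parke-Taylor structures.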
This extends the tree-level result of \cite{Nair:1988bq} and, more generally, follows by analogy to the heterotic string~\cite{Gross:1984dd}, where gauge interactions have a closed string-like nature as in ambitwistor string theory~\cite{Mason:2013sva}. The sum over states at a node of the Riemann sphere translates into a sum over the Lie algebra index of two additional operator insertions per loop momentum. To eliminate the contributions from the unwanted poles, we drop Parke-Taylor terms that have orderings where loop momentum insertions appear with alternate signs as in $(1^+\cdot1^-\cdot2^+\cdot2^-\cdot)$, keeping only terms with orderings of the type $(1^+\cdot1^-\cdot2^-\cdot2^+\cdot)$; here $\cdot$ denote any external particles. For instance, we keep contributions such as $1/(1^+1^-2^-2^+1234)$, but discard terms like $1/(1^+1^-2^+2^-1234)$. This achieves the same effect as the cross-ratio appearing in the supergravity integrand. Moreover, we only include contributions with a single cyclic structure, e.g.~(123456), and discard contributions with subcycles, e.g.~(123)(456). These properties can be verified in the expressions above. Our two-loop Parke-Taylor expressions should be applicable to Yang-Mills theories with or without supersymmetry. \section{Discussion} We have obtained scattering equation formulae for two-loop integrands on the Riemann sphere, following the heuristic reduction of genus-two ambitwistor string formulae by integration by parts on the moduli space of Riemann surfaces as in \cite{Geyer:2015bja}. Our analysis is not a rigorous derivation from the genus-two ambitwistor string formulae of \cite{Adamo:2015hoa}, and in particular does not fix the parameter $\alpha$ in the off-shell scattering equations. Nevertheless, we have seen that factorization fixes the ambiguity in $\alpha $, and this choice leads to correct two-loop integrands for maximally supersymmetric theories. A more refined analysis of the ambitwistor string degeneration should uniquely fix the scattering equations and the details of the integrands (such as the cross ratio in the supergravity case). It would also give us the tools to address the higher-loop case, where we expect different boundary contributions (e.g. at three loops ``mercedes'' vs. ``ladder'' graphs) in the integration by parts on the moduli space associated to different classes of scattering equations. There are clearly many other challenges. It should also be possible to obtain integrands for non-supersymmetric theories, as we did in \cite{Geyer:2015jch} at one loop. These will in principle also have support on the degenerate solutions to the two-loop scattering equations, which we studied here; see \cite{Bjerrum-Bohr:2016juj,Bosma:2016ttj,Cardona:2016gon} and references therein for recent work on the scattering equations. For both supersymmetric and non-supersymmetric Yang-Mills and gravity at higher points, we need to understand the higher-loop analogues of the CHY Pfaffians on the Riemann sphere with and without supersymmetry. The extent of supersymmetry should be determined by the particular sum over spin structures, as in \cite{Geyer:2015jch} at one loop. More generally, we would like to extend our results to a new formalism, where our formulae arise directly as correlation functions of vertex operators on the nodal Riemann sphere. 
A natural question is then what type of quantum field theories admit such a formulation; there are CHY formulae and ambitwistor string models for a variety of theories \cite{Cachazo:2014xea,Ohmori:2015sha,Casali:2015vta}. Finally, it would be important to clarify the relation of these ideas to full string theory, which has been the subject of recent works \cite{Siegel:2015axg,Huang:2016bdd,Casali:2016atr}. \section*{Acknowledgements} We would like to thank Tim Adamo, Zvi Bern, Eduardo Casali and David Skinner for discussions. We also thank NORDITA, Stockholm, and the Isaac Newton Institute for Mathematical Sciences, Cambridge, for hospitality and financial support from EPSRC grant EP/K032208/1 during the program GTA 2016. YG is supported by the EPSRC Doctoral Prize Scheme EP/M508111/1, LJM by the EPSRC grant EP/M018911/1, and the work of PT is supported by STFC grant ST/L000385/1. \onecolumngrid
\section{Introduction} In this paper we study the interactions of observed pions with inferred scalar $\sigma$ meson [1,2] and fermion quark SU(2) fields in a chiral-invariant manner at low energies. Specifically we consider two chiral theories: a) A chiral quark model (CQM) dynamically inducing [2] the entire quark-level SU(2) linear $\sigma$ model (L$\sigma$M) but depending on no free parameters. b) Chiral perturbation theory (ChPT) involving ten strong interaction parameters $L_1 - L_{10}$ [3-5], now called low energy constants (LECs). Following the surveys of Donoghue and Holstein [6,7], we compare the predictions of the above two theories with the measured values of the i) pion charge radius, ii) $\pi_{e3}$ form factor ratio $F_A/F_V$ at zero invariant momentum transfer and the $\pi^\circ \rightarrow \gamma\gamma$ amplitude, iii) charged pion polarizabilities, iv) $\gamma \gamma \rightarrow \pi^\circ \pi^\circ$ amplitude at low energies, v) $\pi\pi$ s-wave $I=0$ scattering length. We begin in Sec.II with the quark-level Goldberger-Treiman relation (GTR), (its meson analog) the KSRF relation [8] and the link to vector meson dominance (VMD). Then in Sec.III we examine the pion charge radius $r_\pi$ in the above two chiral theories. Next in Sec IV, we first review $\pi^\circ \rightarrow \gamma\gamma$ decay and then study its isospin-rotated semileptonic weak analog $\pi^+ \rightarrow e^+ \nu \gamma$, giving rise to the form factor ratio $F_A/F_V \equiv \gamma$ at zero invariant momentum transfer. This naturally leads in Sec.V to the charged pion electric polarizability $\alpha_{\pi^+}$ due to the model-independent relation [9,6] between $\alpha_{\pi^+}$ and the above $\pi_{e3}$ ratio $\gamma$. Finally in Sec.VI we review the Weinberg soft-pion prediction [10] for the s-wave $I = 0 \ \pi\pi$ scattering length and its chiral-breaking corrections. In all of the above cases the predictions of the CQM-L$\sigma$M and ChPT chiral theories are compared with the measured values of $r_\pi, \gamma, \alpha_{\pi^+}, a^{(0)}_{\pi\pi}$. We review these results in Sec.VII. \section{CQM Link to GTR, VMD and KSRF} The chiral quark model (CQM) involves $u$ and $d$ quark loops coupling in a chiral invariant manner to external pseudoscalar pions (and scalar $\sigma$ mesons). In order to manifest the Nambu-Goldstone theorem with $m_\pi = 0$ and conserved axial currents $\partial A^{\vec{\pi}} = 0$, it is clear that the quark-level Goldberger-Treiman relation (GTR) must hold: $$ f_\pi g_{\pi qq} = m_q. \eqno(1) $$ Here the pion decay constant is $f_\pi \approx 90 MeV$ in the chiral limit [11] and the constituent quark mass is expected to be $m_q \sim m_N/3 \sim 320 MeV$. Indeed, this dynamical quark mass $m_q \sim 320 MeV$ also follows from nonperturbative QCD considerations [12], scaled to the quark condensate. Given these nonperturbative mass scales of 90 MeV and 320 MeV, the dimensionless pion-quark coupling should be $g_{\pi qq}\sim 320 /90 \approx 3.6$. The latter scale of 3.6 also follows from the phenomenological $\pi NN$ coupling constant [13] $g_{\pi NN} \approx 13.4$ since then $$ g_{\pi qq} = g_{\pi NN}/3g_A \approx 3.5 \eqno(2) $$ for the measured value [14] $g_A \approx 1.267$. In fact in the SU(2) CQM with $u$ and $d$ loops for $N_c = 3$, cutoff-independent dimensional regularization dynamically generates the entire quark-level linear sigma model (L$\sigma$M) and also requires [2] $$ g_{\pi qq} = 2\pi/\sqrt{3} \approx 3.6276 \ \ {\rm and } \ \ m_\sigma = 2 m_q. 
\eqno(3) $$ The former coupling is compatible with (1) and (2) and the latter scalar-mass relation also holds in the four-quark chiral NJL scheme [15] in the chiral limit. If one substitutes $g_{\pi qq} = 2\pi/\sqrt{3} $ back into the GTR (1), one finds $$ m_q = f_\pi 2\pi/\sqrt{3} \approx 325\ MeV\ \ {\rm and} \ \ m_\sigma = 2 m_q \approx 650\ MeV. \eqno(4) $$ Moreover the CQM quark loop for the vacuum to pion matrix element of the axial current $\langle 0| \overline{q} {1 \over 2} \lambda_3 \gamma_\mu \gamma_5 q | \pi^\circ \rangle = i f_\pi q_\mu$ as depicted in Fig.1, generates the log-divergent gap equation in the chiral limit once the GTR (1) is employed: $$ 1 = -i 4 N_c g^2_{\pi qq} \int^\Lambda_0 {d^4 p \over (2\pi)^4 } {1 \over (p^2-m^2_q)^2}. \eqno(5) $$ Given the pion-quark coupling in (2) or (3), it is easy to show that the cutoff in (5) must be $\Lambda \approx 2.3\ m_q \approx 750\ MeV$. This naturally separates the ``elementary'' $\sigma$ with $m_\sigma \approx 650\ MeV$ in (4) from the ``bound state'' $\rho$ meson with $m_\rho \approx 770\ MeV$. In fact it was shown in the third reference in [1] that the CQM $u$ and $d$ quark loops of Fig 2 for $\rho^\circ \rightarrow \pi^+\pi^-$ lead to the chiral-limiting relation $$ g_{\rho \pi\pi} = g_\rho \left[ -i 4 N_c g^2_{\pi qq} \int {d^4 p \over (2\pi)^4 }{1 \over (p^2-m^2_q)^2} \right] = g_\rho, \eqno(6) $$ where the gap equation (5) is used. Experimentally [14] $g_{\rho \pi\pi}^2 / 4\pi \approx 3.0$ or $|g_{\rho \pi\pi}| \approx 6.1$, while the rho-quark coupling measured in $\rho^\circ \rightarrow e^+e^-$ is $|g_\rho| \approx 5.0$. From the perspective of vector meson dominance (VMD), eq. (6) is the well-known VMD universality relation [16]. Moreover CQM quark loops with an external $\rho^\circ$ replaced by a photon $\gamma$ correspond to the VMD $\rho^\circ - \gamma$ analogy [17]. However from the perspective of the dynamically generated L$\sigma$M, $g_{\rho \pi\pi} = g_\rho$ in (6) corresponds to a Z = 0 compositeness condition [18]. It shrinks ``loops to trees'', implying that the L$\sigma$M analogue equation $g_{\sigma \pi\pi} = g^\prime$ can treat the $\sigma$ as an elementary particle while the NJL model can treat the $\sigma$ as a $\bar{q}q$ bound state. Lastly, the meson analogue of the fermion GTR (1) is the KSRF relation [8], generating the $\rho$ mass as $$ m_\rho \approx \sqrt{2} f_\pi (g_\rho g_{\rho \pi\pi})^{1/2} \approx 730 \ MeV. \eqno(7) $$ We recall that (7) also follows by equating the $I = 1 \ \pi N$ VMD $\rho$-dominated amplitude $g_{\rho \pi\pi} g_\rho / m^2_\rho$ to the chiral-symmetric current algebra amplitude $1/2 f^2_\pi$ [19]. In short, the CQM quark loops combined with the quark-level GTR (1) dynamically generate the entire L$\sigma$M and the NJL relation (3), along with the VMD universality and KSRF relations (6) and (7). This collective CQM-L$\sigma$M-NJL-VMD-KSRF picture [20] will represent our first chiral approach to pion interactions as characterized by $r_\pi, F_A/F_V, \alpha_{\pi^+}$ and $a^{(0)}_{\pi\pi}.$ \bigskip \section{Pion Charge Radius} It is now well-understood [21] that the CQM quark loop depicted in Fig 3 generates the pion charge radius (squared) for $N_c = 3$ in the chiral limit with $f_\pi \approx 90 \ MeV$ as $$ \langle r^2_\pi \rangle = {3 \over 4\pi^2 f^2_\pi} \approx (0.60 fm)^2.
\eqno(8) $$ Stated another way, using the CQM-L$\sigma$M $g_{\pi qq} = 2\pi/\sqrt{3}$ coupling relation in (3), $r_\pi$ in (8) can be expressed in terms of the GTR as the quark Compton wavelength $$ r_\pi = {\sqrt{3} \over 2\pi f_\pi} = {1 \over g_{\pi qq} f_\pi}={1 \over m_q} \approx 0.61 \ fm, \eqno(9) $$ using the quark mass scale in (4). In either case this predicted pion charge radius is quite close to the measured value [22] of 0.63 fm. A CQM interpretation of (9) is that the quarks in a Goldstone $\bar{q}q$ pion are tightly bound and \emph{fuse} together, so that $m_\pi=0$ in the chiral limit with pion charge radius $r_\pi=1/m_q$ the size of just \emph{one} quark. Another link of $r_\pi$ to the CQM-L$\sigma$M-VMD-KSRF picture derives from examining the standard VMD result $$ r_\pi = {\sqrt{6} \over m_\rho} \approx 0.63\ fm. \eqno(10) $$ Not only is (10) in agreement with experiment, but equating the square root of (8) to (10) and invoking the KSRF relation (7) in turn requires, with $g_{\rho \pi\pi} = g_\rho$, $$ g_{\rho \pi\pi} = 2\pi \approx 6.28. \eqno(11) $$ This relation has long been stressed in a L$\sigma$M context [23], and is of course compatible with the measured $\rho \rightarrow 2\pi$ coupling $|g_{\rho \pi\pi}| \approx 6.1.$ But a deeper CQM-L$\sigma$M connection exists due to (11). In ref [2] the CQM quark loops of Fig 4 for the vacuum to $\rho^\circ$ matrix element $\langle 0 | V^{em}_\mu | \rho^\circ \rangle = (em^2_\rho/g_\rho) \varepsilon_\mu$ were shown to dynamically generate the vector polarization function $\Pi(k^2,m_q)$ in the chiral limit $k^2 \rightarrow 0$, $$ {1 \over g^2_\rho} = \Pi (k^2=0,m_q) = -{8iN_c \over 6} \int {d^4 p \over (2\pi)^4} {1 \over (p^2-m^2_q)^2} = {1 \over 3g^2_{\pi qq}}, \eqno(12) $$ by use of the gap equation (5). Then invoking the CQM-L$\sigma$M coupling $g_{\pi qq} = 2\pi/\sqrt{3}$ from (3), equation (12) together with the VMD relation (6) leads to $$ g_\rho = g_{\rho \pi \pi} = \sqrt{3} g_{\pi qq} = 2\pi, \eqno(13) $$ which recovers (11). The second chiral approach, referred to as chiral perturbation theory (ChPT), is not considered a model but rather a method relating various chiral observables. However the cornerstone of ChPT is that the pion charge radius $r_\pi$ \emph{diverges} [24] in the chiral limit (CL) and that away from the CL $r_\pi$ is fixed by the LEC $L_9$ as $$ \langle r^2_\pi \rangle = 12 \ L_9/f^2_\pi + {\rm chiral\ loops}. \eqno(14) $$ To the extent that $L_9$ is scaled to the VMD value of $r_\pi$ in (10) and the chiral loops in (14) are small [24], this ChPT-VMD approach leads to reasonable phenomenology, as emphasized in ref. [6]. But from our perspective, this ChPT relation (14) circumvents the physics of (8)-(13). Instead the CL $r_\pi$ is \emph{finite} and is 0.60-0.61 fm in (8) or (9), near the measured value $0.63\pm 0.01$ fm. The LEC $L_9$ does not explain this fact. \section{$\pi^+_{e3}$ Form Factors and $\pi^\circ \rightarrow 2\gamma$ Decay} The CQM $u$ and $d$ quark loops for $\pi^\circ \rightarrow 2\gamma$ decay in Fig 5 generate the Steinberger-ABJ anomaly amplitude [25] $F_{\pi^\circ\gamma\gamma}\, \varepsilon_{\mu\nu\alpha\beta}\, \varepsilon^{*\mu}\varepsilon^{\nu} k^{\prime\alpha} k^{\beta}$ where $$ |F_{\pi^\circ\gamma\gamma}| = {\alpha \over \pi f_\pi} \approx 0.0258 \ GeV^{-1} \eqno(15) $$ in the $m_\pi = 0$ chiral limit, using the quark-level GTR (1). Since no pion loop can contribute to $\pi^\circ \rightarrow 2\gamma$, the CQM-Steinberger-ABJ anomaly result (15) is also the L$\sigma$M amplitude.
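The chiral-limit numbers quoted in (1)-(4) and (15) are easily reproduced; here is a minimal numerical sketch (Python), with input values as quoted in the text and variable names of our choosing.
\begin{verbatim}
# Cross-check of the chiral-limit numbers in (1)-(4) and (15).
import math

f_pi  = 0.090                          # GeV (chiral limit)
alpha = 1 / 137.036                    # fine-structure constant

g_piqq  = 2 * math.pi / math.sqrt(3)   # eq. (3): ~3.6276
m_q     = f_pi * g_piqq                # GTR, eq. (1): ~0.326 GeV
m_sigma = 2 * m_q                      # NJL relation, eq. (3): ~0.65 GeV
F_pi0gg = alpha / (math.pi * f_pi)     # eq. (15): ~0.0258 GeV^-1
print(g_piqq, m_q, m_sigma, F_pi0gg)
\end{verbatim}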
Then with $m_q \approx 325$ MeV traversing the quark loops in Fig 5, the $\pi^\circ\gamma\gamma$ decay rate from (15) is predicted to be [26] $$ \Gamma_{\pi^\circ\gamma\gamma} = {m^3_\pi \over 64\pi} |F_{\pi^\circ\gamma\gamma}|^2\approx 8\ eV \left[ {2m_q \over m_\pi} \sin^{-1} \left( {m_\pi \over 2m_q} \right) \right]^4 \approx 8\ eV \eqno(16) $$ with $m_\pi / 2 m_q \approx 0.21 \ll 1$. Of course the latter rate in (16) is near the observed value [14] $(7.74 \pm 0.6)$ eV. Treating $\pi^+ \rightarrow e^+ \nu \gamma$ as an off-shell version of $\pi^\circ \rightarrow \gamma\gamma$ decay, the CVC SU(2) rotation of (15) predicts the zero momentum transfer vector form factor [27] $$ F_V (0) = {\sqrt{2}\over 8\pi^2 f_\pi} \sim 0.19 \ GeV^{-1} \sim 0.027 m^{-1}_\pi. \eqno(17) $$ A pure quark model is then in doubt [28], because the analogue axial vector quark loop is identical to (17) so that $\gamma_{qk} = F_A(0)/F_V(0)|_{qk} = 1$, which is about twice the observed $\gamma$. In fact the 1998 PDG values [14], statistically dominated by the same experiment (minimizing the systematic errors), give $$ \gamma_{exp} = {F_A(0) \over F_V(0)} = {0.0116 \pm 0.0016 \over 0.017 \pm 0.008} = 0.68 \pm 0.34. \eqno(18) $$ However the L$\sigma$M generates both quark- and meson-loop contributions to the $\pi^+ \rightarrow e^+ \nu\gamma$ amplitude as depicted in Fig 6. This leads to the $F_A (0)$ axial current form factor [29] $$ F_A(0) = F_A^{qk}(0) + F_A^{meson} (0) = \sqrt{2} (8\pi^2 f_\pi)^{-1} - \sqrt{2} (24\pi^2 f_\pi)^{-1} = \sqrt{2} (12\pi^2 f_\pi)^{-1}, \eqno(19) $$ or, dividing (19) by (17): $$ \gamma_{L\sigma M} = {F_A(0) \over F_V(0)} = 1 - {1 \over 3} = {2 \over 3} . \eqno(20) $$ It is satisfying that $\gamma_{L\sigma M}$ in (20) accurately reflects the central value of the observed ratio in (18). On the other hand, the ChPT picture appears [6] to give values of $\gamma = F_A(0)/F_V(0)$ varying from 0 (in leading-log approximation) to 1 in a chiral quark model-type calculation [6] $$ {F_A(0) \over F_V(0)} = 32 \pi^2 (L_9 + L_{10}) = 1. \eqno(21) $$ The latter (incorrect) value holds when the pion charge radius is (correctly) given by the CQM-VMD value [6] $$ \langle r^2_\pi \rangle = {12 L_9 \over f^2_\pi } = {3 \over 4\pi^2 f^2_\pi}. \eqno(22) $$ \section{Charged Pion Polarizabilities and $\gamma\gamma \rightarrow \pi\pi$ Scattering} Electric and magnetic polarizabilities characterize the next-to-leading order (non-pole) terms in a low energy expansion of the $\gamma \pi \rightarrow \gamma \pi$ amplitude. Although in rationalized units (with $\alpha = e^2/4\pi \approx 1/137$) the classical energy $U$ generated by electric and magnetic fields is $U = ({1\over2}) \int d^3 x (\vec{E}^2 + \vec{B}^2)$, we follow recent convention and define charged or neutral electric $(\alpha_\pi)$ and magnetic $(\beta_\pi)$ polarizabilities from the effective potential $V_{eff}$ as $$ V_{eff} = - {4\pi \over 2} ( \alpha_\pi \vec{E}^2 + \beta_\pi \vec{B}^2 ). \eqno(23) $$ With this definition [30], $\alpha_\pi$ and $\beta_\pi$ have units of volume expressed in terms of $10^{-4} \ fm^3 = 10^{-43} \ cm^3$. Chiral symmetry with $m_\pi \rightarrow 0$ requires $\alpha_\pi + \beta_\pi \rightarrow 0$ for charged or neutral pion polarizabilities and this appears to be approximately borne out by experiment.
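Before quoting the measurements, note that the L$\sigma$M numbers of the next few equations are simple to reproduce numerically from the Terent'ev relation (27) below, using $\gamma = 2/3$ from (20). A minimal Python sketch follows; the physical $f_\pi \approx 93$ MeV and the unit conversion are assumptions of ours.
\begin{verbatim}
# Sketch of eq. (27) with gamma = 2/3, reproducing (28a).
import math

alpha, f_pi, m_pi = 1/137.036, 0.093, 0.1396   # GeV units
hbarc = 0.19733                                # GeV.fm conversion

gamma  = 2.0 / 3.0                             # eq. (20)
a_GeV3 = alpha * gamma / (8 * math.pi**2 * m_pi * f_pi**2)
a_fm3  = a_GeV3 * hbarc**3
print(a_fm3 * 1e4, "x 10^-4 fm^3")             # ~3.9, as in (28a)
\end{verbatim}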
As for the charged pion polarizabilities, three different experiments for $\gamma\pi^+ \rightarrow \gamma\pi^+$ respectively yield the values [31-33] $$ \alpha_{\pi^+} = (6.8 \pm 1.4 \pm 1.2) \times 10^{-4} \ fm^3 \eqno(24) $$ $$ \alpha_{\pi^+} = (20 \pm 12) \times 10^{-4} \ fm^3 \eqno(25) $$ $$ \alpha_{\pi^+} = (2.2 \pm 1.6) \times 10^{-4} \ fm^3. \eqno(26) $$ In the CQM-L$\sigma$M scheme, the simplest way to find the charged pion electric polarizability $\alpha_{\pi^+}$ is to link it to the $\pi_{e3}$ ratio $\gamma = F_A (0) / F_V (0)$ via the model-independent relation $$ \alpha_{\pi^+} = {\alpha \over 8\pi^2 m_\pi f^2_\pi} \gamma \eqno(27) $$ first derived by Terent'ev [9]. Since one knows that $\gamma_{L\sigma M} = 2/3$ from (20) (consistent with observation), the L$\sigma$M combined with (27) predicts $$ \alpha^{L\sigma M}_{\pi^+} = {\alpha \over 12\pi^2 m_\pi f^2_\pi} \approx 3.9 \times 10^{-4} \ fm^3. \eqno(28a) $$ This L$\sigma$M polarizability (28a) is internally consistent because a direct (but tedious) calculation of $\alpha_{\pi^+}$ due to quark loops and meson loops gives the L$\sigma$M value [34] $$ \alpha^{L\sigma M}_{\pi^+} = \alpha^{qk}_{\pi^+} + \alpha^{meson}_{\pi^+}= {\alpha \over 8\pi^2 m_\pi f^2_\pi} - {\alpha \over 24\pi^2 m_\pi f^2_\pi} = {\alpha \over 12\pi^2 m_\pi f^2_\pi}, \eqno(28b) $$ in complete agreement with (28a). This L$\sigma$M value for $\alpha_{\pi^+}$ is midway between the measurements in (24) and (26). The recent phenomenological studies [35] of Kaloshin and Serebryakov (KS) analyze the Mark II data [36] for $\gamma\gamma \rightarrow \pi^+ \pi^-$ and find $(\alpha - \beta )_{\pi^+} = (4.8 \pm 1.0) \times 10^{-4} \ fm^3$ and $(\alpha + \beta )_{\pi^+} = (0.22 \pm 0.06) \times 10^{-4} \ fm^3$. These results correspond to $$ \alpha^{KS}_{\pi^+} = (2.5 \pm 0.5) \times 10^{-4} \ fm^3, \eqno(29) $$ not too distant from the L$\sigma$M value (28) and from (26), but substantially below (24). However (29) is very close to the ChPT prediction of Donoghue and Holstein [6] (DH) $$ \alpha^{ChPT}_{\pi^+} = ({4\alpha \over m_\pi f_\pi^2})(L_9 + L_{10}) \approx 2.8 \times 10^{-4} \ fm^3, \eqno(30) $$ if one uses the value of $L_9 + L_{10}$ implied by $\gamma \approx 0.5$. But in ref. [7] they show in Figs. 7 and 9 that a full dispersive calculation for $\gamma\gamma \rightarrow \pi^+\pi^-$ (including the dominant pole term) reasonably maps out the low energy Mark II data for $0.3\ GeV < E < 0.7\ GeV$ for any $\pi^+$ polarizability in the range $$ 1.4 \times 10^{-4}\ fm^3 < \alpha^{DH}_{\pi^+} < 4.2 \times 10^{-4} \ fm^3. \eqno(31) $$ Moreover both data analyses in (24), (26) or in (29), (31) also surround the L$\sigma$M-Terent'ev-L'vov prediction for $\alpha_{\pi^+}$ in (28), so the ChPT prediction (30) is not unambiguously ``gold plated'' as ChPT advocates maintain. Next we study low energy $\gamma\gamma \rightarrow \pi^\circ \pi^\circ$ scattering, where there is no pole term and the polarizabilities $\alpha_{\pi^\circ}$ and $\beta_{\pi^\circ}$ are much smaller than for charged pions. Even the sign of $\alpha_{\pi^\circ}$ is not uniquely determined. In Fig. 7a we display the comparison of the $\gamma\gamma \rightarrow \pi^\circ \pi^\circ$ cross section in the low energy region $0.3 \ GeV < E < 0.7 \ GeV$ as found from Crystal Ball data [37] and a parameter-independent dispersive calculation (solid line) [7, 38], versus the one-loop ChPT prediction (dashed line) [39]. This graph has already been displayed in refs. [5,7].
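Parenthetically, the polarizability value in (28) is easy to reproduce; a minimal sketch (physical $m_\pi$ and $f_\pi$ are assumed inputs):
\begin{verbatim}
import math

alpha, hbar_c = 1.0 / 137.036, 0.19733
m_pi, f_pi = 0.1396, 0.093            # GeV (assumed inputs)

a_lsm = alpha / (12.0 * math.pi**2 * m_pi * f_pi**2)  # eq. (28a), GeV^-3
print(a_lsm * hbar_c**3 * 1e4)        # ~3.9 (units of 1e-4 fm^3)

gamma = 2.0 / 3.0                     # eq. (20)
a_ter = alpha / (8.0 * math.pi**2 * m_pi * f_pi**2) * gamma  # eq. (27)
print(a_ter * hbar_c**3 * 1e4)        # identical, ~3.9
\end{verbatim}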
As noted by Leutwyler [5], this first-order ``gold-plated'' ChPT prediction for $\gamma\gamma \rightarrow \pi^\circ \pi^\circ$ might cause reason to panic. In fact, Kaloshin and Serebryakov, in their Physics Letters Fig. 1 of ref. [35], now displayed as our Fig. 7b, show a solid line through their (gold-plated) $\gamma\gamma \rightarrow \pi^\circ \pi^\circ$ prediction [40] made five years {\em prior} to the Crystal Ball results. This was based in part upon the {\em existence of a broad scalar $\varepsilon$(700)}, i.e., the L$\sigma$M $\sigma$(700). On the other hand, ChPT rules out [3, 20] the existence of an $\varepsilon$(700) scalar. Stated in reverse, perhaps the ChPT rise of $\sigma(\gamma\gamma \rightarrow \pi^\circ \pi^\circ)$ above 10 nb in the 700 MeV region (inconsistent with Crystal Ball data) could be corrected if the $\varepsilon (700)$ (or the L$\sigma$M-NJL $\sigma$ meson in eq. (3)) were taken into account. To make this point in another way, recall that the decay $A_1 \rightarrow \pi(\pi \pi)_{s\ wave}$ has a very small measured rate [14] $\Gamma = (1 \pm 1)\ MeV$. This can be understood [41] in the context of our CQM-L$\sigma$M picture, which gives rise to the {\em two} quark loop graphs in Fig. 8. Owing to the general Dirac-matrix partial fraction identity $$ {1\over p\kern-4.5pt\hbox{$/$} - m_q} 2m_q {1\over p\kern-4.5pt\hbox{$/$} - m_q} = -\gamma_5 {1\over p\kern-4.5pt\hbox{$/$} - m_q}- {1\over p\kern-4.5pt\hbox{$/$} - m_q}\gamma_5, \eqno(32) $$ there is a soft pion theorem (SPT) which forces the ``box'' and ``triangle'' quark loops in Fig. 8 to interfere destructively. Specifically, the quark-level GTR in (1) and (32) above give, in the soft pion $p_\pi \rightarrow 0$ limit, $$ \langle (\pi\pi)_{sw} \pi | A_1 \rangle = \left[ - {i \over f_\pi } \langle \sigma_\pi | A_1 \rangle + {i \over f_\pi } \langle\sigma_\pi | A_1 \rangle \right] = 0, \eqno(33) $$ in agreement with the data. Applying a similar soft-pion argument to the two neutral pion quark loop graphs in Fig. 9, representing the CQM-L$\sigma$M amplitude for $\gamma\gamma \rightarrow \pi^\circ \pi^\circ$ scattering, a quark box plus quark triangle cancellation due to the identity (32) leads to the SPT prediction $$ \langle\pi^\circ\pi^\circ|\gamma\gamma\rangle \rightarrow \left[ - {i \over f_\pi } \langle \sigma | \gamma\gamma\rangle + {i \over f_\pi } \langle \sigma | \gamma\gamma\rangle\right] = 0. \eqno(34) $$ Qualitatively this ``$\sigma$ interference'' may be what ref. [40] predicts and what ChPT is lacking in the data plots of Fig. 3 and Fig. 2 in refs. [5,7] respectively, corresponding to our Fig. 7a. \section{$\pi\pi$ s-wave I = 0 Scattering Length} In the context of the CQM-L$\sigma$M picture, $\pi\pi$ quark box graphs ``shrink'' back to ``tree'' diagrams due to the Z = 0 [18] structure of this theory [2]. Thus one need not go beyond the original tree-level L$\sigma$M [42,43], as recently emphasized by Ko and Rudaz in ref. [1]. Following Weinberg's [10] soft-pion expansion, Ko and Rudaz express the $\pi\pi$ scattering amplitude in ref. [1] as $$ M_{ab,cd} = A(s,t,u) \delta_{ab} \delta_{cd} + A(t,s,u) \delta_{ac} \delta_{bd} + A(u,t,s) \delta_{ad} \delta_{cb} \eqno(35) $$ and write the s channel $I = 0$ amplitude as $$ T^0 (s,t,u) = 3A(s,t,u) + A(t,s,u) + A(u,t,s).
\eqno(36) $$ Then they note that the original (tree-level) L$\sigma$M predicts $$ A(s,t,u) = -2\lambda \left[ 1 - {2\lambda f^2_\pi \over m^2_\sigma - s} \right], \eqno(37) $$ where away from the chiral limit $m_\pi \neq 0$ one knows $$ \lambda = {g_{\sigma\pi\pi}\over f_\pi} = {(m^2_\sigma - m^2_\pi) \over 2f^2_\pi}, \eqno(38) $$ regardless of the value of $m_\sigma$ [42-44]. Substituting (38) into (37) one obtains a slight modification of the Weinberg $(s-m^2_\pi)/ f^2_\pi$ structure: $$ A(s,t,u) = \left( {m^2_\sigma - m^2_\pi \over m^2_\sigma - s} \right) \left( { s- m^2_\pi \over f^2_\pi } \right), \eqno(39) $$ so that the s-wave I=0 scattering length at $s = 4m^2_\pi$, $t = u= 0$ becomes in the L$\sigma$M, with $\varepsilon = m^2_\pi/ m^2_\sigma \approx 0.046$ from (3): $$ a^{(0)}_{\pi\pi} |_{L\sigma M} \approx \left( {7 + \varepsilon \over 1 - 4 \varepsilon } \right) {m_\pi \over 32 \pi f^2_\pi } \approx (1.23) { 7 m_\pi \over 32 \pi f^2_\pi } \approx 0.20 \ m^{-1}_\pi . \eqno(40) $$ This 23\% enhancement of the Weinberg prediction [10] of $0.16 \ m^{-1}_\pi$ is also obtained from ChPT considerations [45]. It is interesting that ChPT simulates [46] the 23\% enhancement found from the L$\sigma$M analysis in (37)-(40) above, especially in light of the ``miraculous'' cancellation of L$\sigma$M tree-level terms, as explicitly shown in eqs. (5.61)-(5.62) of ref. [43]. Such a L$\sigma$M-induced cancellation resembles the SPT eq. (34) for $\gamma\gamma \rightarrow \pi^\circ \pi^\circ$, where ChPT fails, whereas here ChPT simulates (good) results similar to the L$\sigma$M for the above $a^{(0)}_{\pi\pi}$ scattering length. To compare the L$\sigma$M prediction (40) or the similar ChPT result with data, one recalls the $\pi\pi$ scattering length found from $K_{e4}$ decay [46] $$ a^{(0)}_{\pi\pi} |_{exp}^{K_{e4}} = (0.27 \pm 0.04) m^{-1}_\pi , \eqno(41) $$ or the $\pi\pi$ scattering length inferred from $\pi N$ partial wave data [47] $$ a^{(0)}_{\pi\pi} |_{exp}^{\pi N} = (0.27 \pm 0.03) m^{-1}_\pi . \eqno(42) $$ On the other hand, the Weinberg-L$\sigma$M soft-pion scattering length (40) can acquire a hard-pion correction $\Delta a^{(0)}_{\pi\pi}$ due to the resonance decay $f_\circ (980) \rightarrow \pi\pi$. This was initially computed in ref. [48] based on an $f_\circ \rightarrow \pi\pi$ decay width $\Gamma = 24 \pm 8 \ MeV$. Since this 1992 PDG decay width has increased [14] to $\Gamma = 37 \pm 7 \ MeV$, the hard-pion scattering length correction is now (with $g^2_{f_\circ} = 16 \pi m^2_{f_\circ} \Gamma / 3p \approx 1.27\ GeV^2$, $\xi = m^2_\pi / m^2_{f_\circ} \approx 0.02$) $$ \Delta a^{(0)}_{\pi\pi} = {g^2_{f_\circ} \over 32 \pi m_\pi m^2_{f_\circ} } \left[ {5-8\xi \over 1 - 4\xi} \right] \approx 0.07 m_\pi^{-1}. \eqno(43) $$ Thus the entire Weinberg-L$\sigma$M prediction, including the hard-pion correction, for the I = 0 s-wave $\pi\pi$ scattering length from (40) and (43) is $$ a^{(0)}_{\pi\pi} \approx 0.20 m_\pi^{-1} + 0.07 m_\pi^{-1} = 0.27 m_\pi^{-1}. \eqno(44) $$ We note that this $a^{(0)}_{\pi\pi}$ L$\sigma$M prediction (44) is in exact agreement with the central value of the data in (41) or (42). \section{Summary} In this paper we have studied low energy pion processes and compared the data with the predictions of two chiral theories: (a) the chiral quark model (CQM) and its dynamically generated extension to the quark-level linear $\sigma$ model (L$\sigma$M); (b) modern chiral perturbation theory (ChPT).
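Before summarizing section by section, the scattering-length numerics in (40), (43) and (44) can be verified directly; a minimal sketch (assuming $f_\pi \approx 93$ MeV, $m_\sigma = 2m_q \approx 650$ MeV, and physical $m_\pi$ and $f_\circ(980)$ inputs):
\begin{verbatim}
import math

m_pi, f_pi = 0.1396, 0.093            # GeV (assumed inputs)
m_sig, m_f0, Gam = 0.650, 0.980, 0.037

eps = m_pi**2 / m_sig**2                                 # ~0.046
a0 = (7 + eps) / (1 - 4*eps) * m_pi**2 / (32 * math.pi * f_pi**2)
print(a0)                             # ~0.20 in m_pi^-1, eq. (40)

p = math.sqrt(m_f0**2 / 4 - m_pi**2)  # pion momentum in f0 -> pi pi
g2 = 16 * math.pi * m_f0**2 * Gam / (3 * p)              # ~1.27 GeV^2
xi = m_pi**2 / m_f0**2
da = g2 * (5 - 8*xi) / (1 - 4*xi) / (32 * math.pi * m_f0**2)
print(da, a0 + da)                    # ~0.07, ~0.27, eqs. (43)-(44)
\end{verbatim}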
We began in Sec. II by showing the direct link between the CQM, the quark-level L$\sigma$M, the Goldberger-Treiman relation (GTR), the $Z=0$ condition, vector meson dominance (VMD), and the KSRF relation. In Sec. III we used this CQM-L$\sigma$M theory to compute the pion charge radius $r_\pi = \sqrt{3} / 2\pi f_\pi \approx 0.61\ fm$ in the chiral limit. This agrees well with the observed [22] and VMD values $r_\pi = \sqrt{6} / m_\rho \approx 0.63 \ fm$. In fact, setting $r_\pi^{L\sigma M} = r_\pi^{VMD}$ leads to the rho-pion coupling $g_{\rho\pi\pi} = 2\pi$, which is only about 3\% greater than the observed PDG value [14]. On the other hand, ChPT fits the measured $r_\pi$ to the parameter $L_9$ while maintaining that chiral log corrections are small [24]. Then in Sec. IV we computed the $\pi^\circ \rightarrow \gamma\gamma$ and $\pi^+ \rightarrow e^+\nu\gamma$ amplitudes in the L$\sigma$M and found that both match the data, with the latter predicting $\gamma = F_A/F_V = 2/3$, while experiment gives [14] $\gamma = 0.68 \pm 0.34$. We extended the latter L$\sigma$M loop analysis to charged pion polarizabilities in Sec. V, finding $\alpha_{\pi^+} = \alpha(12\pi^2 m_\pi f^2_\pi)^{-1} \approx 3.9 \times 10^{-4}\ fm^3$, midway between the observed values. We also studied $\gamma\gamma \rightarrow \pi^\circ\pi^\circ$ scattering at low energy, where the data require an s-wave cross section $\sigma < 10\ nb$ around $E \sim 700$ MeV, and where ChPT predicts $\sigma \sim 20\ nb$. In contrast, refs. [35,40], accounting for the (L$\sigma$M) scalar resonance $\varepsilon(700)$, appeared to predict $\sigma < 10\ nb$ five years before the first Crystal Ball data were published [37]. Finally, in Sec. VI we extended Weinberg's soft pion (PCAC) prediction [10] for $a^{(0)}_{\pi\pi}$, the I = 0 s-wave $\pi\pi$ scattering length, to the (tree-level) L$\sigma$M. Also, hard-pion corrections due to the $f_\circ (980) \rightarrow \pi\pi$ scalar resonance decay led to an overall scattering length $a^{(0)}_{\pi\pi} \approx 0.27 m^{-1}_\pi$ in the extended L$\sigma$M, in perfect agreement with the central value of both the $K_{e4}$- and $\pi N$-based measurements [46, 47] of $a^{(0)}_{\pi\pi}$. In all of the above cases we compared these CQM-L$\sigma$M predictions (depending upon {\em no} arbitrary parameters) with the predictions of chiral perturbation theory (ChPT, depending on ten parameters $L_1 - L_{10}$) and found the latter theory almost always lacking. These results are tabulated in Table I below. It is important to stress that a $Z =0$ condition [18] is automatically satisfied in the strong interaction CQM-L$\sigma$M-VMD-KSRF theory and always ``shrinks'' quark loop graphs to ``trees'' for strong interaction processes such as $g_{\rho\pi\pi} = g_\rho$ or $\pi\pi\rightarrow\pi\pi$ and its accompanying $a^{(0)}_{\pi\pi}$ scattering length. This also makes VMD a tree-level phenomenology, as long stressed by Sakurai [16,19]. For processes involving a photon, however, such as for $r_\pi$, $\pi^\circ \rightarrow 2 \gamma$, $\pi^+ \rightarrow e^+ \nu \gamma$, $\gamma\gamma \rightarrow \pi\pi$ and for $\alpha_\pi, \beta_\pi$, the above L$\sigma$M loop graphs must be considered (since a $Z =0$ condition no longer applies). In all of the above pion processes, the (internal) scalar $\sigma$ meson plays an important role in ensuring the overall chiral symmetry (and current algebra-PCAC in the case of KSRF, $\pi\pi\rightarrow\pi\pi$, $A_1 \rightarrow 3\pi$ and $\gamma\gamma \rightarrow \pi\pi$) for the relevant Feynman amplitude.
Cases in point are $A_1 \rightarrow \pi(\pi\pi)_{s\ wave}$ and $\gamma\gamma\rightarrow \pi^\circ\pi^\circ$, where the internal $\sigma$ mesons in Figs. 8b and 9b ensure the soft pion theorems (SPT), eqs. (33) and (34), which in fact are compatible with the data. This SPT role of the $\sigma(700)$ in $\gamma\gamma\rightarrow \pi^\circ\pi^\circ$ may explain why the ChPT approach does not conform to Crystal Ball data in Figs. 3 and 5 of refs. [5,7], respectively [49]. In fact clues of a broad $\sigma(700)$ have been seen in (at least) seven different experimental analyses in the past 16 years [50]. As noted in ref. [20], the ChPT attempt to rule out a L$\sigma$M structure was based on problems of a pure meson L$\sigma$M (with which we agree), but a pure meson L$\sigma$M is {\em not} the CQM-L$\sigma$M to which we adhere. The latter always begins in the chiral limit with axial current conservation due to a {\em quark}-level Goldberger-Treiman relation (GTR), eq. (1), which dynamically induces the L$\sigma$M [2] starting from (CQM) {\em quark} loops. As our final observation, the GTR-VMD-KSRF underpinnings of our proposed CQM-L$\sigma$M theory are really examples [44] of soft-pion theorems and PCAC coupled to current algebra as used throughout the 1960s. Staunch advocates of ChPT in ref. [51] refer to such 1960s soft pion theorems as ``low energy guesses'' [LEG]. Instead they prefer the strict ``low energy theorems'' [LET] of modern ChPT. However ref. [51] concludes with an interesting remark: ``it may be one of nature's follies that experiments seem to favour the original LEG over the correct LET''. Moreover ref. [6] concludes by noting that the VMD approach appears to give more reliable predictions for $r_\pi$, $\gamma$, $\alpha_\pi$ and $a^{(0)}_{\pi\pi}$ than does ChPT. We agree with both of these statements. The author appreciates discussions with A. Bramon, S. Coon, R. Delbourgo, V. Elias, N. Fuchs, A. Kaloshin and R. Tarrach. \newpage \leftline{\bf Figure Captions} \begin{description} \bigskip \item[Fig. 1] Quark loops for the axial current $\langle 0| \bar{q} {1\over 2} \lambda^3 \gamma_\mu \gamma_5 q | \pi^\circ \rangle = i f_\pi q_\mu$. \bigskip \item[Fig. 2] Quark loops for $\rho^\circ \rightarrow \pi^+\pi^-$. \bigskip \item[Fig. 3] Quark loops for $r_\pi$. \bigskip \item[Fig. 4] Quark loops for the $\rho^\circ \rightarrow \gamma$ em vector current. \bigskip \item[Fig. 5] Quark loops for $\pi^\circ \rightarrow \gamma\gamma$ decay. \bigskip \item[Fig. 6] Quark (a) and meson (b) loops for $\pi^+ \rightarrow e^+\nu \gamma$ in the L$\sigma$M. \bigskip \item[Fig. 7] Plots of Crystal Ball $\gamma\gamma\rightarrow \pi^\circ \pi^\circ$ data versus (a) the ChPT prediction; (b) the Kaloshin-Serebryakov prediction accounting for the $\varepsilon (700)$ scalar meson. \bigskip \item[Fig. 8] Quark box (a) and triangle (b) graphs for $A_1 \rightarrow \pi(\pi\pi)_{s\ wave}$ decay. \bigskip \item[Fig. 9] Quark box (a) and triangle (b) graphs for $\gamma\gamma \to \pi^\circ \pi^\circ$ scattering. \end{description} \newpage \leftline{\bf Table I} \centerline{Comparison of chiral theory predictions of pion processes with experiment} \bigskip \bigskip \bigskip \noindent \ \ \ \ \ \ \ \underline{CQM-L$\sigma$M} \hskip 1in \underline{ChPT to one loop} \hskip 1.25in \underline{Data} $$ \begin{array}{llll} r_\pi \ \ \ \ \ \ \ \ \ \ \ & 0.61\ fm \ \ \ \ \ \ &{12\ L_9 \over f^2_\pi} + \rm chiral\ loops & (0.63 \pm 0.01) \ fm \\ \\ \\ g_{\rho\pi\pi} & 2\pi \approx 6.28 &\ \ \ \ ?
& 6.1 \pm 0.1 \\ \\ \\ F_{\pi\gamma\gamma} & \alpha / \pi f_\pi \approx 0.0258 \ GeV^{-1} & \ \ \ \ ? & (0.026 \pm 0.001) \ GeV^{-1} \\ \\ \\ \gamma = {F_A(0) \over F_V(0)} & {2 \over 3} & 32\pi^2(L_9 + L_{10}) & 0.68 \pm 0.34 \\ \\ \\ \alpha_{\pi^+} & 3.9 \times 10^{-4}\ fm^3 &({4\alpha \over m_\pi f_\pi^2})(L_9 + L_{10}) & (2.2\ {\rm to}\ 6.8) \times 10^{-4}\ fm^3\\ \\ \\ \gamma\gamma \to \pi^\circ\pi^\circ & \sigma (E \sim 0.7\ GeV)\sim 7\ nb & \sigma (E\sim 0.7\ GeV)\approx 20\ nb & \sigma (E \sim 0.7\ GeV) < 10\ nb \\ \ \ & {\rm and\ falling} & {\rm and\ rising} & {\rm and\ falling} \\ \\ \\ a^{(0)}_{\pi\pi} & 0.27 m^{-1}_\pi & 0.20 m^{-1}_\pi & (0.27 \pm 0.04) m^{-1}_\pi \end{array} $$ \newpage
\section{\bf INTRODUCTION } Inelastic Raman and neutron scattering experiments on insulating cuprates provide information about the spin dynamics of the CuO$_{2}$ planes of high-$T_c$ precursors \cite{chakra}. The observed short and long wave-length low-energy excitations have been described using the two-dimensional spin-$\frac{1}{2}$ antiferromagnetic Heisenberg model (AFH), and the standard theory of Raman scattering based on the Loudon-Fleury \cite{LF} (LF) coupling between the light and the spin system. Within this theoretical scheme, the value of the Cu-Cu exchange constant $J$ has been estimated from the first moment $M_{1}$ of the $B_{1g}$ Raman line and, in virtual agreement, from neutron scattering measurements of the spin velocity. For insulating materials having the Cu atom in different chemical environments, the estimated value of $J$ lies in the range $\sim {(0.10-0.14)}~{eV}$. Within the AFH framework, numerical and analytical calculations have been able to describe the temperature dependence of the spin-spin correlation length obtained from neutron scattering data \cite{makivic,sigma}, the temperature dependence of the spin lattice relaxation rate $1/T_{1}$ measured by Nuclear Magnetic Resonance \cite{sokol,imai}, and a prominent structure, ascribed to 2-magnon scattering, of the $B_{1g}$ Raman line \cite{rajiv,nos,manousakis2,girvin,sandvik}. Although all these theoretical results suggest the AFH on the square lattice as a {\it very good starting point} for describing the spin dynamics of the insulating cuprates, some features of the Raman lines remain to be explained within the LF theory. In particular, the failure of the AFH-LF scheme to describe the $B_{1g}$ spectral shape and its enhanced width is not yet understood. And of course, the AFH-LF scheme cannot reproduce a non-vanishing $A_{1g}$ response. The shape of the $B_{1g}$ line presents some characteristic features common to several insulating cuprates. In fact, at room temperature, a {\it very similar} line shape has been observed for all members of the M$_{2}$CuO$_{4}$ series (M = La, Pr, Nd, Sm, Gd), Bi$_{2}$Sr$_{2}$Ca$_{0.5}$Y$_{0.5}$Cu$_{2}$O$_{8+y}$ (BSCYCO), YBa$_{2}$Cu$_{3}$O$_{6.2}$ and PrBa$_{2}$Cu$_{2.7}$Al$_{0.3}$O$_{7}$ (PRBACUALO) \cite{sugai,sule1,tomeno,sule2,merkt}. In all cases, the line extends up to $\sim 8J$, which in the Ising limit can be ascribed to multimagnon excitations involving 16 spins. Unfortunately, calculations on the AFH show a negligible contribution of multimagnon processes not only for the Raman line but also for the spin structure factor at the antiferromagnetic wave vector. Another feature is a very intense peak located at $\omega \sim 3J$, identified as $2-magnon$ scattering \cite{rajiv,nos,manousakis2,girvin}. It is now believed that the asymmetry of the line originates from a second hidden peak on the high energy side of the spectrum at $\omega \sim 4J$ whose intensity is $\sim 25\%$ smaller than the intensity of the 2-magnon peak \cite{tomeno}. In the Ising limit, the energy of these magnetic excitations corresponds to 4-magnon scattering. For the AFH model, they give a very small contribution to the intensity of the spectrum. In other scattering geometries interesting features also appear. The $A_{1g}$ line has a maximum at a slightly higher frequency with respect to the $2-magnon$ peak \cite{lyons}, while for the $A_{2g}$ and $B_{2g}$ symmetries the center of the spectrum is located at $\omega \sim 5J$ and $\omega \sim 4J$ respectively \cite{sule3}.
Finally, only a slight temperature dependence of the scattering intensities of the $B_{1g}$ line has been found in the insulating cuprates. In fact, raising the temperature from 30 to 273 K causes the $2-magnon$ peak intensity to decrease by only $\sim 10\%$, in contrast to $\sim 50\%$ for the $S=1$ system La$_{2}$NiO$_{4}$ \cite{sugai}. Consistently, previous calculations show a weak temperature dependence of the $B_{1g}$ spectrum for $T<J$ \cite{nos0} for the spin-$\frac{1}{2}$ AFH model, which in turn suggests the importance of short-range spin-spin correlations. In the Loudon-Fleury theory, the state emerging from the application of the current operator on the ground state is considered as an eigenstate. In this intermediate eigenstate, the electron and the hole produced by the photon scattering are very close in real space, creating a charge-transfer exciton, and the individual propagation of each particle is neglected. This approximation could be appropriate for the case of the cuprates due to the presence of an electron-electron interaction ($U_{pd}$) between nearest neighbor sites (oxygen and copper sites). This interaction, not included in the Hubbard Hamiltonian, is important to describe the charge excitations of the cuprates and induces an attractive interaction between the hole and the electron \cite{elina}. Due to this interaction, the hole and the electron are close in space and there is no free individual propagation of them \cite{wang}. Furthermore, the disorder present in the cuprates \cite{choten} tends to localize the electron-hole pairs. Chubukov and Frenkel tried to include the resonant effects in the frame of the one-band Hubbard model \cite{chubukov-frenkel}. Sch\"onfeld {\it et al.} \cite{schonfeld} calculated the resonant contributions but they could not properly describe the $B_{1g}$ profile, and the observed $A_{1g}$ line remains to be explained. In both cases, they did not include the effect of the $U_{pd}$ interaction to describe the charge excitations and considered that the electron and the hole propagate individually. For the reasons given above, we think that the Loudon-Fleury theory is an appropriate framework to describe the Raman spectra of the cuprates. Even though resonant effects may be needed to explain the dependence of the Raman spectra on the incoming laser frequency, the main features of the line studied in our work persist over the wide range of frequencies used in the experiments \cite{sugai,sule1,merkt,lyons,ohana,blumberg} and are essentially the same as out of resonance. This point is shown in, e.g., the recent work on PrBa$_{2}$Cu$_{2.7}$Al$_{0.3}$O$_{7}$ \cite{merkt}, where the Raman spectrum out of resonance was measured and the main features of the $B_{1g}$ line shape remain unaltered. Besides, the LF theory has been the standard framework to describe the Raman spectra of other antiferromagnetic $S=1/2$ compounds \cite{singh,gros}. So, it is an interesting issue whether the persistent aspects of the Raman line can be well described within the LF theory with an appropriate model. Previous theoretical work using LF theory tackled the problem of the line's shape by using series expansions, variational Monte Carlo, interacting spin-waves and exact diagonalization techniques on small clusters \cite{rajiv,nos,manousakis2,girvin,sandvik}.
Different theoretical scenarios have been proposed for describing the anomalously enhanced width of the $B_{1g}$ line. Although at first sight quantum fluctuations could be the main contribution to the width of the $B_{1g}$ line, this hypothesis has been rejected after measurements on the $S=1$ system NiPS$_{3}$, for which a relative width comparable to the insulating cuprates has been observed \cite{merlin}. A different proposal has been motivated by the temperature dependence of the 2-magnon peak, namely, an increase of the Raman linewidth with increasing temperature \cite{knoll}. This has been considered as a strong indication of a phonon mechanism for the line's broadening, and its effect has been analyzed through a non-uniform renormalization of the exchange constant $J$ \cite{nori}. Another argument which supports the phonon mechanism is that although the second cumulant $M_{2}$ of the $B_{1g}$ Raman spectrum is almost $J$-independent for the materials mentioned above, their exchange constants change by $\sim 20\%$. This indicates a linewidth dominated by non-magnetic contributions such as intrinsic disorder of the spin lattice, defects or phonons. Of course, this does not rule out other magnetic mechanisms providing scattering at frequencies higher than the 2-magnon peak. So far, the phonon mechanism alone is insufficient to describe the line asymmetry due to important contributions of 4-magnon scattering \cite{knoll}. This requires going beyond the minimal AFH model. An effective description of the insulating cuprates based on the single-band Hubbard model \cite{lema}, which recently appeared, gives a theoretical framework to go beyond the AFH model and provides additional multispin interactions which could finally contribute not only to the Raman spectra but also to the midinfrared phonon-assisted optical conductivity \cite{lorenz}. In this work, we extend previous calculations of the $B_{1g}$ and $A_{1g}$ lines based on the minimal AFH model by including multispin and spin-phonon interactions. We calculate the Raman spectra on finite-size systems using the now standard Lanczos method \cite{lanczos} for the series of materials mentioned above. Since inelastic light-scattering measurements probe short-wavelength spin fluctuations, we expect finite size effects to be small \cite{nos1}. We find that two mechanisms, multispin and spin-phonon interactions, contribute to the width and shape of the $B_{1g}$ Raman line, and at the same time, allow an otherwise forbidden $A_{1g}$ response. \section{\bf RAMAN SPECTRA AND EFFECTIVE SPIN MODEL } The scattering of light from insulating antiferromagnets at energies smaller than the charge-transfer gap $\sim (1.5-2)\ eV$ can be described phenomenologically by introducing a Raman operator based on the symmetries of the magnetic problem. For the one-dimensional irreducible representations of the square, these operators are given by \[ O_{B_{1g}} = \sum_{{\bf i}} {\vec S_{{\bf i}}} \cdot {( {\vec S_{{\bf i}+{\bf e_{x}}}} - {\vec S_{{\bf i}+{\bf e_{y}}}} )}, \] \[ O_{A_{1g}} = \sum_{{\bf i}} {\vec S_{{\bf i}}} \cdot {( {\vec S_{{\bf i}+{\bf e_{x}}}} + {\vec S_{{\bf i}+{\bf e_{y}}}} )}, \] \[ O_{B_{2g}} = \sum_{{\bf i}} {({\vec S_{{\bf i}}} \cdot {\vec S_{{\bf i}+{\bf e_{x}}+{\bf e_{y}}}} - {\vec S_{{\bf i}+{\bf e_{x}}}} \cdot {\vec S_{{\bf i}+{\bf e_{y}}}})}, \] \[ O_{A_{2g}} = \sum_{{\bf i}}{\epsilon_{\mu \nu}} {\vec S_{{\bf i}}} \cdot {({\vec S_{{\bf i}+{\bf e_{\mu}}}}\times {{\vec S_{{\bf i}+{\bf e_{\nu}}}})}}.
\] \noindent where $\mu = \pm x, \pm y$, and $\epsilon_{\mu \nu} = -\epsilon_{\nu \mu}= -\epsilon_{-\mu \nu}$. Here ${\bf e_{x}}$ and ${\bf e_{y}}$ denote unit vectors in the $x$,$y$ directions of a square lattice. The Raman Hamiltonian is proportional to one of these operators, the prefactors being matrix elements of the dipolar moment, which therefore depend on microscopic details of the system. Here we will assume all of them equal to one, and therefore we will be unable to compare the relative intensities of spectra of different symmetries. Microscopically, these Raman operators can be derived from the theory of Raman scattering in Mott-Hubbard systems \cite{shastry}. In the strong-coupling limit of the Hubbard model, they appear quite naturally as the leading-order contributions in an expansion in $\Delta = t/(U-\omega)$. Of course, for the $2D$ AFH model, the $O_{A_{1g}}$ operator commutes with the Hamiltonian, so the $A_{1g}$ response vanishes at the lowest order and terms of order $\Delta^3$ are required to have a small but finite response. The Raman spectrum $R_{\Gamma}(\omega)$ at $T=0$ is given by \[ R_{\Gamma}(\omega)=\sum_{n} {|\langle\phi_{0}|O_{\Gamma}|\phi_{n}\rangle|^{2} } \delta(\omega-E_{n}+E_{0}) \] \noindent where $\Gamma$ is $A_{1g},B_{1g},A_{2g}$ or $B_{2g}$, $|\phi_{0}\rangle$ is the ground state, $E_0$ is the ground-state energy, and $|\phi_{n}\rangle$ is an excited state of the system with energy $E_{n}$. The spectrum $R_{\Gamma}(\omega)$ can be obtained from a continued fraction expansion of the diagonal matrix element of the resolvent operator $1/(\omega+E_{0}+i\delta-H)$ in the state $O_{\Gamma}|\phi_{0}\rangle$. Here $\delta$ is a small imaginary part added to move the poles away from the real axis \cite{lanczos}. Let us first describe the $B_{1g}$ Raman spectrum of the $2D$ antiferromagnetic Heisenberg model. It has mainly two contributions: (a) a very strong {\it two-magnon} structure located at $\omega \sim 3J$ and (b) weaker {\it 4-magnon} processes at higher energies. In a recent work, A. Sandvik {\it et al.} \cite{sandvik} calculated numerically the Heisenberg $B_{1g}$ spectrum for lattices of up to 6$\times$6 sites. They found a change in the strong structure at $\omega \sim 3J$ in the exact spectrum going from 4$\times$4, with a single dominant two-magnon peak, to 6$\times$6, where two equal-sized peaks appear. Unfortunately this is the biggest system available nowadays, and it is not clear whether these two peaks are a spurious characteristic of this cluster, since this kind of structure is not observed in bigger systems calculated with Monte Carlo-Max Entropy \cite{sandvik}. They concluded that the Heisenberg model cannot describe the broad $B_{1g}$ spectrum, and of course it gives a zero $A_{1g}$ profile even for clusters of that size. The $4-spin$ excitations are generated by flipping two pairs of spins of the N\'eel background in a plaquette or in a column, with an energy cost of $4J$ and $5J$ respectively. The intensity of these high-frequency processes is {\it very small}, $<10\%$ \cite{nos,nos1}, and cannot account for the asymmetry observed for this mode \cite{nota1} on the high frequency side of the spectrum. Since a distinct shoulder appears at $\omega \sim 4J$, it has been proposed that the $4-magnon$ channel is enhanced by 4-spin cyclic exchange interactions \cite{sugai}.
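To make the definition of $R_{\Gamma}(\omega)$ concrete, the following minimal sketch diagonalizes a 2$\times$2 open-boundary Heisenberg plaquette exactly and broadens the resulting $B_{1g}$ poles with a small width; it is only a toy stand-in for the Lanczos calculations on the larger clusters used below:
\begin{verbatim}
import numpy as np

sx = np.array([[0, 1], [1, 0]]) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.array([[1, 0], [0, -1]]) / 2

def one_site(s, site, n=4):            # embed a single-site operator
    out = np.array([[1.0]])
    for k in range(n):
        out = np.kron(out, s if k == site else np.eye(2))
    return out

def SS(i, j):                          # S_i . S_j on the 4-site cluster
    return sum(one_site(s, i) @ one_site(s, j) for s in (sx, sy, sz))

J = 1.0
xbonds, ybonds = [(0, 1), (2, 3)], [(0, 2), (1, 3)]
H = J * sum(SS(i, j) for i, j in xbonds + ybonds)
O = sum(SS(i, j) for i, j in xbonds) - sum(SS(i, j) for i, j in ybonds)

E, V = np.linalg.eigh(H)
g, w, eta = V[:, 0], np.linspace(0, 4, 400), 0.05
R = sum(abs(V[:, n].conj() @ O @ g)**2 * (eta / np.pi) /
        ((w - (E[n] - E[0]))**2 + eta**2) for n in range(len(E)))
\end{verbatim}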
Finite cluster calculations \cite{kuramoto,roger,nos2} for the undoped compound have indeed shown that although within the CuO$_{2}$ planes the leading term in the effective spin Hamiltonian is the antiferromagnetic Heisenberg exchange interaction, other terms are required to describe the low energy spin excitations. These higher-order terms appear naturally by performing a canonical transformation up to fourth order on the Hubbard model \cite{mcdonal}. A single-band Hubbard model can simultaneously describe the low-energy charge and spin responses of insulating Sr$_2$CuO$_2$Cl$_2$ \cite{lema}. Hence, our starting model to describe the spin dynamics of all the compounds mentioned before is an extended Hubbard Hamiltonian including next-nearest-neighbor (NNN) hopping $t^{\prime }$. By means of a canonical transformation to fourth order in $t$ and second order in $t^{\prime }$, the effective spin Hamiltonian is written as follows, \begin{eqnarray*} H_{eff} &=&J\sum_{i,\delta }\left( {\bf S}_{i}{\bf S}_{i+\delta }- \frac{1}{4}\right) + J^{\prime }\sum_{i,\delta ^{\prime }}\left( {\bf S}_{i}{\bf S}_{i+\delta ^{\prime }}-\frac{1}{4}\right) \\ &&+J^{\prime \prime }\sum_{i,\delta ^{\prime \prime }}\left( {\bf S}_{i}{\bf S}_{i+\delta ^{\prime \prime }}-\frac{1}{4}\right) \\ &&+K\sum_{\left\langle i,j,k,l\right\rangle } [ ({\bf S}_{i}{\bf S}_{j})({\bf S}_{k}{\bf S}_{l}) + ({\bf S}_{i}{\bf S}_{l})({\bf S}_{j}{\bf S}_{k}) \\ && - ({\bf S}_{i}{\bf S}_{k})({\bf S}_{j}{\bf S}_{l}) ] \end{eqnarray*} \noindent with $J=4t^{2}/U-64t^{4}/U^{3},\;J^{\prime }=4t^{\prime 2}/U+4t^{4}/U^{3},\;J^{\prime \prime}=4t^{4}/U^{3},\;$ and $K=80t^{4}/U^{3}$. Here $\delta$, $\delta ^{\prime }$, $\delta ^{\prime \prime }$ run over first, second and third nearest neighbors respectively, and $\left\langle i,j,k,l\right\rangle$ denotes the sum over groups of four spins in a unit square. The effect of the cyclic exchange interaction on the antiferromagnetic ground state of the $2D$ Heisenberg model has been studied, showing that for small values of $K/J$ the antiferromagnetic phase is stable \cite{chubukov}. The staggered magnetization increases as $K/J$ is increased, has a maximum at $\sim 0.75$, and then decreases, showing a transition to a spin-canted region. In particular, for the value of $K$ estimated for the cuprates, the system is still in the antiferromagnetic phase and its staggered magnetization is $m_{st}\sim 0.58$. The reduction of the three-band model onto the single-band Hubbard model for realistic values of the multiband parameters indicates that the effective hopping $t$ is bounded between 0.3 eV and 0.5 eV while $U/t\sim 7-10$ \cite{3A}. Furthermore, the derivation of the one-band Hubbard Hamiltonian given by Sim\'on and Aligia (see Ref. \cite{3A}) for the parameters obtained from LDA calculations for La$_2$CuO$_4$ gives $t\sim 0.45\ eV, U/t\sim 7.6, t^{\prime}/t\sim 0.15$. Deviations from these values are expected for different materials, in particular for the values of the long-range hoppings. In Table I, we show the values of the parameters for the 7 systems studied in this work. \section{\bf RESULTS } Let us examine first the role of multispin interactions on the spectra. Figure 1 shows the calculated Raman spectrum at different symmetries for $U=9.5t$ and $t^{\prime }=0.28t$. The $B_{1g}$ line shows the same features as for the Heisenberg model, the main difference being the gain of spectral weight at high frequencies, Fig. 1(a). The $4-magnon$ processes are now enhanced, having a relative intensity with respect to the $2-magnon$ peak of $\sim 20\%$.
This is in agreement with Ref. \cite{tomeno}, where it has been found that the deviation from the symmetric part of the $B_{1g}$ mode is mainly due to a peak located at $\omega \sim 4J$ whose intensity is about $20\%$ of the 2-magnon peak. For the $A_{2g}$ and $B_{2g}$ modes, Fig. 1(c), the center of gravity of the spectra agrees rather well with the experimental data \cite{sule3}. \begin{figure} \narrowtext \epsfxsize=4.5truein \epsfysize=3.3truein \vbox{\hskip 1.truein \epsffile{fig11.ps}} \medskip \caption{ $B_{1g}$ and $A_{1g}$ Raman spectra for 20- and 16-site clusters. Calculations were done for the effective spin model with no spin-phonon interaction. The microscopic parameters correspond to {\protect YBa$_{2}$Cu$_{3}$O$_{6.2}$ } (see Table I).} \end{figure} However, calculations for the $A_{1g}$ line, Fig. 1(b), show that the maximum is shifted by $\sim J$ with respect to the 2-magnon peak, in disagreement with the experimental shift. Although the 4-spin cyclic exchange interaction provides a mechanism for some features of the $B_{1g}$ mode and yields responses in the correct frequency range for the $A_{2g}$ and $B_{2g}$ lines, it correctly gives neither the frequency range of the $A_{1g}$ spectrum nor the full linewidth of the $B_{1g}$ line. As was mentioned before, this can be due to sample inhomogeneities, additional strong interactions with the spin excitations and/or disorder. Among all of them, the spin-phonon interaction seems to be the best candidate. The occurrence of a strong Peierls-type Fermi-surface instability involving breathing-type displacements of the oxygen atoms of the CuO$_2$ planes suggests that spin-phonon interactions could contribute to the damping of the spin excitations of the insulating compounds. The in-plane phonon modes have small energy compared to the exchange interaction. Indeed, infrared data for La$_{2}$CuO$_{4}$ give frequencies of the O stretch modes of $550$ and $690~cm^{-1}$. This corresponds to a bond-stretching force constant $K$ of $\sim 7.5$ and $\sim 11~eV/{\AA}^{2}$ respectively \cite{weber}, i.e., $\sqrt{\langle x^{2}\rangle} \sim 0.1\ {\AA}$. The electron-phonon interaction in the real material modifies the exchange parameter of the effective spin Hamiltonian {\em mainly} through the dependence of $t_{pd}$ and the charge-transfer energy on the $Cu-O$ distance $d_{Cu-O}$ \cite{ohta}. In the spin-wave approximation, the lowest order contribution to the damping of spin excitations due to the spin-phonon interaction is proportional to $|\nabla J|^{2}$ \cite{cottam}. Because along the M$_{2}$CuO$_{4}$ family $J$ changes almost linearly with $d_{Cu-O}$, $|\nabla J|$ takes the {\em constant} value $\sim 4350~cm^{-1}/\AA$, so the maximum contribution to the exchange constant due to the spin-phonon coupling is $\Delta J \sim \pm 435~cm^{-1}$. The spin-phonon interaction produces, through quantum fluctuations, some kind of dynamically induced disorder which can be seen by short-wavelength spin excitations. While the spin dynamics is driven by $\hbar \omega _{s}=J$, the phonon motion is driven by $\hbar \omega _{ph}\sim (20-60)\ meV < J$, so the spin system sees the oxygen atoms as frozen in different positions during {\em each} spin exchange process. Since the phonon energy is much smaller than the exchange parameter, as a first approximation the phonon energy can be taken as zero, and the spin-phonon interaction then shows up through fluctuations of the phonon field.
In this limit phonons can be modeled by static disorder \cite{Rojo}; in other words, the fluctuations of the phonon field can be modeled by a random distribution $P(J,D)$ of the exchange parameter, of width $D$, centered around $J$. For the one-band Hubbard model this corresponds to a distribution of hoppings centered around $t\;(t^{\prime })$ with dispersion $Dt\;(Dt^{\prime })$. In what follows we assume a Gaussian distribution for them. The value of $D$ is of order $\lambda \sqrt{\langle x^{2}\rangle}/t \sim (0.07-0.7)$ for an electron-phonon coupling $\lambda \sim (0.3-3.0)$ \cite{weber}. Since the real value of $D$ depends on the details of the microscopic model used for the planes as well as on the material itself, we fixed $D=0.13$, which gives the best overall description of the mentioned insulators. Fig. 2 shows a comparison between experimental and calculated $A_{1g}$ and $B_{1g}$ spectra for seven different compounds. Table II presents the calculated and experimental values for the first two cumulants. All the spectra of Fig. 2 result from a quenched average over 150 samples. The calculated $B_{1g}$ response shows many of the experimentally observed features. Indeed, the characteristic $2-magnon$ peak is located at $\omega \sim 3J$, the spectra extend up to $8J$, and significant spectral weight is found at $\omega \sim (3.5-4)J$. The $A_{1g}$ scattering appears in the \begin{figure} \narrowtext \epsfxsize=3.7truein \epsfysize=6.0truein \vbox{\hskip .5truein \epsffile{fig2.ps}} \medskip \caption{ $B_{1g}$ and $A_{1g}$ Raman spectra for the 20-site cluster. The values of the parameters are given in Table I and $D=0.13$. Scattered symbols correspond to experimental data taken from Refs. {\protect\cite{sugai,sule1,tomeno,sule2}} while solid lines are the result of theoretical calculations using the effective spin model taking into account fluctuations of the phonon field. } \end{figure} \noindent same frequency range as the $B_{1g}$ response, $2J<\omega <4J$, and the center of gravity of this line is slightly shifted to the right with respect to the $2-magnon$ peak. As was stated in Ref. \cite{tomeno}, the $B_{1g}$ line can be well fitted with two Gaussian curves, one centered at $\omega \sim 3J$ and the other at $\omega \sim 4J$ with a relative intensity of 25\% with respect to the first one. Hence, from the explanation given above and the results presented in Ref. \cite{nori}, it is clear that the Heisenberg model, even with the inclusion of the spin-phonon interaction, cannot reproduce simultaneously the width and the asymmetry of the $B_{1g}$ peak. It is necessary to take unrealistic values of disorder ($D=0.5J$) \cite{nori} in order to transfer spectral weight from the $2-magnon$ peak to higher energies. From Figs. 1 and 2, it becomes clear that the inclusion of the additional terms ($J^{\prime }$ and $K$) is necessary to reproduce the whole shape of the $B_{1g}$ line. These terms generate the whole structure of the $A_{1g}$ line because, while $O_{A_{1g}}$ commutes with the Heisenberg Hamiltonian, it does not commute with these additional terms. This structure has two main contributions of approximately the same weight. The first is the same as the $2-magnon$ $B_{1g}$ peak, now allowed in this representation due to the broken symmetry introduced by the fluctuations of the phonon field. Therefore the intensity of this peak increases with $D$. The second contribution corresponds to $4-magnon$ states (see Fig. 1(b)) mainly introduced by the $J^{\prime }$ and $K$ terms of the Hamiltonian.
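For reference, the quenched prescription just described (Gaussian hoppings of widths $Dt$ and $Dt^{\prime}$, 150 samples) can be sketched as follows; in the actual calculation it is the spectra, not the couplings, that are averaged:
\begin{verbatim}
import numpy as np

def couplings(t, tp, U):               # fourth-order couplings of Sec. II
    return (4*t**2/U - 64*t**4/U**3,   # J
            4*tp**2/U + 4*t**4/U**3,   # J'
            4*t**4/U**3,               # J''
            80*t**4/U**3)              # K

t, tp, U, D = 0.45, 0.0675, 3.42, 0.13 # La2CuO4-like values (assumed)
rng = np.random.default_rng(0)
samples = np.array([couplings(rng.normal(t, D*t),
                              rng.normal(tp, D*tp), U)
                    for _ in range(150)])
print(samples.mean(axis=0), samples.std(axis=0))
\end{verbatim}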
Contrary to the $B_{1g}$ contribution, this $4-magnon$ contribution does not have a regular behavior with $D$. If we depart from $D=0.13$ (the dispersion used to describe all the $B_{1g}$ lines), the two main $A_{1g}$ peaks will not have the same intensity and the agreement with experiment will deteriorate. This point is clearly seen by comparing Fig. 2 with Fig. 1 of Ref. \cite{nori}, where the $A_{1g}$ line, using a Heisenberg model plus disorder, is given. \section{\bf SUMMARY } Summarizing, we have studied the role of spin-phonon and multispin interactions on the line shapes of light-scattering experiments within the Loudon-Fleury theory for the insulating cuprates. For the insulating parent materials of high-temperature superconductors, by contrast to conventional antiferromagnets, the energy of short-wavelength spin excitations is greater than the characteristic energy of optical phonons, so a non-negligible contribution of the spin-phonon interaction to the damping of spin excitations is expected. Good qualitative agreement for the $B_{1g}$ and the $A_{1g}$ lines was found. As noted before, to describe the dependence of the spectrum on the incident laser frequency, a resonant theory should be used. But, as shown in a recent experiment on PrBa$_{2}$Cu$_{2.7}$Al$_{0.3}$O$_{7}$ \cite{merkt} (see figure 2 therein), resonant effects become weak as the laser frequency is decreased. For low enough frequencies, the system is out of resonance. The main characteristics of the $B_{1g}$ line measured in these conditions, the width of the main peak and the shoulder at higher energies, remain unaltered. By including spin-phonon interactions in the framework of the single-band Hubbard model we can reproduce these features quantitatively. The $A_{1g}$ spectra are also well described within this theory. To our knowledge, this is the first time that a theory with realistic parameters can reproduce the measured $B_{1g}$ and $A_{1g}$ spectra. Therefore we conclude that the Loudon-Fleury theory is a good starting point to describe the Raman spectra of the insulating cuprates. In a recent work, a numerical calculation of the Heisenberg model in clusters as big as 6$\times$6 was performed and two 2-magnon peaks were found. If this is confirmed for bigger clusters (note that Monte Carlo-Max Entropy calculations on bigger clusters do not show such structures), this could lower the phonon-magnon coupling needed to fit the experiments. Unfortunately this is far beyond present numerical possibilities. Finite-cluster calculations for the insulating cuprates studied here support the view of two contributing mechanisms for the $B_{1g}$ and $A_{1g}$ Raman lines. Although they contribute to different parts of the spectra, together they account for the enhanced linewidth and the asymmetry of the $B_{1g}$ mode, as well as for the non-negligible response of the $A_{1g}$ Raman line observed in these materials. \section{\bf ACKNOWLEDGMENTS } We would like to thank E. Fradkin, R. M. Martin and M. Klein for useful discussions at early stages of this work. All of us are supported by the Consejo Nacional de Investigaciones Cient\'{\i }ficas y T\'{e}cnicas, Argentina. Partial support from Fundaci\'on Antorchas under grant 13016/1 and from Agencia Nacional de Promoci\'on Cient\'{\i }fica y Tecnol\'ogica under grant PMT-PICT0005 is gratefully acknowledged.
\section{Introduction}\label{sec:Introduction} In recent years data sets have grown in size and dimension with the proliferation of advanced data acquisition techniques. We have been able to use such data meaningfully not only because the computation power has increased to match the size, but also due to the paradigm shift in data analysis techniques that handle such data. A prime example is Machine Learning (ML). As a result, new applications and techniques are emerging more frequently than ever before. Examples include object classification with applications in medicine~(e.g., brain image analysis) and security~(e.g., face classification)~\cite{Zero}. In many such applications, items in a data set are considered as points in some feature space of the underlying data, enabling us to interpret the data set as a ``point cloud'' in a suitably identified space. Even though in certain cases the feature space is easily identifiable, in many other cases identifying a feature space could be a less obvious task. Making matters less trivial, even when there is an obvious set of features that realizes a data set as a point cloud, the question of whether those features enable an accurate and practical representation of the data for the given task may not be fully answered. For example, the number of dimensions required to represent an object in the data set in a straightforward way may be on the order of thousands or even millions (e.g., a DNA sequence). Not only is the computational and storage cost of such high dimensional data a burden, but many fundamental assumptions in ML algorithms could also fail in high dimensions. Thus effective ways of \emph{down-sampling} an original point cloud, or representing data in a low dimensional point cloud, are necessary~\cite{Eldar-Lindenbaum-etal-1997,Sethian-1999,Moenning-Dodgson-2003}. However, fast, computable, and meaningful feature extraction methods, if they exist and can be identified, provide an efficient comparison of data. Thus, feature extraction is an important, though often nontrivial, step. As a result, algorithms that enable the extraction of meaningful features lie at the center of modern data analysis techniques and research (e.g., ML). For example, the vector of grayscale values of all pixels represents each image in a grayscale digital image data set as a point in a high dimensional Euclidean space. However, different features of the image may represent the images in a space that is more meaningful for a specified purpose~\cite{Carlsson-2009,five,Memoli-Sarpiro-2005,Bronstein-Bronstein-Kimmel-2006,two}. The size of the data could be a burden for the task of feature extraction itself, making sampling inevitable. However, the sampling should be done in such a way that the resulting point cloud minimally mutilates the features of the underlying~object. To summarize, the following key challenges in the aforementioned algorithms could be identified: \begin{itemize} \item[1)] Feature extraction for efficient representation of data, \item[2)] Feature extraction for efficient comparison, \item[3)] Efficient sampling techniques which could facilitate feature extraction. \end{itemize} This paper is centered around the idea of extracting persistence diagrams~\cite{Carlsson-2009} as topological features, originating from critical points of a point cloud.
The importance of topological methods in image and signal processing applications has been emphasized in an extensive literature~\cite{Emrani-2014, Robinson-2012, Chintakunta-2014, Robinson-2014, Krim-2016, Lobaton-etal-2010, Emrani-etal-2013, Ernst-etal-2012, Wilkerson-etal-2013, Wagner-etal-2013}. The main motivation for this work is the question ``How can one devise a sampling method that supports and improves such a feature extraction process?''. This paper assumes that each data point in the data set itself can be understood as a point cloud equipped with a real valued function defined on it. For example, in a database of grayscale images, each image itself can be thought of as a point cloud in $3$-dimensional space, with the function being the grayscale value of each pixel. \subsection{Our Contribution}\label{subsec:Our Contribution} What we propose in this paper is based on the philosophy that \emph{by knowing the critical points (up to some persistence level) of a point cloud with a Morse function $f$, we should be able to efficiently extract topological features, such as persistent homology, that are closely associated to $f$}. Thus, we consider \emph{sampling critical points} from an underlying point cloud equipped with a ``Morse function''~\cite{eight}, which is used to compute topological features; specifically, persistence diagrams. The proposed sampling mechanism is based on extracting critical points enabled by the computation of the \emph{Morse-Smale} (MS) \emph{Complex}~\cite{Forman-1998,Forman-2002}. Henceforth, we refer to the proposed sampling as \emph{MS~sampling}. The approach is compared with classic Farthest Point Sampling (FPS). When compared with FPS, in addition to the potential advantage of considering critical points, there are several other advantages of the proposed method: 1)~The MS complex can be computed in a parallel framework and attains acceptable time complexity. 2)~The proposed method considers an additional structure, a Morse function, for the sampling. 3)~The witness complex~\cite{silva-2004} computes persistent homology efficiently by building a simplified complex; however, it relies on providing a dependable set of ``landmark points''. Critical points can be an effective choice of landmark points, as opposed to a manually or randomly selected set of points. We apply the proposed method for classifying human faces according to ethnicity. Numerical experiments suggest that the proposed mechanism can achieve comparable performance with less computational complexity than FPS. \subsection{Organization}\label{subsec:Organization} The rest of this paper is organized as follows. Section~\ref{sec:Background} skims through previous work and relevant concepts in computational topology. Section~\ref{sec:Methodology} presents the methodology we propose. An application of the proposed algorithm is presented in Section~\ref{sec:FaceClassification}, followed by conclusions in Section~\ref{sec:Conclusion}. \section{Background}\label{sec:Background} The authors of \cite{two} have considered extracting \emph{topological features} of a point cloud~\cite{four} as follows. Given a \emph{sampled} point cloud, a filtration of the Vietoris-Rips (VR) complex~\cite[\S~III]{Edlesbrunner-Harer-2010} was built, which in turn was used to compute the associated persistence diagrams~\cite[\S~VII]{Edlesbrunner-Harer-2010}.
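This VR-to-diagram pipeline, together with the Wasserstein comparison discussed below, is available in off-the-shelf tools; the following is a minimal sketch assuming the third-party \texttt{ripser} and \texttt{persim} Python packages (their APIs may differ across versions):
\begin{verbatim}
import numpy as np
from ripser import ripser          # assumed third-party packages
from persim import wasserstein

X = np.random.rand(100, 3)         # two toy point clouds
Y = np.random.rand(100, 3)
dX = ripser(X, maxdim=2)['dgms']   # persistence diagrams, k = 0, 1, 2
dY = ripser(Y, maxdim=2)['dgms']
trim = lambda d: d[np.isfinite(d).all(axis=1)]  # drop infinite bars
dists = [wasserstein(trim(a), trim(b)) for a, b in zip(dX, dY)]
print(max(dists))                  # a diagram-based dissimilarity
\end{verbatim}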
The authors of \cite{two} then showed that the persistence diagrams, together with carefully defined metrics, can be used to approximate the Gromov-Hausdorff distance between the original point clouds. In the rest of this section, we will briefly outline the topological concepts and tools that are used in this paper. For more details, we refer the reader to~\cite{Edlesbrunner-Harer-2010}. Given a point cloud $S$, a data structure designated as a \emph{complex} is constructed. There are many ways of generating complexes; however, this work focuses on the VR complex and the witness complex. We denote the VR complex associated with \emph{persistence} level $r$ by $\texttt{VR}_S(r)$. One can obtain smaller complexes by constructing a witness complex. The key idea of computing the witness complex is to first choose a set of \emph{landmark points} $L\subset S$ of the data set, and then to use non-landmark data points as witnesses for constructing simplices spanned by combinations of landmark points. Such a complex gives rise to \emph{homology groups}, one for each non-negative integer $p$, denoted by $H_p(S)$. For an increasing sequence of persistence levels $r_0 < r_1 < \cdots < r_n$ there exists a sequence of complexes. For example, in the case of the VR complex, we have $\texttt{VR}_S(r_0)\subseteq \texttt{VR}_S(r_1)\subseteq\cdots\subseteq \texttt{VR}_S(r_n)$. Such a sequence is designated as a \emph{filtration}. A filtration determines a sequence of homology groups. Moreover, inclusion maps in the filtration give rise to maps on the homology. The \emph{persistent homology groups} are computed based on these maps. Under these maps new elements in the homology groups can be ``born'' and existing ones can ``die''. These births and deaths of homology classes are then encoded into \emph{persistence diagrams}. Persistence diagrams~\cite[p.~181]{Edlesbrunner-Harer-2010} are multi-sets of points in the extended plane ${\rm I\!R}\cup\{\infty,-\infty\}$, one for each dimension $p$. In particular, we draw points at $(r_i, r_j)$ with multiplicity $\mu_p^{ij}$, where $\mu_p^{ij}$ is the number of $p$-dimensional homology classes born at persistence level $r_i$ and dying at level $r_j$. Persistence diagrams are the topological signatures used throughout this paper to compare point clouds. One can measure the dissimilarity between two persistence diagrams by using appropriately defined metrics~\cite[\S~VIII.2]{Edlesbrunner-Harer-2010}. For example, this paper and~\cite{two} use the \emph{Wasserstein} distance with parameter $q$, $W_q(D_1,D_2)$, to compare two persistence diagrams $D_1$ and $D_2$ \cite{Edlesbrunner-Harer-2010}. Let $\mathbb{M}$ be a manifold. A smooth function $f:\mathbb{M} \rightarrow \mathbb{R}$ is a \emph{Morse function} if all of its critical points are non-degenerate. One can think of $f$ as a height function on $\mathbb{M}$. We refer the reader to~\cite{eight} for more details. This paper relies on such a function being given on the point cloud $S$. Computation of the MS complex up to a given persistence level was considered in~\cite{ten}, on which we rely to extract critical~points. \section{Methodology}\label{sec:Methodology} The proposed methodology is outlined in Algorithm~1. \noindent\rule{1\textwidth}{0.3mm} \\ \textbf{\emph{Algorithm~1}}\\ \rule{1\textwidth}{0.3mm}\vspace{-0mm} \textbf{Input:} \begin{itemize} \item $X=\{X_1, X_2,\cdots,X_n\}$, where each $X_i\in X$ itself is a point cloud with a distance metric $d_i$.
\item $\{f_i : X_i \rightarrow \mathbb{R}\}$, whose critical points are to be sampled. \item Persistence level~$r$. \item $q$, the parameter of the Wasserstein distance. \end{itemize} \textbf{Steps:} \begin{enumerate} \item For each $X_i\in X$ \begin{enumerate} \item Extract the set $X_i^r$ of critical points of $f_i$ from $X_i\in X$ using the MS algorithm with persistence level~$r$. \item For each $k\in \{0,1,2\}$, compute the $k$-dimensional persistence diagrams $D_k(X_i^r)$. \end{enumerate} \item For each pair of $X_i^r$, $X_j^r$ \begin{enumerate} \item Compute the distance $c(X_i,X_j)$ given by \begin{equation}\label{eq:metricization} c(X_i,X_j)=\max_{k} \{ W_q(D_k(X_i^r),D_k(X_j^r))\}. \end{equation} \item Form the distance matrix $\mathbf{M}$ whose $(i,j)$ entry is $c(X_i,X_j)$. \end{enumerate} \item Classify $X$ based on $\mathbf{M}$. \end{enumerate} \vspace{-2mm} \rule{1\textwidth}{0.3mm}\vspace{-0mm} \noindent In the sequel, we briefly discuss the steps of Algorithm~1. We assume that the data set $X=\{X_1, X_2,\cdots,X_n\}$ consists of $n$ points, where each $X_i$ itself is a point cloud. We also assume each $X_i$ is a metric space with metric $d_i$ and equipped with a function $f_i : X_i \rightarrow \mathbb{R}$. \FloatBarrier \begin{figure*}[t] \centering \begin{subfigure}[t]{0.23\textwidth} \centering \includegraphics[height=1.9in,width=1.81in]{Sample0MS.jpeg \vspace{-1.5em} \caption{} \label{Fig:SamplesforMu-r1} \end{subfigure}% ~ \begin{subfigure}[t]{0.23\textwidth} \centering \includegraphics[height=1.9in,width=1.81in]{Sample2MS.jpeg \vspace{-1.5em} \caption{} \label{Fig:SamplesforMu-r2} \end{subfigure} ~ \begin{subfigure}[t]{0.23\textwidth} \centering \includegraphics[height=1.9in,width=1.81in]{Sample0FPS.jpeg \vspace{-1.5em} \caption{} \label{Fig:SamplesforMu-FPS1} \end{subfigure} ~ \begin{subfigure}[t]{0.23\textwidth} \centering \includegraphics[height=1.9in,width=1.81in]{Sample2FPS.jpeg \vspace{-1.5em} \caption{} \label{Fig:SamplesforMu-FPS2} \end{subfigure} \vspace{-0em} \caption{MS Sampling with: (a) $r=0.87$, (b) $r=0.6$. FPS with (c) $220$ points, (d) $119$ points. Image courtesy of FERET~\cite{fifteen}. } \label{Fig:SamplesforMu} \end{figure*} \subsection{Sampling System: Step~(1a)}\label{subsec:SamplingSystem} The main goal of the sampling is to extract a well-representative set of points $X_i^{r}$ which can efficiently approximate the underlying space of $X_i$. The MS algorithm~\cite[Algorithm~1]{ten} is used with persistence level $r$ to obtain $X_i^{r}$. For an illustration, see Figure~\ref{Fig:SamplesforMu}-(a) and (b). \subsection{Persistence Diagrams: Step~(1b)} Persistent homology in dimensions $k=0,1,2$ is computed and the associated persistence diagrams are designated by~$D_k(X_i^r)$. \subsection{Metricizing $X$ for Classification: Step~(2) and (3)} In this step, Wasserstein distances $W_q$ between persistence diagrams obtained in Step~(1b) are computed. These distances are then used to metricize $X$ based on~\eqref{eq:metricization}. From this a distance matrix $\mathbf{M}$ is computed to be used in a preferred classification algorithm. \section{Application: Ethnicity Classification}\label{sec:FaceClassification} In this section, we test the proposed Algorithm~1 in an ethnicity classification problem. The goal is essentially to compare and contrast the potentials of MS sampling [\S~\ref{subsec:SamplingSystem}] and commonly used FPS.
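To fix ideas, a crude stand-in for Step~(1a) on a grayscale image is to flag strict local extrema of $f_i$ over the 8-neighborhood of each pixel; this sketch ignores saddles, which the persistence-simplified MS complex of~\cite{ten} does capture:
\begin{verbatim}
import numpy as np

def grid_extrema(f):
    """Strict local minima/maxima of a grayscale image f: a rough
    proxy for critical points (saddle points are ignored here)."""
    pts = []
    for u in range(1, f.shape[0] - 1):
        for v in range(1, f.shape[1] - 1):
            nb = f[u-1:u+2, v-1:v+2].astype(float).ravel()
            c = nb[4]                      # center pixel value
            others = np.delete(nb, 4)      # its 8 neighbors
            if c < others.min() or c > others.max():
                pts.append((u, v))
    return np.array(pts)
\end{verbatim}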
\subsection{Preparing the Data Set} We consider a database of face images of Europeans and Asians, arbitrarily chosen from the on-line face database FERET~\cite{fifteen}. In particular, we chose $45$ images from each group. As a preprocessing step, we resize each image to $100~\mbox{pixel}\times 150~\mbox{pixel}$ grayscale. The data set~$X=\{X_1,X_2,\ldots,X_{90}\}$ is such that each $X_i$ corresponds to a $2$-dimensional grid $\mathbb{M}=\{1,2,\ldots,100\}\times\{1,2,\ldots,150\}$ and a function $f_i:\mathbb{M}\rightarrow\mathbb{G}$, where $\mathbb{G}=\{0,1,\ldots,255\}$ and for each pixel $(u,v)$, $f_i(u,v)$ is the grayscale value. Thus, \begin{equation}\nonumber X_i = \{(u,v)\in\mathbb{M} \ | \ f_i(u,v)<255\}. \end{equation} The Morse function on $X_i$ is chosen to be $f_i$. Finally, the data set $X$ is partitioned into two: a training data set $A$ and a testing data set $B$, which correspond to 70\% and 30\% of $X$, respectively. \subsection{Training and Testing Phase} In the training phase, we consider three classification methods: quadratic support vector machines (SVM), neural networks (NeuN), and $k$-nearest neighbors ($k$-NN). In particular, we run Algorithm~1 with $A$ as the input $X$, to obtain the corresponding distance matrix $\mathbf{M}$. Given $\mathbf{M}$, we use classic multidimensional scaling~(MDS)~\cite[\S~14.2]{Mardia-et-al-1979} to get a $2$-dimensional configuration of points in $A$, see Figure~\ref{fig: MDS_Plots}. Finally, the resulting points in MDS are used to build our classifiers SVM and NeuN. In the case of $k$-NN, with $k=1, 2, 3, 4, 8$, the distance matrix $\mathbf{M}$ itself is used. The same steps are also performed to build the classifiers in the case of the benchmark algorithm. Then we use $B$ to evaluate the prediction accuracy of the algorithms. \subsection{Numerical Results and Discussion} Numerical simulations were carried out for four persistence levels: $r=0.87, 0.76, 0.6,$ and $0.4$, with $q=1$. The proposed algorithm is compared with an algorithm which replaces MS sampling by FPS, but uses the same classifiers. While the number of points returned by MS sampling can vary, FPS relies on specifying it. Therefore, for a fair comparison, in the case of FPS the average of the number of points returned by MS sampling over all images is used. Thus, both algorithms on average use the same number of points. For each $r$ above, the corresponding averages are respectively $220, 156, 119$ and $97$ points/image. \subsubsection{MS Sampling and FPS}\label{subsubsec:MS-FPS} Figure~\ref{Fig:SamplesforMu}(a) and (b) show representations of MS sampling applied to a data point $X_i$ in~$B$, at persistence levels $r=0.87$ and $r=0.6$, yielding $160$ points and $95$ points, respectively. Note that the points in $X_i^{r}$ are not uniformly distributed in Figure~\ref{Fig:SamplesforMu}(a) and (b); rather, the points are concentrated at ``landmark points'' of $X_i$, which are apparently crucial for building a rich representation of the original $X_i$ with VR and witness complexes. In contrast, FPS does not discriminate special features of the face when sampling; see Figure~\ref{Fig:SamplesforMu}(c) and (d).
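For completeness, the training phase described above also admits a short sketch in Python, assuming \texttt{scikit-learn}; note that \texttt{sklearn}'s \texttt{MDS} implements metric MDS (SMACOF), which serves here as a stand-in for classical MDS, and all function names are ours.
\begin{verbatim}
import numpy as np
from sklearn.manifold import MDS
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier

def train_classifiers(M_train, labels, k_values=(1, 2, 3, 4, 8)):
    # Embed the metricized training set A into 2 dimensions from M.
    emb = MDS(n_components=2, dissimilarity='precomputed',
              random_state=0).fit_transform(M_train)
    # Quadratic SVM on the 2-dimensional MDS configuration.
    svm = SVC(kernel='poly', degree=2).fit(emb, labels)
    # k-NN operates on the distance matrix M itself.
    knns = {k: KNeighborsClassifier(n_neighbors=k, metric='precomputed')
                 .fit(M_train, labels) for k in k_values}
    return svm, knns
\end{verbatim}
At test time, distances from each element of $B$ to the elements of $A$ are computed with the same persistence-diagram pipeline before being passed to the classifiers.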
\FloatBarrier \subsubsection{Performance} \begin{figure*} \centering \resizebox{1.01\textwidth}{!}{% \begin{tikzpicture} \begin{groupplot} [group style={columns=4}, ybar, tickpos=left, ylabel=Accuracy, height=4cm, width=5.35cm, ymax=0.85, tick label style={font=\tiny}, label style={at={(0.18,0.45)},font=\tiny}, x label style={at={(0.50,0)},font=\normalsize}, ymin=0.45, legend columns=-1, legend style={font=\tiny}, legend entries={FPS,MS Sampling}, legend to name=grouplegend ]
\nextgroupplot[xlabel=(a),xticklabels={SVM, NeuN, 1-NN, 2-NN, 3-NN, 4-NN, 8-NN},xtick={1,2,3,4,5,6,7}, bar width=0.15cm]
\addplot [bar shift=-.1cm, area legend, red!20!black, fill=gray!10] coordinates {(1,0.66) (2,0.66) (3,0.73) (4,0.66) (5,0.63) (6,0.7) (7,0.73)};
\addplot [bar shift=.1cm, area legend, red!20!black, fill=gray!77] coordinates {(1,0.8) (2,0.8) (3,0.83) (4,0.8) (5,0.7) (6,0.7) (7,0.76)};
\nextgroupplot[xlabel=(b),xticklabels={SVM, NeuN, 1-NN, 2-NN, 3-NN, 4-NN, 8-NN},xtick={1,2,3,4,5,6,7}, bar width=0.15cm]
\addplot [bar shift=-.1cm, area legend, red!20!black, fill=gray!10] coordinates {(1,0.7) (2,0.7) (3,0.6) (4,0.56) (5,0.53) (6,0.56) (7,0.6)};
\addplot [bar shift=.1cm, area legend, red!20!black, fill=gray!77] coordinates {(1,0.7) (2,0.76) (3,0.73) (4,0.66) (5,0.66) (6,0.7) (7,0.7)};
\nextgroupplot[xlabel=(c),xticklabels={SVM, NeuN, 1-NN, 2-NN, 3-NN, 4-NN, 8-NN},xtick={1,2,3,4,5,6,7}, bar width=0.15cm]
\addplot [bar shift=-.1cm, area legend, red!20!black, fill=gray!10] coordinates {(1,0.73) (2,0.66) (3,0.6) (4,0.6) (5,0.66) (6,0.5) (7,0.6)};
\addplot [bar shift=.1cm, area legend, red!20!black, fill=gray!77] coordinates {(1,0.76) (2,0.7) (3,0.66) (4,0.76) (5,0.76) (6,0.76) (7,0.76)};
\nextgroupplot[xlabel=(d),xticklabels={SVM, NeuN, 1-NN, 2-NN, 3-NN, 4-NN, 8-NN},xtick={1,2,3,4,5,6,7}, bar width=0.15cm]
\addplot [bar shift=-.1cm, area legend, red!20!black, fill=gray!10] coordinates {(1,0.73) (2,0.66) (3,0.66) (4,0.63) (5,0.66) (6,0.63) (7,0.5)};
\addplot [bar shift=.1cm, area legend, red!20!black, fill=gray!77] coordinates {(1,0.8) (2,0.73) (3,0.66) (4,0.73) (5,0.63) (6,0.7) (7,0.7)};
\end{groupplot} \node at (9.0,0.2) [inner sep=0pt,anchor=north, yshift=-5ex] {\ref{grouplegend}}; \end{tikzpicture}% } \caption{MS sampling with: (a) $r=0.87$, (b) $r=0.76$, (c) $r=0.6$, (d) $r=0.4$. FPS with: (a) $220$ points, (b) $156$ points, (c) $119$ points, (d) $97$ points.
(Note: MS sampling and FPS points are used for the Rips filtration.)} \label{fig:Data-set2-Simulation-VR} \end{figure*} \begin{figure*} \centering \resizebox{1.01\textwidth}{!}{% \begin{tikzpicture} \begin{groupplot} [group style={columns=4}, ybar, tickpos=left, ylabel=Accuracy, height=4cm, width=5.35cm, ymax=0.85, tick label style={font=\tiny}, label style={at={(0.18,0.45)},font=\tiny}, x label style={at={(0.50,0)},font=\normalsize}, ymin=0.45, legend columns=-1, legend style={font=\tiny}, legend entries={FPS,MS Sampling}, legend to name=grouplegendW ]
\nextgroupplot[xlabel=(a),xticklabels={SVM, NeuN, 1-NN, 2-NN, 3-NN, 4-NN, 8-NN},xtick={1,2,3,4,5,6,7}, bar width=0.15cm]
\addplot [bar shift=-.1cm, area legend, red!20!black, fill=gray!10] coordinates {(1,0.7) (2,0.63) (3,0.53) (4,0.53) (5,0.6) (6,0.56) (7,0.66)};
\addplot [bar shift=.1cm, area legend, red!20!black, fill=gray!77] coordinates {(1,0.6) (2,0.7) (3,0.6) (4,0.56) (5,0.6) (6,0.63) (7,0.8)};
\nextgroupplot[xlabel=(b),xticklabels={SVM, NeuN, 1-NN, 2-NN, 3-NN, 4-NN, 8-NN},xtick={1,2,3,4,5,6,7}, bar width=0.15cm]
\addplot [bar shift=-.1cm, area legend, red!20!black, fill=gray!10] coordinates {(1,0.63) (2,0.66) (3,0.53) (4,0.56) (5,0.53) (6,0.6) (7,0.6)};
\addplot [bar shift=.1cm, area legend, red!20!black, fill=gray!77] coordinates {(1,0.73) (2,0.7) (3,0.63) (4,0.63) (5,0.66) (6,0.7) (7,0.8)};
\nextgroupplot[xlabel=(c),xticklabels={SVM, NeuN, 1-NN, 2-NN, 3-NN, 4-NN, 8-NN},xtick={1,2,3,4,5,6,7}, bar width=0.15cm]
\addplot [bar shift=-.1cm, area legend, red!20!black, fill=gray!10] coordinates {(1,0.5) (2,0.5) (3,0.63) (4,0.66) (5,0.66) (6,0.73) (7,0.66)};
\addplot [bar shift=.1cm, area legend, red!20!black, fill=gray!77] coordinates {(1,0.66) (2,0.66) (3,0.66) (4,0.56) (5,0.6) (6,0.63) (7,0.66)};
\nextgroupplot[xlabel=(d),xticklabels={SVM, NeuN, 1-NN, 2-NN, 3-NN, 4-NN, 8-NN},xtick={1,2,3,4,5,6,7}, bar width=0.15cm]
\addplot [bar shift=-.1cm, area legend, red!20!black, fill=gray!10] coordinates {(1,0.73) (2,0.66) (3,0.46) (4,0.53) (5,0.53) (6,0.5) (7,0.56)};
\addplot [bar shift=.1cm, area legend, red!20!black, fill=gray!77] coordinates {(1,0.73) (2,0.7) (3,0.6) (4,0.76) (5,0.66) (6,0.66) (7,0.63)};
\end{groupplot} \node at (9.0,0.2) [inner sep=0pt,anchor=north, yshift=-5ex] {\ref{grouplegendW}}; \end{tikzpicture}% } \caption{MS sampling with: (a) $r=0.87$, (b) $r=0.76$, (c) $r=0.6$, (d) $r=0.4$. FPS with: (a) $220$ points, (b) $156$ points, (c) $119$ points, (d) $97$ points. (Note: MS sampling and FPS points are used as landmark points of the witness complex filtration.)} \label{fig:Data-set2-Simulation-W} \end{figure*} \begin{figure} \centering \begin{subfigure}{0.49\textwidth} \centering \includegraphics[height=7.0cm, width=7.4cm]{MDSPLOTMS1} \vspace{-2em} \caption{} \label{Fig:MDSMS} \end{subfigure}% ~ \begin{subfigure}{0.49\textwidth} \centering \includegraphics[height=7.0cm, width=7.4cm]{MDSPLOTFPS1} \vspace{-2em} \caption{} \label{Fig:MDSFPS} \end{subfigure} \vspace{-0em} \caption{(a) MDS plot obtained from MS sampling at $r=0.76$. (b) MDS plot obtained from FPS with 156 points.} \label{fig: MDS_Plots} \end{figure} Figure~\ref{fig:Data-set2-Simulation-VR} shows a comparison of the prediction accuracy between Algorithm~1 with the VR construction (i.e., with MS sampling) and the benchmark with the VR construction (i.e., with FPS) under the different classifiers considered. The results show that in the case of SVM and NeuN, the predictive accuracy of Algorithm~1 is at least $70\%$.
In all these cases Algorithm~1 outperforms the benchmark, except in the case of SVM at $r=0.76$, where the two algorithms are comparable; see Figure~\ref{fig:Data-set2-Simulation-VR}(b). Algorithm~1 with $k$-NN for $k=1,2,3,4,8$ also outperforms the benchmark in most of the cases. Further, note that the accuracy of Algorithm~1 is no less than $66\%$ for all $k$-NN classifiers. Comparison of Figures~\ref{fig:Data-set2-Simulation-VR}(a) and (d) suggests that the performance of Algorithm~1 even with $97$ points on average is comparable with that of the benchmark with $220$ points. Figure~\ref{fig:Data-set2-Simulation-W} shows a comparison of the prediction accuracy between Algorithm~1 and the benchmark with the witness construction. The results closely resemble the key behaviors we have already noted in the previous simulation. Roughly speaking, Algorithm~1 outperforms the benchmark in many cases, irrespective of the underlying classifier. The performance of Algorithm~1 over the benchmark depicted in Figures~\ref{fig:Data-set2-Simulation-VR} and \ref{fig:Data-set2-Simulation-W} is further justified by the MDS plots shown in Figure~\ref{fig: MDS_Plots}. It is evident that Algorithm~1 has been able to capture extra features, as the spread of the second coordinate is significantly larger than that of the benchmark. As a result, Algorithm~1 clearly provides a better resolution of the points than the benchmark, in which points are mostly described by a single dimension (the first principal component). Though, in this particular case, the results of the SVM are comparable, the neural network has been able to take advantage of this; see~Figure~\ref{fig:Data-set2-Simulation-VR}(b). The results suggest that MS sampling, which uses additional structure of the data to sample points intelligently, can yield better performance than FPS. They also suggest that the inclusion of ``landmark points'' or ``critical points'' with the aid of such a sampling method can improve the results in certain applications that compute persistent homology. \FloatBarrier \section{Conclusion}\label{sec:Conclusion} The sampling method proposed to aid the computation of the persistent homology of a point cloud has been empirically shown to be superior to the popular furthest point sampling (FPS) strategy. It has been shown that the method effectively approximates the underlying space of a point cloud based on a sample of critical points of a Morse function~$f$, yielding the said advantage. A test case was considered, using images from the database FERET, to classify human faces according to ethnicity. The method with the proposed sampling gave comparable results using on average about half as many~points as the benchmark with FPS. Thus, with an appropriate classifier (e.g., neural networks), the proposed method can compute persistent homology with less computational burden, using a significantly smaller sample to achieve a given accuracy. In particular, it is evident that the proposed method is more effective than FPS in cases where the critical points of $f$ play a decisive role in determining persistent homology. A future direction is to consider the adaptability of the proposed approach to \emph{discrete} MS complexes, which can be computed efficiently~\cite{Gunther-Reininghaus-etal-2011,Gunther-Reininghaus-etal-2012,Gyulassy-etal-2008,Robins-Wood-Sheppard-2011,Shivashankar-Natarajan-2012,Gyulassy-Pascucci-etal-2012,Gyulassy-Gunther-etal-2014}. \bibliographystyle{IEEEtran}
\section{INTRODUCTION} Many years of study of the light curves and spectra of type Ia supernovae have revealed that a thermonuclear runaway induced by accretion on a white dwarf of $\sim$1 M$_{\sun}$ (Hoyle \& Fowler 1960) produces the energy to eject the entire star at $\sim$10,000 km s$^{-1}$ and synthesizes $\sim$0.1--1.0 M$_{\sun}$ of radioactive $^{56}$Ni. Prior to the maximum luminosity, the ejecta are opaque and both the explosion energy and the energy from the $^{56}$Ni$\rightarrow^{56}$Co$\rightarrow^{56}$Fe decays are deposited in and diffuse outward through the ejecta. After the bolometric light curve reaches its peak, the luminosity approaches the instantaneous decay power (Arnett 1980; 1982) and the luminosity decline follows the decay of $^{56}$Co, with $\gamma$-ray scattering and absorption dominating energy deposition. Later, due to expansion, the $\gamma$-ray optical depth decreases and the bolometric light curve decline becomes steeper than the $^{56}$Co decay. By $t \simeq$ 200$^d$, almost all $\gamma$-ray photons escape the ejecta directly and positrons from the decay of $^{56}$Co provide the main source of energy deposition into the ejecta. At this time, the light curve settles onto another, lower, $^{56}$Co decay curve. Beyond this time the light curve evolution depends on the details of positron transport and energy loss. Here we calculate energy deposition in type Ia supernovae (SNe Ia) for $t \geq$ 60$^{d}$, when the photon diffusion time is short and the bolometric luminosity is almost entirely due to gamma-ray and positron energy loss processes. There has been significant progress in understanding the physics of the early type Ia light curves. Detailed radiation transport and expansion opacity computations have reproduced observations of the bolometric and filter light curves (e.g., Pinto \& Eastman 1998; H\"oflich, M\"uller, \& Khokhlov 1993; Woosley \& Weaver 1986). It is clear that the light curve's rapid early evolution is due to the explosion of a low mass compact object with a short radiative diffusion time (Arnett 1982; Pinto \& Eastman 1998). The secondary IR maximum is the result of a photospheric radius that continues to expand after visual maximum. The differences in pre-maximum rise-times among observed SNe can be explained, in model-dependent terms, as due to differences in the transition from a deflagration flame to a detonation. The connections of peak luminosity, light curve width, and post-maximum slope with the ejecta mass, $^{56}$Ni location, kinetic energy of explosion, and opacity are explored in an elegant paper by Pinto \& Eastman (1998). Comparisons between early observed light curves and various theoretical models have been made extensively in recent years (H\"oflich, Wheeler, \& Thielemann 1998; H\"oflich et al. 1996; H\"oflich \& Khokhlov 1996, hereafter HK96). We take the results of these works as given and extend the comparison of the models and observations to later times. For a few bright SNe Ia in the past half-century, good light curves beyond $t = 200^d$ have been obtained and studied. The first work that utilized the information from late light curves was done by Colgate, Petschek, \& Kriese (1980). They showed that the late photographic and B band light curves of SN 1937C and SN 1972E can be explained with positron kinetic energy deposition from $^{56}$Co decays, as suggested by Arnett (1969). Recently Colgate (1991, 1997) analyzed late light curves with positron transport to infer the ejecta mass.
They found that ejecta somewhat transparent to positrons were required to fit the observed light curves. Cappellaro et al. (1997) (hereafter CAPP) studied the influence of the mass of the ejecta and the total mass of $^{56}$Ni on SN Ia light curves, and concluded that the fraction of positron energy deposition varies among the five SNe Ia they analyzed, from complete transparency to complete trapping of the positrons. Ruiz-Lapuente \& Spruit (1998) (hereafter RLS) studied the effects of magnetic field geometry and magnitude in exploding white dwarfs to explain observed SNe Ia bolometric light curves. They found that the SN 1991bg magnetic field was weak ($\leq$10$^3$ G), while for SN 1972E full trapping of positrons by a tangled magnetic field in a Chandrasekhar mass model was required. In addition to furthering our understanding of SNe Ia, we are motivated by the possibility that the Galactic diffuse 511 keV emission might be due to the annihilation of positrons that escape from SNe Ia, in addition to other possible sources such as core-collapse SN radioactivity and compact objects. The long lifetime of energetic positrons in the ISM (10$^3$--10$^7$ yrs) would mean that positrons from many SNe Ia would appear as a diffuse component of the 511 keV line emission. Chan \& Lingenfelter (1993) (hereafter CL) calculated the number of surviving positrons from type Ia supernova models for different magnetic field assumptions. Combining their positron yields with supernova rate estimates, they suggested that type Ia SNe could provide from an insignificant to a dominant contribution to the observed diffuse Galactic annihilation radiation flux, depending on the magnetic field geometry. An average escape of a few per cent of the $^{56}$Co positrons is necessary to contribute to the Galactic emission; i.e., $\geq$50\% of those emitted after one year would have to escape. If the escapees take a significant fraction of their kinetic energy with them, we might be able to see the power deficit in the measured light curves. Thus we hope to use observed light curves to determine the SNe Ia contribution to Galactic positrons and obtain information about the magnetic fields in SNe Ia. Because of the variety of observed SN Ia properties and the numerous physical details that remain uncertain, many models remain viable (for a review see Nomoto et al. 1996). We do not try to make judgments among them. We calculate the late bolometric light curves of deflagration, delayed detonation, pulsed-delayed detonation, He detonation, and accretion-induced collapse (AIC) models for comparison with observed light curves. For each observed SN we consider the model(s) suggested by other authors to agree with observations near maximum light. We begin by fitting each model to the observations (to the inferred bolometric light curve when available) in the interval 100--200 days, when gamma-ray energy deposition is still important. We then follow each model to later times for various choices of magnetic field configuration and assumed ionization state of the ejecta, and compare the calculated power deposited to the observed light curves. In addition to the specific light curve consequences of positron transport we examine some more generic features of the late light curves. We show that the light curve decline rate between 60 and 200 days after the explosion can be used to constrain the total ejecta mass and expansion velocity of the supernova models.
We find that the transition time between the gamma-ray and positron dominated phases is a useful diagnostic to measure the ejecta mass. In addition, the slope of the light curve at $t\geq$200 days is a good indicator of the magnetic field configuration in the ejecta. As a baseline, we compare the calculated light curves from positron transport with the light curve that results from the immediate, in-situ kinetic energy deposition assumption, which has typically been made in early light curve calculations. The physics of energy deposition in type Ia SNe is discussed in Section 2, followed by a discussion of the strength and geometry of the magnetic field in the post-explosion ejecta. Our bolometric light curves for both turbulent and radial field geometries are shown and described in Section 3 for the various models used in our study. The issues confronting the fitting of model-generated light curves to actual observations are discussed in Section 4. In Section 5, we combine model predictions with SN data to fit a few of the best observed SNe. We conclude with Section 6, in which we use the positron yields from our models to estimate the type Ia contribution to the observations of 511 keV annihilation radiation in the Galaxy. \section{Energy Deposition in SN Models} The $^{56}$Ni $\rightarrow$ $^{56}$Co ($\tau \sim$ 8.8$^{d}$) decay proceeds via electron capture (100\%) and produces photons and neutrinos. The line photons have energies between 0.16 and 1.56 MeV, with a mean energy in photons of 1.75 MeV per decay. The $^{56}$Co $\rightarrow$ $^{56}$Fe ($\tau \sim$ 111$^{d}$) decay proceeds 81\% of the time via electron capture and 19\% via positron emission ($\beta^{+}$ decay). The line photons have energies between 0.85 and 3.24 MeV, with a mean energy in photons per decay of 3.61 MeV. Segr\`e (1977) derived the distribution of positron kinetic energy (which is shown in figure \ref{emit}). The mean positron kinetic energy is 0.63 MeV, so when multiplied by the branching ratio 0.19, the mean positron energy per decay is 0.12 MeV. The ratio of the mean photon energy to the mean positron kinetic energy is then $\sim$ 30. The photon luminosity will dominate until the deposition fraction for positrons (f$_{e^{+}}$) is $\sim$ 30 times larger than the deposition fraction for photons (f$_{\gamma}$). The energy deposition from both gammas and positrons depends upon two factors: how much material must be traversed en route to the surface, and the interaction cross sections. The transport of gamma rays and positrons will be developed separately. \subsection{Gamma-Ray Transport} The gamma-line photons from the decays of $^{56}$Ni$\rightarrow ^{56}$Co$\rightarrow ^{56}$Fe are mostly Compton scattered to lower energy during the early phases of the Type Ia event. These photons then escape as X-ray continuum or are absorbed by material in the ejecta via the photo-electric effect. Due to the supernova expansion, the gamma-line optical depth decreases, becoming low enough at late times that most gamma-line photons escape directly. In calculating the energy deposition and the energy escape fraction of the photon energy created by the radioactive decays, we simulate the scattering adopting the prescription of Pozdnyakov, Sobol, \& Sunyaev (1983). A detailed description of the Monte Carlo algorithm and its application in calculating the spectra and bolometric light curves of SN 1987A and type Ia supernovae has been presented by The, Burrows, \& Bussard (1990) and Burrows \& The (1990).
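The source term for this Monte Carlo is fixed by the decay energetics quoted above. As a point of reference only (this is not our transport code), the following back-of-the-envelope Python sketch evaluates the instantaneous radioactive power generated per solar mass of initial $^{56}$Ni, as if every photon and positron deposited its energy locally:
\begin{verbatim}
import numpy as np

M_SUN = 1.989e33                    # g
N0 = M_SUN / (56 * 1.6605e-24)      # 56Ni nuclei per solar mass
TAU_NI, TAU_CO = 8.8, 111.0         # mean lifetimes (days)
E_NI = 1.75                         # MeV in photons per 56Ni decay
E_CO_PH, E_CO_POS = 3.61, 0.12      # MeV per 56Co decay: photons, e+ KE
MEV, DAY = 1.602e-6, 86400.0        # erg/MeV, s/day

def decay_power(t):
    """Photon and positron-KE generation rates (erg/s) at t (days)."""
    n_ni = N0 * np.exp(-t / TAU_NI)
    # Bateman solution for the 56Ni -> 56Co -> 56Fe chain.
    n_co = N0 * TAU_CO / (TAU_CO - TAU_NI) * (
        np.exp(-t / TAU_CO) - np.exp(-t / TAU_NI))
    r_ni = n_ni / (TAU_NI * DAY)    # decays per second
    r_co = n_co / (TAU_CO * DAY)
    return ((r_ni * E_NI + r_co * E_CO_PH) * MEV,   # photons
            r_co * E_CO_POS * MEV)                  # positron kinetic energy
\end{verbatim}
The actual deposited power differs from these generation rates by the escape fractions computed below.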
Figure \ref{fracE} shows the fraction of the total decay energy that escapes directly as gamma-lines and emerges as X-ray continuum (f$_{\gamma +X}$) as well as the fractions deposited by scattering (f$_{scat}$) and photoelectric absorption (f$_{PE}$) as functions of time for model W7 (Nomoto, Thielemann \& Yokoi 1984), which we use for illustration here. The solid curve labeled with f$_{dep}$ (= f$_{scat}$ + f$_{PE}$) is the total gamma-ray energy deposition fraction. Scattering dominates over photoelectric absorption at all times, the photoelectric absorption becoming relatively less important as there are fewer multiply scattered photons at late times. The overall photon deposition fraction scales roughly as t$^{-2}$, reflecting the time dependence of the column depth to the surface from any location in the ejecta. \subsection{Positron Transport} The cross sections for the various modes of positron annihilation are strongly energy-dependent and favor low energies. Energetic positrons will move through the ejecta until slowed to thermal velocities and then quickly annihilate.\footnote{The lifetime of positrons once thermalized is small compared to the slowing time and to the decay timescale, so for this study we set it equal to zero.} Five energy-loss mechanisms were included: ionization and excitation of bound electrons, synchrotron emission, bremsstrahlung emission, inverse Compton scattering, and plasma losses. The first four are described in detail in CL. We include these processes in a calculation of model W7, first assuming a uniformly 1\% ionized medium (i.e., 0.01 free electrons per nucleus) and then a uniformly triply ionized medium. The results, shown in figure~\ref{eloss}, support CL's conclusion that for low ionization, the interactions with bound electrons dominate the energy loss. For higher ionization, we find that plasma energy losses must be included. Regardless of the ionization, processes other than ionization and excitation and plasma losses can be ignored. The ionization and excitation energy loss rate used is (Heitler 1954; Blumenthal \& Gould 1970; Gould 1972; Berger \& Seltzer 1954), \begin{displaymath} \left( \frac{dE}{d \xi} \right)_{ie} = - \frac{4 \pi r_{o}^{2} m_{e} c^{2} Z_{B}} {\beta ^{2} A m_{n}} \left[ \ln \left( \frac{ \sqrt{\gamma -1}\, \gamma \beta}{\frac{ \overline{I}} {m_{e}c^{2}}} \right) + \frac{1}{2} \ln 2 \right. \end{displaymath} \begin{equation} \left. - \frac{ \beta^{2}}{12} \left( \frac{23}{2} + \frac{7}{\gamma +1} + \frac{5}{ (\gamma + 1)^{2}} + \frac{2}{ (\gamma +1)^{3}} \right) \right], \end{equation} \noindent where E is the positron's kinetic energy, r$_{o}$ is the classical electron radius, and m$_{e}$ and m$_{n}$ are the electron mass and the atomic mass unit, respectively. Z$_{B}$ and A are the effective nuclear charge and atomic mass of the ejecta. $\overline{I}$ is the effective ionization potential and is approximated by Segr\`e (1977) and by Roy \& Reed (1968) as, \begin{equation} \overline{I} = 9.1 Z \left( 1 + \frac{1.9}{Z^{\frac{2}{3}}} \right) eV . \end{equation} The plasma energy loss was described by Axelrod (1980). The formula is the same as for ionization and excitation, except that $\hbar \omega_{p}$ is inserted in the calculation of the maximum impact parameter (b$_{max}$) rather than the mean ionization potential ($\overline{I}$), and $\chi_{e}$, the number of free electrons per nucleus (hereafter the ionization fraction), is used rather than Z$_{B}$. Thus, (Z$_{B}$ + $\chi_{e}$)$\cdot$n = Z$\cdot$n is the total electron density.
Ignoring small differences in the relativistic corrections, when the two energy losses are summed, \begin{displaymath} \left( \frac{dE}{d \xi} \right)_{Total} = \left( \frac{dE}{d \xi} \right)_{ie} + \left( \frac{dE}{d \xi} \right)_{Plasma} \end{displaymath} \begin{equation} = - \frac{4 \pi r_{o}^{2} m_{e} c^{2} P(E)} {\beta ^{2} A m_{n}} \left[Z \ln \left( \sqrt{\gamma -1}\, \gamma \beta m_{e}c^{2} \right) -Z_{B} \ln \overline{I} -\chi_{e} \ln (\hbar \omega_{P}) \right], \end{equation} \noindent where P(E) is the relativistic correction. There is an ionization fraction dependence due to the inequality between $\ln \overline{I}$ and $\ln (\hbar \omega_{P})$. An ionized medium is more efficient at slowing positrons than is a neutral medium. The ionization fraction must be known as a function of mass, radius, and time to determine the energy deposition exactly, but is not currently well understood. Pinto \& Eastman (1996), Liu, Jeffery \& Schultz (1997), and Fransson \& Houck (1996) arrived at different ionization structures for similar models. Unable to improve this situation, we choose to calculate a range of ionization fractions to bracket the possibilities. We consider extreme values of the ionization fraction, 0.01 and 3.0, because the actual values are almost certainly between these during the times we consider. The positron transport was done with a 1D Monte Carlo code. SN models were reduced to 75--150 zones and up to 34 elements. Positrons were emitted at equally spaced time intervals from each zone (at volume-weighted, random locations within the zone), weighted according to the $^{56}$Ni(zone,t). Positrons were followed in steps of equal column depth; the time, energy, radial distance, zone, pitch angle, and annihilation probability were re-evaluated at each step. The range of positrons in various solid media has been measured in the laboratory (ICRU Report \#37 (1984)). Figure \ref{range} compares the laboratory results with the ranges of individual positrons through SN ejecta, combining the data from a number of models. The model results for low ionization are in good agreement with the laboratory results, whereas the higher ionization leads to the medium being 2--3 times more efficient at stopping positrons. The spread of ranges for the triply ionized ejecta reflects the variation of the ejecta's composition. A positron will have a longer range in triply ionized iron than in triply ionized carbon because of the $\frac{\chi_{e}}{A}$ dependence of the more efficient energy loss to the plasma. It is unlikely that the zones rich in C, O, Si and the other intermediate elements will maintain as high a level of ionization as the Fe zones (where the decays occur), so the uniformly triply-ionized approximation is a lower limit for positron escape. The 1\% ionized ranges have less scatter because the ionization-excitation range depends upon $\frac{Z_{B}}{A}$, and with $Z_{B} \approx Z$, $\frac{Z_{B}}{A} \approx$ 0.5 in both Fe-rich zones and C-rich zones. The range determines the escape fraction because when the column depth to the surface at a given radius is less than the range for a given emission energy, a positron of that energy can escape (after it deposits a fraction of its energy in the ejecta). A number of groups have transported positrons with the same treatment used with photons, assigning a $\kappa_{e+}$, an ``effective positron opacity'', incorrectly giving them an exponential distribution of ranges.
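To illustrate the deterministic (as opposed to exponential) character of positron slowing, the following sketch (ours) integrates the leading ionization/excitation terms of the loss rate given above to obtain a range in column depth; the $\beta^{2}/12$ correction terms are dropped, a neutral pure-iron composition ($Z_{B} \approx Z = 26$) is assumed, and constants are in cgs.
\begin{verbatim}
import numpy as np

R0, ME_C2, AMU = 2.818e-13, 0.511, 1.6605e-24   # cm, MeV, g

def ibar(Z):
    """Effective ionization potential (MeV), from the expression above."""
    return 9.1 * Z * (1 + 1.9 / Z**(2.0 / 3.0)) * 1e-6

def loss(E, Z=26, A=56):
    """Leading ionization/excitation loss, -dE/dxi (MeV per g/cm^2)."""
    gamma = 1.0 + E / ME_C2
    beta2 = 1.0 - 1.0 / gamma**2
    pref = 4 * np.pi * R0**2 * ME_C2 * Z / (beta2 * A * AMU)
    arg = np.sqrt(gamma - 1) * gamma * np.sqrt(beta2) * ME_C2 / ibar(Z)
    return pref * (np.log(arg) + 0.5 * np.log(2))

def positron_range(E0, dxi=1e-4):
    """Column depth (g/cm^2) traversed while slowing from E0 to ~1 keV."""
    E, xi = E0, 0.0
    while E > 1e-3:
        E -= loss(E) * dxi
        xi += dxi
    return xi
\end{verbatim}
For the mean emission energy, \texttt{positron\_range(0.63)} gives a few tenths of a g cm$^{-2}$, consistent with the low-ionization $\kappa_{e^{+}}$ quoted below.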
We do not use $\kappa_{e+}$, but we can estimate its value for comparison from our results. Defining $\kappa_{e^{+}}$ as the mean of the inverse of the range, expressed in units of inverse column depth (cm$^{2}$ g$^{-1}$), we find that for $\chi_{e}$ = 0.01, $\kappa_{e^{+}}$ = 4, while for $\chi_{e}$ = 3, $\kappa_{e^{+}}$ = 14. Colgate transported positrons with $\kappa_{e^{+}}$ = 10 cm$^{2}$ g$^{-1}$, as did Ruiz-Lapuente. Axelrod argued for $\kappa_{e^{+}}$ = 7 cm$^{2}$ g$^{-1}$, which was the nominal value in most fits shown by CAPP. \subsection{Magnetic Field Considerations} The efficiency of matter at slowing positrons is one factor in determining the positron escape; the quantity of matter traversed en route to the surface is another. The effects of the total mass and the nickel distribution will be discussed in Section 4, but these are secondary to the effects of the magnetic field (CL), which will force positrons to follow curved paths. The progenitor white dwarfs have been observed to have initial field strengths of 10$^{5}$--10$^{9}$ Gauss (Leibundgut 1995). The field is assumed to diffuse on a long time-scale relative to the positron lifetimes, so the flux is treated as frozen-in. Thus the expansion causes the field strength to evolve according to the relation $B(r) = B(r_{o})\, r_{o}^{2}\, r^{-2}$. If the resulting field is strong enough to make the positron gyroradius smaller than the ejecta at a given time, the geometry of the field must be considered. Little is known about SN Ia magnetic fields, so we consider three scenarios: a radial magnetic field, a disordered field, and no field. Thus we also bracket the extreme magnetic field configurations, and late observations of SN Ia luminosities are potentially probes of the field characteristics. A radial field is an approximation of the effect of the rapid, homologous expansion of the ejecta stretching out the arbitrarily oriented field lines as the ejecta grows in scale by a factor of 10$^{7}$. Positrons spiral inward or outward on these field lines, changing pitch angle due to mirroring and beaming, as described by CL. These effects bend trajectories radially outward, so this ``radial scenario'' is a favorable one for positron escape. A turbulent, confining field is presumably adjusted or generated by expansion dynamics (RLS), with the limiting case of a positron having no net radial motion (in mass coordinates) as it meanders near the location of its emission zone for all times. This is called the ``trapping scenario''. There is some survival of positrons in place, if their slowing times exceed the age of the supernova, but by definition there is no escape. The third scenario ignores magnetic fields, so the positrons travel straight-line trajectories. This is referred to as the ``free scenario''. \section{Bolometric Light Curves} Combining the results of the gamma-ray and positron transport calculations, we obtain the rate of energy deposition throughout the SN ejecta. This energy is then mainly contained in suprathermal free electrons that slow down and recombine with ions into atoms, which then de-excite. As discussed for positrons above, once the electrons reach energies of a few keV, their slowing times and the durations of the subsequent processes are short. At late times all the optical photons generated escape immediately without interaction.
So, for the times of interest here, the major difference between the instantaneous radioactive decay power and the supernova (optical) luminosity arises from the propagation times of the gamma rays and positrons, and we therefore treat the calculated power deposited and the bolometric luminosity as interchangeable. A typical calculated bolometric light curve (power deposition) is shown in figure \ref{bol}. \subsection{Time Evolution \protect\footnote{Unless otherwise stated, times quoted refer to time since explosion.}} The initial climb to peak luminosity is generated by energy deposition from the thermonuclear burning and from the $^{56}$Ni decays, followed by the diffusion of light to the photosphere. The width of the peak is governed by the diffusion time from the $^{56}$Ni to the photosphere. Observed SNe have shown variations of peak width, which should arise naturally out of any successful models. The factors that determine the light curve peak width are currently debated. H\"oflich (H\"oflich \& Khokhlov 1996) asserts that the ejecta mass, the $^{56}$Ni distribution, and the $^{56}$Ni mass all influence the peak width. He fits narrow peaks with models which underproduce $^{56}$Ni (relative to W7). Pinto \& Eastman (1996) assert that the peak width depends only upon the first two. Their radiation transport calculation showed that the energy deposition from the $^{56}$Ni adjusted the structure of the ejecta to make the peak width independent of the $^{56}$Ni production. They explain narrow peaks with low-mass models. Because we do not perform the optical radiation transfer, we cannot apply our calculations to this early epoch. We simply take models shown by other authors to fit individual supernova early light curves and spectra and calculate their subsequent bolometric light curves to be compared with late observations. By 60 days after the explosion, the photosphere has receded to the SN center; after this time our calculations should trace the bolometric light curve.\footnote{SN 1991bg may be the explosion of a 0.6 M$_{\odot}$ WD. If so, bolometric curves may fit the data as early as 30 days. The choice of 60$^{d}$ as the time to initiate light curve fitting follows from Leibundgut \& Pinto 1992.} The light curve decline after 60$^{d}$ is governed by the $^{56}$Co decay and the falling gamma-ray optical depth. Thus the light curve shape is a diagnostic of the mass overlying the Ni-rich zones and the velocity structure. The decline of the gamma deposition fraction (f$_{\gamma}$) makes the light curve steeper than the decay. Faster models reach the transition to positron-dominated power, which occurs when f$_{\gamma}$ = 0.03, earlier. The time of this transition varies among models, occurring at 130$^{d}$--520$^{d}$. The ejecta are still too dense at these times for the positrons to have appreciable lifetimes, so the light curves flatten toward the $^{56}$Co decay line (RLS). During this epoch, the various field geometry scenarios produce identical results. Further expansion of the ejecta leads to appreciable positron lifetimes, and in the radial and free scenarios, escape. Then the light curves begin to separate. For the radial and free scenarios, positron escape leads to kinetic energy loss from the system and a drop in the light curve.
For the trapping scenario, a finite positron lifetime combined with the exponential decline in the number of newly created positrons leads to a shallow dip followed by a late flattening of the light curve, as positron kinetic energy is stored and delivered a (lengthening) positron slowing time after emission. Figure \ref{intl} contrasts the integrated luminosities for the trapping and radial scenarios as compared to the instantaneous deposition approximation. The delayed luminosity is seen in figure \ref{bol}, as the trapping curve is brighter than the instantaneous deposition curve at late times. For the radial scenario, positrons (and thus kinetic energy) can leave the system, leading to a much larger deviation from the instantaneous deposition. As this energy is increasingly leaving the system at late times, the radial light curve in figure \ref{bol} is dimmer than either the instantaneous deposition curve or the trapping curve. As the radial and trapping curves diverge, the separation becomes great enough to be measurable, in principle. \subsection{Radial Magnetic Field vs No Field} It turns out that there is surprisingly little difference in the mean path traversed to the surface for these two cases. Assuming isotropic emission of positrons, the mean distance to the surface from a given point, for positrons not near the center of the ejecta, is substantially larger than the radial distance to the surface because of the large solid angle perpendicular to the radius. In the radial field case, mirroring and beaming turn positron trajectories outward, but the extra path due to the spiral around the field adds a distance similar to that in the no-field case. The net energy deposition and positron escape for the models we consider are almost indistinguishable for these two cases. This result was anticipated by Colgate (1996), and can be approximately demonstrated analytically. We display radial field calculations, but remind the reader that for our spherically symmetric models the conclusions apply equally to field-free scenarios. For a few of the models, there were slight deviations at late times (t $\geq$ 800$^{d}$, the field-free curves remaining steeper than the radial curves). We show field-free light curves only when the separation between these two cases is potentially detectable. \subsection{Energy Deposition at Very Late Times ($\sim$1000$^{d}$)} The effects of positron escape on the SN light curve are moderated somewhat by gamma-ray energy deposition. Longer-lived radioactivities that were overwhelmed at earlier epochs become important at late times. Two such radionuclides are $^{57}$Ni and $^{44}$Ti. The decay of $^{57}$Ni proceeds with $\tau_{Ni57}$ = 52$^{h}$ and $\tau_{Co57}$ = 392$^{d}$, thus at 500$^{d}$, 28\% of the $^{57}$Co has yet to decay. The $^{57}$Co decay energy is only 1/10 that of the $^{56}$Co. For W7 the energy available from $^{57}$Co equals that available from $^{56}$Co at 1000$^{d}$. Other models have similar $^{57}$Ni/$^{56}$Ni ratios. $^{44}$Ti is an even longer-lived radionuclide, with a mean-life recently estimated to be 85$^{y} \pm$ 1$^{y}$ (Ahmad et al. 1997). The 1.3 MeV decay energy is substantial, but the slow decay rate delays the cross-over to $^{44}$Ti dominated deposition until 2500$^{d}$, an epoch at which no SN Ia has been detected. Five models had the $^{57}$Co and $^{44}$Ti masses included explicitly. For the other models, we set M$_{Co57}$ = 0.041 M$_{Co56}$ and M$_{Ti44}$ = 1.5 $\times$ 10$^{-5}$ M$_{Co56}$, to match W7.
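The earlier decay-power sketch extends naturally to these chains. In the following (again a rough sketch of ours, not our tabulated inputs), the per-decay energies are the indicative values implied by the text, and all transport and escape is ignored:
\begin{verbatim}
import numpy as np

DAY, MEV = 86400.0, 1.602e-6
N56 = 1.989e33 / (56 * 1.6605e-24)      # parent nuclei per M_sun of 56Ni
# Daughter numbers scaled to W7 via the mass ratios quoted above.
N57 = 0.041 * N56 * 56.0 / 57.0
N44 = 1.5e-5 * N56 * 56.0 / 44.0

def power(t, n0, tau, e_mev):
    """Decay power (erg/s) of one species; t and tau in days."""
    return n0 * np.exp(-t / tau) / (tau * DAY) * e_mev * MEV

for t in (500.0, 1000.0):
    # 56Co treated as if all 56Ni decayed promptly (t >> tau_Ni here).
    p56 = power(t, N56, 111.0, 3.73)         # photons + e+ KE per decay
    p57 = power(t, N57, 392.0, 0.37)         # ~1/10 of the 56Co energy
    p44 = power(t, N44, 85 * 365.25, 1.3)    # 1.3 MeV per 44Ti decay
    print(f"{t:6.0f} d: {p56:.2e} {p57:.2e} {p44:.2e} erg/s")
\end{verbatim}
Run at $t=1000^{d}$, the $^{57}$Co and $^{56}$Co terms are indeed comparable; the observable cross-overs quoted above additionally depend on the escape fractions of each decay product.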
Figure \ref{pfrac} shows the fraction of the luminosity that is due to the deposition of positron kinetic energy. The positron deposition is dominant from 300$^{d}$--800$^{d}$ for the radial field geometry. The effects of including longer-lived radioactivities are shown by the splitting of the curves at late times. \subsection{Additional Potential Sources and Effects} There are a number of other potential sources of luminosity, both intrinsic and extrinsic. One extrinsic source is the ``light echo'', bright peak light scattered by dust back into the line of sight after light travel delays. SN 1991T was dominated by a light echo by day 600 (Schmidt et al. 1994), as discussed in Section 4. Another extrinsic source is the interaction of the ejecta with the surrounding medium. This interaction will eventually be important, but should be identified by distinctive spectral and temporal characteristics. The SN model light curves presented in this work are based on the assumption that, during the first 1--2 years, the deposited power is instantly radiated in the UVOIR bands. Furthermore, during this epoch, we usually have only the V and/or B band observations, instead of the bolometric luminosity, with which to compare models. Fitting model light curves to single or dual bands is susceptible to intrinsic spectral evolution effects. As the secondary electrons' lifetimes increase and the collisional de-excitation rates fall, a delay develops between the positron energy deposition and the emission of UVOIR light, an effect called ``freeze-out'' by Fransson (1996). If the Fe I and II states form in abundance without photoionization or charge exchange destruction, then [FeII]$\lambda$25.99$\mu$, $\lambda$35.35$\mu$, and [FeI]$\lambda$24.05$\mu$ fine structure emission lines are produced. These lines are beyond the UBVRI bands and are undetected. This effect is referred to as an infrared catastrophe (IRC) by Axelrod (1980). Both the freeze-out and IRC phenomena must be considered and are discussed in Section 5. It is clear that at very late times, many complicating effects, including new sources of luminosity and sinks outside the normally observed bands, can be important. Therefore we confine our conclusions to the times when $^{56}$Co decay dominates the power input and the B and V bands track the bolometric luminosity well. \subsection{SN Models} We considered twenty-one models, which span the range of ejecta mass, $^{56}$Ni mass, and kinetic energy. One deflagration (W7) (Nomoto, Thielemann \& Yokoi 1984), a normally luminous helium detonation (HED8) (HK96), two subluminous helium detonations (WD065, HED6) (Ruiz-Lapuente et al. 1993; HK96), two superluminous helium detonations (HECD, HED9) (Kumagai 1997; HK96), one accretion-induced collapse (ONeMg) (Nomoto 1996), eight delayed or late detonations (DD4, M36, M35, M39, DD2O2C, DD23c, W7DN, W7DT) (Woosley 1991; H\"oflich 1995; H\"oflich, Wheeler \& Thielemann 1998; Yamaoka 1992), three pulsed, delayed detonations (PDD3, PDD54, PDD1b) (H\"oflich, Khokhlov \& Wheeler 1995), and two mergers (DET2, DET2ENV6) were included. Their characteristics are shown in Table 1. In figure \ref{gamma} we show the luminosity due only to the gamma deposition. As expected, the total luminosity roughly traces the amount of $^{56}$Ni produced. The steepness of the early decline measures the mass overlying the Ni. The models then flatten, with slopes becoming nearly equal.
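The transition epochs discussed below can be anticipated with a rough estimate (ours, assuming full positron trapping): since f$_{\gamma}$ scales roughly as t$^{-2}$ (\S~2.1) and the mean positron kinetic energy per $^{56}$Co decay is $\sim$1/30 of the mean photon energy, gamma-ray and positron deposition are equal when f$_{\gamma} \simeq 0.03$, i.e., at \begin{displaymath} t_{tr} \simeq t_{1}\left[\frac{f_{\gamma}(t_{1})}{0.03}\right]^{1/2}, \end{displaymath} for any reference epoch $t_{1}$ in the gamma-dominated phase. Faster, lower-mass models have smaller f$_{\gamma}$ at fixed $t_{1}$ and therefore transition earlier.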
In figure \ref{fam400} we add the positron contribution to create bolometric light curves for radial field geometries, assuming 1\% ionization. The curves start at and are normalized to 60$^{d}$ to show the evolution of their shape from 60$^{d}$--400$^{d}$. There are often many observations during this epoch, so the different shapes provide a test of whether the model (regardless of field characteristics) fits the light curve. One interesting feature is the steepness of the Chandrasekhar mass models, in which the $^{56}$Ni is covered by a large overlying mass. We interpret this steepness to be due to the delayed onset of the positron-dominated epoch. The low mass models enter this epoch earlier, so they flatten toward the decay line. Table \ref{table1} shows the day when the gamma deposition falls to equal the positron deposition. HED6, WD065, ONeMg, and DET2 have relatively little mass and transition before 180$^{d}$; PDD1b and DET2ENV6 have much more overlying mass and transition at 300$^{d}$ or later. PDD1b thins to gamma photons so slowly that it remains as bright as the low mass models due primarily to gamma deposition. Other than the extreme model PDD1b, shallow declines after 60$^{d}$ suggest models with less mass overlying the $^{56}$Ni; steep declines suggest more overlying mass. This trend is also evident in the light curves calculated by RLS, who parametrize the slope at various epochs in their Table 3. The separation of WD065 and ONeMg from HED6 after 200$^{d}$ is due to the earlier survival of positrons in WD065 and ONeMg. Figure \ref{fam1000} is an extension of figure \ref{fam400} to 1000$^{d}$, but including the curves from positron trapping scenarios. This epoch emphasizes the differences in positron transport between the field geometries. The most noticeable feature is that the radial models are approximately equivalent and steep, whereas the trapping models flatten according to the percentage of $^{56}$Ni produced in the outer portions of the SN. RLS mention that the late light curves of trapping field scenarios flatten relative to instantaneous deposition, but state that the effect is small. Our results show the effect to be large. Thus, the 400$^{d}$--800$^{d}$ positron-dominated epoch is a diagnostic of field characteristics, not of model types. Also apparent in this figure is that the ``massive'' models show separation between the field scenarios much later and to a lesser degree than the rest of the models. Massive in this sense refers to a large mass of slow ejecta overlying the $^{56}$Ni-rich zones. The variety of shapes at early epochs and then the dramatic separation between the predictions of positron trapping in a turbulent field geometry and positron escape in a radial or weak field show that 60$^{d}$ and later bolometric light curves yield a wealth of information as to the structure and dynamics of the SN ejecta. \section{Comparison With Observed Light Curves} Ideally, to probe the SN structure using light curves, model-generated bolometric light curves are compared with observed bolometric light curves. Observed bolometric light curve reconstructions, to date, are at best based on measurements in the U, B, V, R, I bands, with some information from the J, H, and K bands. As the SN dims, the photometric uncertainties increase and for many light curves, the number of bands observed decreases, leading to less accurate bolometric reconstructions. We use the available bolometric light curves: SN 1992A to day 420 (Suntzeff 1996), SN 1991bg to day 220 (Turatto et al.
1996), SN 1972E to day 420 (Axelrod 1980)\footnote{We do not use the 720 day data point because we consider the extrapolation of an entire bolometric light curve from a 1200$\AA$ wide spectrum to be too uncertain.}, SN 1989B to day 135, and SN 1991T to day 108 (Suntzeff 1996). The best epoch to observe positron transport effects is the 500$^{d}$--900$^{d}$ range; only SN 92A and SN 72E extend close to that epoch. Nonetheless, the SN 91bg, SN 89B, and SN 91T light curves are valuable in checking the fit of a specific model during the gamma-dominated phase. For each model we consider for a particular event, we simply fit the calculated light curve to the ``observed'' bolometric light curve. Thus we avoid the uncertainties in distance, extinction, and bolometric correction. We then show the later, positron-dominated light curves relative to this fit for comparison to the more limited data in one or a few bands. RLS fit models to the bolometric luminosities of SN 1991bg and SN 1992A; Mazzali et al. (1996) fit SN 1991bg. In Sections 5.1, 5.2, and 5.4 we will discuss our results in relation to theirs. When there is insufficient data to reconstruct bolometric light curves we compare model bolometric shapes to individual band photometry. This assumes that at late epochs there is little color evolution. We must be able to rule out any increasing shift of the luminosity into unobserved bands. Lira (1998) showed that the collective tendency is for the color indices to stop evolving after day 100--120. In figure 10 we show the U, B, V, R, and I band variations of six of the SNe used in this study. B-V peaks around t$\simeq$30--40$^d$, then decreases to approximately zero near day 100 (except for SN 1991bg). At the (B-V) peak, the V band contains most of the energy; as the color declines to B-V $\simeq$ 0, the B band becomes a potentially important contributor. The V band is the best single band to trace the supernova bolometric light curve. The five bolometric light curves permit us to estimate the error introduced by fitting with only the V band. As shown in figure 11, the inaccuracies in using the V or B band data for the bolometric luminosity are $\leq$0.2$^m$ during the 60$^d$--120$^d$ epoch. A similar procedure of comparing model-generated bolometric light curves to band photometry was also employed by CAPP, who used V band data for every epoch. Theoretical arguments about color evolution have been made by Axelrod (1980) and by Colgate (1996), who disagree as to whether collisional or radiative processes dominate the emission from the recombination cascade. The competition between collisional and radiative processes hinges upon three factors: the level of ionization (temperature in the collisional scenario), the density, and the atomic cross sections. Fransson \& Houck (1996) addressed these issues, calculating multi-band model light curves. The technique worked well for the type II SN 1987A, but the type Ia light curves generated from the model DD4 showed a sharp decrease in U, B, V, R, I around day 500 due to the infrared catastrophe (IRC), and were inconsistent with SN 1972E. Why the IRC apparently does not occur in SNe Ia is something of a puzzle. The model DD3 contains more $^{56}$Ni (0.93 M$_{\odot}$ versus 0.62 M$_{\odot}$ for DD4) and thus maintains a higher level of ionization. This delayed the onset of the IRC, but still could not fit the observed light curves of SN 72E.
It is important for our discussion that any onset of the IRC is abrupt; it does not lead to a gentle decline of the light curve such as we find for positron escape. Fransson \& Houck (1996) also considered clumping in the ejecta. It might delay the onset of the IRC for the inter-clump regions, but it hastens its onset in the clumps. It remains to be seen if spectra modeled with clumping will reproduce the observed spectra. The atomic cross sections provide a third explanation for the lack of color evolution. If the radiative transition probabilities are underestimated, the radiative scenario might dominate. Fransson \& Houck increased the recombination rates by a factor of 3 to model the spectra; perhaps additional adjustments are required. Few of the SNe observed at late epochs show convincing color evolution. Two of the three SNe that continue to evolve after 120$^{d}$, 1986G and 1994D, seem to settle into constant color indices later. All the evidence suggests to us that the V (and probably B) band tracks the bolometric luminosity of SNe Ia at late times. We emphasize the SNe for which there are at least two measured bands for most of the observed light curve. \section{Comparison of Models to Individual SNe} The amount, type, and quality of data vary among SNe. A summary of the observations used in this study is listed in table \ref{sumobs}. A wider range of model fits to SNe is shown in Milne (1998). We primarily consider SN Ia models shown by others to describe well the early light curves and/or spectra of particular well-observed events, calculating their light curves to late times. Models are considered for a given event if they can reproduce some combination of the following features: distance-dependent peak luminosity estimates, rise-time to peak luminosity and peak width, early spectral features, and light curve shapes for multiple color bands. Generally the early data available do not uniquely define the model parameters, or even the basic type of model. For some cases, the suggested models are quite different. An example of this is SN 1991bg, which is fit with two low-mass models (a 0.6 M$_{\odot}$ AIC and a 0.65 M$_{\odot}$ HeDET), a 1.4 M$_{\odot}$ pulsed model, and a 1.4 M$_{\odot}$ deflagration model. Our first test is whether a model can fit the earlier nebular light curve, which probes the transition from gamma to positron domination. We then examine whether models that pass this first test can determine the magnetic field configuration and degree of ionization. Thus, this study is able to add another constraint to the SN fitting puzzle, as well as to determine whether positrons escape from SN ejecta. \subsection{SN 1992A in NGC 1380} SN 1992A occurred in the S0 galaxy NGC 1380 and was observed extensively. It is often held up as an example of a normal SN Ia. Extinction appears to be minimal, making SN 1992A an excellent candidate for photometric analysis. SN 1992A is one of three SNe treated in both RLS and CAPP. We show the fits of three models to SN 1992A: DD23C, M39, and HED8. DD23C was fit on the recommendation of Peter H\"oflich (1998), and also because Kirshner et al. (1992) fit the spectral data with a modified version of DD4, which is similar to DD23C. RLS combined the distance estimates to NGC 1380 with Suntzeff's UVOIR bolometric estimation to suggest that the peak bolometric luminosity was between 42.65 and 43.00 dex. This suggests that a sub-luminous model might be required.\footnote{A recent study by Suntzeff et al.
1998 suggests that NGC 1365 (treated to represent the center of the Fornax cluster) may be a foreground galaxy in the cluster. If this is the case, then SN 1992A may be only mildly sub-luminous, and at a distance in agreement with Branch et al. 1997.} M39 is a delayed detonation that has a peak bolometric luminosity of 43.06 dex. We choose it as a compromise between the delayed detonation scenario suggested by the spectra, and the low luminosity suggested by distance estimates.\footnote{We note that the B-V color of M39 is too red to fit 92A according to the model-generated light curves of H\"oflich, even with zero extinction.} CAPP listed the distance to NGC 1380 as 16.9 Mpc, with E(B-V) = 0.00$^{m}$, and fit the data with a 1.0 M$_{\odot}$ model which produced 0.4 M$_{\odot}$ of $^{56}$Ni. We instead use the similar model HED8. Figure \ref{bol92a} shows the fits of DD23C and HED8 to the bolometric light curves of SN 1992A. For this object the bolometric light curves are reconstructed to late enough times that we can use them directly to study the positron energy input. Assuming a 0.025 dex uncertainty for each data point, and varying only the overall amplitude, DD23C provided the best fit of the suggested models in the 55--420$^{d}$ epoch, fitting with 81\% confidence for the radial curve. The trapping version of that model was rejected at the 99.91\% confidence level even when renormalized. The numerical results are shown in table \ref{table2} for 1\% ionization, triple ionization with a radial field geometry, and for 1\% ionization with a trapping field. The low-mass model HED8 did not fit above the 10$^{-4}$ level for any scenario. RLS also fit a 0.96 M$_{\odot}$ model to SN 1992A. Our results are consistent with theirs in that the models are brighter than the data after 100 days. The CAPP fit for 7 $\leq$ $\kappa_{e+}$ $\leq$ 10 remained too bright from 20$^{d}$--320$^{d}$, also in agreement with our results. The best-fitting class of models was the delayed detonations; W7, DET2ENV6, and PDD54 were the only models other than delayed detonation models to fit at better than the 20\% confidence level. We show the fits of two models to the V data, DD23C and M39, in figure \ref{v92a}. Both models follow the falling luminosity better with positron escape than with trapping. Fitting time-invariant ionization scenarios to the V data with the published uncertainties, none of the models provided a fit better than $\chi^{2}$/DOF=18. The numerical fits to the V band data (table \ref{table2}) show that with the scatter in excess of the stated uncertainties, none of the models provide statistically acceptable fits. In absolute terms, our goodness-of-fit statistics are questionable, but the fits are better for radial scenarios for almost every model.\footnote{The model DET2ENV6 fits the late V band data well (but less so the bolometric data) and may warrant further investigation.} The other delayed detonation models yielded fits similar to DD23C and M39 for the V data, demonstrating that the observation of positron escape at late times is not strongly model-dependent. The 928$^{d}$ point might herald the onset of another source of power. Possibilities include other radioactivity, such as $^{22}$Na or $^{44}$Ti (but not $^{57}$Co, which is too weak), recombination energy lagging the earlier, higher ionization input (Clayton et~al. 1992, Fransson \& Kozma 1993), and positrons encountering circumstellar material, as well as others.
In summary for SN 1992A: delayed detonations are the most promising model types, all of which overproduce the late-time V light curve without escape of positrons. \subsection{SN 1991bg in NGC 4374} SN 1991bg occurred in the galaxy NGC 4374 and is the best-observed member of a class of sub-luminous SNe that have a fast decline from peak luminosity. SN 1991bg appeared to be very red at peak, suggesting either significant extinction ($\sim$ 0.7$^{m}$), an intrinsically red SN, or both. The fact that SN 1991bg and SN 1986G continued to show color evolution after 120$^{d}$ suggests that they were intrinsically different from normal SNe Ia. We fit SN 1991bg with three models that have been suggested to explain this SN (WD065, PDD54 and ONeMg), as well as a fourth model, W7. The models WD065 (a helium detonation) and ONeMg (an accretion induced collapse) decline faster from peak luminosity relative to 1.4 M$_{\odot}$ models because they have less overlying mass (but similar velocity structures), leading to a lower optical thickness. Ruiz-Lapuente (1993) fit WD065 to the maximum and nebular spectra, and later (RLS) to the bolometric light curve. PDD5 is a pulsed delayed-detonation model that produces very little $^{56}$Ni and maintains a lower level of ionization (relative to W7 and normally luminous models), which decreases the free-free opacity and thus the overall opacity. H\H{o}flich (1996) fit PDD5 to B, V, R, and I band data out to 75$^{d}$, suggesting the distance to NGC 4374 to be 18 $\pm$ 5 Mpc with E(B-V)=0.25$^{m}$. PDD5 is unavailable, so we use the similar PDD54. W7 was included because it provides the best fit of all models, although it has not been suggested by other authors. The fits for the four models to the bolometric data are shown in figure \ref{bol91bg}; fits to the V data are shown in figures \ref{v91bg1} and \ref{v91bg2}. Present in the B and V data is a disagreement after 140$^{d}$ between the data taken by Leibundgut et al. (1993) and that taken by Turatto et al. (1996), with the Turatto data suggesting the SN was fainter.\footnote{The third data set, from Filippenko, tends to confirm the measurements of Turatto, but does not extend beyond 140$^{d}$.} The bolometric light curve was calculated by Turatto et al. (1996) from the photometric data, and strongly relied upon the B and V bands. No models were able to reproduce the steep decline of the Turatto data. Placing 0.04 dex error bars on the bolometric data after 50$^{d}$ and ignoring the data after 140$^{d}$, PDD54 provided the best overall fit at the 27\% level. WD065b fit below the 2\% level, ONeMg below 10$^{-5}$. Every model remains too bright to fit the bolometric data out to 250$^{d}$, with no model fitting above the 1\% level. The B and V band data show the models WD065 and ONeMg to be bright from 140$^{d}$--250$^{d}$, too bright to fit either set of data. PDD54 and W7 were able to fit the Leibundgut data within the uncertainties. RLS fit WD065 to the bolometric data, invoking the weak-field scenario. For our calculated WD065 light curve, the weak-field scenario was fainter than the radial scenario, but the light curve remained too bright to fit the data. The dashed line shows the prediction for 100\% transparency to positrons (zero deposition of positron kinetic energy). This line fits the data, but there is no physical justification for this extreme scenario.
Our results agree with the results of CAPP, who tried to fit a 0.7 M$_{\odot}$ model that produced 0.1 M$_{\odot}$ of $^{56}$Ni and determined that only zero positron deposition ($\kappa_{e+}$=0 in their terminology) would approximate the data. For ONeMg, the slope of our bolometric light curve agrees with Nomoto et al. (1996) from 60$^{d}$--90$^{d}$. It is the 120$^{d}$--450$^{d}$ epoch that excludes ONeMg as an acceptable model. Mazzali et al. (1996) tried to reproduce the spectra of SN 1991bg with a 0.62 M$_{\odot}$ version of W7, and as a by-product generated a bolometric light curve out to 220$^{d}$ post-explosion. Their light curve assumed positron trapping and was too bright to fit the data after 110$^{d}$. They argued that this epoch is too early to expect positron escape and suggested that there is unseen emission leading to an erroneously low bolometric light curve. Our light curves for the low-mass models WD065b and ONeMg have the same characteristics as theirs, and positron escape is insufficient to explain the low luminosity in the 110$^{d}$--220$^{d}$ epoch. The inability to fit any model to the Turatto data tempted us to favor the trend emerging from the Leibundgut data over the Turatto data. But the light curve of SN 1992K forbids that action. SN 1992K has near-peak spectral and photometric properties similar to SN 1991bg, and its B and V data follow a SN 1991bg-like shape. The V data extend out to $\sim$155$^{d}$. The existence of a second example of this steep decline forces us to conclude that there exists a sub-class of type Ia SNe for which we are unable to fit the light curves with the models in our possession without invoking a larger extinction than suggested. The flatness of low-mass models during the positron-dominated phase suggests that that class of models possesses the opposite tendency from that required. The solution to this problem remains unclear. \subsection{SN 1990N in NGC 4639} SN 1990N was unusual in that Co lines were detected earlier than existing models predicted and the intermediate-mass elements had higher velocities (Leibundgut 1991). The near-maximum and post-maximum spectra were normal, suggesting that normal models such as W7 could explain them. The late detonation model, W7DN, was created to fit the early spectra and transition to W7. This was accomplished by having a deflagration accelerate into a detonation at M$_{r}$=1.20 M$_{\odot}$. Yamaoka et al. (1992) discussed how the extra $^{56}$Ni modifies the rise to peak light to fit SN 1990N. H\H{o}flich (1996) fit DET2ENV2/4 and PDD3 to the multi-band photometry of SN 1990N, for a distance to NGC 4639 of 20 $\pm$ 5 Mpc. The multi-band coverage is very good, but a bolometric light curve has not been published. There was an unfortunate gap in the light curve from 70$^{d}$--200$^{d}$, as the SN was too close to the Sun for observations, hampering our ability to differentiate among model types. Figure \ref{bv90n} shows the models W7DN and PDD3 fit to the B and V data. Both models fit the 190$^{d}$--300$^{d}$ data well and then show a steepening consistent with positron escape. The normally luminous DD and PDD models provided similar fits, with all models giving worse fits for positron trapping. \subsection{SN 1972E in NGC 5253} SN 1972E was not observed pre-maximum and thus the models are not well constrained. We assume the explosion date to be JD2441420 (Axelrod 1980). The photometry out to +169$^{d}$ is photoelectric, with later photometry derived from spectra (Kirshner \& Oke 1975).
The bolometric light curve published by Axelrod was also generated by a model fit to the Kirshner spectra. We fit three models to 72E: W7, M35 and HED8. RLS fit W7 to the bolometric data, invoking a transition from turbulent confinement (with B$_{o}$=10$^{5}$ G) to weak-field trajectories around 500$^{d}$. Their W7 light curve fit well before 500$^{d}$. The transition was discussed conceptually by RLS, but not modelled. H\H{o}flich (1996) fit the model M35 to the multi-band photometry of 72E, suggesting the distance to NGC 5253 to be 4.0 $\pm$ 0.6 Mpc. The model HED8 is shown because of its agreement with the bolometric light curve. All three models are consistent with the $^{56}$Ni mass suggested by the nebular spectra, 0.5--0.6 M$_{\odot}$ (Ruiz-Lapuente \& Lucy 1992). Colgate (1996) suggests a 0.4 M$_{\odot}$ model to explain 72E; this possibility was not investigated due to the lack of a suitable model. Figure \ref{72e} shows the fits of HED8, W7, and M35 to the B, V and bolometric data. None of these models can be rejected, but all fit better with the radial field scenario and significant positron escape. None of the models fit the 700+$^{d}$ data point, with all remaining too bright. \subsection{SN 1991T in NGC 4527} SN 1991T was unusual in the width of the luminosity peak and in the absence of SiII and CaII absorption lines (Filippenko et al. 1997). It is the prototype of superluminous type Ia SNe. It was also an example of a SN whose late emission was overwhelmed by a light echo, as shown by Schmidt et al. (1994). Suntzeff (1996) derived a bolometric light curve to 87$^{d}$ post-B maximum against which we tested models. There was also a gap in observations as the Sun was too near the SN. We show the fits of two models to SN 1991T: the late detonation W7DT and the superluminous helium detonation HECD. The model W7DT was proposed by Yamaoka et al. (1992) to explain the lack of intermediate mass elements in the pre-maximum spectra, followed by a normal spectrum post-maximum. Nomoto et al. (1996) modelled the bolometric light curve for W7DT and scaled it to fit the V data. H\H{o}flich (1996) suggested PDD3 and DET2ENV2 based upon fits to multi-band photometry, placing the distance to NGC 4527 at 12 $\pm$ 2 Mpc. Liu et al. (1997) argued that the superluminous 1.1 M$_{\odot}$ helium detonation, SC1.1 (which produces 0.8 M$_{\odot}$ of $^{56}$Ni), fit the nebular spectrum (301$^{d}$) better than did the models W7, DD4 and SC0.9. To fit SN 1991T, Liu et al. (1997) assumed a distance of 12--14 Mpc and E(B-V)=0.0$^{m}$--0.1$^{m}$. We approximate SC1.1 with HECD, a 1.06 M$_{\odot}$ helium detonation that produces 0.72 M$_{\odot}$ of $^{56}$Ni. Pinto \& Eastman (1996) fit DD4 (with pop. II elements added) to the B and V data for SN 1991T out to 60$^{d}$, suggesting a distance of 14 Mpc for E(B-V)=0.1$^{m}$. RLF estimated the $^{56}$Ni production to be 0.7--0.8 M$_{\odot}$ using nebular spectra; Bowers' (1997) $^{56}$Ni production (when adjusted as explained in section 4) is 0.7$\pm$0.2 M$_{\odot}$. W7DT and HECD are both consistent with this, while PDD3 slightly underproduces $^{56}$Ni. Figure \ref{91t} shows the fits of W7DT and HECD to the V data. The late V data were fit by adding a constant light echo component. Both models show that the radial field scenario with positron escape provides better fits. The dashed lines show the radial light curves without an echo, the dot-dashed lines show the trapping light curves without an echo.
The trapping curves remain too bright during the 200$^{d}$--500$^{d}$ epoch even with no contribution from a light echo, arguing strongly against trapping. The 1.1 M$_{\odot}$ and 1.4 M$_{\odot}$ models have similar late light curves, suggesting that the late light curves cannot resolve the ambiguity between Chandrasekhar and sub-Chandrasekhar mass models for superluminous SNe. The fact that the radial curve explains the data with no echo contribution until 470$^{d}$ perhaps suggests that the light echo ``turned on'' after 450$^{d}$, an effect that could be explained by the SN light sweeping through a dust cloud. The asymmetry of the image of the SN 1991T light echo would be consistent with reflection off of discrete cloud(s) (Boffi et al. 1998). Unfortunately, the light echo interrupted the positron dominated epoch before the optimal time to observe positron escape ($\sim$600$^{d}$). Whereas it may be possible to eventually account for and subtract out the contributions from a light echo, we do not fit light curves into the light echo phase other than to demonstrate the phenomenon. \subsection{SN 1993L in IC 5270} SN 1993L was observed by CAPP and fit by the same model used for SN 1992A. The strength of the SN 1993L data is the continuous multi-band photometry extending to beyond 500$^{d}$. The data were published without uncertainties. There are no published spectra, so the best model was selected entirely by the fit to the B and V data. An additional complication is that the SN was discovered after maximum, which eliminated the rise to peak as a discriminant. CAPP assumed the explosion date to be JD 2449098, the distance to be 20.1 Mpc and the extinction to be E(B-V)=0.75$^{m}$. With these assumptions, SN 1993L appears to be quite similar to SN 1992A. CAPP noted two differences between the two SNe: SN 1993L had a slower nebular velocity and a higher degree of V band curvature from 100$^{d}$--500$^{d}$. We note the additional difference that the SN 1993L light curve was dimmer from 25$^{d}$--55$^{d}$ and again from 100$^{d}$ to the cross-over at 400$^{d}$. Explaining these features is difficult, but we note that some of the PDD and DD models studied by H\H{o}flich \& Khokhlov show these features. Assuming the photometry errors to range from 0.1$^{m}$--0.5$^{m}$ from 60$^{d}$--550$^{d}$, only PDD1b was excluded at the 99\% confidence level; all other models and scenarios were consistent at the 30\% level or better. We show in figure \ref{bv93l} the fits of HED8 and DD4 to the B and V data. The radial curves for both models yielded similar fits within the scatter. For HED8, the trapping curve remains much too bright to fit the data. SN 1993L does not provide strong evidence of positron escape, but the radial light curves fit at least as well as the trapping curves for any of the models tested. \subsection{SN 1937C in IC 4182} SN 1937C reached a peak B magnitude of 8.71$^{m}$ and was detected on photographic plates for over 600$^{d}$. Branch, Romanishin \& Baron (1996) have argued that its spectral features are those of normal SNe Ia, while Pierce \& Jacoby (1995) argued that SN 1937C is similar to SN 1991T and thus superluminous. H\H{o}flich (HK96) suggests N32, W7 and DET2 as acceptable models, and a distance of 4.5 $\pm$ 1 Mpc. We use the data of Schaefer (1994), which is a re-analysis of photographic plates from many observers, principally Baade (1938) and Baade \& Zwicky (1938). Pierce \& Jacoby (1995) also re-analyzed photographic plates of SN 1937C, obtaining a higher value of B$_{max}$=8.94$^{m}$.
Figure \ref{b37c} shows the fits of DET2 and W7 to the B data of SN 1937C. The 1\% ionization, radial-field light curves for DET2 and W7 fit the data well at all times, giving the best fits ($\chi^{2}$/DOF=1.6) of any model (and any field scenario). The field-free scenarios for DET2 and W7 are nearly identical to the radial field scenarios. The light curve for SN 1937C extends late enough with high-quality data to convincingly trace a shape after 400$^d$ consistent with positron escape. We also suggest that models with masses lower than 1.4 M$_{\odot}$ and normally-luminous pulsed-delayed detonation models appear to fit better than do delayed detonation models. We note that the same conclusions are reached whether the original Baade and Zwicky data or the re-analyzed data of Pierce and Jacoby are used (Milne 1998). \subsection{SN 1989B in NGC 3627} SN 1989B occurred in NGC 3627 and had considerable extinction. An additional complication is its location in the spiral arm of the host galaxy, giving considerable background contamination. The bolometric light curve of Suntzeff (1996) shows SN 1989B to be similar to SN 1992A, but remaining brighter than SN 1992A from 90 days onward. HK96 suggested the models M37 and M36 based upon fits to the multi-band photometry; we fit M36 to B, V and bolometric data for SN 1989B. The distance suggested by HK96 is 8.7$\pm$3 Mpc. Figure \ref{89b} shows the fits of M36 and ONeMg light curves to the bolometric and V data from SN 1989B. The late bolometric and the later B and V band light curves remained too bright to be fit by M36. HK96 fit the model M37 to this same data, their V light curve remaining bright enough to fit the data. Our light curve for M36 is 0.5$^{m}$ dimmer than the HK96 estimate for M37 at 365$^{d}$. H\H{o}flich \& Khokhlov cautioned against over-interpreting the latter portion of their light curve, but as none of the delayed detonations tested in this study approached the brightness of their M37 light curve, further investigations may be warranted. The only model able to reproduce the light curve beyond 300$^{d}$ is the 0.6 M$_{\odot}$ model, ONeMg. To date, there has been no suggestion that SN 1989B was substantially subluminous; in fact, it has been considered a relatively normal SN Ia. The possibility that SN 1989B was produced by a low-mass WD can be tested, especially when the distance estimate is improved. If this explanation is correct, this SN is the singular example of a light curve best fit with positron trapping. Another explanation is that the late light curve was affected by a light echo or another effect of the complicated background subtraction. There are two reasons to believe that a light echo may have been present. The 330$^{d}$ spectrum showed more continuum emission at 6000~\AA\ than is typically seen in the nebular spectra of SNe Ia. In addition, there was strong Na-D absorption from the host galaxy, an indicator of foreground dust. If the dust were near enough to, or surrounding, the SN, a light echo may have been produced. As good spectra exist, this question is also potentially solvable. The existence of two dramatically different explanations for the late light curve of SN 1989B means the subtle effect of positron escape cannot be clearly seen or ruled out. \subsection{SN 1986G in NGC 5128} SN 1986G was well observed, but occurred in the dust lane of NGC 5128 and suffered from considerable extinction.
SN 1986G had a narrow peak and slow $\lambda$6355~\AA\ Si lines, suggesting that it was intermediate between normal SNe Ia and SN 1991bg. The B-V color index continued to evolve 120$^{d}$ after the explosion, a feature also seen in SN 1991bg, but not observed in normal SNe Ia.\footnote{It is important to note that the B-V index appears to approach zero by the last observation, a reversal of the 100$^{d}$--320$^{d}$ behavior.} We fit the models W7, M39 and HED6 to SN 1986G. HK96 fit W7 to multi-band light curves to 80$^{d}$ post-explosion, suggesting the distance to NGC 5128 to be 4.2$\pm$1.2 Mpc. M39 was used because its $^{56}$Ni mass is in closer agreement with the RLF estimate of 0.38$\pm$0.03 M$_{\odot}$. HED6 represents moderately sub-luminous, low-mass models. Figure \ref{bv86g} shows W7, M39 and HED6 with the SN 1986G B and V data. The Cristiani et al. (1992) data are shown without uncertainties. The B band is corrected for 1.1$^{m}$ of estimated extinction. The model curves are normalized at 120$^{d}$, when the B and V bands cross over. All three models roughly reproduce the shape of the V light curve from 60$^{d}$--100$^{d}$. The B data are brighter than the V data after 120$^{d}$ and can be fit by both W7 and M39. HED6 remains brighter than both W7 and M39, and provides a poorer fit to the data. Only the single point at 425$^{d}$ is late enough to test the positron transport conditions. This observation is consistent with radial or no magnetic field and positron escape, but with a realistic uncertainty it might not rule out positron containment. \subsection{SN 1994D in NGC 4526} SN 1994D occurred in NGC 4526 and was observed by three groups until June 1994 and by Cappellaro et al. thereafter. We fit M36 and HED8 for this object. The wealth of multi-band photometry and spectra at early epochs allowed H\H{o}flich (1996) to tightly constrain delayed detonation models and to conclude that M36 was the best, at a distance of 16.2 $\pm$ 2 Mpc. Liu et al. (1997, 1998) found that SC0.9 (a 0.9 M$_{\odot}$ model which produces 0.43 M$_{\odot}$ of $^{56}$Ni) fits the 301$^{d}$ spectrum. We use the similar model HED8. CAPP fit the V band data of 94D with W7. As seen in figure \ref{fam400}, W7 and M36 are virtually indistinguishable, so we use M36. The B-V index continued to evolve after 120$^{d}$, necessitating the inclusion of both B and V bands.\footnote{As with SN 1986G, B-V approaches zero at late times.} Figure \ref{bv94d} shows the fits of M36 and HED8 to the SN 1994D B and V data. The 150$^{d}$--300$^{d}$ gap precludes our discrimination among model types. The scatter after 300$^{d}$ prevents examination of positron escape. It seems the light curves cannot rule out either delayed detonations or He-detonations. \subsection{Summary of Observations} Ten SNe were analyzed, including super- and subluminous events, reddened and unreddened SNe, old ones recorded on photographic plates and recent SNe recorded on CCDs. Of the ten, five show strong evidence of positron escape (92A, 37C, 91T, 72E, 90N), three others are also consistent with significant positron escape but somewhat ambiguous (93L, 86G, 94D), and for two others it was not clear which model actually described the early light curve and should be tested for its positron transport later (91bg, 89B). Only SN 1989B suggests the possibility that a trapping field may provide a better fit, and there are complications with that interpretation.
As a group, the supernova light curves fall more quickly than models that fit well at early times and are extrapolated to later times with all the positron kinetic energy deposited in and radiated by the ejecta. This is consistent with the escape of a substantial fraction of the positrons emitted by $^{56}$Co after one year. Regarding even earlier times, for sub-luminous SNe the Chandrasekhar mass models fit the light curves better than low-mass models. For normally luminous SNe, there are not enough observations from 60$^{d}$--400$^{d}$ to choose between the model masses. The superluminous SNe can apparently be fit equally well by high-mass He-detonation models and nickel-rich Chandrasekhar mass models. \section{Type Ia SN Contributions to the Galactic 511 keV Emission} Table 3 shows the positron survival fraction of type Ia SNe at 2000$^{d}$ and the resulting positron yields for all the models treated in this study. The range of values reflects the extremes of ionization fractions. The lower values correspond to triple ionization, the higher values are for 1\% ionization. The yields vary between the two extremes by roughly a factor of three. The observations suggest that 1\% ionization typically fits better, so we will quote 2/3 of that yield as a conservative estimate for each model. For radial fields, the Chandrasekhar mass models have a lower survival fraction when compared with equally luminous sub-Chandrasekhar mass models, but the larger nickel mass partially compensates for that fact. As a result, for all but the ``heavy'' models, the yields are not strongly dependent upon model mass. The radial scenarios have greatly enhanced positron escape yields compared to the trapping scenarios. Positron escape is best observed in light curves when its relative effect is large, between 400$^{d}$ and 1000$^{d}$, but the positron yield in absolute terms is determined earlier: 80\% of the escaping positrons do so between 178$^{d}$--546$^{d}$ for W7 (94$^{d}$--492$^{d}$ for HED8). It is conceivable, but probably not common, that the trapping field could apply until, say, 500$^{d}$, when the field magnitude decreased to the point that the Larmor radius reached the ejecta radius and the object would cross over to the field-free regime. The ``release time'' for a positron of a given energy is proportional to the initial field strength, which might vary over many orders of magnitude. For one object we might incorrectly infer the positron escape for this reason, but not likely for many. The positrons that escape retain a significant fraction of their emitted energy, as shown by CL and in figure \ref{emit}, which compares the emission spectrum (dashed line) to the escape spectrum (solid line). With energies near 400 keV, the positrons have a considerable lifetime in the ISM, giving diffuse galactic 511 keV emission. To estimate the rate of positron injection into the ISM, we assume a 2:2:1 ratio of normal:subluminous:superluminous SNe. Considering DD23C, PDD3, and HED8, a reasonable yield for normal SNe is 8 $\times$ 10$^{52}$ positrons. For subluminous SNe, PDD54 and DET2ENV6 suggest a yield of 4 $\times$ 10$^{52}$ positrons. From W7DT, DD4, and HECD (suggested by 91T) we estimate a yield of 15 $\times$ 10$^{52}$ positrons. Employing the above ratio then gives a mean yield of 8 $\times$ 10$^{52}$ positrons per SN Ia.
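This weighted mean, and the 511 keV flux computed from it below, can be cross-checked with a few lines. A minimal sketch, using only the values quoted in the text and standard unit conversions; it reproduces the quoted numbers to within rounding:
\begin{verbatim}
# Cross-check of the mean positron yield and the resulting 511 keV
# flux; input values are those quoted in the text.
import math

# Mean yield from the assumed 2:2:1 normal:subluminous:superluminous mix
yields  = {'normal': 8e52, 'sublum': 4e52, 'superlum': 15e52}
weights = {'normal': 2,    'sublum': 2,    'superlum': 1}
y = sum(weights[k] * yields[k] for k in yields) / sum(weights.values())
print('mean yield = %.1e e+ per SN Ia' % y)              # ~8e52

# flux = y * SNR * f / (4 pi D^2)
kpc  = 3.086e21                    # cm
yr   = 3.156e7                     # s
D    = 8.0 * kpc                   # distance to the galactic center
SNR  = 0.0032 / yr                 # SN Ia rate
f    = 0.58                        # 511 keV photons per positron
flux = y * SNR * f / (4.0 * math.pi * D**2)
print('511 keV flux = %.1e photons cm^-2 s^-1' % flux)   # ~6e-4
\end{verbatim}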
This gives a flux of (4$\pi$D$^{2}$)$^{-1}$ $\cdot$ y $\cdot$ SNR $\cdot$ $f_{e+ \gamma}$, where D is the distance, y is the positron yield per SN event, SNR is the SN Ia rate, and $f_{e+ \gamma}$ is the number of 511 keV photons emitted per positron. Taking D=8 kpc, y=8 $\times$ 10$^{52}$ e+ SN$^{-1}$, SNR=0.0032 SN yr$^{-1}$\footnote{This value was obtained from Cappellaro et al. (1997) and assumes H$_{o}$=65 km s$^{-1}$ Mpc$^{-1}$. Hamuy \& Pinto (1998) suggest a slightly larger value, 0.0042 SN yr$^{-1}$.}, and $f_{e+ \gamma}$=0.58 photons e+$^{-1}$ (which corresponds to a positronium fraction of 0.95), the flux is 6.3 $\times$ 10$^{-4}$ photons cm$^{-2}$ s$^{-1}$. An estimate of the uncertainty in this flux can be obtained by inserting D=7.7--8.5 kpc, y=(3--10) $\times$ 10$^{52}$ e+ SN$^{-1}$, SNR=0.003--0.006 SN yr$^{-1}$, and $f_{e+ \gamma}$=0.5--0.65 photons e+$^{-1}$ (corresponding to a positronium fraction of 1.0--0.9) into the formula. The flux then ranges from (1.2--8.6) $\times$ 10$^{-4}$ photons cm$^{-2}$ s$^{-1}$. The flux is on the order of the Galactic bulge component of the 511 keV flux as measured by OSSE. Purcell et al. (1997) estimated the bulge flux to be 3.5 $\times$ 10$^{-4}$ photons cm$^{-2}$ s$^{-1}$ and the galactic plane flux to be 8.9 $\times$ 10$^{-4}$ photons cm$^{-2}$ s$^{-1}$. SMM, with a 130$^{\circ}$ FOV (Share et al. 1988), measured the total flux from the general direction of the galactic center to be 2.4 $\times$ 10$^{-3}$ photons cm$^{-2}$ s$^{-1}$; the type Ia SN contribution could be one-fourth of that flux. A more exact treatment of the level of agreement between the 511 keV flux as mapped by OSSE and the spatial distribution of Ia SNe is the subject of a forthcoming paper (Milne 1999). \section{Summary} We calculate the $\gamma$-ray and positron kinetic energy deposition for various models of type Ia SNe to produce ``UBVOIR'' bolometric light curves for the epochs (t$\geq$60$^d$) when the photon diffusion time is short. The positron kinetic energy deposition into the ejecta is calculated for particular extreme environments: radial magnetic field, turbulent magnetic field, and field-free geometries, in 1\% ionized and triply ionized media throughout the evolution. The deposited energy rate is assumed to appear instantaneously as ``UBVOIR'' bolometric luminosity and is compared with the observed bolometric luminosity when available, or with the observed B and V band luminosities. In this work, we analyzed ten late-time type Ia supernova light curves by comparing them with the calculated light curves. All of the light curves except that of SN 1991bg (explained below) clearly require positron kinetic energy deposition in order to give reasonable agreement. Without that energy deposition, the calculated light curves are too dim to explain the observed light curves at t$\geq$200$^d$ (Colgate, Petschek \& Kriese 1980; Cappellaro et al. 1997; Ruiz-Lapuente \& Spruit 1997). We show that $\gamma$-ray energy deposition at t$\geq$60$^d$ can hardly be used to distinguish light curve shapes between various models, except to differentiate extreme models, such as low (PDD1b) versus high explosion energy models, or low sub-Chandrasekhar mass (i.e. ONeMg) versus Chandrasekhar mass (i.e. W7) models. A similar situation ensues in the epoch when positron kinetic energy deposition is dominant, for the case of the magnetic field in the ejecta being radially combed outward.
In the case of a turbulent magnetic field, the usefulness of the late-time light curve for differentiating models is moderately improved, owing to the effectiveness of the ejecta in slowing down and absorbing the kinetic energy of the positrons. The light curves of SNe Ia are dominantly powered by the kinetic energy deposition of positrons from $\beta^{+}$ decay after the nebula becomes optically thin to gamma photons. Before and at the onset of this ``positron phase'' the positrons have short lifetimes regardless of the field geometry, but as the nebula becomes more tenuous positrons can survive and possibly escape for favorable geometries. The longer the positrons stay as energetic particles, the larger is the deficit of energy deposition relative to the full instantaneous energy deposition. The deficit is further enhanced when more positrons escape from the ejecta. Comparing the late-time light curves of models suggested by the early light curves and spectra of particular observed supernovae, the deficit of luminosity relative to instantaneous deposition can be seen in the B and/or V band light curves of SNe 1992A, 1990N, 1937C, 1972E, and 1991T. This phenomenon gives evidence that positrons take their time in depositing their kinetic energy, possibly moving quite far from their origins and even escaping from the ejecta. More detailed comparisons of the calculated light curves for positron energy deposition in radial and tangled magnetic field configurations of type Ia models with the observed V and B band light curves show that the radial field configuration (or, synonymously, the weak-field configuration) produces better agreement than does the tangled magnetic field configuration, which produces light curves that are far too flat. The agreement that requires the radial field configuration is exhibited by the observed light curves of SN 1992A, SN 1937C, SN 1990N, SN 1972E, and SN 1991T. Models with a tangled magnetic field configuration produce flatter and brighter light curves than the radial field configuration due to the greater energy deposition. There are a number of potential sources which can flatten the curves further, but steepening the curves, such as by the IRC, is difficult without color evolution, which to date has not been observed in SN Ia light curves. These facts strengthen the argument that positron escape occurs in SN Ia ejecta. Fits to the observed light curves of SN 1993L, SN 1994D, and SN 1986G give ambiguous results: it is not clear whether the radial or the tangled magnetic configuration gives an acceptable fit. The ambiguity is mostly due to the fluctuations in the observed B and V band light curves. The strongest indication for the tangled magnetic configuration seems to be shown by the observed light curve of SN 1989B; however, there is a possibility of a contribution from a light echo at late times. SN 1991bg is a special case in that not all of the SN 1991bg observed light curves agree with each other completely. Two light curves (Turatto et al. 1996; Filippenko et al. 1992) agree with each other, but no model that we have can give a reasonable fit to them unless we unphysically impose zero positron kinetic energy deposition in the ejecta. The light curve observed by Leibundgut et al. (1993) can be fit reasonably well with the models PDD54 and W7, but we cannot determine the magnetic field configuration in the ejecta because the Leibundgut et al. (1993) observations do not extend beyond day 200.
Based on the shapes of the light curves of SN 1937C, SN 1972E, and SN 1992A, which all follow the predictions of the radial field configuration model, we conclude that in these supernovae the positrons escape with maximum transparency, as if the positrons moved in straight lines from their origins to the surface, depositing a minimum of their kinetic energy as they escape. These supernovae show that type Ia SNe can be a dominant source of the diffuse Galactic 511 keV line flux. To better quantify the suggestion that type Ia supernovae may be the main source of the positrons that produce the observed Galactic 511 keV line flux, we present the number and fraction of positrons that escape from Ia models. The values are in agreement with the values calculated by CL, who demonstrated that the radial magnetic field scenario can easily provide the needed positrons to explain the Galactic annihilation line fluxes. The Galactic 511 keV annihilation line flux distribution has a definite bulge component (Purcell et al. 1997). SN Ia positrons from $^{56}$Co decays may contribute the majority of the bulge flux and a sizable fraction of the entire galactic emission. As the number of SNe Ia observed in spiral galaxies increases, the spatial distribution of bulge and disk SNe will be better known. As OSSE maps the galactic center region with increasing exposure and spectral information, the level to which SN Ia positrons escape from SNe may soon be known. \acknowledgments This work is from a dissertation which was submitted to the graduate school of Clemson University by P.A.M. in partial fulfillment of the requirements for the PhD degree in physics. P.A.M. would like to thank P. Ruiz-Lapuente for access to models and for emphasizing the use of bolometric data when available. We thank P. H\H{o}flich for access to many models, giving assistance in locating data and for suggestions of models to test against individual SNe. The isotopically complete models provided by K. Nomoto and S. Kumagai were important for many additional applications. P.A.M. thanks D. Jeffery for spectroscopic insight and assistance locating data. We thank P. Pinto for providing us with model DD4 and for useful discussion on ionization fraction and plasma energy loss. This work was supported by NASA grant DPR S-10987C, via sub-contract to Clemson University. \begin{deluxetable}{lcccccccc} \footnotesize \tablecaption{SN Ia model parameters. \label{table1} } \tablewidth{0pt} \tablehead{ \colhead{Model} & \colhead{Mode of} & \colhead{M$_{\star}$} & \colhead{M$_{Ni}$} & \colhead{E$_{kin}$} & \colhead{t$_{B}$} & \colhead{\.{E}$_{\gamma}$(60$^{d}$)} & \colhead{\.{E}$_{\gamma}$(100$^{d}$)} & \colhead{T(\.{E}$_{e+}$=\.{E}$_{\gamma}$)} \\ \colhead{Name} & \colhead{Explosion} & \colhead{[M$_{\odot}$]} & \colhead{[M$_{\odot}$]} & \colhead{[10$^{51}$ erg]} & \colhead{[days]} \tablenotemark{a} & \colhead{[10$^{42}$ erg s$^{-1}$]} & \colhead{[10$^{42}$ erg s$^{-1}$]} & \colhead{ [days] \tablenotemark{b}} } \startdata W7 & deflagration & 1.37 & 0.58 & 1.24 & 14.0 & 1.54 & 0.45 & 260 \nl DD4 & delayed det. & 1.38 & 0.61 & 1.24 & 19.0 & 1.81 & 0.54 & 250 \nl M36 & delayed det. & 1.40 & 0.59 & 1.54 & 13.3 & 1.67 & 0.48 & 220 \nl M39 & delayed det. & 1.38 & 0.32 & 1.42 & 14.3 & 1.06 & 0.32 & 240 \nl DD202C & delayed det. & 1.40 & 0.72 & 1.33 & --- & 2.02 & 0.57 & 235 \nl DD23C & delayed det. & 1.34 & 0.60 & 1.17 & 18.0 & 1.83 & 0.57 & 245 \nl W7DN & late det. & 1.37 & 0.62 & 1.60 & --- & 1.63 & 0.49 & 230 \nl W7DT & late det.
& 1.37 & 0.76 & 1.61 & --- & 1.76 & 0.53 & 205 \nl PDD3 & pul.del.det. & 1.39 & 0.59 & 1.39 & 16.0 & 1.44 & 0.40 & 190 \nl PDD54 & pul.del.det. & 1.40 & 0.17 & 1.02 & 12.3 & 0.66 & 0.19 & 245 \nl PDD1b & pul.del.det. & 1.40 & 0.14 & 0.35 & 11.8 & 1.05 & 0.45 & 520 \nl WD065 & He-det. & 0.65 & 0.04 & 0.74 & 13.5 & 0.09 & 0.02 & 170 \nl HED6 & He-det. & 0.77 & 0.26 & 0.74 & 14.4 & 0.44 & 0.12 & 160 \nl HED8 & He-det. & 0.96 & 0.51 & 1.00 & 13.7 & 0.97 & 0.26 & 270 \nl HECD & He-det. & 1.07 & 0.72 & 1.35 & --- & 1.59 & 0.46 & 200 \nl ONeMg & AIC & 0.59 & 0.16 & 0.96 & 15.0 & 0.18 & 0.05 & 130 \nl DET2 & merger det. & 1.20 & 0.62 & 1.55 & 13.7 & 1.27 & 0.35 & 180 \nl DET2E6 & merger det. & 1.80 & 0.62 & 1.33 & 19.8 & 3.30 & 1.04 & 300 \nl \enddata \tablenotetext{a}{t$_{B}$ is the rise time to reach B band maximum.} \tablenotetext{b}{\.{E}$_{\gamma}$ refers to the gamma energy deposition rate; \.{E}$_{e+}$ refers to instantaneous deposition of the positron kinetic energy.} \end{deluxetable} \begin{deluxetable}{llcclllc} \footnotesize \tablecaption{Summary of Observations. \label{sumobs} \tablenotemark{b}} \tablewidth{0pt} \tablehead{ & \colhead{Host} & \colhead{Photometric} & & \colhead{Distance} & \colhead{E(B-V)} & \colhead{Suggested} & \\ \colhead{SN} & \colhead{Galaxy} & \colhead{Bands} & \colhead{Spectra} & \colhead{(Mpc)} & \colhead{(m)} & \colhead{Models} & \colhead{Bolometry} } \startdata 92A & NGC 1380 & UBVRI & 2 & 13.5$\pm$0.7 (3) & 0.00 (1) & DD4 (2) & 1 \nl & & (1) & & 16.9 (4) & 0.1 (2) & DD23C (6) & \nl & & & & 20.9$\pm$3.6 (5) & 0.00 (31), (5) & & \nl 91bg & NGC 4374 & UBVRI & 8,9 & 16.6 (10) & 0.05 (9) & WD065 (11) & 8 \nl & & (7,8,9) & & 21.1$\pm$3.6 (5) & 0.00 (5) & PDD5 (12) & \nl & & & & 16.4$\pm$0.7 (31) & 0.00 (31) & ONeMg (13) & \nl 90N & NGC 4639 & UBVRI & 15 & 19.1$\pm$2.5 (31) & $\leq$0.15 (18) & W7DN (19) & 1 \nl & & (14) & & 20.6 (16) & 0.01 (31) & DET2E2/4, PDD3 (20) & \nl & & & & 25.5$\pm$2.5 (51) & & & \nl 72E & NGC 5253 & UBV & 22 & 4.1$\pm$0.1 (24) & 0.05 (25) & W7 (28) & 23 \nl & & (21,22) & & 2.8$^{+0.7}_{-0.3}$ (25) & 0.02 (27) & M35 (20) & \nl 91T & NGC 4527 & UBVRI & 9,26 & 12--14 (25) & 0.1--0.2 (25) & W7DT (33) & 1 \nl & & (14,29,30) & & 13.0 (32) & 0.2 (9) & PDD3, DET2E2 (20) & \nl & & & & & 0.00 (31) & SC1.1 (34) & \nl & & & & & 0.13 (26) & DD4 (35) & \nl 93L & IC 5270 & UBVRI & & 20.1 (4) & 0.2 (4) & W7\tablenotemark{a} (4) & \nl & & (4) & & & & & \nl 37C & IC 4182 & BV & 37 & 4.9$\pm$0.2 (38) & 0.0$\pm$0.14 (36) & N32, W7 & \nl & & (36) & & 2.5$\pm$0.3 (39) & 0.13 (39) & DET2 (20) & \nl 89B & NGC 3627 & UBVRI & 40 & 7.6 (31) & 0.37 (40) & M37, M36 (20) & 1 \nl & & (40) & & 11.8$\pm$1.0 (41) & & & \nl & & & & 11.6$\pm$1.5 (42) & & & \nl 86G & NGC 5128 & UBVRI & 45 & 3 (46) & 1.09$\pm$0.02 (25) & N32, W7 (20) & \nl & & (43,44,45) & & 6.9 (24) & 0.9 (43) & & \nl & & & & 3.1$\pm$0.1 (47) & 0.88 (49) & & \nl & & & & 3.6$\pm$0.2 (48) & 1.1 (45) & & \nl 94D & NGC 4526 & UBVRI & 50 & 16.8 (4) & 0.03 (18) & M36 (20) & \nl & & (50,17,30) & & 13.7 (50) & & SC0.9 (34) & \nl & & & & & & W7 (4) & \nl \enddata \tablenotetext{a}{W7 was scaled to 1.0 M$_{\odot}$ to fit 93L.} \tablenotetext{b}{REFERENCES.-(1) Suntzeff 1996; (2) Kirshner et al. 1993; (3) Tonry 1991; (4) Cappellaro et al. 1997; (5) Branch et al. 1997; (6) H\H{o}flich 1998; (7) Leibundgut et al. 1993; (8) Turatto et al. 1996; (9) Filippenko et al. 1992; (10) Tully 1988; (11) Ruiz-Lapuente et al. 1993; (12) H\H{o}flich 1996; (13) Nomoto et al. 1996; (14) Lira et al. 1998; (15) Jeffery et al. 1992; (16) Saha et al. 1997; (17) Tanvir et al. 1997; (18) Hamuy et al.
1995; (19) Shigeyama et al. 1992; (20) H\H{o}flich \& Khokhlov 1996; (21) Ardeberg \& de Groot 1973; (22) Kirshner \& Oke 1975; (23) Axelrod 1980; (24) Sandage et al. 1994; (25) Ruiz-Lapuente \& Filippenko 1996; (26) Phillips et al. 1992; (27) Della Valle \& Panagia 1992; (28) Ruiz-Lapuente \& Spruit 1997; (29) Schmidt et al. 1994; (30) Cappellaro 1998; (31) Phillips 1993; (32) Tully et al. 1992; (33) Yamaoka et al. 1992; (34) Liu et al. 1997; (35) Pinto \& Eastman 1996; (36) Schaefer 1994; (37) Minkowski 1939; (38) Sandage et al. 1996; (39) Pierce et al. 1992; (40) Wells et al. 1994; (41) Tammann et al. 1997; (42) Branch et al. 1996; (43) Phillips et al. 1987; (44) Phillips et al. 1997; (45) Cristiani et al. 1992; (46) Hesser et al. 1984; (47) Tonry \& Schechter 1990; (48) Jacoby et al. 1988; (49) Rich 1987; (50) Patat et al. 1996; (51) Saha et al. 1997.} \end{deluxetable} \begin{deluxetable}{ll|cc|c} \footnotesize \tablecaption{Model Fits to SN 1992A. \label{table2}} \tablewidth{0pt} \tablehead{ \colhead{Model} & & \multicolumn{2}{l}{Bolometric (55$^{d}$--450$^{d}$)} & \colhead{V Band (55$^{d}$--950$^{d}$)} \\ \colhead{Name} & \colhead{Type \tablenotemark{a}} & \colhead{$\chi^{2}$/DOF} & \colhead{Probability} & \colhead{$\chi^{2}$/DOF} } \startdata W7 & R1\% & 0.84 & 0.68 & 23.9 \nl & R3 & 2.86 & 2.4 $\times$ 10$^{-6}$ & 43.7 \nl & T & 2.98 & $<$10$^{-6}$ & 49.0 \nl DD4 & R1\% & 0.86 & 0.66 & 26.4 \nl & R3 & 1.56 & 0.04 & 34.5 \nl & T & 2.13 & 8.2 $\times$ 10$^{-6}$ & 43.1 \nl M39 & R1\% & 0.98 & 0.48 & 22.1 \nl & R3 & 1.02 & 0.43 & 23.4 \nl & T & 1.42 & 0.08 & 29.4 \nl DD23C & R1\% & 0.73 & 0.84 & 23.9 \nl & R3 & 1.33 & 0.13 & 30.9 \nl & T & 2.03 & 1.8 $\times$ 10$^{-3}$ & 39.9 \nl W7DT & R1\% & 1.28 & 0.16 & 30.1 \nl & R3 & 1.92 & 3.8 $\times$ 10$^{-3}$ & 44.5 \nl & T & 6.05 & $<$10$^{-6}$ & 90.7 \nl PDD3 & R1\% & 1.14 & 0.28 & 37.4 \nl & R3 & 3.66 & $<$10$^{-6}$ & 64.6 \nl & T & 5.32 & $<$10$^{-6}$ & 83.0 \nl PDD54 & R1\% & 1.06 & 0.38 & 24.1 \nl & R3 & 1.19 & 0.23 & 24.3 \nl & T & 1.73 & 0.01 & 26.5 \nl PDD1b & R1\% & 4.12 & $<$10$^{-6}$ & 61.2 \nl & R3 & 4.09 & $<$10$^{-6}$ & 60.5 \nl & T & 4.15 & $<$10$^{-6}$ & 61.8 \nl WD065 & R1\% & 3.34 & $<$10$^{-6}$ & 64.0 \nl & R3 & 10.1 & $<$10$^{-6}$ & 130.9 \nl & T & 11.3 & $<$10$^{-6}$ & 153.2 \nl HED6 & R1\% & 2.65 & 1.4 $\times$ 10$^{-5}$ & 53.9 \nl & R3 & 10.0 & $<$10$^{-6}$ & 140.2 \nl & T & 14.5 & $<$10$^{-6}$ & 190.4 \nl HED8 & R1\% & 2.00 & 2.1 $\times$ 10$^{-3}$ & 41.6 \nl & R3 & 5.97 & $<$10$^{-6}$ & 83.4 \nl & T & 11.4 & $<$10$^{-6}$ & 151.1 \nl HECD & R1\% & 1.16 & 0.26 & 33.5 \nl & R3 & 2.62 & 1.7 $\times$ 10$^{-6}$ & 51.2 \nl & T & 5.79 & $<$10$^{-6}$ & 88.8 \nl ONeMg & R1\% & 6.00 & $<$10$^{-6}$ & 98.4 \nl & R3 & 24.1 & $<$10$^{-6}$ & 342.7 \nl & T & 33.8 & $<$10$^{-6}$ & 448.6 \nl DET2 & R1\% & 1.91 & 4.0 $\times$ 10$^{-3}$ & 40.4 \nl & R3 & 4.86 & $<$10$^{-6}$ & 71.6 \nl & T & 8.7 & $<$10$^{-6}$ & 118.7 \nl DET2E6 & R1\% & 1.27 & 0.16 & 19.7 \nl & R3 & 1.21 & 0.22 & 19.4 \nl & T & 1.14 & 0.28 & 18.8 \nl \enddata \tablenotetext{a}{R1\% refers to 1\% ionized ejecta with a radial magnetic field, R3 refers to triply ionized ejecta with a radial field, and T refers to 1\% ionized ejecta with a trapping field.} \end{deluxetable} \begin{deluxetable}{lcccc} \footnotesize \tablecaption{Positron Yields for SN Ia Models.
\tablenotemark{a} \label{table3} } \tablewidth{0pt} \tablehead{ \colhead{Model} & \colhead{Radial} & \colhead{Radial} & \colhead{Trapped} & \colhead{Trapped} \nl & \colhead{Survival} & \colhead{Yield} & \colhead{Survival} & \colhead{Yield} \nl & \colhead{Fraction [\%]} & \colhead{[10$^{52}$ e+]} & \colhead{Fraction [10$^{-4}$]} & \colhead{[10$^{50}$ e+]} } \startdata W7 & 1.8$-$5.5 & 4.4$-$13.1 & 0.2$-$2.8 & 0.4$-$6.8 \nl DD4 & 1.5$-$4.6 & 3.8$-$11.6 & 0.04$-$1.1 & 0.1$-$2.8 \nl M35 & 2.3$-$5.7 & 6.2$-$15.3 & 1.7$-$10.4 & 4.6$-$28.1 \nl M36 & 2.0$-$5.7 & 4.9$-$13.7 & 1.0$-$17.6 & 2.5$-$42.8 \nl M39 & 1.6$-$3.8 & 1.2$-$5.2 & 0.3$-$2.9 & 0.4$-$4.0 \nl DD202C & 1.3$-$4.0 & 6.9$-$8.4 & 0.01$-$0.5 & 0.03$-$1.0 \nl DD23C & 1.4$-$4.2 & 3.3$-$10.1 & 0.08$-$1.6 & 0.2$-$3.8 \nl W7DN & 2.9$-$6.2 & 7.4$-$15.8 & 24.0$-$62.8 & 61$-$160 \nl W7DT & 5.1$-$9.2 & 15.9$-$28.5 & 42.7$-$64.8 & 133$-$187 \nl PDD3 & 1.6$-$5.0 & 3.8$-$12.0 & 0.07$-$1.7 & 0.18$-$4.1 \nl PDD54 & 0.4$-$2.7 & 0.3$-$2.1 & 0.0$-$0.3 & 0.0$-$0.2 \nl PDD1b & 0.0$-$0.1 & 0.0$-$0.1 & 0.0$-$0.0 & 0.0$-$0.0 \nl HED6 & 2.4$-$8.6 & 2.4$-$8.6 & 2.2$-$52.6 & 2.3$-$53 \nl HED8 & 3.6$-$9.1 & 6.6$-$16.9 & 6.3$-$58.2 & 11.8$-$109 \nl HED9 & 0.9$-$7.6 & 2.2$-$19.5 & 11.6$-$86.8 & 30.0$-$223 \nl HECD & 4.3$-$9.7 & 12.5$-$26.8 & 64.8$-$259 & 187$-$750 \nl ONeMg & 3.5$-$11.3 & 2.2$-$7.2 & 0.5$-$15.6 & 0.3$-$10.1 \nl DET2 & 2.7$-$6.3 & 6.9$-$16.2 & 1.1$-$9.8 & 3.0$-$25.1 \nl DET2E6 & 0.1$-$1.0 & 0.3$-$2.4 & 0.0$-$0.0 & 0.0$-$0.1 \nl \enddata \tablenotetext{a}{Survival refers to escape and/or survival until 2000$^{d}$.} \end{deluxetable}
\section{Introduction} \label{sec:Introduction} Millimeter-wave (mmWave) communications is a promising technology for future mobile networks due to its abundant bandwidth. The standardization organization 3GPP has included mmWave communication in the fifth generation of mobile networks, 5G New Radio (5G-NR) \cite{5GNR_rel15}. As shown in both theory and practical testing, a mmWave system requires beamforming with large antenna arrays at both the base station (BS) and the user equipment (UE) to overcome severe propagation loss \cite{Rappaport:mmWavewillwork}. Directional transmission and reception require estimation and tracking of the wireless channel, so that the antenna arrays can effectively provide beamforming gain. mmWave channel tracking has been investigated in \cite{7390855,HKG+14_perburtation_tracking,MRM16_GD_tracking,GCY+15,PDW17}. In sector tracking, the BS and UE track the indices of the sector beams that provide the highest gain. It is used in IEEE 802.11ad \cite{6979964} and was studied in the mobile network context by \cite{7390855}. The works \cite{HKG+14_perburtation_tracking,MRM16_GD_tracking,GCY+15,PDW17} exploit mmWave sparse scattering, where the channel state information (CSI) is tracked by tracking the angle of arrival (AoA), angle of departure (AoD), and gain of the multipath components. Such an approach allows more precise angle steering than sector tracking, and it supports advanced multiplexing precoding, e.g., eigenmode transmission. However, these prior works design and evaluate tracking algorithms in a sparse channel model formed by a superposition of a few multipath rays. In a realistic mmWave channel, multipaths exhibit a clustered nature and there are non-negligible angular spreads (AS) along the dominant propagation directions \cite{6834753}. Such behavior is better modeled by a clustered sparsity model \cite{3GPP_model}. In this work, we focus on a clustered sparse channel and provide a theoretical analysis of the channel tracking accuracy and the associated beamforming gain using angle steering. We derive the lower bound of channel parameter tracking accuracy under the clustered multipath model, and we verify it with a maximum likelihood (ML) based algorithm from the literature. We then provide an analysis of the beamforming gain of angle steering with respect to beam-width, tracking error, and AS. The rest of the paper is organized as follows. In Section~\ref{sec:system_model}, we introduce the system model. Section~\ref{sec:CRLB} includes the performance analysis in terms of the accuracy of propagation angle tracking and the associated beamforming gain under intra-cluster angular spread. We describe two existing tracking algorithms in Section~\ref{sec:algorithm}, and numerically verify our analysis in Section~\ref{sec:simulation results}. Finally, Section~\ref{sec:Conclusion} concludes the paper. \textit{Notations:} Scalars, vectors, and matrices are denoted by non-bold, bold lower-case, and bold upper-case letters, respectively, e.g., $h$, $\mathbf{h}$ and $\mathbf{H}$. The element in the $i$-th row and $j$-th column of matrix $\mathbf{H}$ is denoted by $\{\mathbf{H}\}_{i,j}$. Transpose and Hermitian transpose are denoted by $(.)^{\transpose}$ and $(.)^{\hermitian}$, respectively. The $l_2$-norm of a vector $\mathbf{h}$ is denoted by $||\mathbf{h}||$. $\text{diag}(\mathbf{A})$ aligns the diagonal elements of $\mathbf{A}$ into a vector, and $\text{diag}(\mathbf{a})$ aligns the vector $\mathbf{a}$ into a diagonal matrix.
\section{System Model} \label{sec:system_model} We consider a system in the downlink (DL) with a BS transmitter with $N_{\tx}$ antennas and a UE receiver with $N_{\rx}$ antennas. The multiple-input multiple-output (MIMO) channel consists of $L$ multipath clusters, and each of them has $R$ intra-cluster rays \cite{3GPP_model}, as shown in Fig.~\ref{fig:system_model}(a). We focus on a narrowband model in the azimuth plane, and the channel matrix $\mathbf{H}$ is expressed as \begin{align} \mathbf{H} = \frac{1}{\sqrt{LR}}\sum_{l=1}^{L}\sum_{r=1}^{R}g_l e^{j\psi_{l,r}}\mathbf{a}_{\text{Rx}}(\phi_l+\Delta\phi_{l,r})\mathbf{a}_{\text{Tx}}^{\text{H}}(\theta_l+\Delta\theta_{l,r}) \label{eq:channel_model} \end{align} In the above equation, $\mathbf{a}_{\rx}(\phi)\in \mathbb{C}^{N_{\rx}}$ and $\mathbf{a}_{\tx}(\theta)\in \mathbb{C}^{N_{\tx}}$ are the spatial responses corresponding to AoA $\phi$ and AoD $\theta$. In a uniform linear array (ULA) with half-wavelength antenna spacing, their $k^{\text{th}}$ elements are $\{\mathbf{a}_{\tx}(\theta)\}_k = e^{j\pi(k-1)\sin(\theta)}$ and $\{\mathbf{a}_{\rx}(\phi)\}_k = e^{j\pi(k-1)\sin(\phi)}$, respectively. The cluster-specific parameters $\phi_l$, $\theta_l$, and $g_l$ correspond to the AoA, AoD, and path gain of the $l^{\text{th}}$ cluster, respectively. The ray-specific parameters include the intra-cluster angular offset (AO) in AoA $\Delta\phi_{l,r}$ and AoD $\Delta\theta_{l,r}$, and the complex gain $e^{j\psi_{l,r}}$ of the $r^{\text{th}}$ ray in that cluster, where $\psi_{l,r}$ is uniformly distributed within $(-\pi,\pi]$. Each AO is an independent and identically distributed (i.i.d.) random variable with known probability density function (PDF) $p_{\text{AO}}(.)$, with zero mean and variances $\sigma^2_{\text{TAS}}$ and $\sigma^2_{\text{RAS}}$ for the transmitter and receiver sides, respectively. We define $\sigma_{\text{RAS}}$ and $\sigma_{\text{TAS}}$ as the angular spreads in AoA and AoD, respectively. In this work, we focus on a channel with $L=1$ cluster\footnote{We consider both line-of-sight (LOS) and non-LOS (NLOS) clusters. The former has zero AS and one ray while the latter has non-zero AS. The results would apply to channels with $L>1$ clusters due to the decoupling \cite{DBLP:journals/corr/Abu-ShabanZASW17}.} and denote its cluster-specific parameters as $\phi_{\text{c}},\theta_{\text{c}},g_{\text{c}}$, and its ray-specific parameters as $\Delta\phi_r, \Delta\theta_r, \psi_r$. We define the vectors $\veta_{\text{c}} = [\phi_{\text{c}},\theta_{\text{c}},g_{\text{c}}]^{\transpose}$ and $\veta_{\text{r}} = [\Delta\phi_1, \Delta\theta_1, \psi_1, \cdots, \Delta\phi_R, \Delta\theta_R,\psi_R]^{\transpose}$. We focus on two phases in the DL: 1) a control phase with channel tracking, and 2) a data phase with beamformed data communication using the acquired CSI, as shown in Fig.~\ref{fig:system_model}(b). The DL control phase contains $M$ time slots every $T_{\text{frame}}$. In the $m^{\text{th}}$ slot (with fixed duration $T_{\text{slot}}$), the transmitter uses a training precoder $\mathbf{f}_m\in \mathbb{C}^{N_{\tx}}$ and the receiver\footnote{We focus on a UE receiver with a single RF-chain, i.e., an analog beamforming receiver, as typically considered for cost reasons. We also assume that the analog combiner has both phase and magnitude control capability \cite{6170865}.} uses a training combiner $\mathbf{w}_m \in \mathbb{C}^{N_{\rx}}$. Both the precoder and combiner have unit power, i.e., $\|\mathbf{f}_m\| = \|\mathbf{w}_m\|=1,\forall m$.
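To make the model concrete, the clustered channel in (\ref{eq:channel_model}) can be generated in a few lines. A minimal numpy sketch, assuming half-wavelength ULAs and Gaussian angular offsets (the Gaussian AO choice anticipates Section~\ref{sec:CRLB}); all parameter values in the usage example are illustrative:
\begin{verbatim}
# Minimal sketch of the clustered channel in (1) for half-wavelength
# ULAs.  Parameter values below are illustrative assumptions.
import numpy as np

def ula(angle, n):                       # spatial response a(angle)
    return np.exp(1j * np.pi * np.arange(n) * np.sin(angle))

def channel(Ntx, Nrx, L, R, g, aod, aoa, tas, ras, rng):
    H = np.zeros((Nrx, Ntx), dtype=complex)
    for l in range(L):
        for _ in range(R):
            psi = rng.uniform(-np.pi, np.pi)          # ray phase
            dth = rng.normal(0.0, tas)                # AoD offset
            dph = rng.normal(0.0, ras)                # AoA offset
            H += g[l] * np.exp(1j * psi) * np.outer(
                ula(aoa[l] + dph, Nrx), ula(aod[l] + dth, Ntx).conj())
    return H / np.sqrt(L * R)

rng = np.random.default_rng(0)
H = channel(Ntx=32, Nrx=32, L=1, R=20, g=[1.0],
            aod=[0.3], aoa=[-0.5], tas=0.0,
            ras=np.deg2rad(5.0), rng=rng)
\end{verbatim}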
Assuming perfect synchronization, unit pilot symbols, and a channel that does not change during training, the received signal $\mathbf{y}\in \mathbb{C}^{M}$ is \begin{align} &\mathbf{y} = \text{diag}\left(\mathbf{W}^{\text{H}}\mathbf{H}\mathbf{F}\right) +\mathbf{n}, \label{eq:received_signal_model} \end{align} where $\mathbf{F} = [\mathbf{f}_1,\cdots,\mathbf{f}_M]$ and $\mathbf{W} = [\mathbf{w}_1,\cdots,\mathbf{w}_M]$ are the training beamformers at the BS and UE, respectively. We denote the thermal noise power at each receiver antenna as $\sigma_\text{n}^2$, and the post-combining noise $\mathbf{n}\in \mathbb{C}^{M}$ is a Gaussian random vector\footnote{The noise is independent across time slots.}, i.e., $\mathbf{n} \sim \mathcal{CN}(0,\sigma_\text{n}^2\mathbf{I}_M)$. \begin{figure}[t] \subfloat[Illustration of mmWave MIMO channel with $L=1$ multipath cluster.]{% \includegraphics[clip,width=1\columnwidth]{figures/clustering_channel}% } \vspace{-0mm} \subfloat[The time frame with DL control phase (grey) and DL data phase (blue).]{% \includegraphics[clip,width=1\columnwidth]{figures/time_frame}% } \caption{System model of the tracking based mmWave system.} \vspace{-3mm} \label{fig:system_model} \end{figure} Additionally, with a practical period $T_{\text{frame}}$, e.g., $\sim$10 ms, we assume the ray-specific parameters $\veta_{\text{r}}$ lose time coherence, and their realizations are independent in the data phase slots. However, the cluster-specific parameters $\veta_{\text{c}}$ are slowly varying within $T_{\text{frame}}$, and we assume they are close to the $\veta_{\text{c}}$ of the previous time frame\footnote{In LOS with 50 m between the BS and UE, a 60 mile/hr speed results in a $0.6^\circ$ change. In NLOS with scatterers 5 m from the UE, a 2 m/s speed results in a $0.4^\circ$ AoA change. Rotation of the UE may result in a larger angle change.}. For mathematical tractability, we study the data phase gain with ideal angle-steering patterns $g_{\Theta_{\rx}}(\phi|\hat{\phi}_{\text{c}})$ and $g_{\Theta_{\tx}}(\theta|\hat{\theta}_{\text{c}})$ pointing at $\hat{\phi}_{\text{c}}$ and $\hat{\theta}_{\text{c}}$ with beam-widths $\Theta_{\rx}$ and $\Theta_{\tx}$, respectively. The ideal pattern is defined by a rectangular gain envelope and unit power in the angular domain, i.e., \begin{align} g_{\Theta_{\rx}}\left(\phi|\hat{\phi}_{\text{c}}\right) = \begin{cases} \sqrt{\pi/\Theta_\rx}, & |\phi-\hat{\phi}_{\text{c}}| \leq \Theta_{\rx}/2\\ 0, & \text{otherwise.} \end{cases} \label{eq:ideal_pattern} \end{align} The average beamforming gain is \begin{align} G = \mathbb{E}_{\veta_{\text{r}}}\left\{\left|\sum_{r=1}^{R}\frac{e^{j\psi_{r}}}{\sqrt{R}}g_{\Theta_{\rx}}(\phi_{\text{c}} + \Delta\phi_r|\hat{\phi}_{\text{c}})g_{\Theta_{\tx}}(\theta_{\text{c}} + \Delta\theta_r|\hat{\theta}_{\text{c}})\right|^2\right\}. \label{eq:beamforming_gain} \end{align} In this work, our goal is to: \begin{itemize} \item Determine the error bound in tracking the propagation directions, i.e., $\text{var}(\theta_{\epsilon})$ and $\text{var}(\phi_{\epsilon})$, where $\theta_{\epsilon}= \theta_{\text{c}} -\hat{\theta}_{\text{c}}$ and $\phi_{\epsilon}= \phi_{\text{c}} -\hat{\phi}_{\text{c}}$, with respect to the AS. \item Determine the average gain $G$ from data phase beamforming and its relationship with the angle estimation errors $\phi_{\epsilon}, \theta_{\epsilon}$, the beam-widths, and the AS. \item Compare the gain obtained from two practical angle tracking algorithms (ML channel tracking and sector tracking) with respect to different beam-widths.
\end{itemize} \section{Tracking Performance Analysis} \label{sec:CRLB} In this section, we provide the Cramer-Rao lower bound (CRLB) of the angle estimators $\hat{\phi}_c$ and $\hat{\theta}_c$, which are lower bounds on the variances of $\phi_{\epsilon}$ and $\theta_{\epsilon}$ for unbiased estimators. We also provide a theoretical analysis of the beamforming gain $G$. \subsection{CRLB of angle tracking accuracy} We define $\veta = [\veta_{\text{c}}^{\transpose},\veta^{\transpose}_{\text{r}}]^{\transpose}$, which contains both the cluster-specific and ray-specific parameters. The error bound of $\veta$ is \begin{align} \mathbb{E}_{\mathbf{y},\veta}\left\{(\veta-\hat{\veta})(\veta-\hat{\veta})^{\transpose}\right\}\succcurlyeq \mathbf{J}_{\veta}^{-1}, \end{align} where $\mathbf{J}_{\veta}$ is the Fisher information matrix (FIM) of the parameters, defined as \begin{align} \mathbf{J}_{\veta} = -\mathbb{E}_{\mathbf{y},\veta}\left\{\frac{\partial^2}{\partial \veta \partial \veta^{\transpose}}\ln f(\mathbf{y},\veta) \right\}. \end{align} The likelihood function is $f(\mathbf{y},\veta) = f(\mathbf{y}|\veta) f_{\text{p}} (\veta)$, which contains the a-priori probability of $\veta$, i.e., $f_{\text{p}}(\veta)$, and the conditional probability $f(\mathbf{y}|\veta)$ from the thermal noise. The FIM is the sum of the FIM from the observation, $\mathbf{J}_\text{w}$, and the FIM from the a-priori knowledge of the parameters, $\mathbf{J}_\text{p}$ \cite{5571900}: \begin{align} \mathbf{J}_{\veta} = \mathbf{J}_\text{w}+\mathbf{J}_{\text{p}}. \end{align} The elements of $\mathbf{J}_\text{w}$ are defined as \begin{align} \{\mathbf{J}_{\text{w}}\}_{m,n} = &\mathbb{E}_{\mathbf{y}|\veta} \left\{\Re \left[\frac{\partial (-\ln f(\mathbf{y}|{\veta}))}{\partial \veta_m}\frac{\partial (-\ln f^{\hermitian}(\mathbf{y}|{\veta}))}{\partial \veta_n}\right]\right\}\nonumber\\ = & \begin{bmatrix} \mathbf{J}_{\text{c}} & \mathbf{J}_{\text{c,r}} \\ \mathbf{J}^{\transpose}_{\text{c,r}} & \mathbf{J}_{\text{r}} \end{bmatrix}, \end{align} where $\veta_m$ and $\veta_n$ are the $m^{\text{th}}$ and $n^{\text{th}}$ elements of the vector $\veta$. The exact expression is shown in Appendix~\ref{appendix:FIM}. The elements of $\mathbf{J}_{\text{p}}$ are defined as \begin{align} \{\mathbf{J}_{\text{p}}\}_{m,n} = &\mathbb{E}_{\veta} \left\{\Re \left[\frac{\partial (-\ln f_{\text{p}}({\veta}))}{\partial \veta_m}\frac{\partial (-\ln f_{\text{p}}^{\hermitian}({\veta}))}{\partial \veta_n}\right]\right\}. \end{align} Due to the independence assumption among all parameters, $\mathbf{J}_{\text{p}}$ is a diagonal matrix. By using the knowledge of the a-priori probability of the AO\footnote{Although the AO is typically modeled with a Laplacian distribution, we use a Gaussian distribution for mathematical convenience.}, $f_{\text{p}}(\Delta\phi_r) = p_{\text{AO}}(\Delta\phi_r)$ and $f_{\text{p}}(\Delta\theta_r) = p_{\text{AO}}(\Delta\theta_r)$, the matrix $\mathbf{J}_{\text{p}}$ becomes \begin{align} \mathbf{J}_{\text{p}} = \sigma^{-2}_{\text{RAS}} \begin{bmatrix} \mathbf{0} & \mathbf{0}\\ \mathbf{0} & \mathbf{E}_1 \end{bmatrix}+ \sigma^{-2}_{\text{TAS}} \begin{bmatrix} \mathbf{0} & \mathbf{0}\\ \mathbf{0} & \mathbf{E}_2 \end{bmatrix}, \end{align} where $\mathbf{E}_1 = \mathbf{I}_R \otimes \text{diag}([1,0,0]^{\transpose})$ and $\mathbf{E}_2 = \mathbf{I}_R \otimes \text{diag}([0,1,0]^\transpose)$. The operator $\otimes$ is the Kronecker product. Note that we use the definition of the equivalent FIM (EFIM) \cite{5571900} in order to express the FIM as a block matrix of $\veta_{\text{c}}$ and $\veta_{\text{r}}$.
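The block structure of $\mathbf{J}_{\text{p}}$ above is easy to verify numerically. A minimal numpy sketch, assuming the per-ray parameter ordering $(\Delta\phi_r, \Delta\theta_r, \psi_r)$ used above; the values of $R$ and the angular spreads are illustrative:
\begin{verbatim}
# Sketch of the a-priori FIM J_p: zero block for the cluster
# parameters, Kronecker-structured block for the R ray offsets.
import numpy as np

R, n_c = 4, 3                                 # rays; cluster params
s_ras2 = np.deg2rad(5.0) ** 2                 # AoA angular spread var.
s_tas2 = np.deg2rad(2.0) ** 2                 # AoD angular spread var.

E1 = np.kron(np.eye(R), np.diag([1.0, 0.0, 0.0]))  # Delta phi entries
E2 = np.kron(np.eye(R), np.diag([0.0, 1.0, 0.0]))  # Delta theta entries

Jp = np.zeros((n_c + 3 * R, n_c + 3 * R))
Jp[n_c:, n_c:] = E1 / s_ras2 + E2 / s_tas2
# The uniformly distributed ray phases psi_r contribute no prior term.
\end{verbatim}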
The EFIM allows us to determine the CRLB of the cluster-specific parameters, and the CRLBs of the AoA and AoD are \begin{align} \text{var}(\hat{\phi}_\text{c}) \geq \{\mathbf{J}^{-1}_{\veta_{\text{c}}}\}_{1,1},\quad\text{var}(\hat{\theta}_\text{c}) \geq \{\mathbf{J}^{-1}_{\veta_{\text{c}}}\}_{2,2}, \label{eq:CRLB} \end{align} where $\mathbf{J}^{-1}_{\veta_{\text{c}}} = \left(\mathbf{J}_{\text{c}} -\mathbf{J}_{\text{c,r}}\left(\mathbf{J}_{\text{r}}+\sigma^{-2}_{\text{RAS}}\mathbf{E}_1+\sigma^{-2}_{\text{TAS}}\mathbf{E}_2\right)^{-1}\mathbf{J}^{\transpose}_{\text{c,r}}\right)^{-1}.$ \subsection{Data phase beamforming gain with angle steering} In this subsection, we study the impact of the angle tracking errors $\phi_{\epsilon}$, $\theta_{\epsilon}$ and the beam-widths $\Theta_{\rx}, \Theta_{\tx}$ on the beamforming gain in the data phase. \textit{Proposition 1:} The average data phase gain is \begin{align} G = G_{\tx}(\theta_{\epsilon}) G_{\rx}(\phi_{\epsilon}), \label{eq:proposition_gain} \end{align} where the average receiver and transmitter gains $G_{\rx}$ and $G_{\tx}$ are convolutions between the squared beam pattern and the PDF $p_{\text{AS}}(x)$ of the corresponding angular spread, $x \in \{\phi, \theta\}$: \begin{align} G_{\rx}(\phi) = & g^2_{\Theta_{\rx}}\left(\phi|0\right)*p_{\text{AS}}(\phi)\nonumber\\ G_{\tx}(\theta) =& g^2_{\Theta_{\tx}}\left(\theta|0\right)*p_{\text{AS}}(\theta). \end{align} \textit{Proof:} See Appendix~\ref{appendix:gain}. The above proposition reveals the relationship of the beamforming gain with the cluster angle estimation error and the steering beam-width.
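As a worked special case of Proposition 1 (assuming, as in the simulations of Section~\ref{sec:simulation results}, that $p_{\text{AS}}$ is a zero mean Gaussian density with standard deviation $\sigma_{\text{RAS}}$), note that $g^2_{\Theta_{\rx}}(\phi|0)=\pi/\Theta_{\rx}$ for $|\phi|\leq\Theta_{\rx}/2$ and zero otherwise, so the convolution evaluates to \begin{align*} G_{\rx}(\phi_{\epsilon}) = \frac{\pi}{\Theta_{\rx}}\int_{\phi_{\epsilon}-\Theta_{\rx}/2}^{\phi_{\epsilon}+\Theta_{\rx}/2}p_{\text{AS}}(u)du = \frac{\pi}{\Theta_{\rx}}\left[Q\left(\frac{\phi_{\epsilon}-\Theta_{\rx}/2}{\sigma_{\text{RAS}}}\right)-Q\left(\frac{\phi_{\epsilon}+\Theta_{\rx}/2}{\sigma_{\text{RAS}}}\right)\right], \end{align*} where $Q(\cdot)$ denotes the Gaussian tail function. In particular, for perfect steering ($\phi_{\epsilon}=0$) the gain increases as $\Theta_{\rx}$ shrinks but saturates at $\lim_{\Theta_{\rx}\rightarrow0}G_{\rx}(0)=\pi p_{\text{AS}}(0)=\sqrt{\pi/2}/\sigma_{\text{RAS}}$, whereas for $\Theta_{\rx}\ll\vert\phi_{\epsilon}\vert$ the integration window misses the bulk of $p_{\text{AS}}$ and the gain collapses. This anticipates the numerical behavior reported in Section~\ref{sec:simulation results}.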
\section{Practical Angle Tracking Algorithms} \label{sec:algorithm} In this section, we study two tracking algorithms from the literature. \textit{ML Channel Tracking:} The first algorithm belongs to the group of algorithms which directly track the propagation angles \cite{MRM16_GD_tracking}. The key idea is to refine the channel gain estimate $\hat{g}_c$ and the angle estimates $\hat{\phi}_c$ and $\hat{\theta}_c$ in each tracking frame. Denote the estimated channel parameters in the $n^{\text{th}}$ tracking frame as $\hat{g}_c^{(n)}$, $\hat{\phi}_{\text{c}}^{(n)}$, and $\hat{\theta}_{\text{c}}^{(n)}$. The estimation steps for the complex gain and the propagation angles are given by \begin{align*} &\hat{g}_c^{(n)} = \text{arg} \min_{g} \left\|\mathbf{y}^{(n)} - g\mathbf{W}^{\text{H}} \mathbf{a}_{\rx}(\hat{\phi}_{\text{c}}^{(n-1)})\mathbf{a}^{\hermitian}_{\tx}(\hat{\theta}_{\text{c}}^{(n-1)})\mathbf{F}\right\|^2\\ &\{\hat{\phi}_{\text{c}}^{(n)}, \hat{\theta}_{\text{c}}^{(n)}\} = \text{arg} \min_{\phi,\theta} \left\|\mathbf{y}^{(n)} - \hat{g}_c^{(n)} \mathbf{W}^{\text{H}} \mathbf{a}_{\rx}(\phi)\mathbf{a}^{\hermitian}_{\tx}(\theta)\mathbf{F}\right\|^2, \end{align*} assuming the initial estimates $\hat{g}_c^{(0)}$, $\hat{\phi}_{\text{c}}^{(0)}$, and $\hat{\theta}_{\text{c}}^{(0)}$ are known. In each tracking frame, the CSI is iteratively refined. \textit{Sector Tracking:} In sector tracking, the BS and UE steer sector beams adjacent to the previously used sector beams in order to measure the signal strength of the transmitter and receiver beam pairs. The tracking algorithm aims to find the beam pair that results in the highest SNR and keeps this information updated. The sector size is typically chosen to match the beam-width supported by the antenna array. \section{Numerical Evaluation} \label{sec:simulation results} In this section, we evaluate the tracking performance. In the simulations we use the channel model in (\ref{eq:channel_model}), where the centroids of the AoA and AoD are uniformly chosen from $[-\pi/2,\pi/2]$. The angular offsets $\Delta\phi_r$ and $\Delta\theta_r$ are zero mean Gaussian random variables. In our simulations, we focus on AoA tracking and thus set the variance of $\Delta\theta_r$ to $\sigma^2_{\text{TAS}}=0$. The point-to-point SNR is defined in terms of the path gain, i.e., $\text{SNR} = g_{\text{c}}^2/\sigma^2_\text{n}$. The CRLB is computed in a deterministic manner except when a random beamformer is used. For the random beamformer, i.e., when each element in $\mathbf{W}$ and $\mathbf{F}$ is randomly chosen from $\{\pm1\pm1j\}/\sqrt{2N_{\rx}}$ and $\{\pm1\pm1j\}/\sqrt{2N_{\tx}}$, respectively \cite{MRM16_GD_tracking}, the CRLB is obtained by averaging over the random matrices $\mathbf{W}$ and $\mathbf{F}$ in order to evaluate the average performance. Both the BS and the UE have 32 antennas in order to flexibly use different beam-widths. The steering vectors that adapt the beam-width in the data phase follow the codebook from (21) of \cite{6847111}. In Fig.~\ref{fig:angle_CRLB}, we evaluate the root mean square error (RMSE) of the cluster AoA from the ML tracking approach and the CRLB from (\ref{eq:CRLB}). Quasi-omni-directional (random) training beamformers $\mathbf{W}$ and $\mathbf{F}$ are used both for the simulation and for the computation of the CRLB. Our results show that the ML estimator reaches the CRLB in the high SNR regime, as expected. Without AS, the cluster AoA estimation accuracy increases with the SNR in dB scale, and doubling the number of training slots provides a 3dB SNR improvement. When AoA AS is present, the cluster AoA accuracy bound is close to $\sigma_{\text{RAS}}$; higher SNR and more training slots provide only marginal benefits. \begin{figure} \begin{center} \includegraphics[width=0.5\textwidth]{figures/CRLB_vs_ML} \end{center} \vspace{-3mm} \caption{RMSE of cluster AoA tracking as a function of the point-to-point SNR. Different AoA AS $\sigma_{\text{RAS}}$ and training lengths $M$ are considered.} \vspace{-5mm} \label{fig:angle_CRLB} \end{figure} In Fig.~\ref{fig:gain_vs_error}, the average receiver beamforming gain from (\ref{eq:proposition_gain}) is presented as a function of the angle steering error $\phi_\epsilon$. When there are no AS and no steering errors, narrower beams always provide better beamforming gain. Since the steering error is small according to the results in Fig.~\ref{fig:angle_CRLB}, narrower beams would seem preferable. However, a channel with clustered multipaths does not benefit from them: as the dashed curves show, the angle steering gain does not necessarily increase when the beam-width becomes narrower. Additionally, with the steering error associated with tracking in clustered multipaths, the performance difference between different beam-widths, e.g., $10^{\circ}$ and $5^{\circ}$, becomes marginal. \begin{figure} \begin{center} \includegraphics[width=0.5\textwidth]{figures/gain_vs_steeringerror} \end{center} \vspace{-3mm} \caption{Average receiver gain as a function of the beam pointing error $\phi_{\epsilon}$. Different AoA AS $\sigma_{\text{RAS}}$ and beam-widths $\Theta_{\rx}$ are considered.} \vspace{-4mm} \label{fig:gain_vs_error} \end{figure} In Fig.~\ref{fig:gain_vs_time}(a), we evaluate the AoA tracking accuracy of both algorithms from Section~\ref{sec:algorithm}. Without AS, the ML tracking outperforms the sector tracking in terms of accuracy, since the precision of the latter is limited by the sector size. With AS, the tracking accuracy of both algorithms is degraded. The reason is explained by Fig.~\ref{fig:gain_vs_time}(b), which shows the angular power profile of the channel over time.
The angle tracking error in each frame is associated with a different realization of the intra-cluster angular offsets, and thus it cannot be reduced by a higher SNR or longer training. In Fig.~\ref{fig:gain_vs_time}(c), we evaluate the complementary cumulative distribution function (CCDF) of achieving a certain gain in the data phase by using these tracking algorithms, where different beam-widths in angle steering are considered. For comparison, we also include a benchmark CCDF curve for angle steering towards the true cluster AoA. Without AS, which corresponds to LOS, the benchmark has 30dB gain. The ML tracking provides a higher gain than the sector tracking due to its better accuracy, and steering with a $5^\circ$ beam-width yields 3dB more gain than using $10^\circ$. With AS, which corresponds to an NLOS path cluster, the gain is not fixed due to fading among the intra-cluster rays. Both tracking algorithms perform worse than the benchmark curve due to the estimation error and the angular spread. The advantage of ML over sector tracking becomes marginal, and steering narrower beams does not improve the gain. Due to the relationship between the number of antenna elements and the narrowest possible beam, the designer should take the angular spread into account when choosing the number of UE antenna elements. \begin{figure}[t] \begin{tabular}{cc} \subfloat[Simulated AoA tracking performance over a 1s duration.]{% \includegraphics[clip,width=1\columnwidth]{figures/angle_vs_time2}% }\\ \vspace{-0mm} \subfloat[Simulated AoA angular power profile of the channel over a 1s duration.]{% \includegraphics[clip,width=1\columnwidth]{figures/angular_power_profile2}% }\\ \vspace{-1mm} \subfloat[CCDF of the gain in the data phase. Different AoA AS and beam-widths are considered.]{% \includegraphics[clip,width=1\columnwidth]{figures/gain_cdf2}% } \end{tabular} \caption{Simulation example of channel tracking. In the first 0.5s of the simulation, the UE moves with a speed of [10,0]m/s along the x/y axes and has a $-50$deg/s rotational speed. In the second 0.5s, it moves with a speed of [0,10]m/s and has a 25deg/s rotational speed. The point-to-point SNR is 0dB. } \vspace{-5mm} \label{fig:gain_vs_time} \end{figure} \section{Conclusion} \label{sec:Conclusion} In this work, we study the performance bounds of narrowband channel tracking techniques in a clustered sparse channel. We provide the Cramer-Rao lower bound of the angle tracking error. Our study reveals that the mismatch between the clustered channel and the non-clustered sparse model degrades the angle tracking accuracy. With clustered multipaths, the angle tracking accuracy is bounded by the cluster angular spread of the propagation path and cannot be further improved by increasing the signal strength or the training duration. We also analyze the beamforming gain of the angle-steering-based beamformer and show that, under angular spread, narrow beams do not necessarily provide a better gain.
\section{Introduction and statement of the results} Logarithmic potentials of Borel measures in the complex plane play a fundamental role in Logarithmic Potential Theory. This is due to the fact that this theory is associated to the Laplace operator, which is a linear elliptic partial differential operator of second order. It is well known that in higher dimensions plurisubharmonic functions are rather connected to the complex Monge-Amp\`ere operator, which is a fully non-linear second order partial differential operator. Therefore Pluripotential Theory cannot be described by logarithmic potentials. However, the class of logarithmic potentials gives a nice class of plurisubharmonic functions which turns out to be in the local domain of definition of the complex Monge-Amp\`ere operator. This study was carried out by Carlehed \cite{5} in the case of compactly supported measures on $\mathbb{C}^{n}$ or on a bounded hyperconvex domain in $\mathbb{C}^{n}$. Our main goal is to extend this study to the complex projective space, motivated by the fact that the complex Monge-Amp\`ere operator plays an important role in K\"ahler geometry (see \cite{13}). A large class of singular potentials on which the complex Monge-Amp\`ere operator is well defined has been introduced (see \cite{12}, \cite{8}, \cite{4}). However, the global domain of definition of the complex Monge-Amp\`ere operator on compact K\"ahler manifolds is not yet well understood. Using the characterization of the local domain of definition given by Cegrell and B\l ocki (see \cite{2}, \cite{3}, \cite{7}), we show that the class of projective logarithmic potentials is contained in the local domain of definition of the complex Monge-Amp\`ere operator on the complex projective space $(\mathbb{P}^{n},\omega)$ equipped with the Fubini-Study metric. Let $\mu$ be a probability measure on $\mathbb{P}^{n}$. Then its projective logarithmic potential is defined on $\mathbb{P}^{n}$ as follows: $$ \mathbb{G}_{\mu}(\zeta):=\int_{\mathbb{P}^{n}} G(\zeta,\eta)d\mu(\eta)\quad\hbox{where}\quad G(\zeta,\eta):=\log{\vert\zeta\wedge\eta\vert\over\vert\zeta\vert\vert\eta\vert}. $$ \smallskip \begin{thm}\label{thm1} Let $\mu$ be a probability measure on $\mathbb{P}^{n}$. Then the following properties hold.\\ 1. The potential $\mathbb{G}_{\mu}$ is a negative $\omega$-plurisubharmonic function on $\mathbb{P}^{n}$ normalized by the following condition: $$ \int_{\mathbb{P}^{n}} \mathbb{G}_{\mu} \, \omega_{FS}^n = -\alpha_{n}, $$ where $\alpha_{n}$ is a numerical constant.\\ 2. $\mathbb{G}_{\mu}\in W^{1,p}(\mathbb{P}^{n})$ for any $0 < p < 2n$.\\ 3. $\mathbb{G}_{\mu}\in DMA_{loc}(\mathbb{P}^{n},\omega)$. \end{thm} \smallskip We also prove a regularizing property of the operator $\mu\mapsto\mathbb{G}_{\mu}$ acting on probability measures on $\mathbb{P}^{n}$. \begin{thm}\label{thm2} Let $\mu$ be a probability measure on $\mathbb{P}^{n}$ with no atoms. Then the Monge-Amp\`ere measure $(\omega +dd^{c}\mathbb{G}_{\mu})^{n}$ is absolutely continuous with respect to the Fubini-Study volume form on $\mathbb{P}^{n}$.
\end{thm} \section{The logarithmic potential, proof of Theorem 1.1} The complex projective space can be covered by a finite number of charts given by $ \mathcal{U}_{k}:=\{[\zeta_{0},\zeta_{1},\cdots,\zeta_{n}]\in\mathbb{P}^{n}\ :\ \zeta_{k}\not=0\}\ ( 0\leq k\leq n)$, and the corresponding coordinate chart is defined on $\mathcal{U}_{k}$ by the formula $$ z^{k}(\zeta)=z^{k}:=(z_{j}^{k})_{0\leq j\leq n, j\not=k}\quad\hbox{where}\quad z_{j}^{k}:={\zeta_{j}\over\zeta_{k}}\quad\hbox{for}\quad j\not=k. $$ The Fubini-Study metric $\omega = \omega_{FS}$ is given on $\mathcal U_k$ by $\omega|_{\mathcal{U}_{k}}={1\over 2}dd^{c}\log(1+\vert z^{k}\vert^{2})$. The projective logarithmic kernel on $\mathbb{P}^{n}\times\mathbb{P}^{n}$ is naturally defined by the following formula $$ G(\zeta,\eta):=\log{\vert\zeta\wedge\eta\vert\over\vert\zeta\vert\vert\eta\vert}=\log\sin{d(\zeta,\eta)\over\sqrt{2}}\, \, \hbox{where} \,\, \vert\zeta\wedge\eta\vert^{2}=\sum_{0\leq i<j\leq n}\vert\zeta_{i}\eta_{j}-\zeta_{j}\eta_{i}\vert^{2} $$ and $d$ is the geodesic distance associated to the Fubini-Study metric (see \cite{15},\cite{6}). We recall some definitions and give a useful characterization of the local domain of definition of the complex Monge-Amp\`ere operator due to Z. B\l ocki (see \cite{2},\cite{3}). \begin{defi} Let $\Omega \subset \mathbb C^n$ be a domain. By definition, $DMA_{loc}(\Omega)$ denotes the set of plurisubharmonic functions $\phi$ on $\Omega$ for which there exists a positive Borel measure $\sigma$ on $\Omega$ such that for every open $U\subset\subset \Omega$ and every sequence $(\phi_{j})\subset PSH(U)\cap C^{\infty}(U)$ with $\phi_{j}\searrow\phi$ in $U$, the sequence of measures $(dd^{c}\phi_{j})^{n}$ converges weakly to $\sigma$ in $U$. In this case, we put $(dd^{c}\phi)^{n}=\sigma$. \end{defi} The following result of B\l ocki gives a useful characterization of the local domain of definition of the complex Monge-Amp\`ere operator. \begin{thm} (Z. B\l ocki \cite{2}, \cite{3}). 1. If $\Omega\subset\mathbb{C}^{2}$ is an open set, then $DMA_{loc}(\Omega)=PSH(\Omega)\cap W^{1,2}_{loc}(\Omega)$. 2. If $n\geq 3$, a plurisubharmonic function $\phi$ on an open set $\Omega\subset\mathbb{C}^{n}$ belongs to $DMA_{loc}(\Omega)$ if and only if for any $z\in \Omega$ there exist a neighborhood $U_{z}$ of $z$ in $\Omega$ and a sequence $(\phi_{j})\subset PSH(U_{z})\cap C^{\infty}(U_{z})$ with $\phi_{j}\searrow \phi$ in $U_{z}$ such that the sequences $$ \vert \phi_{j}\vert^{n-p-2}d\phi_{j}\wedge d^{c}\phi_{j}\wedge(dd^{c}\phi_{j})^{p}\wedge(dd^{c}\vert z\vert^{2})^{n-p-1},\quad p=0,1,\cdots,n-2 $$ are locally weakly bounded in $U_{z}$. \end{thm} Observe that by Bedford and Taylor \cite{1}, the class of locally bounded plurisubharmonic functions in $\Omega$ is contained in $DMA_{loc}(\Omega)$. By the work of J.-P. Demailly \cite{9}, any plurisubharmonic function in $\Omega$ which is bounded near the boundary $\partial \Omega$ is contained in $DMA_{loc}(\Omega)$. Let $(X,\omega)$ be a K\"ahler manifold of dimension $n$. We denote by $PSH (X,\omega)$ the set of $\omega$-plurisubharmonic functions on $X$. It is then possible to define in the same way the local domain of definition $DMA_{loc} (X,\omega)$ of the complex Monge-Amp\`ere operator on $(X,\omega)$: a function $\varphi \in PSH (X,\omega)$ belongs to $DMA_{loc} (X,\omega)$ iff for any local chart $(U,z)$ the function $\phi := \varphi + \rho$ belongs to $DMA_{loc} (U)$, where $\rho$ is a local potential of $\omega$ on $U$. The previous theorem then extends trivially to this general case.
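Let us illustrate B\l ocki's two-dimensional criterion on a standard model example (recorded here only for orientation). For $\phi(z)=\log\vert z\vert$ on $\mathbb{C}^{2}$ we have $\vert\nabla\phi(z)\vert=1/\vert z\vert$, so that $$ \int_{\vert z\vert<1}\vert\nabla\phi\vert^{2}\,dV=\int_{\vert z\vert<1}{dV(z)\over\vert z\vert^{2}}=2\pi^{2}\int_{0}^{1}r\,dr<+\infty, $$ hence $\phi\in PSH(\mathbb{C}^{2})\cap W^{1,2}_{loc}(\mathbb{C}^{2})=DMA_{loc}(\mathbb{C}^{2})$. By contrast, $u(z)=\log\vert z_{1}\vert$ is plurisubharmonic on $\mathbb{C}^{2}$, but $\int\vert\nabla u\vert^{2}\,dV=\int dV/\vert z_{1}\vert^{2}$ diverges near the hyperplane $\{z_{1}=0\}$, so $u\notin DMA_{loc}(\mathbb{C}^{2})$.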
Let $(\chi_{j})_{0\leq j\leq n}$ be a fixed partition of unity subordinated to the covering $(\mathcal{U}_{j})_{0\leq j\leq n}$. We define $m_{j}:=\int\chi_{j}d\mu$ and $J:=\{j\in\{0,1,\cdots,n\}\ :\ m_{j}\not=0\}$. Then $J\not=\emptyset$ and, for $j\in J$, the measure $\mu_{j}:={1\over m_{j}}\chi_{j}\mu$ is a probability measure on $\mathbb{P}^{n}$ supported in $\mathcal{U}_{j}$, and we have the following convex decomposition of $\mu$: $$ \mu=\sum_{j\in J}m_{j}\mu_{j}. $$ Therefore the potential $\mathbb{G}_{\mu}$ can be written as a convex combination $$ \mathbb{G}_{\mu}=\sum_{j\in J} m_j \mathbb{G}_{\mu_{j}}. $$ To show that $\mathbb{G}_{\mu}\in DMA_{loc}(\mathbb{P}^{n},\omega)$, it thus suffices to consider the case of a measure compactly supported in an affine chart. Without loss of generality, we may always assume that $\mu$ is compactly supported in $\mathcal{U}_{0}$, and we are reduced to the study of the potential $\mathbb{G}_{\mu}$ on the open set $\mathcal{U}_{0}$. The restriction of $G(\zeta,\eta)$ to $\mathcal{U}_{0}\times\mathcal{U}_{0}$ can be expressed in the affine coordinates as $$ G(\zeta,\eta)=N(z,w)-{1\over 2}\log(1+\vert z\vert^{2}),$$ where $$ N(z,w):={1\over 2}\log{\vert z-w\vert^{2}+\vert z\wedge w\vert^{2}\over 1+\vert w\vert^{2}} $$ will be called the projective logarithmic kernel on $\mathbb{C}^{n}$. \begin{lem}\label{diag} 1. The kernel $N$ is upper semicontinuous on $\mathbb{C}^{n}\times\mathbb{C}^{n}$ and smooth off the diagonal of $\mathbb{C}^{n}\times\mathbb{C}^{n}$.\\ 2. For any fixed $w\in\mathbb{C}^{n}$, the function $ N(\cdot,w)\ :\ z\mapsto N(z,w)$ is plurisubharmonic in $\mathbb{C}^{n}$ and satisfies the following inequality: $$ {1\over 2}\log{\vert z-w\vert^{2}\over 1+\vert w\vert^{2}} \leq N(z,w)\leq{1\over 2}\log(1+\vert z\vert^{2}),\quad\forall\ (z,w)\in \mathbb{C}^{n}\times\mathbb{C}^{n}. $$ \end{lem} From Lemma \ref{diag}, we obtain the following properties of the projective logarithmic kernel $G$ on $\mathbb{P}^{n}\times\mathbb{P}^{n}$. \begin{cor} 1. The kernel $G$ is a non-positive upper semicontinuous function on $\mathbb{P}^{n}\times\mathbb{P}^{n}$ and smooth off the diagonal of $\mathbb{P}^{n}\times\mathbb{P}^{n}$.\\ 2. For any fixed $\eta\in\mathbb{P}^{n}$, the function $G(\cdot,\eta)\ :\ \zeta \mapsto G(\zeta,\eta)$ is a non-positive $\omega$-plurisubharmonic function on $\mathbb{P}^{n}$ and smooth in $\mathbb{P}^{n}\setminus\{\eta\}$, hence $G(\cdot,\eta)\in DMA_{loc}(\mathbb{P}^{n},\omega)$. Moreover $(\omega + dd^c G(\cdot,\eta))^n = \delta_\eta$. \end{cor} For a probability measure $\nu$ on $\mathbb{C}^{n}$, we define the projective logarithmic potential of $\nu$ as follows: $$ \mathbb{V}_{\nu}(z):={1\over 2}\int_{\mathbb{C}^{n}}\log{\vert z-w\vert^{2}+\vert z\wedge w\vert^{2}\over 1+\vert w\vert^{2}}d\nu(w). $$ \begin{pro}\label{pluris} Let $\nu$ be a probability measure on $\mathbb{C}^{n}$. Then the function $\mathbb{V}_{\nu}$ is plurisubharmonic in $\mathbb{C}^{n}$ and for all $z\in\mathbb{C}^{n}$ $$ \mathbb{V}_{\nu}(z)\leq{1\over 2}\log(1+\vert z\vert^{2}). $$ Moreover $\mathbb{V}_{\nu}\in DMA_{loc}(\mathbb{C}^{n})$ and $$ (dd^{c}\mathbb{V}_{\nu})^{n}=\int_{\mathbb{C}^{n}\times\cdots\times\mathbb{C}^{n}}dd_{z}^{c}N(\cdot,w_{1})\wedge\cdots\wedge dd_{z}^{c}N(\cdot,w_{n})\,d\nu(w_{1})\cdots d\nu(w_{n}). $$ \end{pro} \noindent{\bf Proof of Theorem \ref{thm1}.} 1. As we have seen, we have $$ \mathbb G_\mu = \sum_{j \in J} m_j \mathbb G_{\mu_j}, $$ where $\mu_j$ is compactly supported in the affine chart $\mathcal U_j$.
Observe that for a fixed $k$ one can write on $\mathcal U_k$ $$ \mathbb{G}_{\mu_k}(\zeta)+{1\over 2}\log(1+\vert z\vert^{2})= \mathbb{V}_{\mu_k}(z), \, \, \text{where} \, \, z := z^{k} (\zeta) \in \mathbb C^n, $$ which is plurisubharmonic in $\mathbb C^n$. Hence $\mathbb{G}_{\mu}$ is $\omega$-plurisubharmonic on $\mathbb{P}^{n}$. 2. By the co-area formula (see \cite{10}), \begin{eqnarray*} \int_{\mathbb{P}^{n}}\mathbb{G}_{\mu}(\zeta)dV(\zeta)&=&\int_{0}^{\pi/\sqrt{2}}\log\sin{r\over\sqrt{2}}\, A(r)dr\\ &=&-{c_{n}\over\sqrt{2}n^{2}}, \end{eqnarray*} where $A(r):=c_{n}\sin^{2n-2}(r/\sqrt{2})\sin(\sqrt{2}r)$ is the area of the sphere centered at $\eta$ with radius $r$ in $\mathbb{P}^{n}$ and $c_{n}$ is a numerical constant (see \cite{14}, page 168, or \cite{11}, Lemma 5.6).\\ Let $p\geq 1$. Since $\vert\nabla d\vert_{\omega}=1$, again by the co-area formula, \begin{eqnarray*} \int_{\mathbb{P}^{n}}\vert\nabla\mathbb{G}_{\mu}(\zeta)\vert^{p}dV(\zeta)&\leq& \int_{\mathbb{P}^{n}}\cot^{p}\Bigl({d(\zeta,\eta)\over\sqrt{2}}\Bigr)d\mu(\eta)dV(\zeta)\\&\leq& 2\sqrt{2}c_{n}\int_{0}^{\pi/2}\sin^{2n-1-p}t \,dt, \end{eqnarray*} which is finite if and only if $p < 2n$. Hence $\mathbb{G}_{\mu}\in W^{1,p}(\mathbb{P}^{n})$ for all $p\in ]0,2n[$ (by concavity of $x\mapsto x^{p}$ for $0<p<1$). 3. When $n = 2$, we can apply the previous result to conclude that $ \mathbb{G}_{\mu}\in DMA_{loc}(\mathbb{P}^{2})$. When $n\geq 3$, we apply B\l ocki's characterization stated above to show that $\mathbb{G}_{\mu_k} \in DMA_{loc}(\mathcal U_k)$. We consider the approximating sequence $$ \mathbb{V}^{\epsilon}_{\mu_k}(z)={1\over 2}\int_{\mathbb{C}^{n}}\log\Bigl({\vert z-w\vert^{2}+\vert z\wedge w\vert^{2}\over 1+\vert w\vert^{2}}+\epsilon^{2}\Bigr)d \mu_k(w)\searrow \mathbb{V}_{\mu_k}(z), $$ and use the next classical lemma on Riesz potentials to establish the uniform estimates on the weighted gradients required in B\l ocki's theorem. \begin{lem} Let $\mu$ be a probability measure on $\mathbb{C}^{n}$. For $0<\alpha<2n$, define the Riesz potential of $\mu$ by $$ J_{\mu,\alpha}(z):=\int_{\mathbb{C}^{n}}{d\mu(w)\over\vert z-w\vert^{\alpha}}. $$ If $0<p<2n/\alpha$, then $J_{\mu,\alpha}\in L^{p}_{loc}(\mathbb{C}^{n})$. \end{lem}
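Although this lemma is standard, we sketch the argument for $p\geq 1$ for the reader's convenience: since $\mu$ is a probability measure, Jensen's inequality gives $$ J_{\mu,\alpha}(z)^{p}\leq \int_{\mathbb{C}^{n}}{d\mu(w)\over\vert z-w\vert^{\alpha p}}, $$ and by the Fubini-Tonelli theorem, for any ball $B\subset\mathbb{C}^{n}$, $$ \int_{B}J_{\mu,\alpha}(z)^{p}dV(z)\leq \sup_{w\in\mathbb{C}^{n}}\int_{B}{dV(z)\over\vert z-w\vert^{\alpha p}}<+\infty, $$ since $\alpha p<2n$. The case $0<p<1$ follows from the case $p=1$ by H\"older's inequality on bounded sets.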
\section{Regularizing property and proof of Theorem \ref{thm2}} We prove a regularizing property of the operator $\mu\mapsto\mathbb{G}_{\mu}$. By the localization process explained before, the proof of Theorem \ref{thm2} follows from the next theorem, which generalizes and improves a result of Carlehed (see \cite{5}). \begin{thm}\label{thm3} Let $\mu$ be a probability measure on $\mathbb{C}^{n}$ with no atoms and let $\psi \in \mathcal L (\mathbb C^n)$. Assume that $\psi$ is smooth in some open subset $U \subset \mathbb C^n$. Then for any $0 \leq m \leq n $, the Monge-Amp\`ere measure $(dd^{c}\mathbb{V}_{\mu})^{m} \wedge (dd^c \psi)^{n - m}$ is absolutely continuous with respect to the Lebesgue measure on $U$. \end{thm} The proof is based on the following elementary lemma. \begin{lem}\label{absolut} Let $(w_1, \cdots , w_n) \in (\mathbb C^n)^n$ be fixed such that $w_{1}\not=w_{2}$. Let $\psi \in \mathcal L (\mathbb C^n)$ and assume that $\psi$ is smooth in some open subset $U \subset \mathbb C^n$. Then for any integer $ 0 \leq m \leq n$, the measure $$ \bigwedge_{1 \leq j \leq m} dd^{c}\log(\vert \cdot -w_j\vert^{2}+\vert \cdot \wedge w_j\vert^{2}) \wedge(dd^c \psi) ^{n - m} $$ is absolutely continuous with respect to the Lebesgue measure on $U$. \end{lem} {\bf Proof of Theorem \ref{thm3}:} We first assume that $m = n$. Let $K\subset\mathbb{C}^{n}$ be a compact set such that $(dd^{c}\vert z\vert^{2})^{n}(K)=0$. Set $\Delta:=\{(w,w,\cdots,w)\ :\ w\in\mathbb{C}^{n}\}$. Since $\mu$ puts no mass on any point, it follows by Fubini's theorem that $\mu^{\otimes n} (\Delta)=0$. By Proposition \ref{pluris}, $$ \int_K (dd^{c}\mathbb{V}_{\mu})^n =\int_{(\mathbb{C}^{n})^{n}\setminus\Delta}f(w_{1},\cdots,w_{n})d\mu^{\otimes n}(w_{1},\cdots,w_{n}), $$ where $$ f(w_{1},\cdots,w_{n})=\int_{K}dd^{c}\log(\vert z-w_1\vert^{2}+\vert z\wedge w_1\vert^{2})\wedge\cdots\wedge dd^{c}\log(\vert z-w_{n}\vert^{2}+\vert z\wedge w_{n}\vert^{2}). $$ By Lemma \ref{absolut}, $f(w_1, \cdots, w_n) = 0$ for any $(w_{1},\cdots,w_{n})\not\in\Delta$, hence $(dd^{c}\mathbb{V}_{\mu})^{n}(K)=0$. The case $1 \leq m \leq n-1$ follows from Lemma \ref{absolut} in the same way. The proof is complete. \bigskip {\bf Proof of Theorem \ref{thm2}:} As we have seen in the proof of Theorem \ref{thm1}, one can write on each coordinate chart $\mathcal U_k$ $$ \mathbb G_\mu (\zeta) = m_k \mathbb{G}_{\mu_k} + \psi_k (z), $$ where $\psi_k \in \mathcal L (\mathbb C^n)$ is a smooth function in $\mathbb C^n$. Using Theorem \ref{thm3} again, we conclude that the measure $(\omega + dd^c \mathbb G_\mu)^n$ is absolutely continuous with respect to the Lebesgue measure on each chart $\mathcal U_k$, hence with respect to the Fubini-Study volume form on $\mathbb P^n$. \section*{Acknowledgements} It is a pleasure to thank my supervisors Ahmed Zeriahi and Said Asserda for their support, suggestions and encouragement. I would also like to thank Professor Vincent Guedj for very useful discussions and suggestions. Part of this work was done when the author was visiting l'Institut de Math\'ematiques de Toulouse in March 2016. She would like to thank this institution for the invitation.
\section{Introduction}\label{one} Let $G$ be a compact connected Lie group which acts on a compact connected manifold $M$, the cohomogeneity of the action being equal to one; this means that there exists a $G$-orbit whose codimension in $M$ is equal to one. For such group actions, we investigate the corresponding equivariant cohomology $H^*_G(M)$ (the coefficient ring will always be $\mathbb R$). We are especially interested in the natural $H^*(BG)$-module structure of this space. The first natural question concerning this module is whether it is free, in other words, whether the $G$-action is equivariantly formal. One can easily find examples which show that the answer is in general negative. Instead of being free, we may also wonder whether the above-mentioned module satisfies the (weaker) requirement of being Cohen-Macaulay. It turns out that the answer is in our context always positive: this is the main result of our paper. Before stating it, we mention that the relevance of the Cohen-Macaulay condition in equivariant cohomology was for the first time noticed by Bredon \cite{Bredon}, inspired by Atiyah \cite{At}, who had previously used this notion in equivariant $K$-theory. It has also attracted attention in the theory of equivariant cohomology of finite group actions, see e.g.~\cite{Duflot}. More recently, group actions whose equivariant cohomology satisfies this requirement have been investigated in \cite{FranzPuppe2003}, \cite{GT'}, and \cite{GR}. We adopt the terminology already used in those papers: if a group $G$ acts on a space $M$ in such a way that $H^*_G(M)$ is a Cohen-Macaulay $H^*(BG)$-module, we simply say that the $G$-action is Cohen-Macaulay. \begin{thm}\label{thm:cohom1eqformal} Any cohomogeneity one action of a compact connected Lie group on a compact connected manifold is Cohen-Macaulay. \end{thm} Concretely, if the group action is $G\times M \to M$, then the (Krull) dimension and the depth of $H^*_G(M)$ over $H^*(BG)$ are equal. In fact, we can say exactly what the value of these two numbers is: the highest rank of a $G$-isotropy group. To put our theorem into perspective, we mention the following result, which has been proved in \cite{GR}: an action of a compact connected Lie group on a compact manifold with the property that all isotropy groups have the same rank is Cohen-Macaulay. Consequently, if the $G$-action is transitive, then it is Cohen-Macaulay (see also Proposition \ref{lem:cohomrank} and Remark \ref{assertion} below). We deduce: \begin{cor}\label{<2} Any action of a compact connected Lie group on a compact connected manifold whose cohomogeneity is zero or one is Cohen-Macaulay. \end{cor} We also note that actions of cohomogeneity two or larger are not necessarily Cohen-Macaulay: examples already appear in the classification of $T^2$-actions on $4$-manifolds by Orlik and Raymond \cite{Or-Ra}, see Example \ref{exor} in this paper. In general, a group action is equivariantly formal if and only if it is Cohen-Macaulay and the rank of at least one isotropy group is maximal, i.e.~equal to the rank of the acting group (cf.~\cite{GR}, see also Proposition \ref{gr} below). This immediately implies the following characterization of equivariant formality for cohomogeneity one actions: \begin{cor}\label{cor:eqf} A cohomogeneity one action of a compact connected Lie group on a compact connected manifold is equivariantly formal if and only if the rank of at least one isotropy group is maximal. 
\end{cor} Corollary \ref{cor:eqf} shows that the cohomogeneity one action $G\times M \to M$ is equivariantly formal whenever $M$ satisfies the purely topological condition $\chi(M)>0$ (indeed, it is known that this inequality implies the condition on the rank of the isotropy groups in Corollary \ref{cor:eqf}). Extensive lists of cohomogeneity one actions on manifolds with positive Euler characteristic can be found for instance in \cite{AlekseevskyPodesta} and \cite{Fr}. The above observation will be used to obtain the following obstruction to the existence of a group action on $M$ of cohomogeneity zero or one. \begin{cor}\label{compact} If a compact manifold $M$ admits a cohomogeneity one action of a compact Lie group, then we have $\chi(M)>0$ if and only if $H^{\rm odd}(M)=\{0\}$. \end{cor} This topic is addressed in Subsection \ref{subsec:ec} below. We also mention that, if $M$ is as in Corollary \ref{compact}, then $\chi(M)>0$ implies that $\pi_1(M)$ is finite, see Lemma \ref{pi1finite}. By classical results of Hopf and Samelson \cite{Ho-Sa}, respectively Borel \cite{Bo}, the fact that $\chi(M)>0$ implies both $H^{\rm odd}(M)=\{0\}$ and the finiteness of $\pi_1(M)$ holds true also in the case when $M$ admits an action of a compact Lie group which is transitive, i.e.~of cohomogeneity equal to zero. This shows, for example, that there is no compact connected Lie group action with cohomogeneity zero or one on a compact manifold with the rational homology type of the connected sum $(S^1\times S^3)\# (S^2 \times S^2)$. However, the 4-manifold $R(1,0)$ of Orlik and Raymond \cite{Or-Ra} mentioned in Example \ref{exor} is homeomorphic to this connected sum and has a $T^2$-action of cohomogeneity two. Thus, the equivalence of $\chi(M)>0$ and $H^{\rm odd}(M)=\{0\}$ no longer holds for group actions with cohomogeneity greater than one. It should be noted that Corollary \ref{compact} is not a new result, as it follows also from a result of Grove and Halperin \cite{GH} about the rational homotopy of cohomogeneity one actions, see Remark \ref{rii} below. The situation when $M$ is odd-dimensional is discussed in Subsection \ref{subsec:misod}. In this case, equivariant formality is equivalent to $\mathrm{rank}\, H = \mathrm{rank}\, G$, where $H$ denotes a regular isotropy of the $G$-action. We obtain a relation involving $\dim H^*(M)$, the Euler characteristic of $G/H$ and the Weyl group $W$ of the cohomogeneity one action: see Corollary \ref{cor:oddd}. This will enable us to obtain some results for cohomogeneity one actions on odd-dimensional rational homology spheres. Finally, in Subsection \ref{torsfre} we show that for a cohomogeneity one action $G\times M \to M$, the $H^*(BG)$-module $H^*_G(M)$ is torsion free if and only if it is free. We note that the latter equivalence is in general not true for arbitrary group actions: this topic is investigated in \cite{Al} and \cite{Fr-Pu2}. \noindent{\bf Acknowledgement.} We would like to thank Wolfgang Ziller for some valuable comments on a previous version of this paper. \section{Equivariantly formal and Cohen-Macaulay group actions}\label{sec:lifting} Let $G\times M \to M$ be a differentiable group action, where both the Lie group $G$ and the manifold $M$ are compact. The equivariant cohomology ring is defined in the usual way, by using the classifying principal $G$-bundle $EG\to BG$, as follows: $H^*_G(M)=H^*(EG \times_G M)$.
It has a canonical structure of an $H^*(BG)$-algebra, induced by the ring homomorphism $\pi^*:H^*(BG)\to H^*_G(M)$, where $\pi : EG \times_G M \to BG$ is the canonical map (cf.~e.g.~ \cite[Ch.~III]{Hs} or \cite[Appendix C]{GGK}). We say that the group action is {\it equivariantly formal} if $H^*_G(M)$ regarded as an $H^*(BG)$-module is free. There is also a relative notion of equivariant cohomology: if $N$ is a $G$-invariant submanifold of $M$, then we define $H^*_G(M,N)=H^*(EG\times_G M, EG\times_G N)$. This cohomology again carries an $H^*(BG)$-module structure, this time induced by the map $H^*(BG)\to H^*_G(M)$ and the cup product $H^*_G(M)\times H^*_G(M,N)\to H^*_G(M,N)$; see \cite[p.~178]{tomDieck}. \subsection{Criteria for equivariant formality} The following result is known. We state it for future reference and sketch a proof for the reader's convenience. It also involves $T$, which is a maximal torus of $G$. \begin{prop}\label{abo} If a compact connected Lie group $G$ acts on a compact manifold $M$, then the following statements are equivalent: (a) The $G$-action on $M$ is equivariantly formal. (b) We have $H^*_G(M) \simeq H^*(M)\otimes H^*(BG)$ by an isomorphism of $H^*(BG)$-modules. (c) The homomorphism $i^*: H^*_G(M)\to H^*(M)$ induced by the canonical inclusion $i: M \to EG\times_G M$ is surjective. In this case, it automatically descends to a ring isomorphism ${\mathbb R} \otimes_{H^*(BG)}H^*_G(M) \to H^*(M)$. (d) The $T$-action on $M$ induced by restriction is equivariantly formal. (e) $\dim H^*(M)=\dim H^*(M^T)$, where $M^T$ is the $T$-fixed point set. \end{prop} \begin{proof} A key ingredient of the proof is the Leray-Serre spectral sequence of the bundle $M \stackrel{i}{\to} EG\times_G M \stackrel{\pi}{\to} BG$, whose $E_2$-term is $H^*(M)\otimes H^*(BG)$. Both (b) and (c) are clearly equivalent to the fact that this spectral sequence collapses at $E_2$. The equivalence to (a) is more involved, see for instance \cite[Corollary (4.2.3)]{AlldayPuppe}. The equivalence to (d) is the content of \cite[Proposition C.26]{GGK}. For the equivalence to (e), see \cite[p.~46]{Hs}. \end{proof} We also mention the following useful criterion for equivariant formality: \begin{prop}\label{wealso} An action of a compact connected Lie group $G$ on a compact manifold $M$ with $H^{\rm odd}(M)=\{0\}$ is automatically equivariantly formal. The converse implication is true provided that the $T$-fixed point set $M^T$ is finite. \end{prop} \begin{proof} If $H^{\rm odd}(M)=\{0\}$, then the Leray-Serre spectral sequence of the bundle $ET \times_T M \to BT$ collapses at $E_2$, which is equal to $H^*(M)\otimes H^*(BT)$; hence $H^*_T(M)$ is a free $H^*(BT)$-module, and the action is equivariantly formal by Proposition \ref{abo} (d). To prove the second assertion we note that the Borel localization theorem implies that the kernel of the natural map $H^*_T(M)\to H^*_T(M^T)$ equals the torsion submodule of $H^*_T(M)$, which vanishes in the case of an equivariantly formal action. Because $M^T$ is assumed to be finite, $H^*_T(M^T)$ is concentrated only in even degrees. Consequently, the same holds true for $H^*_T(M)$, as well as for $H^*(M)$, due to the isomorphism $H^*_T(M)\simeq H^*(M)\otimes H^*(BT)$. \end{proof}
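A standard illustration of these criteria (included only as an example): for the rotation action of the circle $T=S^1$ on $M=S^2$ we have $H^{\rm odd}(S^2)=\{0\}$, so the action is equivariantly formal by Proposition \ref{wealso}; alternatively, $M^T$ consists of the two poles, so that $\dim H^*(M^T)=2=\dim H^*(M)$, in accordance with criterion (e) of Proposition \ref{abo}.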
\subsection{Cohen-Macaulay modules and group actions}\label{subsec:cm} If $R$ is a Noetherian graded *local ring, one can associate to any finitely generated graded $R$-module $A$ its Krull dimension, respectively its depth (see e.g.~\cite[Section 1.5]{BrunsHerzog}). We always have $\operatorname{depth} A\leq \dim A$, and if dimension and depth coincide, then we say that the finitely generated graded $R$-module $A$ is a \emph{Cohen-Macaulay module} over $R$. The following result is an effective tool frequently used in this paper: \begin{lem}\label{ineq} If $0 \to A \to B \to C \to 0$ is a short exact sequence of finitely generated $R$-modules, then we have: \begin{itemize} \item[(i)] $\dim B = {\rm max} \{\dim A, \dim C\}$; \item[(ii)] ${\rm depth} \, A \ge {\rm min} \{ {\rm depth} \, B, {\rm depth } \, C +1\}$; \item[(iii)] ${\rm depth} \, C \ge {\rm min} \{ {\rm depth} \, A-1, {\rm depth } \, B\}$. \end{itemize} \end{lem} \begin{proof} For (i) we refer to \cite[Section 12]{Matsumura} and for (ii) and (iii) to \cite[Proposition 1.2.9]{BrunsHerzog}. \end{proof} \begin{cor}\label{sumofCM} If $A$ and $B$ are Cohen-Macaulay modules over $R$ of the same dimension $d$, then $A\oplus B$ is again Cohen-Macaulay of dimension $d$. \end{cor} \begin{proof} This follows from Lemma \ref{ineq}, applied to the exact sequence $0 \to A \to A \oplus B \to B \to 0$. \end{proof} Yet another useful tool for us will be the following lemma, which is the graded version of \cite[Proposition 12, Section IV.B]{Se}. \begin{lem}\label{serre} Let $R$ and $S$ be two Noetherian graded *local rings and let $\varphi : R\to S$ be a homomorphism that makes $S$ into a finitely generated $R$-module. If $A$ is a finitely generated $S$-module, then we have: $$ {\rm depth}_R \, A ={\rm depth}_S \, A \quad {\it and} \quad \dim_R A = \dim_S A.$$ In particular, $A$ is Cohen-Macaulay as an $R$-module if and only if it is Cohen-Macaulay as an $S$-module. \end{lem} We say that the group action $G\times M \to M$ is {\it Cohen-Macaulay} if $H^*_G(M)$, regarded as an $H^*(BG)$-module, is Cohen-Macaulay. The relevance of this notion for the theory of equivariant cohomology was first noticed by Bredon in \cite{Bredon}. Other references are \cite{FranzPuppe2003}, \cite{GR}, and \cite{GT'}. The following result gives an example of a Cohen-Macaulay action, which is important for this paper. We first note that if $G$ is a compact Lie group and $K\subset G$ a subgroup, then there is a canonical map $BK\to BG$ induced by the presentations $BG = EG/G$ and $BK= EG/K$. The ring homomorphism $H^*(BG)\to H^*(BK)$ makes $H^*(BK)$ into an $H^*(BG)$-module. \begin{prop}\label{lem:cohomrank} Let $G$ be a compact connected Lie group and $K\subset G$ a Lie subgroup, possibly non-connected. Then $H^*_G(G/K)=H^*(BK)$ is a Cohen-Macaulay module over $H^*(BG)$ of dimension equal to $\mathrm{rank}\, K$. \end{prop} \begin{proof} The fact that $H^*_G(G/K)= H^*(BK)$ is Cohen-Macaulay is a very special case of \cite[Corollary 4.3]{GR}, because all isotropy groups of the natural $G$-action on $G/K$ have the same rank. To find its dimension, we consider the identity component $K_0$ of $K$ and note that $H^*(BK_0)$ is also a Cohen-Macaulay module over $H^*(BG)$; by Lemma \ref{serre}, the dimension of $H^*(BK_0)$ over $H^*(BG)$ is equal to the (Krull) dimension of the (polynomial) ring $H^*(BK_0)$, which is $\mathrm{rank}\, K_0$. Let us now observe that $H^*(BK)=H^*(BK_0)^{K/K_0}$ is a nonzero $H^*(BG)$-submodule of the Cohen-Macaulay module $H^*(BK_0)$, hence by \cite[Lemma 5.4]{GT'} (the graded version of \cite[Lemma 4.3]{FranzPuppe2003}), $\dim_{H^*(BG)} H^*(BK)= \dim_{H^*(BG)}H^*(BK_0) =\mathrm{rank}\, K_0.$ \end{proof} As a byproduct, we can now easily deduce the following result.
\begin{cor}\label{nonc} If $G$ is a (possibly non-connected) compact Lie group, then $H^*(BG)$ is a Cohen-Macaulay ring. \end{cor} \begin{proof} There exists a unitary group $U(n)$ which contains $G$ as a subgroup. By Proposition \ref{lem:cohomrank}, $H^*(BG)$ is a Cohen-Macaulay module over $H^*(BU(n))$. Because $H^*(BG)$ is Noetherian by a result of Venkov \cite{Ve}, we can apply Lemma \ref{serre} and conclude that $H^*(BG)$ is a Cohen-Macaulay ring. \end{proof} \begin{rem}\label{assertion} The assertions in Proposition \ref{lem:cohomrank} and Corollary \ref{nonc} are not new and can also be justified as follows. If we denote by ${\mathfrak g}$ the Lie algebra of a compact Lie group $G$ and by ${\mathfrak t}$ the Lie algebra of a maximal torus $T$ in $G$, then $H^*(BG)=S({\mathfrak g}^*)^G=S({\mathfrak t}^*)^{W(G)}$, where $W(G)=N_G(T)/Z_G(T)$ is the Weyl group of $G$. (Here we have used the Chern-Weil isomorphism and the Chevalley restriction theorem, see e.g.~\cite[p.~311]{Milnor}, respectively \cite[Theorem 4.12]{PaTe}.) The fact that the ring $S({\mathfrak t}^*)^{W(G)}$ is Cohen-Macaulay follows from \cite[Proposition 13]{Ho-Ea} (see also \cite[Corollary 6.4.6]{BrunsHerzog} or \cite[Theorem B, p.~176]{Kane}). Proposition \ref{lem:cohomrank} is now a direct consequence of Lemma \ref{serre} and the well-known fact that the $G$-equivariant cohomology of a compact manifold is a finitely generated $H^*(BG)$-module, see e.g.~\cite{Quillen}. \end{rem} In general, if a group action $G\times M \to M$ is equivariantly formal, then it is Cohen-Macaulay. The next result, which is actually Proposition 2.5 in \cite{GR}, establishes a more precise relationship between these two notions. It also involves $$M_{\max}:=\{p \in M \, : \, \mathrm{rank}\, G_p = \mathrm{rank}\, G\}.$$ \begin{prop}\label{gr} {\rm (\cite{GR})} A $G$-action on $M$ is equivariantly formal if and only if it is Cohen-Macaulay and $M_{\max}\neq \emptyset$. \end{prop} Note, in particular, that the $G$-action on $G/K$ mentioned in Proposition \ref{lem:cohomrank} is equivariantly formal if and only if $\mathrm{rank}\, K = \mathrm{rank}\, G$. \section{Topology of transitive group actions on spheres} The following proposition will be needed in the proof of the main result. It collects results that, in a slightly more particular situation, were obtained by Samelson in \cite{Sa}. \begin{prop}\label{samelson} Let $K$ be a compact Lie group, possibly non-connected, which acts transitively on the sphere $S^{m}$, $m\ge 0$, and let $H\subset K$ be an isotropy subgroup. (a) If $m$ is even, then $\mathrm{rank}\, H = \mathrm{rank}\, K$ and the canonical homomorphism $H^*(BK) \to H^*(BH)$ is injective. (b) If $m$ is odd, then $\mathrm{rank}\, H = \mathrm{rank}\, K -1$ and the canonical homomorphism $H^*(BK) \to H^*(BH)$ is surjective. \end{prop} \begin{proof} In the case when both $K$ and $H$ are connected, the result follows from \cite[Satz IV]{Sa} along with \cite[Section 21, Corollaire]{Bo} and \cite[Proposition 28.2]{Bo}. From now on, $K$ or $H$ may be non-connected. We distinguish the following three situations. \noindent{\it Case 1:} $m \ge 2$. Let $K_0$ be the identity component of $K$. The induced action of $K_0$ on $S^{m}$ is also transitive, because its orbits are open and closed in the $K$-orbits. We deduce that we can identify $K_0/(K_0\cap H)$ with $S^{m}$. Since the latter sphere is simply connected, the long exact homotopy sequence of the bundle $K_0\cap H \to K_0 \to S^{m}$ shows that $K_0\cap H$ is connected.
This implies that $K_0\cap H$ is equal to $H_0$, the identity component of $H$, and we have the identification $K_0/H_0 = S^{m}$. This implies the assertions concerning the ranks. Let us now consider the long exact homotopy sequence of the bundle $H \to K \to S^{m}$ and deduce from it that the map $\pi_0(H)\to \pi_0(K)$ is a bijection. This means that $H$ and $K$ have the same number of connected components, thus we may set $\Gamma := K/K_0 = H/H_0$. The (free) actions of $\Gamma$ on $BK_0$ and $BH_0$ induce the identifications $BK=BK_0/\Gamma$, respectively $BH=BH_0/\Gamma$ (see e.g.~\cite[Ch.~III, Section 1]{Hs}). Moreover, the map $\pi: BH_0\to BK_0$ is $\Gamma$-equivariant. The induced homomorphism $\pi^*:H^*(BK_0)\to H^*(BH_0)$ is then $\Gamma$-equivariant as well, hence it maps $\Gamma$-invariant elements to $\Gamma$-invariant elements. We have $H^*(BH)=H^*(BH_0)^\Gamma$ and $H^*(BK)=H^*(BK_0)^\Gamma$, which shows that if $m$ is even, then the restricted map $\pi^*|_{H^*(BK_0)^\Gamma}: H^*(BK_0)^\Gamma\to H^*(BH_0)^\Gamma$ is injective. We will now show that if $m$ is odd, then the latter map is surjective. Indeed, since $\pi^*$ is surjective, for any $b \in H^*(BH_0)^\Gamma$ there exists $a \in H^*(BK_0)$ with $\pi^*(a)=b$. But then $a':=\frac{1}{|\Gamma|}\sum_{\gamma\in \Gamma} \gamma a$ is in $H^*(BK_0)^\Gamma$ and satisfies $\pi^*(a')=b$. \noindent{\it Case 2:} $m = 1$. We clearly have $\mathrm{rank}\, H = \mathrm{rank}\, K -1$ in this case. We only need to show that the map $H^*(BK)\to H^*(BH)$ is surjective. Equivalently, using the identification with the rings of invariant polynomials \cite[p.~311]{Milnor}, we will show that the restriction map $S(\mathfrak{k}^*)^K\to S(\mathfrak{h}^*)^H$ is surjective. Choosing an $\operatorname{Ad}_{K}$-invariant scalar product on $\mathfrak{k}$, we obtain an orthogonal decomposition $\mathfrak{k} = \mathfrak{h} \oplus \mathbb R v$, where $v\in \mathfrak{h}^\perp$. The $\operatorname{Ad}$-invariance implies that $\mathfrak{h}$ is an ideal in $\mathfrak{k}$ and $v$ a central element. Given $f\in S(\mathfrak{h}^*)^H$, we define $g:\mathfrak{k}=\mathfrak{h} \oplus \mathbb R v\to \mathbb R$ by $g(X+tv)=f(X)$. Clearly, $g$ is a polynomial on $\mathfrak{k}$ that restricts to $f$ on $\mathfrak{h}$; for the desired surjectivity we therefore only need to show that $g$ is $K$-invariant. As $K/H \cong S^1$ is connected, $K$ is generated by its identity component $K_0$ and $H$. Note that both the $H$- and the $K_0$-action respect the decomposition $\mathfrak{k} = \mathfrak{h}\oplus \mathbb R v$. The $H$-invariance of $f$ therefore implies the $H$-invariance of $g$. Also, the adjoint action of $\mathfrak{k}$ on $\mathfrak{h}$ is the same as the adjoint action of $\mathfrak{h}$ (the $\mathbb R v$-summand acts trivially), so $f$ is $K_0$-invariant, which implies that $g$ is $K_0$-invariant. \noindent{\it Case 3:} $m=0$. Since $\mathrm{rank}\, H = \mathrm{rank}\, K$, a maximal torus $T\subset H$ is also maximal in $K$. The injectivity of $H^*(BK)\to H^*(BH)$ follows from the identifications $H^*(BK)=S(\mathfrak{t}^*)^{W(K)}$, $H^*(BH)=S(\mathfrak{t}^*)^{W(H)}$. \end{proof}
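A standard illustration of this proposition (recorded here only as an example) is the transitive action of $K=SO(m+1)$ on $S^m$ with isotropy $H=SO(m)$. If $m=2k$ is even, then $\mathrm{rank}\, SO(2k+1)=\mathrm{rank}\, SO(2k)=k$, and with real coefficients $H^*(BSO(2k+1))=\mathbb R[p_1,\dots,p_k]$ embeds into $H^*(BSO(2k))=\mathbb R[p_1,\dots,p_{k-1},e]$ via $p_i\mapsto p_i$ and $p_k\mapsto e^2$. If $m=2k+1$ is odd, then $\mathrm{rank}\, SO(2k+1)=k=\mathrm{rank}\, SO(2k+2)-1$, and $H^*(BSO(2k+2))=\mathbb R[p_1,\dots,p_k,e]$ surjects onto $H^*(BSO(2k+1))=\mathbb R[p_1,\dots,p_k]$ with $e\mapsto 0$.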
\section{Cohomogeneity one actions are Cohen-Macaulay}\label{sec:pro} In this section we prove Theorem \ref{thm:cohom1eqformal}. There are two possibilities for the orbit space $M/G$: it can be diffeomorphic to the circle $S^1$ or to the interval $[0,1]$. If $M/G = S^1$, Theorem \ref{thm:cohom1eqformal} follows readily from \cite[Corollary 4.3]{GR} and the fact that all isotropy groups of the $G$-action are conjugate to each other. From now on we assume that $M/G = [0,1]$. We start with some well-known considerations which hold true in this case. One can choose a $G$-invariant Riemannian metric on $M$ and a geodesic $\gamma$ perpendicular to the orbits such that $G\gamma(0)$ and $G\gamma(1)$ are the two nonregular orbits, and such that $\gamma(t)$ is regular for all $t\in (0,1)$. Let $K^-:=G_{\gamma(0)}$, $K^+:=G_{\gamma(1)}$, and let $H$ be the regular isotropy group along $\gamma$. We have $H \subset K^{\pm}$. The group diagram $G \supset K^-, K^+ \supset H$ determines the equivariant diffeomorphism type of the $G$-manifold $M$. More precisely, by the slice theorem, the boundaries of the unit disks $D^{\pm}$ in the normal spaces $\nu _{\gamma(0)} G\gamma(0)$, respectively $\nu_{\gamma(1)}G\gamma(1)$, are spheres $K^{\pm}/H = S^{\ell_{\pm}}$. The space $M$ can be realized by gluing the tubular neighborhoods $G\times_{K^{\pm}} D^{\pm}$ along their common boundary $G\times_{K^{\pm}}K^{\pm}/H=G/H$ (see e.g.~\cite[Theorem IV.8.2]{Br}, \cite[Section 1]{GWZ} or \cite[Section 1.1]{Hoelscher}). We are in a position to prove the main result of the paper in the remaining case: \begin{proof}[Proof of Theorem \ref{thm:cohom1eqformal}] \emph{in the case when $M/G=[0,1]$}. We may assume that the rank of any $G$-isotropy group is at most equal to $b:= \mathrm{rank}\, K^-$, i.e.~ we have \begin{equation}\label{bi}\mathrm{rank}\, H \le \mathrm{rank}\, K^+\le b.\end{equation} By Proposition \ref{samelson}, we have $\mathrm{rank}\, K^- -\mathrm{rank}\, H \le 1$ and consequently $\mathrm{rank}\, H \in \{b-1, b\}$ (alternatively, we can use \cite[Lemma 1.1]{Puettmann}). If $\mathrm{rank}\, H = b$, then all isotropy groups of the $G$-action have the same rank and Theorem \ref{thm:cohom1eqformal} follows from \cite[Corollary 4.3]{GR}. From now on we will assume that \begin{equation}\label{bii}\mathrm{rank}\, H = b-1.\end{equation} This implies that the quotient $K^-/H$ is odd-dimensional, that is, $\ell_-$ is an odd integer. By Proposition \ref{samelson} (b), the homomorphism $H^*(BK^-)\to H^*(BH)$ is surjective. On the other hand, the Mayer-Vietoris sequence of the covering of $M$ with two tubular neighborhoods around the singular orbits can be expressed as follows: \begin{equation*} \ldots \longrightarrow H^*_G(M) \longrightarrow H^*_G(G/K^-)\oplus H^*_G(G/K^+) \longrightarrow H^*_G(G/H) \longrightarrow \ldots \end{equation*} But $H^*_G(G/K^{\pm})=H^*(BK^{\pm})$ and $H^*_G(G/H)=H^*(BH)$, hence the last map in the above sequence is surjective. Thus the Mayer-Vietoris sequence splits into short exact sequences of the form: \begin{equation} \label{eq:MVseq} 0 \longrightarrow H^*_G(M) \longrightarrow H^*_G(G/K^-)\oplus H^*_G(G/K^+) \longrightarrow H^*_G(G/H) \longrightarrow 0. \end{equation} We analyze separately the two situations imposed by equations (\ref{bi}) and (\ref{bii}). \noindent{\it Case 1: $\mathrm{rank}\, K^+ = b$.} By Proposition \ref{lem:cohomrank}, both $H^*_G(G/K^-)$ and $H^*_G(G/K^+)$ are Cohen-Macaulay modules over $H^*(BG)$ of dimension $b$, so by Corollary \ref{sumofCM}, the middle term $ H^*_G(G/K^-)\oplus H^*_G(G/K^+)$ in the sequence \eqref{eq:MVseq} is also Cohen-Macaulay of dimension $b$.
Because $H^*_G(G/H)$ is Cohen-Macaulay of dimension $b-1$, we can apply Lemma \ref{ineq} to the sequence \eqref{eq:MVseq} to deduce that $b\leq \operatorname{depth} H^*_G(M)\leq \dim H^*_G(M)=b$, which implies that $H^*_G(M)$ is Cohen-Macaulay of dimension $b$. \noindent {\it Case 2: $\mathrm{rank}\, K^+ = b-1$.} In this case, $H\subset K^+$ are Lie groups of equal rank and therefore, by Proposition \ref{samelson} (a), the canonical map $H^*(BK^+)\to H^*(BH)$ is injective. Exactness of the sequence \eqref{eq:MVseq} thus implies that the restriction map $H^*_G(M) \to H^*_G(G/K^-)$ is injective. We deduce that the long exact sequence of the pair $(M, G/K^-)$ splits into short exact sequences, i.e.~the following sequence is exact: \begin{equation}\label{hgmb} 0 \longrightarrow H^*_G(M)\longrightarrow H^*_G(G/K^-) \longrightarrow H^*_G(M,G/K^-) \longrightarrow 0. \end{equation} We aim to show that $H^*_G(M,G/K^-)$ is a Cohen-Macaulay module over $H^*(BG)$ of dimension $b-1$; once we have established this, we can apply Lemma \ref{ineq} to the sequence \eqref{hgmb} to deduce that $H^*_G(M)$ is Cohen-Macaulay of dimension $b$, in exactly the same way as we used the sequence \eqref{eq:MVseq} in Case 1 above. By excision, $H^*_G(M,G/K^-)=H^*_G(G \times_{K^+} D^+, G/H)$, where $G \times_{K^+} D^+$ is a tubular neighborhood of $G/K^+$ and $G/H = G \times_{K^+} S^+$ is its boundary (i.e.~$S^+$ is the boundary of $D^+$). We have the isomorphism \begin{align*} H^*_G(G \times_{K^+} D^+,G/H) &= H^*(EG\times_G (G\times_{K^+} D^+), EG\times_G (G\times_{K^+} S^+)) \\ &= H^*(EG\times_{K^+} D^+,EG\times_{K^+} S^+)\\ & = H^*_{K^+}(D^+,S^+), \end{align*} which is $H^*(BG)$-linear, where the $H^*(BG)$-module structure on the right hand side is the restriction of the $H^*(BK^+)$-module structure to $H^*(BG)$. Because $H^*(BK^+)$ is a finitely generated module over $H^*(BG)$, it is sufficient to show that $H^*_{K^+}(D^+,S^+)$ is a Cohen-Macaulay module over $H^*(BK^+)$ of dimension $b-1$. Then Lemma \ref{serre} implies the claim, because the ring $H^*(BK^+)$ is Noetherian and *local (by \cite[Example 1.5.14 (b)]{BrunsHerzog}). Denote by $K^+_0$ the identity component of $K^+$. There is a spectral sequence converging to $H^*_{K^+_0}(D^+,S^+)$ whose $E_2$-term equals $H^*(BK^+_0)\otimes H^*(D^+,S^+)$; because $H^*(D^+,S^+)$ is concentrated only in degree $\ell_++1$, this spectral sequence collapses at $E_2$, and it follows that $H^*_{K^+_0}(D^+,S^+)$ is a free module over $H^*(BK^+_0)$ (cf.~e.g.~\cite[Lemma C.24]{GGK}). In particular, it is a Cohen-Macaulay module over $H^*(BK^+_0)$ of dimension $b-1$. Because $H^*(BK^+_0)$ is finitely generated over $H^*(BK^+)$, Lemma \ref{serre} implies that $H^*_{K^+_0}(D^+,S^+)$ is also Cohen-Macaulay over $H^*(BK^+)$ of dimension $b-1$. A standard averaging argument (see for example \cite[Lemma 2.7]{GR}) shows that the $H^*(BK^+)$-module $H^*_{K^+}(D^+,S^+)=H^*_{K^+_0}(D^+,S^+)^{K^+/K^+_0}$ is a direct summand of $H^*_{K^+_0}(D^+,S^+)$: we have $H^*_{K^+_0}(D^+,S^+) = H^*_{K^+}(D^+,S^+) \oplus \ker a$, where $a:H^*_{K^+_0}(D^+,S^+)\to H^*_{K^+_0}(D^+,S^+)$ is given by averaging over the $K^+/K^+_0$-action. It follows that $H^*_{K^+}(D^+,S^+)$ is Cohen-Macaulay over $H^*(BK^+)$ of dimension $b-1$. \end{proof} Combining this theorem with Proposition \ref{gr}, we also immediately obtain Corollary \ref{cor:eqf}.
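To illustrate Corollary \ref{cor:eqf}, consider a standard example: the group $G=SO(n)$ acts on $S^{n}\subset \mathbb R^{n}\oplus\mathbb R$ by rotating the first factor. This action has cohomogeneity one with group diagram $SO(n)\supset SO(n), SO(n) \supset SO(n-1)$: the singular orbits are the two poles, and the regular orbits are round spheres $S^{n-1}$. Since $\mathrm{rank}\, K^{\pm}=\mathrm{rank}\, G$, the action is equivariantly formal by Corollary \ref{cor:eqf}.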
The following statement was shown in the above proof; we formulate it as a separate proposition because we will use it again later. \begin{prop}\label{prop:mvseqexact} Whenever $M/G = [0,1]$ and $\mathrm{rank}\, H = \mathrm{rank}\, K^- -1$, the sequence \eqref{eq:MVseq} is exact. \end{prop} An immediate corollary to this proposition is the following Goresky-Kottwitz-MacPher\-son \cite{GKM} type description of the ring $H^*_G(M)$ (we denote by ${\mathfrak k}^\pm$ and ${\mathfrak h}$ the Lie algebras of $K^{\pm}$, respectively $H$). \begin{cor}\label{cor:ifev} If $M/G=[0,1]$ and $\mathrm{rank}\, H = \mathrm{rank}\, K^- -1$ then $H^*_G(M)$ is isomorphic to the $S(\mathfrak{g}^*)^G$-subalgebra of $S(({\mathfrak k}^-)^*)^{K^-}\oplus S(({\mathfrak k}^+)^*)^{K^+}$ consisting of all pairs $(f,g)$ with the property that $f|_{\mathfrak h}=g|_{\mathfrak h}$. \end{cor} We end this section with an example which concerns Corollary \ref{<2}. It shows that if the cohomogeneity of the group action is no smaller than two, then the action is not necessarily Cohen-Macaulay. \begin{ex}\label{exor} The $T^2$-manifold $R(1,0)$ appears in the classification of $T^2$-actions on $4$-manifolds by Orlik and Raymond \cite{Or-Ra}. It is a compact connected 4-manifold, homeomorphic to the connected sum $(S^1\times S^3)\# (S^2\times S^2)$. The $T^2$-action on $R(1,0)$ is effective, therefore of cohomogeneity two, and has exactly two fixed points. The action is not Cohen-Macaulay, because otherwise it would be equivariantly formal (as it has fixed points); as the fixed point set is finite, this would imply using Proposition \ref{wealso} that $H^{\rm odd}(R(1,0)) =\{0\}$, contradicting $H^1(R(1,0))=\mathbb R$. \end{ex} \section{Equivariantly formal actions of cohomogeneity one}\label{sec:equiv} In this section we present some extra results in the situation when $M/G =[0,1]$ and the action $G\times M \to M$ is equivariantly formal. By Corollary \ref{cor:eqf}, this is equivalent to the fact that the rank of at least one of $K^-$ and $K^+$ is equal to the rank of $G$. \subsection{The case when $M$ is even-dimensional} \subsubsection{Cohomogeneity-one manifolds with positive Euler characteristic}\label{subsec:ec} A discussion concerning the Euler characteristic of a compact manifold (of arbitrary dimension) admitting a cohomogeneity one action of a compact connected Lie group can be found in \cite[Section 1.2]{AlekseevskyPodesta} (see also \cite[Section 1.3]{Fr}). By Proposition 1.2.1 therein we have \begin{equation}\label{eq:eulercharacteristic} \chi(M) = \chi(G/K^-)+\chi(G/K^+)-\chi(G/H). \end{equation} The Euler characteristic $\chi(M)$ is always nonnegative, and one has $\chi(M)>0$ if and only if $M/G=[0,1]$, $M$ is even-dimensional, and the rank of at least one of $K^-$ and $K^+$ is equal to $\mathrm{rank}\, G$, see \cite[Corollary 1.2.2]{AlekseevskyPodesta}. What we can show with our methods is that $\chi(M)>0$ implies that $H^{\rm odd}(M)=\{0\}$. \begin{prop}\label{cor:ifm} Assume that a compact connected manifold $M$ admits a cohomogeneity one action of a compact connected Lie group. Then the following conditions are equivalent: \begin{enumerate} \item[(i)] $\chi(M)>0$, \item[(ii)] $H^{\rm odd}(M)=\{0\}$, \item[(iii)] $M$ is even-dimensional, $M/G=[0,1]$, and the $G$-action is equivariantly formal. \end{enumerate} These conditions imply that \begin{equation}\label{hstar}\dim H^*(M) = \chi(M) =\chi(G/K^-) + \chi(G/K^+). \end{equation} \end{prop} \begin{proof} The equivalence of (i) and (iii) follows from the previous considerations and Corollary \ref{cor:eqf}. Let us now assume that $\chi(M) >0$.
Then (iii) holds: consequently, $\mathrm{rank}\, H = \mathrm{rank}\, G-1$ and the space $M_{\max}$ is either $G/K^-$, $G/K^+$ or their union $G/K^-\cup G/K^+$. A maximal torus $T\subset G$ therefore acts on $M$ with fixed point set $M^T$ equal to $(G/K^-)^T$, $(G/K^+)^T$ or $(G/K^-)^T\cup (G/K^+)^T$, depending on the rank of $K^-$ and $K^+$. As $M^T$ is in particular finite, the equivariant formality of the $G$-action is the same as the condition $H^{\rm odd}(M)=\{0\}$ (see Proposition \ref{wealso}), and (ii) follows (note that, conversely, $H^{\rm odd}(M)=\{0\}$ trivially implies $\chi(M)=\dim H^*(M)>0$). The last assertion in the proposition follows readily from Equation \eqref{eq:eulercharacteristic}. \end{proof} \begin{rem}\label{r1} Note that by the classical result of Hopf and Samelson \cite{Ho-Sa} concerning the Euler-Poincar\'e characteristic of a compact homogeneous space, along with the formula of Borel given by Equation \eqref{borel} below, the topological obstruction given by the equivalence of (i) and (ii) holds true also for homogeneity, i.e., the existence of a transitive action of a compact Lie group. We remark that the equivalence of (i) and (ii) is no longer true in the case when $M$ only admits an action of cohomogeneity two. Consider for example the cohomogeneity two $T^2$-manifold $R(1,0)$ mentioned in Example \ref{exor}: since it is homeomorphic to $(S^1 \times S^3) \# (S^2 \times S^2)$, its Euler characteristic is equal to $2$, and the first cohomology group is $\mathbb R$. \end{rem} \begin{rem} The Euler characteristics of the nonregular orbits appearing in Equation \eqref{hstar} can also be expressed in terms of the occurring Weyl groups: we have $\chi(G/K^\pm)\neq 0$ if and only if $\mathrm{rank}\, K^\pm = \mathrm{rank}\, G$, and in this case $\chi(G/K^\pm)=\frac{|W(G)|}{|W(K^\pm)|}$. \end{rem} \begin{rem}\label{rii} By a theorem of Grove and Halperin \cite{GH}, if $M$ admits a cohomogeneity-one action, then the rational homotopy $\pi_*(M)\otimes \mathbb Q$ is finite-dimensional. This result also implies the equivalence between $\chi(M)>0$ and $H^{\rm odd}(M)=\{0\}$, as follows: If $\chi(M)>0$, then one can show using the Seifert-van Kampen theorem that $\pi_1(M)$ is finite, see Lemma \ref{pi1finite} below. Therefore the universal cover of $M$, call it $\widetilde{M}$, is also compact, and carries a cohomogeneity one action with orbit space $[0,1]$. The theorem of Grove and Halperin then states that $\widetilde{M}$ is rationally elliptic. Since the covering $\widetilde{M}\to M$ is finite, we have $\chi(\widetilde{M})>0$, and by \cite[Corollary 1]{Halperin} (or \cite[Proposition 32.16]{FHT}), this implies $H^{\rm odd}(\widetilde{M})=\{0\}$. But $M$ is the quotient of $\widetilde{M}$ by a finite group, say $\Gamma$, hence $H^{\rm odd}(M)=H^{\rm odd}(\widetilde{M})^\Gamma =\{0\}.$ Note that by Proposition \ref{wealso} this line of argument also implies that the $G$-action is equivariantly formal, and hence provides an alternative proof of Corollary \ref{cor:eqf} in the case of an even-dimensional cohomogeneity-one manifold $M$ with $M/G=[0,1]$. \end{rem} \begin{lem}\label{pi1finite} Assume that a compact connected manifold $M$ admits a cohomogeneity one action of a compact connected Lie group. If $\chi(M) >0$ then $\pi_1(M)$ is finite. \end{lem} \begin{proof} We may assume that $\mathrm{rank}\, K^- = \mathrm{rank}\, G$. Consider the tubular neighborhoods of the two non-regular orbits $G/K^+$, respectively $G/K^-$ mentioned before: their union is all of $M$ and their intersection is $G$-homotopic to a principal orbit $G/H$.
The inclusions of the intersection into each of the two neighborhoods induce between the first homotopy groups the same maps as those induced by the canonical projections $\rho^+:G/H\to G/K^+$, respectively $\rho^-:G/H \to G/K^-$ (see \cite[Section 1.1]{Hoelscher}). These are the maps $\rho^+_*:\pi_1(G/H)\to \pi_1(G/K^+)$ and $\rho^-_* : \pi_1(G/H) \to \pi_1(G/K^-)$. Consider the bundle $G/H\to G/K^-$ whose fiber is $K^-/H=S^{\ell_-}$ of odd dimension $\ell_- =\dim K^- - \dim H \ge 1$. The long exact homotopy sequence of this bundle implies readily that the map $\rho^-_*$ is surjective. From the Seifert-van Kampen theorem we deduce that $\pi_1(M)$ is isomorphic to $\pi_1(G/K^+)/A$, where $A$ is the smallest normal subgroup of $\pi_1(G/K^+)$ which contains $\rho^+_*(\ker \rho^-_*)$ (see e.g.~\cite[Exercise 2, p.~433]{Mu}). \noindent {\it Case 1: $\dim K^+-\dim H \ge 1$.} As above, this implies that $\rho_*^+$ is surjective. Consequently, the map $\pi_1(G/K^-)=\pi_1(G/H)/\ker \rho_*^- \to \pi_1(G/K^+)/A$ induced by $\rho^+_*$ is surjective as well. On the other hand, $\mathrm{rank}\, G = \mathrm{rank}\, K^-$ implies that $\pi_1(G/K^-)$ is a finite group. Thus, $\pi_1(G/K^+)/A$ is a finite group as well. \noindent {\it Case 2: $\dim K^+=\dim H$.} We have $K^+/H=S^0$, which consists of two points, thus $\rho^+$ is a double covering. This implies that $\rho^+_*$ is injective and its image, $\rho^+_*(\pi_1(G/H))$, is a subgroup of index two in $\pi_1(G/K^+)$. The index of $\rho^+_*(\ker \rho^-_*)$ in $\rho^+_*(\pi_1(G/H))$ is equal to the index of $\ker \rho^-_*$ in $\pi_1(G/H)$, which is finite (being equal to the cardinality of $\pi_1(G/K^-)$). Thus, the quotient $\pi_1(G/K^+)/\rho^+_*(\ker \rho^-_*)$ is a finite set. Finally, we only need to take into account that the canonical projection map $\pi_1(G/K^+)/\rho^+_*(\ker \rho^-_*)\to \pi_1(G/K^+)/A$ is surjective. \end{proof} \begin{rem} Assume again that $M/G=[0,1]$ and $M$ is even-dimensional. If $M$ admits a metric of positive sectional curvature, then at least one of $K^-$ and $K^+$ has maximal rank, by the so-called ``Rank Lemma" (see \cite[Lemma 2.5]{Grove} or \cite[Lemma 2.1]{GWZ}). By Corollary \ref{cor:eqf} and Proposition \ref{cor:ifm}, the $G$-action is equivariantly formal and $H^{\rm odd}(M)=\{0\}$. This however is not a new result as by Verdiani \cite{Verdiani}, $M$ is already covered by a rank one symmetric space. \end{rem} \subsubsection{Cohomology} Consider again the situation that $M$ is a compact even-dimen\-sional manifold admitting a cohomogeneity one action of a compact connected Lie group $G$ such that $M/G=[0,1]$ and that at least one isotropy rank equals the rank of $G$. In this section we will give a complete description of the Poincar\'e polynomial of $M$, purely in terms of $G$ and the occurring isotropy groups, and eventually even of the ring $H^*(M)$. \begin{prop}\label{cor:cohom} If $M$ is even-dimensional, $M/G=[0,1]$, and the rank of at least one of $K^-$ and $K^+$ equals the rank of $G$, then the Poincar\'e polynomial of $M$ is given by $$P_t(M) = \frac{P_t(BK^-) + P_t(BK^+) - P_t(BH)}{P_t(BG)}.$$ In particular, $P_t(M)$ only depends on the abstract Lie groups $G,K^\pm,H$, and not on the whole group diagram. \end{prop} \begin{proof} Assume that $\mathrm{rank}\, K^-=\mathrm{rank}\, G$. Since the principal orbit $G/H$ is odd-dimensional, and $\mathrm{rank}\, H\in \{\mathrm{rank}\, K^-, \mathrm{rank}\, K^--1\}$ (see Proposition \ref{samelson}), we actually have $\mathrm{rank}\, H = \mathrm{rank}\, K^- -1$. 
Hence by Proposition \ref{prop:mvseqexact}, the Mayer-Vietoris sequence \eqref{eq:MVseq} is exact, which implies that the $G$-equivariant Poincar\'e series of $M$ is $$P_t^G(M)= P_t^G(G/K^-) + P_t^G(G/K^+) - P_t^G(G/H) = P_t(BK^-) + P_t(BK^+) - P_t(BH).$$ We only need to observe that, since the $G$-action on $M$ is equivariantly formal, Proposition \ref{abo} (b) implies that $P_t^G(M) = P_t(M) \cdot P_t(BG)$. \end{proof} \begin{rem}\label{rem:PoincareSeriesSimplified} In case $H$ is connected, the above description of $P_t(M)$ can be simplified as follows. Assume that $\mathrm{rank}\, K^- = \mathrm{rank}\, G$. Then $\mathrm{rank}\, H = \mathrm{rank}\, K^- - 1$, and we have that $K^-/H= S^{\ell_-}$ is an odd-dimensional sphere. Consequently, the Gysin sequence of the spherical bundle $K^-/H\to BH \to BK^-$ splits into short exact sequences, which implies readily that $P_t(BH)=(1-t^{\ell_- +1})P_t(BK^-)$. Similarly, if also the rank of $K^+$ is equal to the rank of $G$, then $P_t(BH)=(1-t^{\ell_++1})P_t(BK^+)$ and we obtain the following formula: $$P_t(M) = \left(\frac{1}{1-t^{\ell_-+1}}+\frac{1}{1-t^{\ell_++1}} -1\right)\cdot \frac{P_t(BH)}{P_t(BG)}.$$ \end{rem} Numerous examples of cohomogeneity one actions which satisfy the hypotheses of Proposition \ref{cor:cohom} can be found in \cite{AlekseevskyPodesta} and \cite{Fr} (since the condition on the isotropy ranks in Proposition \ref{cor:cohom} is equivalent to $\chi(M)>0$, see Proposition \ref{cor:ifm}). Proposition \ref{cor:cohom} allows us to calculate the cohomology groups of $M$ in all these examples. We will work out one such example in detail: \begin{ex} (\cite[Table 4.2, line 5]{AlekseevskyPodesta}) We have $G=SO(2n+1)$, $K^-= SO(2n)$, $K^+=SO(2n-1)\times SO(2)$, and $H=SO(2n-1)$. The following can be found in \cite[Ch.~III, Theorem 3.19]{Mi-To}: \begin{align*} {}&P_t(BSO(2n+1))= \frac{1}{(1-t^4)(1-t^8) \cdots (1-t^{4n})} \end{align*} The same theorem gives the analogous formula for $P_t(BSO(2n-1))$, so that $P_t(BH)/P_t(BG)=1-t^{4n}$. We have $\ell_- = 2n-1$ and $\ell_+=1$, and consequently, using Remark \ref{rem:PoincareSeriesSimplified}, we obtain the following description of the Poincar\'e series of the corresponding manifold $M$: \begin{align*} P_t(M)&=\left(\frac1{1-t^{2n}} + \frac1{1-t^2} -1\right) \cdot (1-t^{4n}) \\ &=1 + t^2 + t^4 + \ldots + t^{2n-2} + 2t^{2n} + t^{2n+2} + \ldots + t^{4n}. \end{align*} \end{ex} Let us now use equivariant cohomology to determine the ring structure of $H^*(M)$ in the case at hand. In Corollary \ref{cor:ifev} we determined the $S(\mathfrak{g}^*)^G$-algebra structure of $H^*_G(M)$, and because of Proposition \ref{abo} (c), the ring structure of $H^*(M)$ is encoded in the $S(\mathfrak{g}^*)^G$-algebra structure. The following proposition follows immediately. \begin{prop}\label{prop:ringstructure} If $M$ is even-dimensional, $M/G=[0,1]$, and the rank of at least one of $K^-$ and $K^+$ equals the rank of $G$, then we have a ring isomorphism \[ H^*(M) \simeq {\mathbb R}\otimes_{S(\mathfrak{g}^*)^G}A, \] where $A$ is the $S(\mathfrak{g}^*)^G$-subalgebra of $S((\mathfrak{k}^-)^*)^{K^-}\oplus S((\mathfrak{k}^+)^*)^{K^+}$ consisting of all pairs $(f,g)$ with the property that $\left.f\right|_{\mathfrak{h}}=\left.g\right|_{\mathfrak{h}}$. 
\end{prop} \begin{rem} Recall that the cohomology ring of a homogeneous space $G/K$ where $G, K$ are compact and connected, such that $\mathrm{rank}\, G = \mathrm{rank}\, K$, is described by Borel's formula \cite{Bo} as follows: \begin{equation}\label{borel} H^*(G/K) \simeq {\mathbb R}\otimes_{S(\mathfrak{g}^*)^G} S(\mathfrak{k}^*)^K.\end{equation} The above description of the ring $H^*(M)$ can be considered as a version of Borel's formula for cohomogeneity one manifolds. \end{rem} Equation (\ref{borel}) is particularly simple in the case when $K=T$, a maximal torus in $G$. Namely, if ${\mathfrak t}$ is the Lie algebra of $T$ and $W(G)$ the Weyl group of $G$, then $$H^*(G/T) \simeq S({\mathfrak t}^*)/\langle S({\mathfrak t}^*)^{W(G)}_{>0}\rangle,$$ where $\langle S({\mathfrak t}^*)^{W(G)}_{>0}\rangle$ is the ideal of $S({\mathfrak t}^*)$ generated by the non-constant $W(G)$-invariant polynomials. The following example describes the cohomology ring of a space that can be considered the cohomogeneity one analogue of $G/T$. It also shows that the ring $H^*(M)$ depends on the group diagram, not only on the isomorphism types of $G$, $K^\pm$, and $H$, like the Poincar\'e series, see Proposition \ref{cor:cohom} above. \begin{ex} Let $G$ be a compact connected Lie group, $T$ a maximal torus in $G$, and $H\subset T$ a codimension one subtorus. The cohomogeneity one manifold corresponding to $G$, $K^-=K^+:=T$, and $H$ is $M=G\times_T S^2$, where the action of $T$ on $S^2$ is determined by the fact that $H$ acts trivially and $T/H$ acts in the standard way, via rotation about a diameter of $S^2$: indeed, the latter $T$-action has the orbit space equal to $[0,1]$, the singular isotropy groups both equal to $T$, and the regular isotropy group equal to $H$; one uses \cite[Proposition 1.6]{Hoelscher}. Let $\mathfrak{h}$ be the Lie algebra of $H$ and pick $v\in \mathfrak{t}$ such that $\mathfrak{t}=\mathfrak{h} \oplus \mathbb R v$. Consider the linear function $\alpha : \mathfrak{t} \to \mathbb R$ along with the action of $\mathbb Z_2=\{1,-1\}$ on $\mathfrak{t}$ given by $$ \alpha(w+rv) =r, \quad (-1).(w+rv)=w-rv,$$ for all $w\in \mathfrak{h}$ and $r\in \mathbb R$. We denote the induced $\mathbb Z_2$-action on $S(\mathfrak{t}^*)$ by $(-1).f=:\tilde{f},$ for all $f\in S(\mathfrak{t}^*)$. Corollary \ref{cor:ifev} induces the $H^*(BG)=S(\mathfrak{t}^*)^{W(G)}$-algebra isomorphism $$H^*_G(M) \simeq \{(f,g) \in S(\mathfrak{t}^*)\oplus S(\mathfrak{t}^*) \, : \, \alpha \ {\rm divides} \ f-g\},$$ and the right hand side is, as an $S(\mathfrak{t}^*)$-algebra, isomorphic to $H^*_T(S^2)$, where the $T$-action on $S^2$ is the one described above. Note that $T/H$ can be embedded as a maximal torus in $SO(3)$, in such a way that the latter group acts canonically on $S^2$ and induces the identification $S^2= (H\times SO(3))/T$. We apply \cite[Theorem 2.6]{GHZ} for this homogeneous space and deduce that the map $S(\mathfrak{t}^*)\otimes_{S(\mathfrak{t}^*)^{\mathbb Z_2}}S(\mathfrak{t}^*)\to H^*_G(M)$ given by $f_1\otimes f_2\mapsto (f_1f_2, f_1\tilde{f}_2)$ is an isomorphism of $S(\mathfrak{t}^*)^{W(G)}$-algebras, where the structure of $S(\mathfrak{t}^*)^{W(G)}$-algebra on $S(\mathfrak{t}^*)\otimes_{S(\mathfrak{t}^*)^{\mathbb Z_2}}S(\mathfrak{t}^*)$ is given by inclusion into the first factor. 
By Corollary \ref{cor:ifev} (b), we have the ring isomorphism $$H^*(M) \simeq \mathbb R \otimes_{S({\mathfrak t}^*)^{W(G)}}(S(\mathfrak{t}^*)\otimes_{S(\mathfrak{t}^*)^{\mathbb Z_2}}S(\mathfrak{t}^*)) = \left(S({\mathfrak t}^*)/\langle S({\mathfrak t}^*)^{W(G)}_{>0}\rangle\right)\otimes_{S(\mathfrak{t}^*)^{\mathbb Z_2}}S(\mathfrak{t}^*) .$$ To obtain descriptions in terms of generators and relations we need an extra variable $u$ with $\deg u =2$ and also a set of Chevalley generators \cite{Che} of $S({\mathfrak t}^*)^{W(G)}$, call them $f_1, \ldots, f_k$, where $k:=\mathrm{rank}\, G$. We have: $$H^*_G(M)=S({\mathfrak t}^*)\otimes \mathbb R[u]/\langle \alpha^2-u^2\rangle, \quad H^*(M)=S({\mathfrak t}^*)\otimes \mathbb R[u]/\langle \alpha^2-u^2, f_1, \ldots, f_k\rangle.$$ Let us now consider two concrete situations. For both of them we have $G=U(3)$ and $T$ is the space of all diagonal matrices in $U(3)$; the role of $H$ is played by the subgroup $\{ {\rm Diag}(1, z_2, z_3) \,: \, |z_2|=|z_3| =1\}$ in the first situation, respectively $\{ {\rm Diag}(z_1, z_2, z_3) \, : \, |z_1|=|z_2|=|z_3| =1, z_1z_2z_3=1\}$ in the second. If we denote the corresponding manifolds by $M_1$ and $M_2$, then \begin{align*} {}& H^*(M_1)\simeq \mathbb R[x_1, x_2, x_3,u]/\langle x_1^2-u^2, x_1+x_2+x_3, x_1x_2+x_1x_3+x_2x_3, x_1x_2x_3\rangle \\ {\rm and} \\ {}& H^*(M_2)\simeq \mathbb R[x_1, x_2, x_3,u]/\langle u^2, x_1+x_2+x_3, x_1x_2+x_1x_3+x_2x_3, x_1x_2x_3\rangle. \end{align*} We observe that, even though $H^*(M_1)$ and $H^*(M_2)$ are isomorphic as groups, by Proposition \ref{cor:cohom}, they are not isomorphic as rings. Indeed, we write \begin{align*} & H^*(M_1)\simeq \mathbb R[x_1, x_2,u]/\langle x_1^2-u^2, x_1^2+x_2^2+x_1x_2, x_1x_2(x_1+x_2)\rangle \\ {\rm and} \\ & H^*(M_2)\simeq \mathbb R[x_1, x_2, u]/\langle u^2, x_1^2+x_2^2+x_1x_2, x_1x_2(x_1+x_2)\rangle \end{align*} and note that $H^*(M_2)$ contains a nonzero element of degree $2$ whose square is zero (namely $u$), whereas $H^*(M_1)$ does not. \end{ex} \subsection{The case when $M$ is odd-dimensional}\label{subsec:misod} Assume we are given a cohomogeneity one action of a compact connected Lie group $G$ on a compact connected odd-dimensional manifold $M$ such that $M/G=[0,1]$, with at least one isotropy group of maximal rank. This time the principal orbit $G/H$ is even-dimensional, hence the ranks of $G$ and $H$ are congruent modulo $2$. By Proposition \ref{samelson}, the rank of $H$ differs from the singular isotropy ranks by only at most one; consequently, $ \mathrm{rank}\, H=\mathrm{rank}\, G$, i.e.~all isotropy groups of the $G$-action have maximal rank. (Note also that conversely, $\mathrm{rank}\, H = \mathrm{rank}\, G$ implies that $M$ is odd-dimensional.) The applications we will present here concern the Weyl group associated to the cohomogeneity one action of $G$. To define it, we first pick a $G$-invariant metric on $M$ and a normal geodesic $\gamma$, i.e., a geodesic that intersects all orbits orthogonally, parametrized so that $\gamma(0)$ and $\gamma(1)$ lie on the two singular orbits. The corresponding Weyl group $W$ is defined as the $G$-stabilizer of $\gamma$ modulo $H$. We have that $W$ is a dihedral group generated by the symmetries of $\gamma$ at $\gamma(0)$ and $\gamma(1)$. It is finite if and only if $\gamma$ is closed. (For more on this notion, see \cite[Section 1]{GWZ}, \cite[Section 1]{Ziller}, and the references therein.) 
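As a quick illustration of these notions, consider the following standard linear example (included here as a sanity check; it is not taken from the classifications cited above). Let $G=SO(3)$ act on $M=S^3\subset \mathbb R\oplus\mathbb R^3$ by rotating the $\mathbb R^3$-factor. The orbits are the spheres $\{(x_0,x)\,:\,|x|=r\}$ with $x_0^2+r^2=1$, so $M/G=[0,1]$, the two singular orbits are the fixed points $(\pm 1,0)$, and the group diagram is $K^-=K^+=SO(3)$, $H=SO(2)$, whence $\mathrm{rank}\, H = \mathrm{rank}\, G=1$. A normal geodesic $\gamma$ is a great circle through the two fixed points; it is closed, and it meets the regular part of $M$ in exactly two arcs, so $|W|=2$. This is consistent with Corollary \ref{cor:oddd} below: $\dim H^*(S^3)=2=2\cdot\chi(S^2)/|W|$.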
In the case addressed in this section we have the following relation between the dimension of the cohomology of $M$ and the order of the occurring Weyl groups: \begin{cor}\label{cor:oddd} If $M/G = [0,1]$ and $\mathrm{rank}\, H = \mathrm{rank}\, G$, then $W$ is finite and \[ \dim H^*(M) = 2 \cdot \frac{\chi(G/H)}{|W|}. \] \end{cor} \begin{proof} Let $T$ be a maximal torus in $H$, which then is automatically a maximal torus in $K^+$, $K^-$, and $G$. We wish to understand the submanifold $M^T$ consisting of the $T$-fixed points. To this end, note that for each $p\in M^T$, the $T$-action on the orbit $Gp$ has isolated fixed points, hence the $T$-action on the tangent space $T_p(Gp)$ has no fixed vectors; if $p$ is regular, then $\nu_p (Gp)$ is one-dimensional, hence $T$ must act trivially on it. Therefore, $M^T$ is a finite union of closed one-dimensional totally geodesic submanifolds of $M$, more precisely: a finite union of closed normal geodesics. The geodesic $\gamma$ is among them and therefore closed; hence, $W$ is finite. The order $|W|$ of the Weyl group has the following geometric interpretation, see again \cite[Section 1]{GWZ} or \cite[Section 1]{Ziller}: the image of $\gamma$ intersects the regular part of $M$ (i.e., $M\setminus (G/K^-\cup G/K^+)$) in a finite number of geodesic segments; this number equals $|W|$. Note that each such geodesic segment intersects the regular orbit $G/H$ in exactly one point, which is obviously in $(G/H)^T$. Consequently, $W(G)$ acts transitively on the components of $M^T$, hence each of the components of $M^T$ meets $G/H$ equally often. This yields a bijective correspondence between $(G/H)^T$ and the components of $M^T\setminus (G/K^-\cup G/K^+)$. The cardinality of $(G/H)^T$ is equal to $\chi(G/H)$ and we therefore have \begin{equation}\label{eq:chi} \chi(G/H) = |W| \cdot (\text{number of components of }M^T). \end{equation} As each component of $M^T$ is a circle and therefore contributes with $2$ to $\dim H^*(M^T)$, we obtain from Proposition \ref{abo} (e) that \begin{align*} \dim H^*(M) &= \dim H^*(M^T) \\ & = 2\cdot (\text{number of components of }M^T) \\ &= 2 \cdot \frac{\chi(G/H)}{|W|}. \end{align*} \end{proof} \begin{rem} If $G\times M \to M$ is a cohomogeneity one action, then different choices of $G$-invariant metrics on $M$ generally induce different Weyl groups (see for instance \cite[Section 1]{GWZ}). However, if the action satisfies the extra hypothesis in Corollary \ref{cor:oddd}, then the Weyl group depends only on the rational cohomology type of $M$ and the principal orbit type of the action. In particular, $W$ is independent of the choice of the metric, but this is merely due to the fact that even the normal geodesics do not depend on the chosen $G$-invariant metric (being just the components of $M^T$, as the proof of Corollary \ref{cor:oddd} above shows). \end{rem} \begin{rem} In the special situation when $M$ has the rational cohomology of a product of pairwise different spheres, the result stated in Corollary \ref{cor:oddd} has been proved by P\"uttmann, see \cite[Corollary 2]{Pu2}. \end{rem} Let us now consider the case when $M$ is a rational homology sphere. 
Examples of cohomogeneity one actions on such spaces can be found in \cite{GWZ} (see particularly Table E, for linear actions, and Table A for actions on the Berger space $B^7=SO(5)/SO(3)$ and the seven-dimensional spaces $P_k$); the Brieskorn manifold $W^{2n-1}(d)$ with $d$ odd and the $SO(2)\times SO(n)$ action defined in \cite{HH} is also an example; finally, several of the 7-dimensional $\mathbb Z_2$-homology spheres that appear in the classification of cohomogeneity one actions on $\mathbb Z_2$-homology spheres by Asoh \cite{As, As1} are also rational homology spheres. As usual in this section, we assume that $\mathrm{rank}\, H = \mathrm{rank}\, G$. Under these hypotheses, Corollary \ref{cor:oddd} implies that $|W|=\chi(G/H)$. In fact, the latter equation holds under the (seemingly) weaker assumption that the codimensions of both singular orbits are odd, as it has been observed in \cite[Section 1]{Pu2}. Combining this result with Corollary \ref{cor:oddd} we have: \begin{cor}\label{cor:kor} Let $G$ act on $M$ with cohomogeneity one, such that $M/G = [0,1]$ and let $H$ be a principal isotropy group. The following statements are equivalent: (i) $M$ is a rational homology sphere and $\mathrm{rank}\, H = \mathrm{rank}\, G$. (ii) $M$ is a rational homology sphere and the codimensions of both singular orbits are odd. (iii) $|W|=\chi(G/H)$. In any of these cases, the dimension of $M$ is odd and the $G$-action is equivariantly formal. \end{cor} Let us now observe that for a general cohomogeneity one action with $M/G = [0,1]$, the Weyl group is contained in $N(H)/H$. The previous results allow us to address the question of whether the extra assumption $\mathrm{rank}\, G = \mathrm{rank}\, H$ implies that $W=N(H)/H$. The following example shows that this is not always the case. \begin{ex} Let us consider the cohomogeneity one action determined by $G= SU(3)$, $K^-=K^+=S(U(2)\times U(1))$, and $H=T$, a maximal torus in $K^\pm$. Hoelscher \cite{Ho2} denoted this manifold by $N^7_G$ and showed that it has the same integral homology as ${\mathbb C}P^2 \times S^3$. This implies that $\dim H^*(N^7_G)= 6$. On the other hand, $\chi(G/H) = \chi(SU(3)/T)=6$, hence by Corollary \ref{cor:oddd}, $|W|=2$. Since $N(H)/H=N(T)/T=W(SU(3))$ has six elements, we have $W\neq N(H)/H$. \end{ex} However, it has been observed in \cite[Section 5]{GWZ} that for linear cohomogeneity one actions on spheres, the condition $\mathrm{rank}\, G = \mathrm{rank}\, H$ does imply that $W=N(H)/H$. The following more general result is an observation made by P\"uttmann, see \cite[Footnote p.~226]{Pu2}. We found it appropriate to include a proof of it, which is based on arguments already presented here. \begin{prop} {\rm (P\"uttmann)} \label{letg} Let $G$ act on $M$ with cohomogeneity one, such that $M/G = [0,1]$ and let $H$ be a principal isotropy group. If $\chi(G/H)=|W|$ then $W=N(H)/H$. \end{prop} \begin{proof} We only need to show that $N(H)/H$ is contained in $W$. The hypothesis $\chi(G/H) >0$ implies that $\mathrm{rank}\, G = \mathrm{rank}\, H$, hence $G/H$ is even-dimensional and $M$ is odd-dimensional. Let $T$ be a maximal torus in $H$. By Equation (\ref{eq:chi}), $M^T$ consists of a single closed geodesic $\gamma$. The group $H$ fixes the geodesic $\gamma$ pointwise. Consequently, any element of $N(H)$ leaves $M^T$ invariant, hence its coset modulo $H$ is an element of $W$. \end{proof} The following example shows that the condition $\chi(G/H)=|W|$ is stronger than $W=N(H)/H$. 
\begin{ex} In general, $\mathrm{rank}\, G = \mathrm{rank}\, H$ and $W=N(H)/H$ do not imply that $\chi(G/H)=|W|$. To see this we consider the cohomogeneity one manifold determined by $G=Sp(2)$, $K^-=K^+= Sp(1) \times Sp(1)$, $H=Sp(1)\times SO(2)$. Its integral homology has been determined in \cite[Section 2.9]{Ho2}: the manifold, denoted there by $N^7_I$, has the same homology as the product $S^3\times S^4$. Consequently, $\dim H^*(N^7_I) = 4$. An easy calculation shows that $\chi(G/H) = |W(Sp(2))|/(|W(Sp(1))|\cdot |W(SO(2))|) = 4$. By Corollary \ref{cor:oddd}, $|W|=2$, hence $\chi(G/H)\neq |W|$. On the other hand, the group $N(H)$ can be determined explicitly as follows. We regard $Sp(2)$ as the set of all $2\times 2$ quaternionic matrices $A$ with the property that $A\cdot A^*=I_2$ and $H=Sp(1) \times SO(2)$ as the subset consisting of all diagonal matrices ${\rm Diag}(q, z)$ with $q\in {\mathbb H}$, $z\in {\mathbb C}$, $|q|=|z| =1$. Then $N(H)$ is the set of all diagonal matrices ${\rm Diag}(q,r)$, where $q, r\in {\mathbb H}$, $|q|=|r|=1$, $r\in N_{Sp(1)}(SO(2))$. This implies that $N(H)/H \simeq N_{Sp(1)}(SO(2))/SO(2) =W(Sp(1))={\mathbb Z}_2$, hence $N(H)/H=W$. \end{ex} \subsection{Torsion in the equivariant cohomology of cohomogeneity one actions} \label{torsfre} It is known \cite{Fr-Pu2} that if $G\times M \to M$ is an arbitrary group action, then the $H^*(BG)$-module $H^*_G(M)$ may be torsion-free without being free. (But note that this cannot happen if $G$ is either the circle $S^1$ or the two-dimensional torus $T^2$, see \cite{Al}.) The goal of this subsection is to show that this phenomenon also cannot occur if the cohomogeneity of the action is equal to one. This is a consequence of the following lemma. \begin{lem} Assume that the action of the compact connected Lie group $G$ on the compact connected manifold $M$ is Cohen-Macaulay. Then the $H^*(BG)$-module $H^*_G(M)$ is free if and only if it is torsion-free. \end{lem} \begin{proof} If $H^*_G(M)$ is a torsion-free $H^*(BG)$-module, then, by \cite[Theorem 3.9 (2)]{GR}, the space $M_{\max}$ is non-empty. Since the $G$-action is Cohen-Macaulay, it must be equivariantly formal by Proposition \ref{gr}, i.e., $H^*_G(M)$ is a free $H^*(BG)$-module. The converse implication is clear, since free modules are torsion-free. \end{proof} From Theorem \ref{thm:cohom1eqformal} we deduce: \begin{cor} Assume that the action $G\times M \to M$ has cohomogeneity one. Then the $H^*(BG)$-module $H^*_G(M)$ is free if and only if it is torsion-free. \end{cor}
\section{Introduction} \label{sec:intro} Helioseismology has been successful at imaging the solar interior by studying oscillations at the surface \citep[][]{CD2002}. It has been used to learn about a wide variety of features in the Sun such as magnetism \citep[][]{Duvall2000, Jensen2001, Birch2010, Birch2013}, convection \citep[][]{Miesch, SandN2009, Han2012, Han2016}, differential rotation \citep[][]{Schou1998}, meridional circulation \citep[][]{Giles1997} and Rossby waves \citep[][]{Loptein2018, Liang2019}. Several techniques such as global-mode helioseismology \citep[e.g.,][]{CD2002}, ring-diagram analysis \citep[][]{Hill1988}, time-distance helioseismology \citep[][]{Duvall1993}, helioseismic holography \citep[see][for a review]{Lindsey2000}, direct modelling \citep[][]{Wood2002} have proven to be successful in imaging the solar interior. Normal-mode coupling \citep[][]{Wood1989, LandR1992}, the method of choice here, uses near-resonance correlations of the surface measurements of solar oscillations in the wavenumber-frequency domain, permitting inferences of time variations of non-axisymmetric features. Mode coupling was first used in helioseismology by \citet{Wood1989} and subsequently by \citet{LandR1992}, who adapted the mature geophysical development of the technique to helioseismic problems, and more recently by \citet[][]{Roth2008, Vor2007, Vor2011, Wood2007, Wood2014, Wood2016, Roth2011, Roth2020}. Utilizing the algebra of mode coupling detailed in \citet{Han2017} and \citet{Han2018}, \citet{HandM2019} and \citet{MandH2020} detected and studied properties of Rossby waves. Important tests of mode coupling were the recent analyses of \citet{Samarth2021}, who inferred solar differential rotation, and \citet{Hanson2021}, who analyzed the power spectrum of supergranules - two well-established results - thereby validating the method. As with all other helioseismic techniques, mode coupling proceeds by modelling the structure and dynamics present in the Sun as small perturbations to a reference model, which lacks rotation, flows, and magnetism \citep[we use Model S,][]{CD1996}. We can express the solar oscillation eigenfunctions as a weighted linear sum of model eigenfunctions. The associated weights, known as coupling coefficients, encode properties of the solar interior. We then state the linear forward problem derived using mode coupling by relating the coupling coefficients and the observed wavefield correlations to the underlying perturbations. \citet{Han2017} reevaluated and extended the results of \citet{LandR1992} and \citet{Han2018} accounted for systematic errors arising from the partial visibility of the Sun (spectral leakage) in mode-coupling measurements, attempting to model the observations better and improve the accuracy of flow velocity inferences in \citet{HanSciA2020}. They found that toroidal flows on large scales are confined to the equatorial regions. To further validate mode coupling as a tool with which to probe convection, \citet{Mani2020} carried out extensive tests to validate inversions of synthetic mode-coupling measurements for toroidal flow while accounting for leakage, and concluded that inversions are strongly influenced by the model assumed for the correlation between flow velocities. In this work, we extend the analysis of spatial scales of the toroidal flow and characterize velocities and power spectra. 
To this end, we examine 4 yr each of Helioseismic and Magnetic Imager \citep[HMI;][]{HMI2012} and Michelson Doppler Imager \citep[MDI;][]{MDI1995} data and perform inversions using Regularized-Least-Squares (RLS) and Subtractive Optimally Localized Averages (SOLA) \citep[][]{PandT1994}. We consider leakage in measurements and also evaluate correlated realization noise using the model derived in \citet{Han2018}. As an independent yardstick for the velocities obtained from mode-coupling, we compare inversions at the surface ($\sim0.995R_{\odot}$) with velocities obtained from Local Correlation Tracking \citep[][]{Loptein2018}. LCT obtains horizontal velocities by maximizing the correlation of the granulation pattern in successive intensity images - the idea being that the pattern is advected by flows. For a vector flow field ${\bf u}_{o}^{\sigma}({\bf r})$, where \textbf{r} denotes the 3-dimensional spatial co-ordinate and $\sigma$ the flow evolution scale, the Chandrasekhar-Kendall decomposition gives the toroidal (vortical) part of the flow as \begin{equation}\label{flow} {\bf u}_{o}^{\sigma}({\bf r}) = \sum_{s,t} \;w_{st}^{\sigma}(r)\hat{\bf{r}}\times{\boldsymbol\nabla}Y_{s}^t, \end{equation} where $w_{st}^{\sigma}(r)$ are the toroidal-flow coefficients which we seek to infer from measurements, $Y_{s}^t$ is the spherical harmonic of degree $s$ and azimuthal order $t$, and $r$ is radius. Toroidal flows are horizontal flows with curl but no divergence, and hence, by definition, mass conserving. We ignore $t=0$ as we are only interested in non-axisymmetric flows. \section{Data Analysis} We consider global-mode time-series with $\ell\in\big[50,180\big]$ and the resonant frequency of the mode $\nu_{n\ell}\in\big[1.8,3.6\big]$mHz, where the indices ($n,\ell,m$) denote radial order, spherical-harmonic degree, and the azimuthal order of a mode, respectively. Modes of higher frequencies are known to show systematic errors \citep[][]{Antia2008}, and on the lower frequency end, sampling the time series data off-resonance causes background power to leak in due to the small line widths of these modes. Denoting the temporal Fourier transform of the series by $\phi_{\ell m}^{\omega}$ where $\omega$ is temporal frequency, we compute mode-coupling measurements as $\phi_{\ell m+t}^{\omega+\sigma}\phi_{\ell m}^{\omega*}$. Correlating wavefields at the same spherical-harmonic degree but different azimuthal orders and temporal frequencies renders the measurement sensitive only to odd-degree (odd-$s$) toroidal flow \citep[][]{Han2018, Mani2020}. To facilitate data analysis, we condense $\phi_{\ell m+t}^{\omega+\sigma}\phi_{\ell m}^{\omega*}$ into its linear-least-squares fit \citep[][]{Wood2016}, known as $B$-coefficients, given by \begin{equation}\label{leastsquaresfit} B_{st}^{\sigma}(n, \ell) = \sum\limits_{m, \omega}W_{\ell m s t}^{\omega+\sigma}\phi_{\ell m+t}^{\omega+\sigma + t\Omega}\phi_{\ell m}^{\omega*} \end{equation} where $W$ is theoretically derived in the \citet{HanSciA2020} supplementary and also in \citet{Han2018}. These measurements are taken in a rotating frame, at the equatorial rate $\Omega=453$nHz. This leads to the transformation $\sigma\rightarrow\sigma + t\Omega$, which is incorporated in equation~(\ref{leastsquaresfit}). The forward model linearly connects perturbations in internal solar properties with respect to a solar model to observations. In the first-Born approximation, the wavefield correlation is linearly related to the flow through a sensitivity kernel \citep[][]{Wood2006}. 
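To make Equation~(\ref{flow}) concrete, the following is a minimal numerical sketch (illustrative only, not part of the actual pipeline) that evaluates the horizontal components of a toroidal flow for a given set of coefficients on a colatitude-longitude grid; the coefficient dictionary and grid resolution are assumptions made for the example.
\begin{verbatim}
# Sketch of Eq. (flow): u = sum_{s,t} w_st * (rhat x grad Y_s^t).
# Uses rhat x grad_h Y = (dY/dtheta) phi_hat - (1/sin theta)(dY/dphi) theta_hat.
import numpy as np
from scipy.special import sph_harm

def toroidal_flow(w_st, theta, phi):
    """w_st: dict mapping (s, t) -> coefficient (illustrative units)."""
    TH, PH = np.meshgrid(theta, phi, indexing="ij")
    u_th = np.zeros(TH.shape)
    u_ph = np.zeros(TH.shape)
    for (s, t), w in w_st.items():
        # scipy convention: sph_harm(order m, degree l, azimuth, colatitude)
        Y = sph_harm(t, s, PH, TH)
        dY_dth = np.gradient(Y, theta, axis=0)   # numerical d/dtheta
        dY_dph = 1j * t * Y                      # exact d/dphi
        u_th += np.real(-w * dY_dph / np.sin(TH))
        u_ph += np.real(w * dY_dth)
    return u_th, u_ph

# Example: a single sectoral mode (s, t) = (9, 9) with unit amplitude.
theta = np.linspace(0.01, np.pi - 0.01, 180)     # avoid the poles
phi = np.linspace(0.0, 2.0 * np.pi, 360)
u_theta, u_phi = toroidal_flow({(9, 9): 1.0}, theta, phi)
\end{verbatim}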
Using the forward model derived through mode-coupling in \citet{Han2018}, the inverse problem is stated as \begin{equation}\label{Bst} B_{st}^{\sigma}(n,\ell) = \sum_{s't'}\int_{\odot} dr\: w_{s't'}^{\sigma}(r)\Theta_{st}^{s't'}(r;n,\ell,\sigma). \end{equation} The sum over $s',t'$ indicates that leakage between oscillation signals translates to leakage between neighboring flow wavenumbers. Hence, to accurately estimate the signal in a given channel ($s,t$), contributions from neighboring modes, weighted by leakage matrices \citep[see][for how leakage is modeled]{SandB1994} in the sensitivity kernel $\Theta_{st}^{s't'}(r;n,\ell)$, need to also be taken into account. \citet{MandH2020} studied properties of odd-$s$ Rossby modes in the frequency-bin range $\sigma\in[0,2]\mu$Hz and concluded that the dominant leakage into a desired mode is from the neighboring odd-$s$; in the absence of tracking, leakage occurs at the same temporal frequency ($\sigma_{s}\rightarrow\sigma_{s}$), and when tracking is applied, it leaks into higher temporal frequencies ($\sigma_{s}\rightarrow\sigma_{s}+2\Omega$, where $\Omega=453$nHz). Hence we restrict the analysis to $[0,0.9]\mu$Hz. Turbulent convection stochastically excites waves with random phases and amplitudes all over the solar surface. If the Sun had no perturbations (i.e., spherical harmonics were the correct eigenfunctions) and if we could observe the entire solar globe, we would be able to perfectly resolve the excited modes into unique spherical harmonic wavenumbers and model them as uncorrelated \citep[see][supplementary]{Han2018,HanSciA2020}. But spatial windowing results in convolutions in spectral space, inducing leakage in the observed modes, which in turn causes them to be finitely correlated. Here, we simply state the final noise model (i.e., $\langle|B_{st}^{\sigma}|^2\rangle$ for $B$-coefficients obtained from pure noise) \begin{equation}\label{noise} \epsilon_{st}^{\sigma}(n, \ell) = \sum\limits_{m, \omega}|W_{\ell m s t}^{\omega + \sigma}|^2\langle|\phi_{\ell m+t}^{\omega+\sigma}|^2\rangle\langle|\phi_{\ell m}^{\omega}|^2\rangle. \end{equation} To summarize, for both instruments, \begin{enumerate} \item using Equation~(\ref{leastsquaresfit}) and Equation~(\ref{noise}), we compute $B$-coefficients and noise \big(for $(n,\ell)$ such that $\ell\in\big[50,180\big]$ and $\nu_{n\ell}\in\big[1.8,3.6\big]$mHz, and $\sigma\in[0,0.9]\mu$Hz\big) \item we compute the sensitivity kernels $\Theta_{st}^{s't'}(r;n,\ell)$ only at $\sigma=1\mu$Hz \citep[see Figure 1 of][]{Mani2020} \item invert Equation~(\ref{Bst}) for $w_{st}^{\sigma}(r)$ \big(\textit{for all odd $s\in[1,149]$ and $t\neq0$ at the desired depth}\big) \end{enumerate} \section{RLS inversions}\label{RLS inversions} Linear inversion methods provide estimates of toroidal flow velocity $w_{st}^{\sigma}$ at the desired depth $r_0$, using a linear combination of observations $B_{st}^{\sigma}$. We employ two methods for proper comparisons - we describe the Regularized-least-squares (RLS) technique here. For SOLA, see Appendix \ref{SOLA Inversions}. 
RLS aims at obtaining solutions to inverse problems by navigating the trade-off between the magnitude of the regularized solution and the quality of fit to the observed data. The conventional way to achieve optimality is by plotting the goodness of fit against the solution magnitude for different regularization values, resulting in an ``L curve'' (Figure~\ref{fig_avgk}, panels B and D), where the knee is often used as the optimal regularization parameter. This ensures that the least-squares solution is neither dominated by contributions from data errors (too little regularization - high solution norm) nor by errors due to poor solution fit to the data (too heavily regularized - high residual norm). We desire that regularization creates a better-conditioned problem and mitigates the effects of random noise in data, thereby providing a result that approximates the true solution. Stated differently, regularization in ill-conditioned inverse problems helps in diminishing contributions from small singular values of the matrix to be inverted ($\bf{F^T F}$ in equation~(\ref{RLSinverse})) that otherwise tend to either result in large-amplitude solutions or leave inferences excessively sensitive to noise. The goal is then stated - for each flow coefficient, minimize the sum of squared differences between data and model, called the residual norm, while penalizing a high solution norm. Owing to its past success \citep[][]{Antia1994}, we parametrize velocity $w_{st}^{\sigma}$ in a cubic $B$-spline basis (which we denote using $X$ to avoid confusion with $B$-coefficients) with $70$ knots, \begin{equation}\label{bspline} w_{st}^{\sigma}(r) = \sum_{k}\beta^{k\sigma}_{st}X_{k}(r). \end{equation} For every triplet ($s,t,\sigma$), the least-squares problem (without regularization) is posed as (using the above, Equation~(\ref{bspline})) \begin{equation}\label{leastsquares} \begin{gathered} \int_{\odot} dr\:\Theta_{st}^{st}(r;n,\ell)\;w_{st}^{\sigma}(r) = B_{st}^{\sigma}(n,\ell),\\ \sum_{k}\Bigg[\int_{\odot} dr\:\Theta_{st}^{st}(r;n,\ell)\;X_{k}(r)\Bigg]\;\beta^{k\sigma}_{st}= B_{st}^{\sigma}(n,\ell). \end{gathered} \end{equation} Here we neglect the $\sum\limits_{s',t'}$ and only retain the self-leakage term $(s',t')=(s,t)$ for simplification. Our motivation for this simplification stems from encouraging results in our prior work - \citet{Mani2020} (Appendix C, and Figure 2, panels C and D) tested this simplified inversion of synthetic $B$-coefficients, without factoring in leakage from neighboring flow modes in the least-squares expression. The final inference of velocities matched well with the input flow profile.\\ Denoting the integral on the left-hand side of equation~(\ref{leastsquares}) by $\boldsymbol{F}$, Equation~(\ref{leastsquares}) is rewritten in matrix form, \begin{equation}\label{RLSinverse} \begin{gathered} \boldsymbol{F}\cdot\boldsymbol{\beta} = \boldsymbol{B}, \\ \boldsymbol{\Big(F^T F\Big)} \cdot\boldsymbol{\beta} = \boldsymbol{F^T B}, \\ \boldsymbol{\beta} = \boldsymbol{\Big(F^T F\Big)^{-1}}\cdot\boldsymbol{F^T B}.\\ \end{gathered} \end{equation} We redefine $\boldsymbol\beta$ to include regularization, i.e., \begin{equation} \boldsymbol{\beta} = \boldsymbol{Q}\cdot\boldsymbol{B}, \end{equation} where $\boldsymbol{Q} = \Big( \boldsymbol{F^T F} + \lambda \boldsymbol{I} \Big)^{-1}\boldsymbol{F^T}$, $\lambda$ is the regularization parameter, and $\boldsymbol{I}$ the Identity matrix. RLS describes a family of methods to solve least-squares problems. 
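As an illustration of the above, here is a minimal sketch of Equation~(\ref{RLSinverse}) together with an L-curve-based choice of $\lambda$ on synthetic matrices; the problem sizes are arbitrary, and the kneed module mentioned in the acknowledgments is assumed to be installed.
\begin{verbatim}
# Sketch: Tikhonov-regularized least squares with L-curve knee selection.
import numpy as np
from kneed import KneeLocator   # module cited in the acknowledgments

def rls_solve(F, B, lam):
    """beta = (F^T F + lam I)^{-1} F^T B, cf. Equation (RLSinverse)."""
    FtF = F.T @ F
    return np.linalg.solve(FtF + lam * np.eye(FtF.shape[0]), F.T @ B)

def pick_lambda(F, B, lams):
    """Scan lambda, record residual/solution norms, return the knee lambda."""
    res = np.array([np.linalg.norm(F @ rls_solve(F, B, lam) - B) for lam in lams])
    sol = np.array([np.linalg.norm(rls_solve(F, B, lam)) for lam in lams])
    # On a log-log L-curve the residual norm grows and the solution norm
    # falls as lambda increases, so the corner is convex and decreasing.
    knee = KneeLocator(np.log(res), np.log(sol),
                       curve="convex", direction="decreasing").knee
    if knee is None:                 # no clear corner found
        return lams[len(lams) // 2]
    return lams[int(np.argmin(np.abs(np.log(res) - knee)))]

# Toy example on a synthetic, mildly ill-conditioned problem.
rng = np.random.default_rng(0)
F = rng.normal(size=(60, 20)) @ np.diag(np.logspace(0, -4, 20))
B = F @ rng.normal(size=20) + 0.01 * rng.normal(size=60)
lam_opt = pick_lambda(F, B, np.logspace(-8, 1, 40))
\end{verbatim}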
The choice of method is as important as the choice of regularization and the associated parameter. For instance, one can construct a different smoothing matrix (instead of $\bf{I}$) that penalizes higher derivatives of the solution in addition to its norm. Here, we use the popular Tikhonov regularization, according to which our minimization problem is given by $||F\beta-B||_2^2 + \lambda||\beta||_2^2$, where $||\cdot||_2$ stands for the $L_2$ norm. The proper choice of $\lambda$ is, as described earlier, given by the knee of the L-curve, as shown in Figure~\ref{fig_avgk}B. \\ For the depth $r_0$, this allows us to rewrite Equation~(\ref{bspline}) as \begin{equation}\label{wst_bcoeff} \begin{gathered} w_{st}^{\sigma}(r_0) = \sum_{k}\;\Bigg[\sum_{n\ell}Q^{k}_{n\ell}B_{st}^{\sigma}(n,\ell)\Bigg]\;X_{k}(r_0), \\ = \sum_{n\ell}\alpha_{n\ell}(r_0)B_{st}^{\sigma}(n,\ell),\\ \end{gathered} \end{equation} where $\alpha_{n\ell}(r_0) = \sum_k Q^{k}_{n\ell} X_{k}(r_0)$. These $\alpha$ are the RLS inversion coefficients, permitting us to express velocity as a linear combination of observations and to also construct averaging kernels (see Figure~\ref{fig_avgk}A). We estimate the noise model and subtract it from the measured signal power; Equation~(\ref{wst_bcoeff}) is thus rewritten as \begin{equation}\label{flowfinalrls} |w_{st}^{\sigma}(r_0)|^2 \approx |\sum_{n\ell}\alpha_{n\ell}(r_0)B_{st}^{\sigma}(n,\ell)|^2 - \sum_{n\ell}\alpha_{n\ell}(r_0)^2\epsilon_{st}^{\sigma}(n,\ell). \end{equation} \section{Results and discussion}\label{results} Convective power is retrieved as the (small) difference between two large numbers: the observed $B$-coefficient power and the theoretically computed background noise. The left-hand side of Equation~(\ref{wst_bcoeff}), $|w_{st}^{\sigma}(r_0)|^2$, is a power-spectrum function of 4 variables $s,t,\sigma$ and $r_0$, and by construction, a positive quantity. However, the noise model \citep[see][section 5.1]{Han2018} is incomplete because the leakage matrices do not account for centre-to-limb effects \citep[i.e., the ``shrinking Sun'',][]{Duvall2009,Zhao2013}, differential rotation etc., which may contribute to small errors in the noise model and therefore translate correspondingly to errors in estimating convective power. We realize its shortcomings in the inversion where we often find the power-spectrum to be negative, i.e., the noise power exceeds measurements: $\sum_{n\ell}\alpha_{n\ell}(r_0)^2\epsilon_{st}^{\sigma}(n,\ell)>|\sum_{n\ell}\alpha_{n\ell}(r_0)B_{st}^{\sigma}(n,\ell)|^2$ (see panels A and B, Figure~\ref{fig_ps}, region shaded in yellow) - which occurs in these measurements when $s-|t| \gtrsim 10$. We attempt to overcome this issue by seeking agreement of $\evalat{\sqrt{\sum\limits_{t,\sigma}s(s+1)|w_{st}^{\sigma}(r)|^2}}{r=0.995R_{\odot}}$ with velocities obtained from LCT. That is, we subtract the maximum possible fraction of noise from the measurements such that the power spectrum remains positive and appropriately regularize such that velocity as a function of $s$ from mode-coupling and LCT match. We empirically find the fraction of noise power that can be subtracted to be $0.6$, i.e., \begin{equation}\label{flowfinalbeta} |w_{st}^{\sigma}(r_0)|^2 \approx |\sum_{n\ell}\alpha_{n\ell}(r_0)B_{st}^{\sigma}(n,\ell)|^2 - \gamma\sum_{n\ell}\alpha_{n\ell}(r_0)^2\epsilon_{st}^{\sigma}(n,\ell), \end{equation} with $\gamma=0.6$. 
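In code, the debiasing in Equation~(\ref{flowfinalbeta}) amounts to the following one-liner per $(s,t,\sigma)$ triplet (a sketch; the arrays run over the $(n,\ell)$ modes used in the inversion):
\begin{verbatim}
# Sketch of Eq. (flowfinalbeta): noise-debiased flow power for one
# (s, t, sigma) triplet; alpha, B, eps are arrays over the (n, l) modes.
import numpy as np

def flow_power(alpha, B, eps, gamma=0.6):
    signal = np.abs(np.sum(alpha * B)) ** 2   # |sum_nl alpha * B|^2
    bias = np.sum(alpha ** 2 * eps)           # sum_nl alpha^2 * eps
    return signal - gamma * bias              # gamma = 0.6 (empirical)
\end{verbatim}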
Using the definition of power spectra $P_{st}^{\sigma}(r) = s(s+1)|w_{st}^{\sigma}(r)|^2$ in keeping with \citet{Mani2020} and \citet{HanSciA2020}, we average the velocity amplitudes over their dependent variables in two ways: \begin{equation}\label{eqPs} P(s,r_0) = \sqrt{\sum\limits_{t,\sigma}P_{st}^{\sigma}(r_0)} = \sqrt{\sum\limits_{t,\sigma}s(s+1)|w_{st}^{\sigma}(r_0)|^2}, \end{equation} \begin{equation}\label{eqPdiff} P(s',r_0) = \sqrt{\sum\limits_{\sigma,s,t}P_{st}^{\sigma}(r_0)} = \sqrt{\sum\limits_{\sigma,s,t}s(s+1)|w_{st}^{\sigma}(r_0)|^2}, \end{equation} where in Equation~(\ref{eqPdiff}) the summation over $s$ and $t$ is restricted to pairs with $s-|t|=s'$. We plot the velocity distribution (see Figure~\ref{fig_ps}) - $\sqrt{s(s+1)|w_{st}^{\sigma}(r)|^2}$ - at the depth $0.995R_{\odot}$ for these definitions. The observed velocity monotonically decreases with increasing spatial scale (panel C), in contrast to several hydrodynamic simulations and models \citep[e.g.][]{Miesch2008, Lord2014, Hotta2016}. The absence of a peak in power at the supergranular scale ($s\approx120$) leads us to conclude that supergranules are dominantly poloidal \citep[see][]{Langfellner2014,Langfellner2015}. Panel D shows an almost linear drop-off in amplitude for $s-|t| \gtrsim 10$, suggesting comparable amounts of power in sectoral and non-sectoral modes. Figure~\ref{fig_isotropy} shows the changing geometry of the flow as more and more wavenumbers are included in the averaging formula described in Equation~(\ref{eqPdiff}). Since $|t|\le s$, $s-|t| =k$ has $s_{\rm max}-k$ modes (excluding $t=0$ and $s=0$ modes) and the power correspondingly changes. Thus, in a perfectly isotropic system where each mode reports the exact same power, we would expect the power to follow a linear declining relationship, from its maximum at $s-|t|=0$ to zero at $s-|t|=s_{\rm max}$. Keeping this in mind, to characterize the shape of the flow, we define a measure of `isotropy' as the deviation from the expected linear baseline. The more enhanced or depleted the power is in any segment of $s-|t|$ in comparison to the nominal linear power variation, the less isotropic the flow. The lack of isotropy implies that flows are more or less vigorous in different parts of the sphere. In panel A, where the sum is over $s\in[1,49]$, power dominates in sectoral modes, i.e., $s-|t|\sim0$ \citep[e.g.,][]{HanSciA2020}. This would imply that flows are preferentially vigorous in the equatorial regions. As we average over a larger range of $(s,t)$, as in panels B ($s\in[1,99]$) and C ($s\in[1,149]$), we see the preference for sectoral enhancement vanishing and the power distributed uniformly across all wavenumbers - an indication that the flow is becoming more isotropic and possibly uniformly distributed across the spherical surface. We are however cautious about strongly interpreting the results of Figure~\ref{fig_isotropy}. Panels B and C, where higher wavenumbers are included, exhibit signs of imperfect noise subtraction at low $s-|t|$ (for the HMI data), in line with the yellow shaded regions in Figure~\ref{fig_ps}A and B. Velocity inferred from HMI data (red curve) in panel C at low $s-|t|$ shows deviation from the nominal isotropy baseline that may not be consistent with our interpretation of the flow being isotropic. But the large fluctuations around the isotropy line are possibly from challenges in modelling noise associated with the high wavenumbers. 
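The mode counting behind the linear baseline can be checked directly; a small sketch, assuming odd $s\in[1,149]$ and both signs of $t$ as in our analysis:
\begin{verbatim}
# Count the (s, t) modes with s - |t| = k, for odd s in [1, 149], t != 0.
s_max = 149
for k in (0, 10, 50, 100):
    n = sum(1 for s in range(1, s_max + 1, 2)
              for t in range(-s, s + 1)
              if t != 0 and s - abs(t) == k)
    print(k, n)   # falls roughly linearly, ~ (s_max - k)
\end{verbatim}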
More evidence of problems in theoretically derived mode-coupling noise has recently come to light \citep{Wood2021}. The Rossby number, which is the ratio of the inverse convective turnover timescale to the rotation rate, quantifies the influence of rotation on the fluid dynamics of the system, i.e., small Rossby numbers correspond to rotationally constrained flow and vice versa. Supergranules \citep[$U\approx300$ m/s, $L\approx30-35$Mm, $Ro\sim O(1)$, see][]{Rincon2018} correspond to rotationally balanced flows; their poloidal flows are observed to show a latitudinal variation \citep[][]{Lisle2004,Nagashima2011,Langfellner2015}, a deviation from isotropy that can be ascribed to the Coriolis force, whereas properties of granules \citep[$U\approx3$ km/s, $L\approx1-1.5$Mm, $s\sim2500$; these values are taken from][]{Hathaway2013, Hathaway2015}, with Ro$\sim\mathcal{O}(10^3)$, are largely uniform over the entire solar disk. In fact, it has also been noted that supergranules have a preferred sense of vorticity in each hemisphere \citep[clockwise (anticlockwise) in northern (southern) hemisphere; see][]{Duvall2000}. While in this work we only analyze vortical flows up to $s=150$, we believe they similarly approach a state of increasing isotropy as $s_{max}$ is increased from $50$ to $150$ (see Figure~\ref{fig_isotropy}). \citet{Han2012} showed, through time-distance helioseismology \citep{Duvall1993}, that velocity amplitudes on large scales ($s\sim60$) are at least an order of magnitude lower than predictions from numerical simulations. The mechanisms governing the observed continuous decrease in power with increasing spatial scales remain as yet unclear. \citet{Lord2014} for instance have suggested that if the convection zone at layers deeper than $0.97R_\odot$ or so were to be adiabatic, the convective spectrum would start decreasing at scales larger than supergranulation, thereby highlighting supergranules in the spectrum because of power suppression in the adjacent scales. However, supergranular flows show a wave-like dispersion relation \citep{Gizon2003, Hanson2021}, suggesting that supergranulation is a privileged scale, not merely arising from a reduced effective depth of the convection zone. \cite{Featherstone2016} observed, in convective simulations at different Rossby numbers, a scale at which convective power peaks, suggesting that larger-scale modes of deep convection are rotationally inhibited, and that scales smaller than this peak behave like non-rotating convection, i.e., they are only weakly influenced by the Coriolis force. Our results lend credence to the analysis of \citet{Featherstone2016}: plots shown in panel C, Figure~\ref{fig_ps}, and in panels A through C in Figure~\ref{fig_isotropy}, indicate that large-scale power on the surface is low possibly because these scales are rotationally inhibited, and smaller scales increasingly behave like non-rotating convection, exhibiting increasing isotropy. Although LCT is used for comparison, we emphasize that it is fundamentally different from seismology. While LCT is successful in measuring surface horizontal velocity fields insofar as the length and time scales of the inferred flow are well separated from those of granules \citep[][]{Rieutord2001}, inversions through seismology make full use of sensitivity kernels - products of mode eigenfunctions - that peak a few Mm under the surface, permitting inferences of subphotospheric velocities. 
Additionally, we neglect LCT observations at latitudes higher than $60^\circ$ in both hemispheres because of systematics. Since LCT tracks the movement of granules, it implies a specific depth averaging of the large-scale flows that are inferred. This is evidently different from that of seismology. \begin{figure} \subfloat{\includegraphics[width = 6.2in]{f1.png}} \caption{Panels A and B show, for 4 yr HMI (green) and MDI (magenta) data each, inverted using RLS, the B-coefficient power (solid) and the noise power (dashed), in units of $m^2/s^2$ - first and second terms on the RHS of Equation~(\ref{flowfinalrls}) - averaged using Equations~(\ref{eqPs}) and (\ref{eqPdiff}), respectively. The region highlighted in yellow is where noise power exceeds B-coefficient power. Panels C and D show, for 4 yr HMI (red) and MDI (blue) data each, the final velocities, Equation~(\ref{flowfinalbeta}), similarly averaged. The columns and rows share x- and y-axis labels, respectively. The rows also share the plot legends. Panel C shows that for appropriate regularization, we can obtain an excellent match between mode-coupling inversions and LCT. Note the absence of extra power at supergranular scales ($s\approx120$). LCT is not shown in panel D. The shaded region in the curves of panels C and D denotes $\pm1\sigma$ error.} \label{fig_ps} \end{figure} \begin{figure} \subfloat{\includegraphics[width = 7in]{f2.png}} \caption{Velocity as a function of $s-|t|$ for 1 yr HMI (red) and MDI (blue), each inverted using RLS. From left to right, the upper limit in $s$ over which averaging is performed - see Equation~(\ref{eqPdiff}) - is $49$, $99$ and $149$, respectively. Indeed for low $s-|t|$, sectoral modes contain excess power. As we extend the spatial scale to include higher wavenumbers, we find that, at the surface, the flow becomes more isotropic, i.e., from left to right, power is more uniformly distributed across all harmonics. Sectoral mode power is greater than that of tesseral or zonal modes, simply because the set of available $(s,t)$ to sum over falls linearly with increasing $s-|t|$ (shown as the nominal dashed line). Note that panel C of this figure is the same as panel D of Figure~\ref{fig_ps}.} \label{fig_isotropy} \end{figure} \begin{figure} \subfloat{\includegraphics[width = 6.2in]{f3.png}} \caption{Averaging kernels at $r=0.995R_\odot$ (panels A and C, RLS and SOLA, respectively) and the L-curve along with the knee marked (panels B and D, RLS and SOLA, respectively) - this is the optimal choice for the regularization parameter $\lambda$. Shown here are the curves for the convective mode $(s,t)=(33,25)$. Optimal regularization for a well-localized averaging kernel and a minimal residual has to be achieved for each $(s,t)$ in the analysis. In SOLA, the trade-off can be better understood as being between optimally matching the averaging kernel with the target function on one hand and obtaining velocities with appropriate norm on the other.} \label{fig_avgk} \end{figure} \section{Summary}\label{conclusion} Our analysis points to the idea that, as higher wavenumbers are gathered into the analysis, toroidal flow becomes increasingly isotropic, i.e., power is more uniformly spread across the full range of spherical harmonics. The present work is a continuation of the analysis carried out in \citet{HanSciA2020} and \citet{Mani2020}. 
The synthetic tests undertaken in the latter included spatial leakage in the kernels computed for toroidal flow, and the result that inversions are independent of the temporal frequency at which kernels are computed served well in saving computing time for the present analysis. As has been noted in \citet{Han2018}, correlated realization noise brought about by leakage is non-trivial to model and there remains work to be done to better understand the missing ingredients in the model and improve estimates of the leakage matrices. We reported a new finding in this document - toroidal-flow power appears to become uniformly distributed across wavenumbers as smaller spatial scales are included in the analysis (see Figure~\ref{fig_isotropy}). Figure~\ref{fig_ps}C also confirms part of the result from \citet{Rincon2017} and \citet{Hathaway2015}, that flows at supergranular scale ($s\approx120$) are, at best, weakly toroidal. Future work will focus on characterizing poloidal (horizontal-divergent) flows using mode coupling and comparing them with the velocities obtained here. P.M. is grateful to Samarth G. Kashyap for several clarifying discussions regarding data analysis. Sashi Kiran Mahapatra was immensely helpful in resolving issues relating to computation. P.M. and S.M.H. acknowledge that LCT data was generated by Bjoern L\"{o}ptien and provided by the authors of \citet{Chris2020}. The knee of the L-curve was found using the Python module kneed, available in the GitHub repository \url{https://github.com/arvkevi/kneed}, which implements ``Finding a `Kneedle' in a Haystack: Detecting Knee Points in System Behavior''.
\section{Introduction} Visual signals are distorted in adverse weather conditions which are usually classified as steady (such as haze, mist, and fog) or dynamic (such as rain and snow) \cite{1nara2003,1garg2004}. The haze issue is studied in this paper. Due to the effect of light scattering through small particles accumulated in the air, hazy images suffer from contrast loss of captured objects \cite{1nara2003}, color distortion \cite{1ancuti2013,He}, and reduction of dynamic range \cite{1zheng2013,1kouf2018}. Existing object detection algorithms might not perform well on hazy images, especially for those real-world hazy images with heavy haze. Haze is also an issue for heavy rain images \cite{1garg2004}. It is thus important to study single image dehazing for both hazy and rainy images. Single image dehazing is widely studied because of its broad applications. Two popular types of single image dehazing algorithms are model-based ones \cite{He,1fattal2008,1zhuq2015,CVPR16,1LiZG2021} and data-driven ones \cite{Ren,cai,ICCV17,Qu_2019_CVPR,1qin2020,1dongh2020}. The model-based ones are built on top of the Koschmieders law \cite{koschmider}. They can improve the visibility of real-world hazy images well regardless of haze degree but they cannot achieve high PSNR and SSIM values on the synthetic sets of hazy images. On the other hand, the data-driven ones perform well on the synthetic sets while their performance is poor for real-world hazy images, especially for those hazy images with heavy haze \cite{glots2020,1liuw2021}. It is thus desired to have a single image dehazing algorithm which is applicable to both synthesized and real-world hazy images. In this paper, a novel single image dehazing algorithm is proposed by fusing the model-based and data-driven approaches. As with the model-based methods in \cite{He,1fattal2008,1zhuq2015,CVPR16,1LiZG2021} and the data-driven ones such as \cite{cai}, both the atmospheric light and the transmission map are required by the proposed algorithm. Rather than estimating them purely with the model-based methods in \cite{He,1zhuq2015,CVPR16,1LiZG2021} or the data-driven methods in \cite{cai,1chen2021}, they are initialized by using model-based methods and refined by data-driven approaches. The atmospheric light is initialized by using the hierarchical searching method in \cite{Three} which is based on the Koschmieders law \cite{koschmider}. The transmission map is initialized by using the dark direct attenuation prior (DDAP) in \cite{1LiZG2021} which is extended from the dark channel prior (DCP) in \cite{He}. The DDAP is applicable to all the pixels in the hazy image. Unfortunately, the initial transmission map suffers from morphological artifacts caused by the DDAP. These artifacts are then reduced by the novel haze line averaging algorithm in \cite{1LiZG2021}, which is designed by using the concept of haze line in \cite{CVPR16}. The initial atmospheric light and transmission map are then refined via data-driven approaches. Pre-trained deep neural networks (DNNs) can serve as fast solvers for complex optimization problems \cite{1nir2021,1hornik1989}. Such an advantage of data-driven approaches is well utilized by our refinement. Since it is very difficult or even impossible to have a pair of a real-world hazy image and its corresponding haze-free image, the popular generative adversarial network (GAN) \cite{1gan2014} is utilized to refine the atmospheric light and transmission map. 
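For orientation, the following is a minimal sketch of a DCP-style transmission initialization of the kind the DDAP extends; the patch size and $\omega$ are conventional illustrative choices, and the sketch is neither the exact DDAP computation nor the hierarchical atmospheric-light search.
\begin{verbatim}
# Sketch: DCP-style initial transmission map (He et al.); DDAP extends this.
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(img, patch=15):
    """Pixel-wise min over color channels, then a patch-wise minimum filter."""
    return minimum_filter(img.min(axis=2), size=patch)

def init_transmission(hazy, A, omega=0.95, patch=15):
    """t ~ 1 - omega * dark_channel(Z / A); Z in [0, 1], A per channel."""
    normalized = hazy / A.reshape(1, 1, 3)
    return np.clip(1.0 - omega * dark_channel(normalized, patch), 0.0, 1.0)
\end{verbatim}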
The initial atmospheric light and transmission map can be regarded as noisy estimates of the atmospheric light and transmission map. The main function of the GAN is to reduce the noise of the initial atmospheric light and transmission map. Based on this observation, the generator of the proposed GAN is constructed on top of the latest DNN for noise reduction in \cite{CycleISP} and the Res2Net in \cite{1gao2021}. The discriminator of the proposed GAN is based on the PatchGAN in \cite{1isola2017}. The proposed GAN is trained by using 500 hazy images from the multiple real-world foggy image defogging (MRFID) dataset in \cite{1liuw2021} and 500 hazy images from the realistic single image dehazing (RESIDE) dataset in \cite{1LiB2018}. The hazy images in the RESIDE dataset are synthesized by using the Koschmieders law \cite{koschmider}. Existing data-driven dehazing algorithms such as \cite{1qin2020,1dongh2020,1zhao2021,1dong2020} are trained by using more than 10K hazy images in the RESIDE dataset in \cite{1LiB2018}. The ground-truth haze-free images in the RESIDE dataset \cite{1LiB2018} are all available. The $l_1$ loss function can be applied to measure the restored images for those hazy images in the RESIDE dataset. There is a clean image for each hazy image in the MRFID dataset but the clean and hazy images are captured under different lighting conditions. A new loss function is derived by using an extreme channel which is extended from the dark channel in \cite{He}. The new loss function, the adversarial loss function \cite{1gan2014}, and one more loss function on the gradient of the restored image are used to measure the restored images for the hazy images in the MRFID dataset. The proposed GAN is trained by integrating the $l_1$ loss function for all images in the RESIDE dataset and the three loss functions for all images in the MRFID dataset in each batch. They are analogous to the data-fidelity term and the regularization term of the optimization problems in \cite{Li2015,1farb2008}, respectively. The model-based estimation and the data-driven refinement form a neural augmentation which can be applied to improve the interpretability of pure data-driven approaches \cite{1nir2021}. Experimental results show that the proposed algorithm is applicable to both synthetic and real-world hazy images, and outperforms existing data-driven dehazing algorithms for the real-world hazy images from the dehazing quality index (DHQI) point of view \cite{1min2019}. One contribution is to propose a novel model-based deep learning framework which fuses model-based and data-driven approaches for single image dehazing. The framework is applicable to both synthetic and real-world hazy images. The amount of training data can be reduced significantly. The second contribution is to propose selecting suitable loss functions according to the lighting conditions of the hazy and clean images. Different loss functions are utilized to measure the hazy images in the RESIDE and MRFID datasets. The remainder of this paper is organized as below. Relevant works on single image dehazing are summarized in Section \ref{relatedworks}. Details of the proposed algorithm are presented in Section \ref{newalgorithm}. Experimental results are provided to verify the proposed algorithm in Section \ref{experiment}. Finally, concluding remarks are provided in Section \ref{conclusion}. \section{Related Works} \label{relatedworks} Suppose that $Z$ is a hazy image and $I$ is the corresponding haze-free image. 
The relationship between them can be represented by the following Koschmieder's law \cite{koschmider}: \begin{equation} \label{eq1} Z_c(p) = I_c(p)t(p) + A_c(1-t(p)), \end{equation} where $p(=(i,j))$ is a pixel position, and $c\in\{R, G, B\}$ is a color channel. $A_c$ is the atmospheric light, and $t$ is the transmission map. When the atmosphere is homogeneous, $t(p)$ can be computed as $\exp(-\alpha d(p))$, in which $\alpha(>0)$ represents the scattering coefficient of the atmosphere and $d(p)$ is the depth at pixel $p$. The term $I_c(p)t(p)$ is called direct attenuation, and the term $A_c(1-t(p))$ is called airlight. It follows from the model (\ref{eq1}) that both the visibility and the color saturation of a hazy image are reduced. On the other hand, under-exposed and over-exposed regions of the image $I$ could become well-exposed ones in the hazy image $Z$ due to the airlight, and the dynamic range of $Z$ is reduced \cite{1debevec1997,1zheng2013}. Generally, the visual quality of a hazy image becomes poor if the haze is heavy \cite{Li2015}. It is thus important to restore the haze-free image $I$. Single image dehazing is a challenging problem because the transmission map $t$ depends on the unknown and varying depth. There are two popular types of algorithms: model-based and data-driven ones. \begin{figure*}[htb] \centering \includegraphics[width=0.86\textwidth]{Pics/2.png} \caption{The proposed framework for single image dehazing via a model-based GAN. The generator is built on top of the DNN in \cite{CycleISP}, and the discriminator is based on the PatchGAN in \cite{1isola2017}. Each dual attention block (DAB) \cite{CycleISP} contains spatial attention and channel attention modules. Each recursive residual group (RRG) contains several DABs and one $3\times 3$ convolution layer.} \label{Fig3} \end{figure*} Many model-based dehazing algorithms were proposed by using the model (\ref{eq1}). Both the atmospheric light and the transmission map have to be estimated by the model-based methods. Since the number of unknowns is larger than the number of observations, single image dehazing is an ill-posed problem. Different priors were proposed to reduce the number of unknowns \cite{He,1fattal2008,1zhuq2015,CVPR16}. Among them, the well-known DCP in \cite{He} assumes that each local patch of an outdoor haze-free image includes at least one pixel which has at least one color channel with a small intensity. The DCP introduces morphological artifacts to the restored image; the guided image filter (GIF) \cite{1he2013} and the weighted GIF (WGIF) \cite{Li2014} can be employed to reduce them. A color-line prior was proposed in \cite{1fattal2008} using the observation that small image patches typically exhibit a 1D distribution in the RGB color space. The haze line prior (HLP) in \cite{CVPR16} is based on the observation that a finite number of distinct colors, which can be grouped into clusters, suffices to represent the colors of a haze-free image in the RGB space \cite{1orchard1991}. All the corresponding pixels in each cluster form a haze line in the hazy image. The HLP assumes that there exists at least one haze-free pixel in each haze line. The artifacts of the transmission map caused by the HLP are reduced using the weighted least squares (WLS) framework \cite{1farb2008}. The priors in \cite{He,CVPR16} are robust to the haze degree in hazy images. Unfortunately, neither the DCP in \cite{He} nor the HLP in \cite{CVPR16} holds for pixels in the sky region.
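For readers who wish to experiment with these priors, the sketch below computes the dark channel that underlies the DCP. It is a minimal Python/NumPy illustration; the window radius and the toy input are our own choices rather than settings taken from \cite{He}.
\begin{verbatim}
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(image, radius=7):
    """Dark channel of an RGB image: min over color channels,
    followed by a min filter over a (2*radius+1) square window."""
    per_pixel_min = image.min(axis=2)      # min over R, G, B
    return minimum_filter(per_pixel_min, size=2 * radius + 1)

# Toy usage on a random "image" in [0, 1]; for a haze-free outdoor
# image, the dark channel would be close to zero almost everywhere.
Z = np.random.rand(64, 64, 3)
print(dark_channel(Z).shape)  # (64, 64)
\end{verbatim}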
The DDAP in \cite{1LiZG2021} extends the DCP such that the prior applies to all pixels in a hazy image. In addition, these physical priors are not always reliable, leading to inaccurate transmission estimates and producing artifacts in the restored images. The concept of haze line in \cite{CVPR16} was adopted in \cite{1LiZG2021} to form a haze line averaging algorithm which can significantly reduce the morphological artifacts caused by the DCP and the DDAP. The transmission map is usually refined by solving a complex optimization problem \cite{He,CVPR16}. Fortunately, the optimal solution can be approximated by a pre-trained DNN \cite{1nir2021}. Amplification of noise in the sky region could also be an issue for the model-based dehazing algorithms \cite{glots2020,Li2015}. There are many data-driven dehazing algorithms, especially deep learning based ones. Convolutional neural network (CNN) based algorithms such as \cite{Ren,cai,1qin2020,1dongh2020} have the potential to obtain higher SSIM or PSNR values, but the dehazed images look blurry and are not photo-realistic. Their performance is poor if the haze is heavy \cite{glots2020}. Qu et al. \cite{Qu_2019_CVPR} proposed an interesting GAN-based dehazing algorithm on top of the physical scattering model \cite{koschmider}. Dong et al. \cite{1dong2020} proposed a fully end-to-end GAN with a fusion discriminator (FD-GAN) which takes frequency information as an additional prior for single image dehazing. GAN-based dehazing algorithms are able to produce photo-realistic images even though the generated textures could differ from the real ones. This problem could be addressed by using the hazy image dataset in \cite{1liuw2021} to train the GANs. Large numbers of paired hazy and clean images are necessary for the training of the data-driven methods. Most data-driven methods are trained on synthetic hazy images \cite{1zhang2018,1shao2020}. Due to the domain gap between synthetic and real-world data, recent investigations indicate that the data-driven algorithms perform well on synthetic hazy images but poorly on real-world ones \cite{glots2020,1liuw2021}. This is because it is almost impossible to obtain ground-truth depth information in most real-world scenarios. The DCP and GANs were used in \cite{1yang2018,1li2020,1zhao2021} to address the lack of paired training images for real-world hazy images. The model-based dehazing algorithms can remove haze regardless of the haze degree, but they could introduce artifacts due to inaccurate estimation of $A$ and $t$. On the other hand, the data-driven ones produce visually pleasing results, but they do not perform well on real-world hazy images, especially those with heavy haze. In this paper, the model-based and data-driven dehazing algorithms are fused together to develop a neural augmented single image dehazing framework. In such a framework, the model-based dehazing algorithm is helpful to reduce the complexity of the data-driven one. The DCP \cite{He} was also applied to estimate the transmission map and atmospheric light in the RefineDNet \cite{1zhao2021}. However, only the transmission map is refined by a data-driven approach in \cite{1zhao2021}, while both the atmospheric light and the transmission map are refined by data-driven approaches in the proposed algorithm.
The GIF \cite{1he2013} was used in \cite{1zhao2021} to reduce the morphological artifacts caused by the DCP, while the simple haze line averaging algorithm in \cite{1LiZG2021} is used in the proposed algorithm. As pointed out in \cite{1zheng2020}, the simplicity of the model-based method is very important for the neural augmentation. A perceptual fusion strategy is also adopted in the RefineDNet to blend two different dehazing outputs, which further increases the complexity of the RefineDNet. In addition, the RefineDNet is trained using 13,990 synthetic hazy images from the RESIDE dataset \cite{1LiB2018}, while the proposed framework is trained using only 1000 hazy images. \section{Neural Augmented Single Image Dehazing} \label{newalgorithm} As in the algorithms in \cite{He,1zhuq2015,CVPR16,1LiZG2021}, both the atmospheric light $A$ and the transmission map $t$ are required by the proposed algorithm to restore the haze-free image $I$. They are initialized by model-based methods and refined by data-driven approaches. The model-based and data-driven approaches form a neural augmentation \cite{1nir2021}. The overall framework, shown in Figure \ref{Fig3}, is deliberately simple, and it well utilizes the approximation capability of multi-layer neural networks \cite{1hornik1989}. \begin{figure*}[!htb] \centering{ \subfigure[a hazy image]{\includegraphics[width=1.3in]{Pics/forest_haze.jpg}} \subfigure[an initial transmission map]{\includegraphics[width=1.3in]{Pics/Initial_Transmission_Map}} \subfigure[a refined transmission map by haze line averaging]{\includegraphics[width=1.3in]{Pics/refined_transmission_map}} \subfigure[a dehazed image by initial transmission map]{\includegraphics[width=1.3in]{Pics/forest_dehaze_I2R_ini}} \subfigure[a dehazed image by refined transmission map]{\includegraphics[width=1.3in]{Pics/forest_dehaze_I2R_ref}}} \caption{Comparison of initial and refined transmission maps as well as their corresponding dehazed images.} \label{FigFig2} \end{figure*} \subsection{Model-Based Initialization of $A$ and $t$} \subsubsection{Initialization of $A$} It can be derived from the model (\ref{eq1}) that, for a pixel $p$ in the sky region, the transmission $t(p)$ is almost zero and the value of $Z_c(p)$ is close to $A_c$. Thus, the variance of pixel values is usually very small and the average of pixel values is usually large in the sky region. Based on this observation, a novel hierarchical searching method was proposed in \cite{Three} to estimate the atmospheric light $A$, and it is adopted by the proposed framework to obtain an initial value of $A$. The hazy image $Z$ is divided into four rectangular regions. The score of each region is defined as the average pixel value minus the standard deviation of the pixel values, computed within the region over all color channels. The region with the highest score is further divided into four smaller regions. This process is repeated until the size of the selected region is smaller than a pre-specified threshold such as $32\times 32$. Within the finally selected region, the color vector that minimizes the distance $\|[Z_R(p)-255, Z_G(p)-255, Z_B(p)-255]\|$ is selected as the atmospheric light. The initial $A$ might not be very accurate; it will be refined by a data-driven approach.
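To make the hierarchical search concrete, the following Python sketch implements the quadtree-style procedure described above. It is an illustrative re-implementation under our own simplifications (images scaled to $[0,1]$, so $1.0$ plays the role of $255$), not the reference code of \cite{Three}.
\begin{verbatim}
import numpy as np

def estimate_atmospheric_light(Z, min_size=32):
    """Hierarchical (quadtree) search for the atmospheric light.
    Z: float array of shape (H, W, 3) with values in [0, 1]."""
    region = Z
    while min(region.shape[:2]) > min_size:
        h, w = region.shape[:2]
        quads = [region[:h//2, :w//2], region[:h//2, w//2:],
                 region[h//2:, :w//2], region[h//2:, w//2:]]
        # Score = average pixel value minus standard deviation,
        # over all color channels within the region.
        scores = [q.mean() - q.std() for q in quads]
        region = quads[int(np.argmax(scores))]
    # Pick the pixel closest to pure white within the final region.
    flat = region.reshape(-1, 3)
    dist = np.linalg.norm(flat - 1.0, axis=1)
    return flat[np.argmin(dist)]

A = estimate_atmospheric_light(np.random.rand(256, 256, 3))
print(A)  # one RGB vector: the initial atmospheric light
\end{verbatim}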
\subsubsection{Initialization of $t$} The transmission map $t$ is first estimated using the DDAP in \cite{1LiZG2021}, which is applicable to all pixels in the hazy image $Z$. The dark direct attenuation of a hazy image $Z$ is defined as \cite{1LiZG2021} \begin{align} \label{rho} \psi_{\rho}(I*t)(p)=\min_{p'\in \Omega_{\rho}(p)}\{\min_{c\in\{R, G, B\}}\{I_c(p')t(p')\}\}, \end{align} where $\Omega_{\rho}(p)$ is a square window centered at the pixel $p$ of radius $\rho$, which is usually selected as 7. The DDAP assumes that $\psi_{\rho}(I*t)(p)$ is zero for every pixel $p$. An initial transmission map $t_0(p)$ is estimated by using the DDAP as \begin{align} \label{transmap} t_0(p)=1- \psi_{\rho}(\frac{Z}{A})(p). \end{align} For any pixel $p$, it can be derived from the equations (\ref{eq1}) and (\ref{transmap}) that \begin{align} \label{topzp} \frac{t_0(p)}{\|Z(p)-A\|}=\frac{1}{\|I(p)-A\|}. \end{align} However, the minimum operation in (\ref{rho}) could introduce morphological artifacts. Therefore, the equation (\ref{topzp}) becomes \begin{align} \label{topzp1} \frac{t_0(p)}{\|Z(p)-A\|}=\frac{1}{\|I(p)-A\|}+e(p), \end{align} where $e(p)$ is the morphological artifact caused by the DDAP. As shown in Figure \ref{FigFig2}(d), there are indeed visible morphological artifacts in the restored image $I$ if $t_0(p)$ is directly applied to remove the haze from the hazy image $Z$. The haze line averaging method in \cite{1LiZG2021} is adopted to reduce the morphological artifacts. The haze line averaging is built on the following haze line $H(p'')$ \cite{CVPR16}: \begin{align} \label{hijij} H(p'')=\{p|\sum_{c\in \{R, G,B\}}|I_c(p'')-I_c(p)|=0\}. \end{align} Each haze line can be identified by a color-shifted hazy pixel, defined as $(Z(p)-A)$. The vector $(Z(p)-A)$ can be converted into spherical coordinates $[r(p), \theta(p), \psi(p)]$, where $\theta(p)$ and $\psi(p)$ are the longitude and latitude, respectively, and $r(p)$ is $\|Z(p)-A\|$. The morphological artifacts in the equation (\ref{topzp1}) can be reduced by the following haze line averaging \cite{1LiZG2021}: \begin{align} \label{myrrmaxij11} t(p)=\frac{ \sum_{p'\in H_s(p'')}t_0(p')}{\sum_{p'\in H_s(p'')}r(p')}r(p), \end{align} where $H_s(p'')$ is a subset of $H(p'')$, obtained as follows: a 2-D histogram binning of $\theta$ and $\psi$ with uniform edges in the range $[0,2\pi]\times [0,\pi]$ is adopted to generate an initial set of haze lines $H(p'')$. The bin size is chosen as $\pi/720\times \pi/720$. An upper bound $\nu$, usually selected as 200, is imposed on the cardinality of the final sets. Let $|H(p'')|$ be the cardinality of the set $H(p'')$. Each non-empty set $H(p'')$ is divided into $\max\{1,|H(p'')|/\nu\}$ sub-sets. The haze line averaging (\ref{myrrmaxij11}) integrates the DDAP and the concept of haze line in \cite{CVPR16} seamlessly. As illustrated in Figure \ref{FigFig2}(e), the morphological artifacts are significantly reduced by the haze line averaging (\ref{myrrmaxij11}). On the other hand, there are remaining artifacts, which will be removed by a data-driven refinement.
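The initialization of $t$ can be prototyped in a few lines. The sketch below computes the DDAP estimate (\ref{transmap}) and a simplified version of the haze line averaging (\ref{myrrmaxij11}); the binning resolution is reduced for brevity and the sub-set splitting by $\nu$ is omitted, so it is an illustration rather than a faithful re-implementation of \cite{1LiZG2021}.
\begin{verbatim}
import numpy as np
from scipy.ndimage import minimum_filter

def ddap_transmission(Z, A, radius=7):
    """Initial transmission map via the DDAP, eq. (3)."""
    normalized = Z / A                       # broadcast over channels
    psi = minimum_filter(normalized.min(axis=2), size=2 * radius + 1)
    return 1.0 - psi

def haze_line_average(Z, A, t0, n_bins=180):
    """Simplified haze line averaging, eq. (7): pixels are grouped by
    the direction of Z - A, and t0 is averaged along each haze line."""
    D = (Z - A).reshape(-1, 3)
    r = np.linalg.norm(D, axis=1) + 1e-12
    theta = np.arctan2(D[:, 1], D[:, 0]) % (2 * np.pi)   # longitude
    psi = np.arccos(np.clip(D[:, 2] / r, -1, 1))         # latitude
    bins = (np.floor(theta / (2 * np.pi) * n_bins) * n_bins
            + np.floor(psi / np.pi * (n_bins - 1)))
    t = np.empty_like(r)
    for b in np.unique(bins):
        idx = bins == b
        t[idx] = t0.ravel()[idx].sum() / r[idx].sum() * r[idx]
    return t.reshape(t0.shape)

Z = np.random.rand(64, 64, 3)        # toy hazy image
A = np.array([0.9, 0.9, 0.9])        # toy atmospheric light
t0 = ddap_transmission(Z, A)
t = haze_line_average(Z, A, t0)      # artifact-reduced transmission
\end{verbatim}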
\subsection{Data-Driven Refinement of $A$ and $t$} \subsubsection{Structure of Data-Driven Refinement} Instead of using the GIF \cite{1he2013} or the WGIF \cite{Li2014}, a DNN serves to refine the transmission map $t$ in the equation (\ref{myrrmaxij11}) in this subsection. The atmospheric light $A$ is also refined by a DNN, as shown in Figure \ref{Fig3}; the DNN for the atmospheric light is simpler than the one for the transmission map. It is very difficult or even impossible to obtain the corresponding haze-free image $I$ for a real-world hazy image $Z$. Thus, the GAN \cite{1gan2014} is utilized to refine the atmospheric light and transmission map. As indicated in the introduction, the initial transmission map and atmospheric light are noisy. The generator is thus built on top of the recent DNN for noise reduction in \cite{CycleISP}. The residual net in \cite{CycleISP} is replaced by the Res2Net in \cite{1gao2021} to capture multi-scale features at a granular level. Both the spatial attention module and the channel attention module in \cite{CycleISP} are utilized. These modules suppress the less useful features and only allow the propagation of more informative ones; they therefore deal effectively with the uneven distribution of haze. The discriminator is based on the PatchGAN in \cite{1isola2017}. \subsubsection{Loss Functions of Data-Driven Refinement} Besides the structure of the proposed GAN, loss functions also play an important role. Since both the atmospheric light $A$ and the transmission map $t(p)$ are refined by the GAN, the loss functions are defined using the following restored image $I$: \begin{equation} \label{Iij} I(p)=\frac{Z(p)-A}{\max\{t(p),0.1\}}+A. \end{equation} The proposed loss functions are associated with the training datasets. Since it is difficult for synthesized images to faithfully represent the distribution of real-world hazy images, it is arguable whether the GAN should be trained using synthetic hazy images only \cite{1yang2018}. Thus, the proposed GAN is trained using 500 images from the MRFID dataset \cite{1liuw2021} and 500 images with heavy haze regenerated from the RESIDE dataset \cite{1LiB2018}. The RESIDE dataset is an image dehazing benchmark, and ground-truth images are available for it. The MRFID dataset contains foggy and clean images of 200 outdoor scenes under different lighting conditions. For each scene, one clear image and four foggy images of different densities, defined as slightly, moderately, highly, and extremely foggy, are manually selected from images taken of the scene over the course of one calendar year. The generator is trained by such a hybrid strategy to fool the discriminator, while the discriminator attempts to distinguish between the clean images and the restored images. As such, the proposed GAN aligns the distribution of the restored images with that of the clean images. Due to the different lighting conditions, the clean images are not ground-truth ones even though they are taken from the same scenes. Conventional loss functions such as the $l_1$ and $l_2$ losses are therefore not applicable to the MRFID dataset. A new loss function is proposed by introducing an extreme channel. Let $\bar{I}_c(p)$ be defined as \begin{align} \bar{I}_c(p)=\min\{I_c(p),255-I_c(p)\}, \end{align} and the extreme channel of the image $I$ be defined as \begin{align} \phi_{\rho}(I)(p)=\psi_{\rho}(\bar{I})(p). \end{align} The limitations of the DCP in sky regions and on high-brightness objects are avoided via the extreme channel. Consider a pair of a hazy image $Z$ and a clean image $T$ captured from the same scene, and let $I$ be the haze-free image corresponding to $Z$. It follows from the conventional imaging model \cite{1gonz2002} that \begin{align} [T(p), I(p)]=[L_T(p), L_I(p)]R(p), \end{align} where $L_T(p)$ and $L_I(p)$ are the intensities of the ambient light when the images $T$ and $I$ are captured.
$R(p)$ is the ambient reflectance coefficient of the surface, and it depends strongly on the smoothness and texture of the surface. Since both $L_T(p)$ and $L_I(p)$ are constant in a small neighborhood, $\phi_{\rho}(I)(p)$ and $\phi_{\rho}(T)(p)$ are usually determined by the reflectance $R(p)$. Thus, it can be easily derived that \begin{align} \phi_{\rho}(I)(p)\approx \phi_{\rho}(T)(p) \approx 0. \end{align} \begin{figure}[!htb] \centering{ {\includegraphics[width=3.3in]{Pics/extreme_channel_2}} } \caption{MSE between the extreme channels of hazy and clean images in the dataset \cite{1liuw2021}.} \label{FigFig3636} \end{figure} The mean squared error (MSE) between the extreme channels of the hazy and clean images in the MRFID dataset \cite{1liuw2021} is shown in Figure \ref{FigFig3636}. It can be observed that the MSE usually decreases as the haze degree is reduced from extremely to slightly foggy. The first loss function for the MRFID dataset is defined using the extreme channels $\phi_{\rho}(I)$ and $\phi_{\rho}(T)$ as \begin{equation} L_e=\frac{1}{WH}\sum_p \|\phi_{\rho}(I)(p)-\phi_{\rho}(T)(p)\|_2^2, \end{equation} where $W$ and $H$ are the width and height of the image $I$, respectively. The loss function $L_e$ ensures that the haze is well removed in the restored image $I$. The second loss function for the MRFID dataset is defined using the gradients of the restored image $I$ as \begin{align} L_t = \frac{1}{WH}{\displaystyle \sum_{p,c}}(|\nabla_h I_c(p)|+|\nabla_v I_c(p)|), \end{align} where $\nabla_h$ and $\nabla_v$ represent the horizontal and vertical gradients, respectively. The loss function $L_t$ ensures that the morphological artifacts are well reduced in the restored image $I$. The third, adversarial loss function for the MRFID dataset is defined using the restored image $I$ and the clean image $T$ as \begin{align} L_{adv}=\log(D(T))+\log(1-D(I)), \end{align} where the PatchGAN in \cite{1isola2017} is adopted as the discriminator $D$. The discriminator effectively models the image $I$ as a Markov random field by assuming that pixels separated by more than a patch diameter are independent. The resultant loss can be understood as a type of texture loss. The loss function $L_{adv}$ ensures that the textures and reflections of the images $I$ and $T$ are almost the same. The overall loss function for the images in the MRFID dataset is defined as \begin{equation} \label{eq14} L_{d,1}= w_{adv}L_{adv}+w_e L_e + L_t, \end{equation} where $w_{adv}$ and $w_e$ are two constants whose values are selected as 20 and 10, respectively, unless otherwise specified in this paper. Experimental results in Section \ref{lossfunctionvaarification} show that the loss function $L_t$ and the patch-based loss functions $L_e$ and $L_{adv}$ tend to smooth the restored image. The loss functions for the images in the RESIDE dataset are different from those for the images in the MRFID dataset. Since the ground-truth images are available in the RESIDE dataset, the conventional $l_1$ loss function, the feature-wise loss function via the fine-tuned VGG network for material recognition in \cite{1bell2015}, and the color distortion function in \cite{Endo} can all be applied to define the loss functions for the corresponding restored images \cite{1li2020}. For simplicity, only the $l_1$ loss function is selected. The loss function for the images in the RESIDE dataset is defined as \begin{align} L_{d,2}=\frac{1}{WH}\sum_p|I(p)-T(p)|, \end{align} where the image $T$ is the ground-truth image.
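For concreteness, the MRFID losses can be prototyped as below in PyTorch (consistent with the paper's stated PyTorch implementation). The min-pooling window, the image scaling to $[0,1]$, and the toy discriminator are placeholders of our own, so the sketch only illustrates the structure of $L_e$, $L_t$ and $L_{adv}$ as written in the equations above.
\begin{verbatim}
import torch
import torch.nn.functional as F

def extreme_channel(img, radius=7):
    """phi_rho(I): min over channels of min(I, 1 - I), followed by a
    spatial min-pool of window 2*radius+1 (images scaled to [0, 1])."""
    bar = torch.minimum(img, 1.0 - img).amin(dim=1, keepdim=True)
    return -F.max_pool2d(-bar, kernel_size=2 * radius + 1,
                         stride=1, padding=radius)

def loss_extreme(I, T):          # L_e, MSE of extreme channels
    return F.mse_loss(extreme_channel(I), extreme_channel(T))

def loss_gradient(I):            # L_t, mean absolute gradients
    dh = (I[..., :, 1:] - I[..., :, :-1]).abs().mean()
    dv = (I[..., 1:, :] - I[..., :-1, :]).abs().mean()
    return dh + dv

def loss_adversarial(D, I, T):   # L_adv, transcribed as stated
    eps = 1e-8
    return torch.log(D(T) + eps).mean() + torch.log(1 - D(I) + eps).mean()

# Toy usage with random tensors and a trivial stand-in discriminator.
I = torch.rand(2, 3, 64, 64)     # restored images
T = torch.rand(2, 3, 64, 64)     # clean (not ground-truth) images
D = lambda x: torch.sigmoid(x.mean(dim=(1, 2, 3)))
L_d1 = 20 * loss_adversarial(D, I, T) + 10 * loss_extreme(I, T) \
       + loss_gradient(I)
\end{verbatim}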
The atmospheric light $A$ and the transmission map $t(p)$ in the equation (\ref{myrrmaxij11}) are refined by minimizing the following overall loss function in each batch: \begin{align} L_d=w_RL_{d,2}+L_{d,1}, \end{align} where $w_R$ is a constant whose value is selected as 100 if not specified otherwise. The first and second terms are analogous to the data-fidelity and regularization terms of the optimization problems in \cite{Li2015,1farb2008}, respectively. Our implementation is built on the PyTorch framework with 4 NVIDIA GP100 GPUs. Both mirroring and randomly cropped $128\times 128$ patches from each input are employed to augment the training data. The DNNs in Figure \ref{Fig3} are trained using the proposed loss functions and an Adam optimizer with a batch size of $16$. The learning rate is initially set to $10^{-5}$ and then decreased using a cosine annealing schedule. The proposed algorithm is summarized in Algorithm \ref{algo3}. \begin{algorithm} \label{algo3} { \caption{{\small Model-based single image deep dehazing}} \begin{enumerate} \item[Step 1.] Initialize the atmospheric light $A_0$ from the hazy image $Z$ using the method in \cite{Three}. \item[Step 2.] Initialize the transmission map $t_0$ by using the DDAP as in the equation (\ref{transmap}). \item[Step 3.] Reduce the morphological artifacts of $t_0$ via the nonlocal haze line averaging (\ref{myrrmaxij11}). \item[Step 4.] Refine the atmospheric light and transmission map using the GAN in Figure \ref{Fig3}. \item[Step 5.] Restore the haze-free image $I$ via the equation (\ref{Iij}). \end{enumerate}} \end{algorithm} It can be easily verified from the images in Figure \ref{FigFig2} that the dynamic range of the restored image is higher than that of the hazy image \cite{1zheng2013,1kouf2018}. Both the global contrast and the local contrast of the restored image are larger than those of the hazy image. The restored image usually looks darker than the hazy image, and it can be brightened using an existing single image brightening algorithm. \section{Experimental Results} \label{experiment} Extensive experimental results on synthetic and real-world hazy images are provided in this section to validate the proposed model-based deep learning framework, with emphasis on illustrating how the model-based method and the data-driven method {\it complement} each other in the proposed neural augmentation. \subsection{Datasets} The MRFID dataset \cite{1liuw2021} contains hazy and clean images of 200 outdoor scenes under different lighting conditions. The proposed GAN is trained using 125 outdoor scenes from the MRFID dataset, with the corresponding 500 hazy images, as well as 500 images with heavy haze generated using ground-truth images from the RESIDE dataset \cite{1LiB2018}. Depth is estimated using the algorithm in \cite{1lizq2018}. The scattering coefficients of 100 images are randomly generated in $[1.2, 2.0]$ and those of the others are randomly selected in $[2.5, 3.0]$. The $A_c$'s are randomly generated in $[0.625, 1.0]$ for each color channel $c$ independently. The 100 hazy images of another 25 outdoor scenes from the MRFID dataset are selected as the validation set. All these hazy images are randomly selected from the two datasets.
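The regeneration of heavy-haze training images from the RESIDE ground truths follows Koschmieder's law (\ref{eq1}) directly. A sketch of this protocol is given below; the random depth map here is only a stand-in for the monocular depth estimate of \cite{1lizq2018}, so the sampling ranges are the only part taken from the text above.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def synthesize_haze(I, depth, beta_range=(2.5, 3.0), A_range=(0.625, 1.0)):
    """Regenerate a heavy-haze image from a clean image I via eq. (1):
    Z = I * t + A * (1 - t), with t = exp(-beta * depth)."""
    beta = rng.uniform(*beta_range)
    A = rng.uniform(*A_range, size=3)     # per-channel atmospheric light
    t = np.exp(-beta * depth)[..., None]
    return I * t + A * (1.0 - t)

I = np.random.rand(64, 64, 3)     # clean image (placeholder)
depth = np.random.rand(64, 64)    # stand-in for an estimated depth map
Z = synthesize_haze(I, depth)     # synthetic heavy-haze image
\end{verbatim}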
The test images comprise 500 outdoor hazy images from the synthetic objective testing set (SOTS) \cite{1LiB2018} and 79 real-world hazy images from \cite{1LiZG2021}, which include 31 images from the RESIDE dataset \cite{1LiB2018} and 19 images from \cite{1fattal2008} and the Internet. \subsection{Comparison of Different Training Strategies} \label{lossfunctionvaarification} Besides the proposed training strategy, two alternative training strategies are to: 1) use the $500$ synthetic heavy-haze images from the RESIDE dataset only, with the $l_1$ loss function; and 2) use the $500$ real-world hazy images from the MRFID dataset only, with the three unsupervised loss functions in the equation (\ref{eq14}). The former is a CNN-based refinement while the latter is a GAN-based refinement. Their network structures are the same as those in Figure \ref{Fig3}. As shown in Figure \ref{Fig6}, the trees in the red box look a little blurry for the CNN-based refinement. Although the GAN-based refinement is trained with real-world hazy images, all the loss functions in the equation (\ref{eq14}) tend to over-smooth the restored image. Consequently, the restored image is even blurrier, as shown in Figure \ref{Fig6}. The average DHQI values of the 100 hazy images in the validation dataset from the MRFID \cite{1liuw2021} are shown in Table \ref{table4}. Clearly, the proposed framework obtains a higher average DHQI value than the two alternative strategies. \begin{table}[htbp] \centering \caption{Average DHQI values of $100$ hazy images in the validation dataset for different training strategies $\uparrow$.} {\small \begin{tabular}{c|c|c|c} \hline CNN based &GAN based & Neural augmented &Data-driven \\ \hline 53.72 &55.09 &55.13 &47.21\\\hline \end{tabular}} \label{table4} \end{table} \begin{figure}[htb] \centering \includegraphics[width=0.48\textwidth]{Pics/4.jpg} \caption{Comparison of different training strategies. From left to right: (a) hazy images, and dehazed images by (b) the CNN-based strategy, (c) the GAN-based strategy, and (d) the proposed one. } \label{Fig6} \end{figure} \begin{table*}[!htb] \centering \caption{Average PSNR and SSIM values of 500 outdoor hazy images in the SOTS for different algorithms $\uparrow$.} {\small \begin{tabular}{c|c|c|c|c|c|c|c|c} \hline & FD-GAN \cite{1dong2020}& RefineDNet \cite{1zhao2021} & FFA-Net \cite{1qin2020} & PSD \cite{1chen2021} & DCP \cite{1he2013} & HLP \cite{CVPR16} & MSBDN \cite{1dongh2020} & Ours \\\hline PSNR & 20.78 & 20.80 & 32.13 & 15.15 & 17.49 & 18.06 & 30.25 & 21.58\\\hline SSIM & 0.863 & 0.898 & 0.979 & 0.735 & 0.855 & 0.849 & 0.944 & 0.874\\ \hline \end{tabular}} \label{table3} \end{table*} \begin{table*}[htb] \centering \caption{Average DHQI values of 79 real-world outdoor hazy images for different algorithms $\uparrow$.} {\small \begin{tabular}{c|c|c|c|c|c|c|c|c} \hline & FD-GAN \cite{1dong2020}& RefineDNet \cite{1zhao2021} & FFA-Net \cite{1qin2020} & PSD \cite{1chen2021} & DCP \cite{1he2013} & HLP \cite{CVPR16} & MSBDN \cite{1dongh2020} & Ours \\\hline DHQI &51.00 &57.57 &55.33 &50.60 &51.92 &52.75 &54.32 &59.86 \\ \hline \end{tabular}} \label{table2} \end{table*}
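The full-reference metrics reported in the tables can be reproduced with standard implementations; the snippet below computes PSNR and SSIM with scikit-image on placeholder arrays (the DHQI of \cite{1min2019} is omitted here, as we are not aware of a standard Python implementation).
\begin{verbatim}
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

gt = np.random.rand(64, 64, 3)        # ground-truth haze-free image
restored = np.random.rand(64, 64, 3)  # output of a dehazing algorithm

psnr = peak_signal_noise_ratio(gt, restored, data_range=1.0)
ssim = structural_similarity(gt, restored, channel_axis=2, data_range=1.0)
print(f"PSNR = {psnr:.2f} dB, SSIM = {ssim:.3f}")
\end{verbatim}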
\subsection{Comparison with the Data-Driven Method} In order to verify the contribution of the initial $A$ and $t$ from the model-based approach in the proposed neural augmented dehazing algorithm, $A$ and $t$ are learnt directly by the same DNNs as in Figure \ref{Fig3}, without utilizing the initial values from the model-based method. The average DHQI values of the $100$ hazy images in the validation dataset from the MRFID \cite{1liuw2021} over different epochs are shown in Figure \ref{Fig7}. The proposed framework converges faster and obtains higher DHQI values than the data-driven method. This is because the residual information is sparser and thus easier to learn. The DHQI values of the final dehazed images of both methods are shown in Table \ref{table4}. Clearly, the proposed framework obtains a higher average DHQI value than the data-driven method. \begin{figure}[!htb] \centering \includegraphics[width=0.465\textwidth]{Pics/data_driven.jpg} \caption{Comparison of DHQI values between the data-driven method and the proposed neural augmentation over different training epochs. } \label{Fig7} \end{figure} \begin{figure*}[!htb] \centering \includegraphics[width=0.925\textwidth]{Pics/6.png} \caption{Comparison of different haze removal algorithms. From left to right: hazy images, and dehazed images by FD-GAN \cite{1dong2020}, RefineDNet \cite{1zhao2021}, FFA-Net \cite{1qin2020}, PSD \cite{1chen2021}, DCP \cite{1he2013}, HLP \cite{CVPR16}, MSBDN \cite{1dongh2020}, and ours, respectively. } \label{Fig6789} \end{figure*} \subsection{Comparison of Different Dehazing Algorithms} The proposed dehazing algorithm is compared with seven state-of-the-art dehazing algorithms, namely RefineDNet \cite{1zhao2021}, FFA-Net \cite{1qin2020}, PSD \cite{1chen2021}, DCP \cite{1he2013}, HLP \cite{CVPR16}, MSBDN \cite{1dongh2020}, and FD-GAN \cite{1dong2020}. Among them, the HLP \cite{CVPR16} and the DCP \cite{1he2013} are model-based algorithms, the proposed algorithm and the RefineDNet \cite{1zhao2021} are combinations of model-based and data-driven approaches, while the others are data-driven algorithms. The size of the input images must be divisible by 16 for the MSBDN \cite{1dongh2020}. Therefore, the input images are resized for the MSBDN \cite{1dongh2020}, and the restored images are resized back to their original sizes. The sizes of several images exceed the GPU memory when using the FD-GAN \cite{1dong2020}; their sizes are reduced to half of the original ones, and the dehazed images are resized back to their original sizes. The PSNR and SSIM are first adopted to compare the proposed algorithm with those in \cite{1zhao2021}, \cite{1qin2020}, \cite{1chen2021}, \cite{1he2013}, \cite{CVPR16}, \cite{1dongh2020}, and \cite{1dong2020} using 500 synthetic outdoor hazy images from the SOTS \cite{1LiB2018}. The average PSNR and SSIM values of the eight algorithms are given in Table \ref{table3}. The FFA-Net \cite{1qin2020} and the MSBDN \cite{1dongh2020} are two CNN-based dehazing algorithms, and they are optimized to provide high PSNR and SSIM values on the SOTS dataset. Both the proposed algorithm and the algorithm in \cite{1zhao2021} indeed outperform the model-based algorithms in \cite{CVPR16} and \cite{1he2013}. The quality index DHQI in \cite{1min2019} is then adopted to compare the proposed algorithm with those in \cite{1zhao2021}, \cite{1qin2020}, \cite{1chen2021}, \cite{1he2013}, \cite{CVPR16}, \cite{1dongh2020}, and \cite{1dong2020} using the 79 real-world hazy images in \cite{1LiZG2021}. The average DHQI values of the 79 real-world outdoor hazy images are given in Table \ref{table2}. Clearly, the proposed algorithm and the algorithm in \cite{1zhao2021} outperform the others from the DHQI point of view. Finally, all these dehazing algorithms are compared subjectively in Figure \ref{Fig6789}.
Readers are invited to view the electronic version of the figures and to zoom in so as to better appreciate the differences among the images. Although the FFA-Net \cite{1qin2020} and the MSBDN \cite{1dongh2020} achieve high PSNR and SSIM values, their dehazed results are a little blurry and not photo-realistic. In addition, the haze is not removed well when it is heavy. The DCP \cite{1he2013}, HLP \cite{CVPR16}, RefineDNet \cite{1zhao2021}, FD-GAN \cite{1dong2020}, and the proposed algorithm can be applied to generate photo-realistic images. There are visible morphological artifacts in the images restored by the RefineDNet \cite{1zhao2021} and the FD-GAN \cite{1dong2020}. The DCP \cite{1he2013} and the HLP \cite{CVPR16} restore more vivid and sharper images at the expense of amplified noise in sky regions. \section{Concluding Remarks and Discussion} \label{conclusion} A new single image dehazing algorithm is introduced within a model-based deep learning framework. Both the transmission map and the atmospheric light are obtained by a neural augmentation which consists of model-based estimation and data-driven refinement. They are then applied to restore a haze-free image. Experimental results validate that the proposed algorithm removes haze well from both synthetic and real-world hazy images. The proposed neural augmentation also significantly reduces the amount of training data required.
\section{Introduction}\label{sec:intro} \subsection{Background} \label{ssec:background} The advances in machine learning (ML) over the past two decades have transformed many disciplines. Within the actuarial literature, we see an explosion of works that apply and develop ML methods for various actuarial tasks \citep[see][]{richman2021b, richman2021a}. Meanwhile, there are many distinctive challenges with insurance data that are not addressed by general-purpose ML algorithms. One prominent example is the usual presence of high-cardinality categorical features (i.e.~categorical features with many levels or \textit{categories}). These features often represent important risk factors in actuarial data. Examples include the occupation in commercial property insurance, or the cause of injury in workers' compensation insurance. Unfortunately, ML algorithms cannot ``understand'' (process) categorical features on their own. The classical textbook approach to this problem, and also the most popular one in practice, is \textit{one-hot encoding}. It turns a categorical feature of $q$ unique categories into numeric representations by constructing $q$ binary attributes, one for each category; for example, a categorical feature with three unique categories will be represented as $[1, 0, 0]$, $[0, 1, 0]$ and $[0, 0, 1]$. One-hot encoding works well with a small number of independent categories, which is the case with most applications in the ML literature \citep[e.g.][]{hastie2009}. However, issues arise as cardinality expands: (i) the orthogonality (i.e.~independence) assumption of one-hot encoding no longer seems appropriate, since the growing number of categories will inevitably start to interact as the feature space gets more and more crowded; (ii) the resultant high-dimensional feature matrix entails computational challenges, especially when used with already computationally expensive models such as neural networks; (iii) the often uneven distribution of data across categories makes it difficult to learn the behaviour of the rare categories. For a motivating example, consider the workers' compensation scheme of the \citet{stateofnewyork2022}. In the four million claims observed from 2000 to 2022, the cause of injury variable has 78 unique categories, with more than 200,000 observations (about 5\%) for the most common cause (lifting) and fewer than 1000 observations (0.025\%) for each of the 10 least common causes (e.g.~crash of a rail vehicle, or gunshot). The uneven coverage of claims presents a modelling challenge and has been documented by actuarial practitioners \citep[e.g.][]{pettifer2012}. Furthermore, the cause of injury variable alone may not serve well to differentiate claim severities. Injuries from the same cause can result in vastly different claims experiences. It seems natural to consider its interaction with the nature of injury and part of body variables, each with 57 reported categories. Exploring $4446~(78 \times 57)$ interaction categories is clearly infeasible with one-hot encoding. The example above highlights that one-hot encoding is an inadequate tool for handling high-cardinality categorical features. A few alternatives exist, but they have their own drawbacks, as listed below. It is worth pointing out that, while the following approaches appear very distinct, they share one common feature: they all encourage the categories to ``communicate'' with each other, in the sense that the information learned from one category should also help with the learning of the others.
\begin{enumerate} \def\labelenumi{(\roman{enumi})} \item \emph{Manual (or data-guided) regrouping of categories of similar risk behaviours}. The goal here is to reduce the number of categories so that the refined categories can be tackled with the standard one-hot encoding scheme. This is a workable approach, but manual regrouping requires significant domain inputs, which are expensive. Furthermore, data-driven methods such as clustering \citep[see e.g.][]{guiahi2017} require that the data be first aggregated by category, and this aggregation discards potentially useful information available at more granular levels. \item \emph{Entity embeddings from neural networks}. Proposed by \citet{guo2016}, this seems to be now the most popular approach in the ML-driven stream of actuarial literature \citep{delong2021, shi2021, kuo2021a}. Entity embeddings extract a low-dimensional numeric representation of each category, such that categories that are closer in embedding distance exhibit similar response values. However, as the entity embeddings are trained as an early component of a black-box neural network, they offer little transparency about their effects on the response. \item \emph{Generalised linear mixed models (GLMM) with the high-cardinality categorical features modelled as random effects}. This is another popular approach among actuaries, partly due to its high interpretability \citep[][]{antonio2007, verbelen2019}. However, GLMMs, as an extension of GLMs, also inherit their limitations---the same ones that have motivated actuaries to start turning away from GLMs and exploring ML solutions \citep[see, e.g.][]{al-mudafer2022,henckaerts2021}. \end{enumerate} Some recent work has appeared in the ML literature aiming to extend GLMMs to take advantage of the more capable ML models, by embedding an ML model into the linear predictor of the GLMMs. The most notable examples include GPBoost \citep{sigrist2022}, which combines GLMMs with a gradient boosting algorithm, and LMMNN \citep{simchoni2022}, which combines linear mixed models (LMM) with neural networks. Unfortunately, these models present their own limitations in the face of insurance data: GPBoost assumes an overly optimistic random effects variance, and LMMNN is limited to a Gaussian-distributed response. While the Gaussian assumption offers computational advantages (in terms of analytical tractability), it is often ill suited to the more general distributions that are required for the modelling of financial and insurance data. None of the existing techniques appears satisfactory for the modelling of high-cardinality categorical features. This paper seeks an alternative approach that more effectively leverages the information in the categories, offers greater transparency than entity embeddings, and demonstrates improved predictive performance. In the next section, we introduce our approach. \subsection{Contributions} Our work presents two main contributions. First, we take inspiration from the latest ML developments and propose a generalised mixed effects neural network called \textbf{GLMMNet}. The GLMMNet is an extension of the LMMNN proposed by \citet{simchoni2022}.
In a similar vein, the GLMMNet fuses a deep neural network with the GLMM structure: the \textbf{Net}work component of the model provides the flexibility required to capture complex non-linear relationships in the data, and the \textbf{GLMM}-like component allows a probabilistic interpretation and offers full transparency on the categorical variables (through the random effects estimates). In contrast with the Gaussian-based LMMNN, the key enhancement of the GLMMNet is its ability to model the entire class of exponential dispersion (ED) family distributions. The proposed extension requires a significant effort, as it involves moving away from analytical results. The flexibility it brings, however, makes the GLMMNet approach widely applicable to many insurance contexts, such as the prediction of claim frequency or the estimation of pure risk premiums. Secondly, we provide a systematic empirical comparison of some of the most popular approaches for modelling high-cardinality categorical features. Although the difficulty with such variables has been a long-standing issue in actuarial modelling, to the best of our knowledge, this paper is the first piece of work to extensively compare a range of the existing approaches (including our own GLMMNet). We believe that such a benchmarking study will be valuable, especially to practitioners facing the many options available to them. Importantly, the methodological extensions proposed in this paper, while motivated by actuarial applications, are not limited to this context and can have much wider applicability. The extension of the LMMNN to the GLMMNet suits any application where the response cannot be sufficiently modelled by a Gaussian distribution. This unparalleled flexibility with the form of the response distribution, in tandem with the computational efficiency of the algorithm (due to the fast and highly scalable variational inference used to implement GLMMNet), makes the GLMMNet a promising tool for many practical ML applications. \subsection{Outline of Paper} In Section \ref{sec:glmmnet}, we introduce the GLMMNet, which fuses neural networks with GLMMs to enjoy both the predictive power of deep learning models and the statistical strengths---specifically, the interpretability and likelihood-based estimation---of GLMMs. In Section \ref{sec:simulation}, we illustrate and compare the proposed GLMMNet against a range of alternative approaches across a spectrum of simulation environments, each designed to mimic specific elements of insurance data commonly observed in practice. Section \ref{sec:case-study} presents a real-life insurance case study, and Section \ref{sec:conclusion} concludes. The code used in the numerical experiments is available at \url{https://github.com/agi-lab/glmmnet}. \section{GLMMNet: A Generalised Mixed Effects Neural Network} \label{sec:glmmnet} This section describes our GLMMNet in detail. We start by defining some notation in Section~\ref{ssec:problem}. We then provide an overview of GLMMs in Section~\ref{ssec:glmm} to set the stage for the introduction of the GLMMNet. Sections~\ref{ssec:glmmnet}--\ref{ssec:glmmnet-predict} showcase the architectural design of the GLMMNet and provide the implementation details. \subsection{Setting and Notation} \label{ssec:problem} We first introduce some notation to formalise the problem discussed above. We write \(Y\) to denote the \emph{response variable}, which is typically one of claim frequency, claim severity or risk premium for most insurance problems.
We assume that the distribution of \(Y \in \mathcal{Y}\) depends on some \emph{covariates} or \emph{features}. In this work, we draw a distinction between standard features \(\mathbf{x} \in \mathcal{X}\) (i.e.~numeric features and low-dimensional categorical features) and the high-cardinality categorical features \(\mathbf{z} \in \mathcal{Z}\) consisting of \(K\) features each with \(q_i~(i = 1, \cdots, K)\) categories. The latter is the subject of our interest. All categorical features have been one-hot encoded in the representations $\mathbf{x}$ and $\mathbf{z}$. We assume \(\mathbf{x} \in \mathbb{R}^p\) and \(\mathbf{z} \in \mathbb{R}^q\), where \(p \in \mathbb{Z}\) and \(q = \sum_{i=1}^K q_i \in \mathbb{Z}\) represent the respective dimensions of the vectors. The empirical observations (i.e.~data) are denoted \(\mathcal{D}=(y_i, \mathbf{x}_i, \mathbf{z}_i)_{i=1}^n\), where \(n\) is the number of observations in the dataset. For convenience, let us also write \(\mathbf{y}=[y_1, y_2, \cdots, y_n]^\top \in \mathbb{R}^n, \mathbf{X} = [\mathbf{x}_1, \mathbf{x}_2, \cdots, \mathbf{x}_n]^\top \in \mathbb{R}^{n \times p}\) and \(\mathbf{Z} = [\mathbf{z}_1, \mathbf{z}_2, \cdots, \mathbf{z}_n]^\top \in \mathbb{R}^{n \times q}\). In general, we use bold font to denote vectors and matrices. The purpose of any predictive model is to learn the conditional distribution \(p(y|\mathbf{x}, \mathbf{z})\). It is generally assumed that the dependence of the response \(Y\) on the covariates is through a true but unknown function \(\mu:(\mathcal{X}, \mathcal{Z}) \to \mathcal{Y}\), such that \begin{align} p(y|\mathbf{x}, \mathbf{z}) = p(y|\mu(\mathbf{x}, \mathbf{z}), \boldsymbol{\phi}), \label{eq:predictive} \end{align} where \(\boldsymbol{\phi}\) represents a vector of any additional or auxiliary parameters of the likelihood. Most commonly $\mu(\mathbf{x}, \mathbf{z})$ will be the conditional mean \(\mathbb{E}(Y|\mathbf{x}, \mathbf{z})\), although it can be interpreted more generally. The predictive model then approximates \(\mu(\cdot)\) by some parametric function \(\hat{\mu}(\cdot)\). Specifically, a linear regression model will assume a linear structure for \(\mu(\cdot)\) with \(\boldsymbol{\phi} = \sigma^2_\epsilon\) to account for the error variance. Indeed, for most ML models, the function $\mu(\cdot)$ is the sole subject of interest (as opposed to the entire distribution $p(y|\mathbf{x}, \mathbf{z})$). There are many different options for choosing the function $\mu(\cdot)$. We discuss and compare the main ones in this paper; see also Section \ref{ssec:candidates}. In what follows, for notational simplicity we will now assume that \(\mathbf{z}\) consists of only one (high-cardinality) categorical feature, i.e.~\(K = 1\) and \(q = q_1\). The extension to multiple categorical features, however, is very straightforward; see also Section~\ref{sssec:re}. \subsection{Generalised Linear Mixed Models} \label{ssec:glmm} Mixed models were originally introduced to model multiple correlated measurements on the same subject (e.g. longitudinal data), but they also offer an elegant solution to the modelling of high-cardinality categorical features \citep{frees2014}. We now give a brief review of mixed models in the latter context. For further details, the books by \citet{mcculloch2001, gelman2007, denuit2019glm} are all excellent references for this topic. Let us start with the simplest linear model case. 
A one-hot encoded linear model assumes \begin{align} y | \mathbf{x}, \mathbf{z} \sim \mathcal{N}(\mu(\mathbf{x}, \mathbf{z}), \sigma_\epsilon^2), \text{ with } \mu(\mathbf{x}, \mathbf{z}) = \beta_0 + \mathbf{x}^\top \boldsymbol{\beta} + \mathbf{z}^\top \boldsymbol{\alpha}, \label{eq:nopool} \end{align} where \(\mathbf{x}\) are standard features (e.g.~sum insured, age), \(\mathbf{z}\) is a \(q\)-dimensional binary vector representation of a high-cardinality categorical variable (e.g.~occupation), $\beta_0 \in \mathbb{R}$, $\boldsymbol{\beta} = [\beta_1, \beta_2, \cdots, \beta_p]^\top \in \mathbb{R}^p$, and \(\boldsymbol{\alpha} = [\alpha_1, \alpha_2, \cdots, \alpha_q]^\top \in \mathbb{R}^q\) with \(\sum_{i=1}^q \alpha_i = 0\) are the regression coefficients to be estimated. Note that the additional constraint \(\sum_{i=1}^q \alpha_i = 0\) is required to restore identifiability of the parameters. In high-cardinality settings, the vector \(\mathbf{z}\) becomes high-dimensional, and hence so must \(\boldsymbol{\alpha}\). High cardinality therefore introduces a large number of parameters into the estimation problem. Moreover, in most insurance problems, the distribution of categories will be highly imbalanced, meaning that some categories will have much more exposure (i.e.~data) than others. The rarely observed categories are likely to produce highly variable parameter estimates. As established in earlier sections, all this makes it extremely difficult to estimate \(\boldsymbol{\alpha}\) accurately \citep{gelman2007}. Equation \eqref{eq:nopool} is also referred to as the \textit{no pooling} model by \citet{antonio2014}, as each $\alpha_i$, $i = 1, 2, \cdots, q$, has to be learned independently. At the other end of the spectrum, there is the \textit{complete pooling} model, which simply ignores \(\mathbf{z}\) and assumes \begin{align} \mu(\mathbf{x}, \mathbf{z}) = \beta_0 + \mathbf{x}^\top \boldsymbol{\beta}. \label{eq:pool} \end{align} In the middle ground between the overly noisy estimates produced by \eqref{eq:nopool} and the over-simplistic estimates from \eqref{eq:pool} sit the (linear) mixed models, which assume \begin{align} \mu(\mathbf{x}, \mathbf{z}) = \beta_0 + \mathbf{x}^\top \boldsymbol{\beta} + \mathbf{z}^\top \mathbf{u}, \label{eq:LMM} \end{align} where \(\mathbf{u} = [u_1, \cdots, u_q]^\top\) with \(u_i \stackrel{iid}{\sim} \mathcal{N}(0, \sigma_u^2)\) are known as the \emph{random effects} characterising the deviation of individual categories from the population mean, in contrast with the \emph{fixed effects} \(\beta_0\) and \(\boldsymbol{\beta}\). Model \eqref{eq:LMM} is also known as a \emph{random intercept} model. The central idea in \eqref{eq:LMM} is that, instead of assuming a fixed coefficient for each category, we assume that the effects of individual categories come from a common distribution. We then only need to estimate the (far fewer) parameters that govern the distribution of the random effects, rather than learning an independent parameter per category. This produces the equivalent effect of partial pooling \citep{gelman2007}: categories with a smaller number of observations will have weaker influences on the parameter estimates, and extreme values get regularised towards the collective mean (modelled by the fixed effects, i.e.~\(\beta_0 + \mathbf{x}^\top \boldsymbol{\beta}\)). In the same way that linear mixed models extend linear models, GLMMs extend GLMs by adding random effects capturing between-category variation to complement the fixed effects in the linear predictor.
Suppose that we have \(q\) categories in the data, each with \(n_i~(i = 1,2, \cdots, q)\) observations (so that the total number of observations is \(n = \sum_{i=1}^q n_i\)). A GLMM is then defined by four components: \begin{enumerate} \def\labelenumi{(\arabic{enumi})} \item \emph{Unconditional distribution of random effects}. We assume that the random effects \(u_j \stackrel{iid}{\sim} f(u_j|\boldsymbol{\gamma}), ~j= 1, 2, \cdots, q\) depend on a small set of parameters $\boldsymbol{\gamma}$. It is typically assumed that $u_j$ follows a Gaussian distribution with zero mean and variance $\sigma_u^2$, i.e.~\(u_j \stackrel{iid}{\sim} \mathcal{N}(0, \sigma_u^2)\). \item \emph{Response distribution}. GLMMs assume that the responses \(Y_i, i = 1, 2, \cdots, n\), given the random effect \(u_{j[i]}\) for the category it belongs to (where $j[i]$ means that the $i$-th observation belongs to the $j$-th category, $j = 1, 2, \cdots, q$), are conditionally independent with an ED distribution, i.e. \begin{align} p\left(y_{i} | u_{j[i]}, \theta_{i}, \phi \right)= \exp \left[\frac{y_{i} \theta_{i}- b\left(\theta_{i}\right)}{\phi}+c\left(y_{i}, \phi\right)\right], \label{eq:EDF} \end{align} where \(\theta_{i}\) denotes the location parameter and \(\phi\) the dispersion parameter for the ED density, and \(b(\cdot)\) and \(c(\cdot)\) are known functions. It follows from the properties of ED family distributions that \begin{align} \mu_{i} & = \mathbb{E}(Y_{i} | u_{j[i]}) = b'(\theta_{i}), \label{eq:mean} \\ \operatorname{Var}(Y_{i} | u_{j[i]}) &= \phi b''(\theta_{i}) = \phi V(\mu_{i}), \label{eq:var} \end{align} where \(b'(\cdot)\) and \(b''(\cdot)\) denote the first and second derivatives of \(b(\cdot)\), and \(V(\cdot)\) is commonly referred to as the variance function. Equation \eqref{eq:mean} implies that the conditional distribution of \(Y_{i} | u_{j[i]}\) in \eqref{eq:EDF} is completely specified by the conditional mean \(\mu_{i} = \mathbb{E}(Y_{i} | u_{j[i]})\) and the dispersion parameter \(\phi\). \item \emph{Linear predictor}. The linear predictor includes both fixed and random effects: \begin{align} \eta_{i} = \beta_0 + \mathbf{x}_{i}^\top \boldsymbol{\beta} + \mathbf{z}_{i}^\top \mathbf{u}, ~~ i = 1, \cdots, n, \label{eq:eta} \end{align} where \(\mathbf{x}_{i}\) denotes a vector of fixed effects variables (e.g.~sum insured, age) and \(\mathbf{z}_{i}\) a vector of random effects variables (e.g.~a binary representation of a high-cardinality feature such as injury code). \item \emph{Link function}. Finally, a monotone differentiable function \(g(\cdot)\) links the conditional mean \(\mu_{i}\) with the linear predictor \(\eta_{i}\): \begin{align} g(\mu_{i}) =g(\mathbb{E}[Y_{i}|u_{j[i]}]) = \eta_{i}, ~~ i = 1, \cdots, n, ~ j = 1, \cdots, q, \label{eq:link} \end{align} completing the model. \end{enumerate} As explained earlier, the addition of random effects allows GLMMs to account for correlation within the same category, without overfitting to any individual category, as is the case for a one-hot encoded GLM. The shared parameters that govern the distribution of the random effects essentially enable information to transfer between categories, which is helpful for learning; this shrinkage behaviour is illustrated in the sketch below.
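As a hands-on illustration of partial pooling (not part of the model development that follows), the sketch below simulates data from the random intercept model \eqref{eq:LMM} and fits it with the Python package statsmodels; all simulation settings are arbitrary choices of ours.
\begin{verbatim}
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)

# Simulate y = 1 + 2x + u_cat + noise, with q = 50 categories of
# very uneven sizes, as is typical of insurance data.
q = 50
sizes = rng.integers(2, 200, size=q)
cat = np.repeat(np.arange(q), sizes)
u = rng.normal(0, 0.5, size=q)          # true random effects
x = rng.normal(size=cat.size)
y = 1 + 2 * x + u[cat] + rng.normal(0, 1, size=cat.size)
df = pd.DataFrame({"y": y, "x": x, "cat": cat})

# Partial pooling: a linear mixed model with a random intercept
# per category (the LMM special case of the GLMM above).
fit = smf.mixedlm("y ~ x", df, groups=df["cat"]).fit()
print(fit.cov_re)      # estimated sigma_u^2
# Predicted random effects; sparse categories are shrunk to zero.
print({k: v.iloc[0] for k, v in list(fit.random_effects.items())[:3]})
\end{verbatim}
Categories with few observations receive predicted random effects that are shrunk towards zero, which is precisely the regularisation effect of partial pooling described above.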
\subsection{Model Architecture} \label{ssec:glmmnet} In Section \ref{ssec:background}, we briefly reviewed previous attempts at embedding ML into mixed models. In particular, the LMMNN structure proposed by \citet{simchoni2022} is the closest to what we envision for a mixed effects neural network. However, it suffers from two major shortcomings that restrict its applicability to insurance contexts. First, the LMMNN assumes a Gaussian response with an identity link function, for which analytical results can be obtained, but which is ill suited to modelling the skewed, heavier-tailed distributions that dominate insurance and financial data. Second, LMMNNs model the random effects in a non-linear way (through a sub-network in the structure). This complicates the interpretation of the random effects, which carries practical significance in many applications. The GLMMNet aims to address these concerns. The architectural skeleton of the model is depicted in Figure~\ref{fig:glmmnet}. We adopt a very similar structure to that of \citet{simchoni2022}, except that we remove the sub-network that they used to learn a non-linear function of \(\mathbf{Z}\). As noted earlier, the main purpose of our modification is to retain the interpretability of the random effect predictions from the GLMM's linear predictor. In addition, we also want to avoid an over-complicated structure for the random effects, whose role is to act as a regulariser \citep{gelman2007}. \begin{figure}[H] \begin{center} \includegraphics[width=0.65\linewidth]{figure/GLMMNet} \end{center} \caption{Architecture of GLMMNet}\label{fig:glmmnet} \end{figure} In mathematical terms, we assume that the conditional \(\mathbf{y}|\mathbf{u}\) follows an ED family distribution, with \begin{align*} g(\boldsymbol{\mu})=g(\mathbb{E}[\mathbf{y}| \mathbf{u}])=f(\mathbf{X})+\mathbf{Z}\mathbf{u}, \quad \mathbf{u} \sim \mathcal{N}(\mathbf{0}, \boldsymbol{\Sigma}), \end{align*} where \(g(\cdot)\) is the link function and \(f(\cdot)\) is learned through a neural network. In the following, we give a detailed description of each component of the architecture plotted above, followed by a discussion of how the collective network can be trained in Section \ref{ssec:glmmnet-training}. We remark that, in this work, we have allowed \(f(\cdot)\) to be fully flexible to take full advantage of the predictive power of neural networks. We recognise that in some applications interpretability of the fixed effects may also be desirable. In such cases, it is possible to replace the fixed effects module (described in Section~\ref{sssec:fe}) by a Combined Actuarial Neural Network \citep[CANN;][]{wuethrich2019} structure. \subsubsection{Fixed Effects} \label{sssec:fe} The biggest limitation of GLMMs lies in their linear structure in \eqref{eq:eta}: similar to GLMs, features need to be hand-crafted to allow for non-linearities and interactions \citep{richman2021b, richman2021a}. The proposed GLMMNet addresses this issue by utilising a multi-layer network component for the fixed effects, represented as the shaded structure in Figure \ref{fig:glmmnet}. For simplicity, here we consider a fully connected feedforward neural network (FFNN), although many other network structures, e.g.~convolutional neural networks (CNNs), can also be easily accommodated. An FFNN consists of multiple layers of neurons (represented by circles in the diagram) with non-linear activation functions to capture potentially complex relationships between the input and output vectors; we refer to \citet{goodfellow2016} for more details. Formally, for a network with \(L\) hidden layers and \(q_l\) neurons in the \(l\)-th layer for \(l=1,\cdots,L\), the \(l\)-th layer is defined in Equation \eqref{eq:hidden-layer} below.
\begin{align} \mathbf a^{(l)}: \mathbb{R}^{q_{l-1}} & \to \mathbb{R}^{q_l}, \notag \\ \mathbf v & \mapsto \mathbf a^{(l)}(\mathbf v) = \left[a_1^{(l)}(\mathbf v), \cdots, a_{q_l}^{(l)}(\mathbf v)\right]^\top, \label{eq:hidden-layer} \end{align} with \begin{align} a_j^{(l)}(\mathbf v) = \varphi^{(l)} \left(b_j^{(l)} + \mathbf w_j^{(l)\top} \mathbf v\right), \quad j = 1, \cdots, q_l, \label{eq:hidden-activ} \end{align} where \(\varphi^{(l)}: \mathbb{R} \to \mathbb{R}\) is called the activation function for the \(l\)-th layer, \(b_j^{(l)} \in \mathbb{R}\) and \(\mathbf w_j^{(l)} \in \mathbb{R}^{q_{l-1}}\) respectively represent the bias term (or intercept in statistical terminology) and the network weights (or regression coefficients) for the \(j\)-th neuron in the \(l\)-th layer, and \(q_0\) is defined as the number of (preprocessed) covariates entering the network. The network therefore provides a mapping from the (preprocessed) covariates \(\mathbf x\) to the desired output layer through a composition of hidden layers: \begin{align} f: \mathbb{R}^{q_0} &\to \mathbb{R}^{q_{L+1}} \notag \\ \mathbf x &\mapsto f(\mathbf x) = \left(\mathbf a^{(L+1)} \circ \cdots \circ \mathbf a^{(1)} \right) (\mathbf x). \label{eq:NN-activ} \end{align} Training a network involves making many specific choices; Appendix~\ref{app:nn} gives more details. \subsubsection{Random Effects} \label{sssec:re} Below we give a description of the random effects component of the GLMMNet, which is the top (unshaded) structure in Figure \ref{fig:glmmnet}. For illustration, we stay with the case of a single high-cardinality categorical feature with \(q\) unique levels. The column of the categorical feature is first one-hot transformed into a binary matrix \(\mathbf{Z}\) of size \(n \times q\), which forms the input to the random effects component of the GLMMNet. Here comes the key difference from the fixed effects network: the weights in the random effects layer, denoted by \(\mathbf{u}=[u_1, u_2, \cdots, u_q]^\top\), are not fixed values but are assumed to come from a distribution. This way, instead of having to estimate the effects of every individual category---many of which may lack sufficient data to support a reliable estimate---we only need to estimate the far fewer parameters that govern the distribution of $\mathbf{u}$ (in our case, just a single parameter \(\sigma_u^2\) in \eqref{eq:prior}). A less visible yet important distinction of our GLMMNet from the LMMNN \citep{simchoni2022} is that \emph{our model takes a Bayesian approach to the estimation of random effects (as opposed to the exact likelihood approach in the LMMNN)}. The Bayesian approach helps sidestep some difficult numerical approximations of the (marginal) likelihood function (as detailed in Section~\ref{ssec:glmmnet-training}). We should point out that, although the Bayesian approach is popular for estimating GLMMs \citep[e.g.][]{brms2017}, there are further computational challenges that prevent it from being applied directly to ML mixed models; we elaborate on this in Section \ref{ssec:glmmnet-training}. Following the literature \citep{mcculloch2001, antonio2007}, we assume that the random effects follow a normal distribution with zero mean, and that the random effects are independent across categories: \begin{align} \mathbf{u} \sim \mathcal{N}(\mathbf{0}, \sigma_u^2 \mathbf{I}), \label{eq:prior} \end{align} or equivalently, \(u_i \stackrel{iid}{\sim} \mathcal{N}(0, \sigma_u^2)\) for \(i = 1, 2, \cdots, q\).
Equation \eqref{eq:prior} is taken as the prior distribution of the random effects \(\mathbf{u}\). In the context of our research problem, we are interested in the predictions for \(\mathbf{u}\), which indicate the deviation of individual categories from the population mean, or in other words, the excess risk they carry. Under the Bayesian framework, \emph{given the posterior distribution \(p(\mathbf{u} | \mathcal{D})\)}, we can simply take the posterior mode \(\hat{\mathbf{u}} = \mathop{\mathrm{argmax}}_\mathbf{u} p(\mathbf{u} | \mathcal{D})\) as a point estimate, which is also referred to as the maximum a posteriori (MAP) estimate. Alternatively, we can also derive interval predictions from the posterior distribution to determine whether the deviations as captured by \(\mathbf{u}\) carry statistical significance. We should note that the estimation of the posterior distribution, which we treated as given above, represents a major challenge in Bayesian inference. We come back to this topic and discuss how we tackle it in Section~\ref{ssec:glmmnet-training}. It should also be pointed out that the extension to multiple categorical features is very straightforward: all we need is to add another random effects layer (and one more variance parameter to estimate for each additional feature). \subsubsection{Response Distribution} \label{sssec:rd} As explained earlier, one novel contribution of GLMMNet is its ability to accommodate many non-Gaussian distributions, notably gamma (commonly used for claim severity), (over-dispersed) Poisson (for claim counts), and Bernoulli (for binary classification tasks). Our extension to non-Gaussian distributions is an important prerequisite for insurance applications. More precisely, we assume that the responses \(Y_i, i = 1, 2, \cdots, n\), given the random effect \(u_{j[i]}\) for the category to which each belongs (where $j[i]$ means that the $i$-th observation belongs to the $j$-th category), are conditionally independent with a distribution in the ED family. Equation \eqref{eq:mean} implies that the conditional distribution of \(Y_{i} | u_{j[i]}\) is completely specified by the conditional mean \(\mu_{i}\) and the dispersion parameter \(\phi\), so we can write\footnote{The assumption of constant dispersion parameter $\phi$ is made to comply with the standard assumptions of GLM/GLMMs, but can be relaxed if desired by a slight modification. For the purpose of illustration in this work, we do not pursue this path further but note that it can be an interesting direction for future research.} \[y_i | u_{j[i]} \sim \mathrm{ED}({\mu}_i, \phi).\] In the GLMMNet, we assume that a link function \(g(\cdot)\) connects the conditional mean \(\mu_{i}\) with the non-linear predictor \(\eta_{i}\) formed by adding together the output from the fixed effects module and the random effects layer: \begin{align} g(\mu_{i}) = \eta_{i} = f(\mathbf{x}_i) + \mathbf{z}_i^\top \mathbf{u} = f(\mathbf{x}_i) + u_{j[i]}, ~~ i = 1, \cdots, n, ~ j=1, \cdots, q. \label{eq:glmmnet-predictor} \end{align} In implementation, we use the inverse link \(g^{-1}(\cdot)\) as the activation function for the final layer, to map the output \([f(\mathbf{x}_i), u_{j[i]}]\) from the fixed effects module and the random effects layer to a prediction for the conditional mean response \(\mu_i = g^{-1}(f(\mathbf{x}_i) + u_{j[i]}).\) Finally, as GLMMNet operates under probabilistic assumptions, the dispersion parameter \(\phi\) (independent of covariates in our setting) can be estimated under maximum likelihood as part of the network.
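To illustrate the final-layer computation, the sketch below (again in NumPy; the fixed effects function is a crude stand-in for a trained network, and all values are hypothetical) assembles \(\mu_i = g^{-1}(f(\mathbf{x}_i) + u_{j[i]})\) under a log link, as would be appropriate for a gamma response:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

# Illustrative sizes: n observations, p covariates, q categories
n, p, q = 8, 4, 3
X = rng.uniform(size=(n, p))
j = rng.integers(0, q, size=n)       # category index j[i] per observation
u = rng.normal(0.0, 0.5, size=q)     # one draw of the random effects

def f(X):
    # Stand-in for the output of the fixed effects network
    return X.sum(axis=1)

eta = f(X) + u[j]                    # predictor f(x_i) + u_{j[i]}
mu = np.exp(eta)                     # inverse (log) link as final activation
\end{verbatim}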
The estimation of \(\phi\) is implemented by adding an input-independent but trainable weight to the final layer of GLMMNet \citep{keras}, whose value will be updated by (stochastic) gradient descent on the loss function. \subsection{Training of GLMMNet: Variational Inference} \label{ssec:glmmnet-training} In an LMMNN \citep{simchoni2022}, the network is trained by minimising the negative log-likelihood of \(\mathbf{y}\). As the LMMNN assumes a Gaussian density for \(\mathbf{y} | \mathbf{u}\) and \(\mathbf{u} \sim \mathcal{N}(\mathbf{0}, \boldsymbol{\Sigma})\), the marginal likelihood of \(\mathbf{y}\) can be derived analytically, i.e. \(\mathbf{y} \sim \mathcal{N}(f(\mathbf X), \mathbf Z \boldsymbol\Sigma \mathbf Z^\top + \sigma_\epsilon^2 \mathbf I)\). The network weights and biases, as well as the variance and covariance parameters \(\sigma_\epsilon^2\) and \(\sigma_u^2\), are learned by minimising the negative log-likelihood \begin{align*} -\log \mathcal{L}= \frac{1}{2}(\mathbf{y}-f(\mathbf{X}))^\top \mathbf{V}^{-1}(\mathbf{y}-f(\mathbf{X})) + \frac{1}{2} \log \operatorname{det}(\mathbf{V}) + \frac{n}{2} \log (2 \pi), \end{align*} where \(\mathbf{V} = \mathbf Z \boldsymbol\Sigma \mathbf Z^\top + \sigma_\epsilon^2 \mathbf I\) and \(\boldsymbol \Sigma := \operatorname{Cov}(\mathbf u) = \sigma_u^2 \mathbf{I}\). In making the extension to GLMMNet, however, we no longer obtain a closed-form expression for the marginal likelihood: \begin{align} \mathcal{L}(\mathbf{y}; \boldsymbol{\beta}, \sigma_u, \phi) = \prod_{i=1}^n p(y_i | \boldsymbol{\beta}, \sigma_u, \phi) = \prod_{i=1}^n \int p(y_i | \boldsymbol{\beta}, u_{j[i]}, \phi) f(u_{j[i]} | \sigma_u) \mathrm{d} u_{j[i]} \label{eq:glmm-likelihood} \end{align} where \(\boldsymbol{\beta}\) denote the weights and biases in the fixed effects component of the GLMMNet (Section \ref{sssec:fe}), \(j[i]\) indicates the category to which the \(i\)-th observation belongs, \(\sigma_u^2\) is the variance parameter of the random effects, and \(\phi\) is the usual dispersion parameter for the ED density for the response. We take a Bayesian approach to circumvent the difficult numerical approximations required to compute the integral in Equation \eqref{eq:glmm-likelihood}. Under the Bayesian framework, \(\pi(u_j) = f(u_j | \sigma_u) = \mathcal{N}(0, \sigma_u^2), ~j = 1, \cdots, q\), is taken as the prior for the random effects. Our goal is to make inference on the posterior of the random effects, given by \begin{align} p(\mathbf{u} | \mathcal{D})=\frac{\pi(\mathbf{u})p(\mathcal{D}|\mathbf{u})}{p(\mathcal{D})}=\frac{\pi(\mathbf{u})p(\mathcal{D}|\mathbf{u})}{\int \pi(\mathbf{u} )p(\mathcal{D}|\mathbf{u} ) \mathrm{d} \mathbf{u}}, \label{eq:posterior} \end{align} where \(\pi(\mathbf{u}) = \prod_{j=1}^q \pi(u_j)\) and $\mathcal{D}$ represents the data. Note that we again encounter an intractable integral in \eqref{eq:posterior}. The traditional solution is to use Markov chain Monte Carlo (MCMC) to sample from the posterior without computing its exact form; however, the computational burden of MCMC techniques restricts their applicability to large and complex models like a deep neural network. This partly explains why Bayesian methods, although frequently used for estimating GLMMs \citep[e.g.][]{brms2017}, tend not to be as widely adopted among ML mixed models \citep[e.g.][]{sigrist2022,hajjem2014}. With our GLMMNet, we apply \emph{variational inference} to solve this problem.
Variational inference is a popular approach among ML researchers working with Bayesian neural networks, due to its computational efficiency \citep{blei2017,zhang2019}. Variational inference does not try to directly estimate the posterior, but proposes a surrogate parametric distribution \(q \in \mathcal{Q}\) to approximate it, thereby turning the difficult estimation into a highly efficient optimisation problem. In practice, the choice of the surrogate distribution is often made based on simplicity or computational feasibility. One particularly popular option is to use a mean-field distribution family \citep{blei2017}, which assumes independence among all latent variables. In this work, we consider a diagonal Gaussian distribution for the surrogate posterior, \[q_{\boldsymbol{\vartheta}}(\mathbf{u})=\mathcal{N}({\boldsymbol{\mu}}_u, {\boldsymbol{\Sigma}}_u),\] where \(\boldsymbol{\mu}_u\) is a \(q\)-dimensional vector of the surrogate posterior mean of the \(q\) random effects, and \(\boldsymbol{\Sigma}_u = \mathrm{diag}(\sigma_1^2, \sigma_2^2, \cdots, \sigma_q^2)\) is a diagonal matrix of the posterior variance. More complex distributions, such as those that incorporate dependencies among latent variables, may provide a more accurate approximation. In this work, however, as we are mainly concerned with estimating the mean and dispersion of the posterior random effects, the chosen diagonal Gaussian surrogate distribution should suffice for this purpose. The vector \(\boldsymbol{\mu}_u\) and the diagonal elements of \(\boldsymbol{\Sigma}_u\) together form the \emph{variational parameters} of the diagonal Gaussian, denoted by \(\boldsymbol{\vartheta}\). The task here is to find the variational parameters \(\boldsymbol{\vartheta}^*\) that make the approximating density \(q_{\boldsymbol{\vartheta}} \in \mathcal{Q}\) closest to the true posterior, where closeness is defined in the sense of minimising the Kullback-Leibler (KL) divergence \citep{kullback1951}. In mathematical terms, this means \begin{align} q_{\boldsymbol{\vartheta}^{*}}(\mathbf{u}) &= \mathop{\mathrm{argmin}}_{q_{\boldsymbol{\vartheta}}(\mathbf{u}) \in \mathcal{Q}} \mathrm{KL}[q_{\boldsymbol{\vartheta}}(\mathbf{u}) \| p(\mathbf{u} | \mathcal{D})] \label{eq:glmm-vi} \\ &= \mathop{\mathrm{argmin}}_{q_{\boldsymbol{\vartheta}}(\mathbf{u}) \in \mathcal{Q}} \mathbb{E}_{q_{\boldsymbol{\vartheta}}(\mathbf{u})} [\log q_{\boldsymbol{\vartheta}}(\mathbf{u}) - \log p(\mathbf{u} | \mathcal{D})] \\ &= \mathop{\mathrm{argmin}}_{q_{\boldsymbol{\vartheta}}(\mathbf{u}) \in \mathcal{Q}} \mathbb{E}_{q_{\boldsymbol{\vartheta}}(\mathbf{u})} [\log q_{\boldsymbol{\vartheta}}(\mathbf{u}) - \log \pi(\mathbf{u}) - \log p(\mathcal{D} | \mathbf{u})] \\ &= \mathop{\mathrm{argmin}}_{q_{\boldsymbol{\vartheta}}(\mathbf{u}) \in \mathcal{Q}} \left(\operatorname{KL}[q_{\boldsymbol{\vartheta}}(\mathbf{u}) \| \pi(\mathbf{u})]-\mathbb{E}_{q_{\boldsymbol{\vartheta}}(\mathbf{u})}[\log p(\mathcal{D} | \mathbf{u})]\right).
\end{align} In the third line of the display above, we have used Bayes' rule and dropped the evidence \(\log p(\mathcal{D})\), which does not depend on \(\boldsymbol{\vartheta}\). We take the term inside the outermost bracket, which is the negative of the \emph{evidence lower bound} (ELBO), or equivalently the \emph{variational free energy} of \citet{neal1998}, to be the loss function for the GLMMNet: \begin{align} \mathcal{L}_\mathrm{GLMMNet} &= \operatorname{KL}[q_{\boldsymbol{\vartheta}}(\mathbf{u}) \| \pi(\mathbf{u})]-\mathbb{E}_{q_{\boldsymbol{\vartheta}}(\mathbf{u})}[\log p(\mathcal{D} | \mathbf{u})] \label{eq:glmm-ELBO} \\ &= \mathbb{E}_{q_{\boldsymbol{\vartheta}}(\mathbf{u})}\left[\log \frac{q_{\boldsymbol{\vartheta}}(\mathbf{u})}{\pi(\mathbf{u})} \right]- \mathbb{E}_{q_{\boldsymbol{\vartheta}}(\mathbf{u})}[\log p(\mathcal{D} | \mathbf{u})]. \label{eq:glmm-ELBO-E} \end{align} We see that the loss function used to optimise the GLMMNet is a sum of two parts: a first part that captures the divergence between the (surrogate) posterior and the prior, and a second part that captures the likelihood of observing the given data under the posterior. This decomposition echoes the trade-off between the prior beliefs and the data in any Bayesian analysis \citep{blei2017}. The loss function in \eqref{eq:glmm-ELBO-E} can be calculated via Monte Carlo, that is, by generating posterior samples under the current estimates of the variational parameters $\boldsymbol{\vartheta}$, evaluating the respective expressions at each of the sampled values, and taking the average to approximate the expectation. The derivatives of the loss function can be computed via automatic differentiation \citep{keras} and used by standard gradient descent algorithms for optimisation \citep[e.g.][]{kingma2014}. \subsection{Prediction from GLMMNet} \label{ssec:glmmnet-predict} Recall that \(\mathbf{x}\) denotes the standard input features and \(\mathbf{z} \in \mathbb{R}^q\) is a binary representation of the high-cardinality feature. Let \((\mathbf{x}^*, \mathbf{z}^*)\) represent a new data observation for which a prediction should be made. Note that when making predictions, it is possible to encounter a category that was never seen during training, in which case \(\mathbf{z}^*\) will be a zero vector (as it does not belong to any of the already-seen categories) and the predictor in Equation \eqref{eq:glmmnet-predictor} reduces to \(\eta^* = f(\mathbf{x}^*)\). To make predictions from GLMMNet, we take expectations under the posterior distribution on the random effects \citep{blundell2015,jospin2022}: \begin{align} p(y^* | \mathbf{x}^*, \mathbf{z}^*) &= \mathbb{E}_{p(\mathbf{u} | \mathcal{D})} \left[p(y^* | \mathbf{x}^*, \mathbf{z}^*, \mathbf{u}) \right] \notag \\ & \approx \mathbb{E}_{q_{\boldsymbol{\vartheta}^{*}}(\mathbf{u})} \left[p(y^* | \mathbf{x}^*, \mathbf{z}^*, \mathbf{u}) \right] \label{eq:glmmnet-pred} \end{align} where \(q_{\boldsymbol{\vartheta}^{*}}(\cdot)\) denotes the optimised surrogate posterior. As a result of Equation \eqref{eq:glmmnet-pred}, any functional of \(y^* | \mathbf{x}^*, \mathbf{z}^*\), e.g.~the expectation, can be computed in an analogous way. The expectation in \eqref{eq:glmmnet-pred} can be approximated by drawing Monte Carlo samples from the (surrogate) posterior, as outlined in Algorithm \ref{alg:GLMMNet-predict}. When there are multiple high-cardinality features, each will have its own binary vector representation and optimised surrogate posterior, and the expectation in Equation~\eqref{eq:glmmnet-pred} will be taken with respect to the joint posterior (which is simply a product of the surrogate posteriors when we assume independence between these features).
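As an illustration of this Monte Carlo approximation, the sketch below (NumPy; the optimised variational parameters shown are hypothetical) computes the predictive mean for a single new observation under a log link; the full predictive density is obtained analogously by averaging densities instead of means, as formalised in Algorithm~\ref{alg:GLMMNet-predict} below:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical optimised variational parameters for q = 3 categories
mu_post = np.array([0.2, -0.1, 0.4])     # surrogate posterior means
sd_post = np.array([0.05, 0.30, 0.10])   # surrogate posterior std deviations

def predictive_mean(fx_star, z_star, n_draws=1000):
    # Draw u from the surrogate posterior, then average g^{-1}(f(x*) + z*'u)
    u = rng.normal(mu_post, sd_post, size=(n_draws, len(mu_post)))
    eta = fx_star + u @ z_star
    return np.exp(eta).mean()

print(predictive_mean(fx_star=1.5, z_star=np.array([0.0, 1.0, 0.0])))
\end{verbatim}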
It is also worth pointing out that the model averaging in line \ref{line:ensemble} has the equivalent effect of training an ensemble of networks, which is usually otherwise computationally impractical \citep{blundell2015}. \begin{algorithm}[htbp!] \SetKwInOut{Input}{Input} \SetKwInOut{Output}{Output} \caption{Prediction from GLMMNet}\label{alg:GLMMNet-predict} \Input{A new data observation $(\mathbf{x}^*, \mathbf{z}^*)$; number of Monte Carlo iterations $N$} \Output{Predictive distribution: $p(y^* | \mathbf{x}^*, \mathbf{z}^*)$} \For{$i = 1$ to $N$}{ Simulate $\mathbf{u}_{(i)}$ from the surrogate posterior $q_{\boldsymbol{\vartheta}^{*}}(\mathbf{u})$\; Set $p_{(i)} = p(y^* | \mathbf{x}^*, \mathbf{z}^*, \mathbf{u}_{(i)}) = \mathrm{ED}(\mu_{(i)}, \phi)$ where $\mu_{(i)} = g^{-1}\left(f(\mathbf{x}^*) + \mathbf{z}^{*\top}\mathbf{u}_{(i)}\right)$ and $\phi$ is the dispersion parameter for the ED distribution (learned in the GLMMNet)\; } \Return $p(y^* | \mathbf{x}^*, \mathbf{z}^*) = \frac{1}{N} \sum_{i=1}^N p_{(i)}$\; \label{line:ensemble} \end{algorithm} \section{Comparison of GLMMNet with Other Leading Models} \label{sec:simulation} In this section, we compare the performance of the GLMMNet against the most popular existing alternatives for handling high-cardinality categorical features. We list the candidate models in Section~\ref{ssec:candidates} and the performance metrics we use to evaluate the models in Section~\ref{ssec:metrics}. Control over the true data generating process via simulation allows us to highlight the respective strengths of the models. We challenge the models under environments of varying levels of complexity, e.g.~low signal-to-noise ratio, highly imbalanced distribution of categories, and/or skewed response distributions. The simulation environments detailed in Section~\ref{ssec:sim-env} are designed to mimic these typical characteristics of insurance data. Section~\ref{ssec:sim-results} summarises the results. Ensuing insights are used to inform the real data case study in Section~\ref{sec:case-study}. \subsection{Models for Comparison} \label{ssec:candidates} Listed below are the leading models widely used in practice that allow for high-cardinality categorical features. They will be implemented and used for comparison with the GLMMNet in this experiment. \begin{arcenum} \item GLM with \begin{arcitem} \item \texttt{GLM\_ignore\_cat}: complete pooling, i.e.~ignoring the categorical feature altogether. 
In the notation of Section~\ref{ssec:problem}, it can be represented as $\mu(\mathbf{x}, \mathbf{z}) = g^{-1}(\beta_0 + \mathbf{x}^\top \boldsymbol{\beta})$, where $g(\cdot)$ is the link function, $\beta_0$ and $\boldsymbol{\beta}$ are the regression parameters to be estimated; \item \texttt{GLM\_one\_hot}: one-hot encoding, which can be represented as $\mu(\mathbf{x}, \mathbf{z}) = g^{-1}(\beta_0 + \mathbf{x}^\top \boldsymbol{\beta} + \mathbf{z}^\top \boldsymbol{\alpha})$ where $\boldsymbol{\alpha}$ are the additional parameters for each category in $\mathbf{z}$; \item \texttt{GLM\_GLMM\_enc}: GLMM encoding, which can be represented as $\mu(\mathbf{x}, \mathbf{z}) = g^{-1}(\beta_0 + \mathbf{x}^\top \boldsymbol{\beta} + z'\alpha)$ where $z'$ represents the encoded value (a scalar) of $\mathbf{z}$ and $\alpha$ the additional parameter associated with it (more details in Appendix~\ref{app:glmm-enc}); \end{arcitem} \item Gradient boosting machine (GBM, with trees as base learners) under the same categorical encoding setup as above: \begin{arcitem} \item \texttt{GBM\_ignore\_cat}: $\mu(\mathbf{x}, \mathbf{z}) = \mu(\mathbf{x})$ where $\mu(\cdot)$ is a weighted sum of tree learners; \item \texttt{GBM\_one\_hot}: $\mu(\mathbf{x}, \mathbf{z})$ where $\mu(\cdot, \cdot)$ is a weighted sum of tree learners; \item \texttt{GBM\_GLMM\_enc}: $\mu(\mathbf{x}, \mathbf{z}) = \mu(\mathbf{x}, z')$ where $z'$ represents the encoded value (a scalar) of $\mathbf{z}$; \end{arcitem} \item Neural network with entity embeddings (\texttt{NN\_ee}). Here, $\mu(\mathbf{x}, \mathbf{z})$ is composed of multiple layers of interconnected artificial neurons, where each neuron receives inputs, performs a weighted sum of these inputs, and applies an activation function to produce its output (see also Section~\ref{sssec:fe}). Entity embeddings add a linear layer between each one-hot encoded input and the first hidden layer and are learned as part of the training process. This is currently the most popular approach for handling categorical inputs in deep learning models and serves as our \textbf{target benchmark}; \item GBM with pre-learned entity embeddings (\texttt{GBM\_ee}), which is similar to \texttt{GBM\_GLMM\_enc} except with $z'$ replaced by a vector of entity embeddings learned from \texttt{NN\_ee}; \item Mixed models: \begin{arcitem} \item \texttt{GLMM} with $\mu(\mathbf{x}, \mathbf{z}) = g^{-1}(\beta_0 + \mathbf{x}^\top \boldsymbol{\beta} + \mathbf{z}^\top \mathbf{u})$ where $\mathbf{u}$ are the random effects (Section~\ref{ssec:glmm}); \item \texttt{GPBoost} \citep{sigrist2021,sigrist2022} with $\mu(\mathbf{x}, \mathbf{z}) = g^{-1}(f(\mathbf{x}) + \mathbf{z}^\top \mathbf{u})$ where $f(\cdot)$ is an ensemble of tree learners; \item The proposed \texttt{GLMMNet}, with $\mu(\mathbf{x}, \mathbf{z}) = g^{-1}(f(\mathbf{x}) + \mathbf{z}^\top \mathbf{u})$ where $f(\cdot)$ is an FFNN (Section~\ref{sssec:fe}). \end{arcitem} \end{arcenum} Each model has its own features and properties, which renders the collection of models suitable for different contexts; Table~\ref{tab:candidates} gives a high-level summary of their different characteristics. In the table, ``interpretability'' refers to the ease with which a human can understand a model, as well as the ability to derive insights from the model.
For instance, GLMs have high interpretability, because GLMs have a small number of parameters and each parameter carries a physical meaning, which contrasts with the case of a standard deep neural network characterised by a large number of parameters that are meaningless on their own. ``Operational complexity'' refers to the resources and time required to implement, train and fine-tune the model. \begin{table}[ht] \centering \begin{footnotesize} \begin{tabular}{p{1.4cm}p{2cm}p{3cm}p{3cm}p{2.3cm}p{1.5cm}} \toprule & \multicolumn{3}{c}{\textbf{Predictive performance}} & \multicolumn{2}{c}{\textbf{Other considerations}} \\ \cmidrule(lr){2-4} \cmidrule(lr){5-6} & \textit{Handling non-linearities} & \textit{Categorical treatment} & \textit{Response distribution} & \textit{Interpretability} & \textit{Operational complexity} \\ \midrule GLM & None (unless performed manually) & Optional categorical encoding methods can be used to preprocess features, e.g.~one-hot encoding & ED family & High (fully transparent model) & Low \\ \midrule GBM & Good & Optional categorical encoding methods can be used to preprocess features, e.g.~one-hot encoding & Gaussian (alternatives can be accommodated by specifying a different loss function) & Medium (through partial dependence plots and feature importance analysis) & Moderate \\ \midrule GLMM & None (unless performed manually) & Through random effects, which are an addition to GLMs (see Section~\ref{ssec:glmm}) & ED family & High (fully transparent model) & Low \\ \midrule Entity embeddings (neural network) & Excellent & Through entity embeddings & Gaussian (alternatives can be accommodated by specifying a different loss function) & Low & High \\ \midrule GPBoost & Good & Through random effects (see Section~\ref{ssec:candidates}) & Selected ED family distributions (Bernoulli, Poisson, gamma, Gaussian) & Medium (transparent random effects) & Moderate \\ \midrule GLMMNet & Excellent & Through random effects (see Section~\ref{ssec:glmmnet}) & ED family & Medium (transparent random effects) & High \\ \bottomrule \end{tabular} \caption{Overview of different characteristics of candidate models} \label{tab:candidates} \end{footnotesize} \end{table} Based on this qualitative comparison only, the GLMMNet has good potential to improve on neural networks with entity embeddings, because it can similarly capture complex relationships while also offering a wider range of response distributions and better interpretability. However, further investigations are needed to understand and illustrate in which circumstances the GLMMNet approach can outperform other approaches, which is the focus of the rest of this section. \subsection{Model Evaluation Criteria} \label{ssec:metrics} For all experiments presented in this paper (including the real data case study in Section~\ref{sec:case-study}), we randomly split the data into a training and a test set. The training set is used to select hyperparameters (by further splitting into an inner-training and a validation set) and to fit the pool of candidate models. The different models are then evaluated and compared based on their performance on the test dataset. Specifically, to quantify the predictive performance, we consider the metrics listed below: \begin{arcitem} \item \emph{Accuracy of point predictions}.
We report the root mean squared error (RMSE) and the mean absolute error (MAE), which are respectively given by \begin{align*} \mathrm{RMSE}=\sqrt{\frac{1}{n^*} \sum_{i=1}^{n^*}\left(y_i^*-\hat{y}_i^*\right)^2}, ~~ \mathrm{MAE}=\frac{1}{n^*} \sum_{i=1}^{n^*}\left|y_i^*-\hat{y}_i^*\right|, \end{align*} where \(n^*\) is the number of test observations, \(\mathbf{y}^*\) is the target output and \(\hat{\mathbf{y}}^*\) is the (point) prediction of the mean response. \item \emph{Average accuracy of point predictions per category}. In order to gain insights into the category-specific accuracy of point predictions, we consider the RMSE of the average prediction for each category: \[\mathrm{RMSE\_avg}=\sqrt{\frac{1}{q}\sum_{j=1}^q (\overline{y}_j^*-\overline{\widehat{y}}^*_j)^2}, \quad \overline{y}_j^*=\frac{1}{n_j^*}\sum_{i:j[i]=j}y_i^*, ~~ \overline{\widehat{y}}_j^*=\frac{1}{n_j^*}\sum_{i:j[i]=j}\widehat{y}_i^*,\] where \(q\) is the number of categories, \(n_j^*\) is the number of test observations in the \(j\)-th category, and \(\overline{y}_j^*\) and \(\overline{\widehat{y}}_j^*\) respectively denote the average of the response variable \(y\) and its prediction \(\hat{y}\) for all observations in the \(j\)-th category. This metric serves to measure and compare how well a model is able to capture the between-category differences. This can be interpreted as, for example, the average accuracy of loss predictions on each sub-portfolio, which has practical significance. \item \emph{Accuracy of probabilistic predictions}. The quantification of probabilistic accuracy is especially relevant to actuarial applications \citep{embrechts2022}. In this work, we consider two \emph{proper scoring rules}, the continuous ranked probability score (``CRPS'') and the negative log-likelihood (``NLL''), both of which are commonly used for assessing probabilistic forecasts \citep{gneiting2007,al-mudafer2022,delong2021a}. The CRPS is defined as \begin{equation} \mathrm{CRPS} = \frac{1}{n^*}\sum_{i=1}^{n^*} \mathrm{CRPS}(\hat{F}_i, y_i), \text{ where } \mathrm{CRPS}(\hat{F}_i, y_i) = \int_{-\infty}^{\infty}(\hat{F}_i(z)-\mathds{1}_{\{z \geq y_i\}})^{2} \mathrm{d} z, \label{eq:crps} \end{equation} where \(\hat F_i(\cdot)\) represents the distribution function (df) of a probabilistic forecaster for the \(i\)-th response value and \(y_i\) represents its observed value. The CRPS is a quadratic measure of the difference between the forecast predictive df and the empirical df of the observation. Hence, the smaller it is, the better. The integral in \eqref{eq:crps} has been evaluated analytically and implemented in R for many distributions; we refer to \citet{jordan2019} for details. Computation of the NLL is more straightforward: \begin{equation} \mathrm{NLL} = \frac{1}{n^*}\sum_{i=1}^{n^*} \mathrm{NLL}(\hat{F}_i, y_i), \text{ where } \mathrm{NLL}(\hat{F}_i, y_i) = -\log \hat{f}_i(y_i), \label{eq:nll} \end{equation} where \(\hat{f}_i(\cdot)\) is the density of the probabilistic forecaster. For models that do not produce a distributional forecast, we construct an artificial predictive distribution by using the point forecast as the mean and estimating a dispersion parameter for the assumed distribution \citep[see Section 4.6 of][]{denuit2019glm}. \end{arcitem} \subsection{Simulation Datasets} \label{ssec:sim-env} We generate \(n\) observations from a (generalised) non-linear mixed model with \(q\) categories (\(q \le n\)).
We assume that the responses \(y_i, i = 1, \cdots, n\), each belonging to a category \(j[i] \in \{1, \cdots, q\}\), given the random effects \(\mathbf{u}=(u_j)_{j=1}^q\), are conditionally independent with a distribution in the ED family (e.g.~gamma); denoted as \(y_i | u_{j[i]} \sim \mathrm{ED}({\mu}_i, \phi),\) where \({\mu}_i=\mathbb{E}(y_i | u_{j[i]})\) represents the mean and \(\phi\) represents a dispersion parameter shared across all observations. A link function \(g(\cdot)\) connects the conditional mean \({\mu}_i\) with a non-linear predictor in the form of \begin{align} g(\mu_i)=g(\mathbb{E}[{y}_i| {u}_{j[i]}])=f(\mathbf{x}_i)+u_{j[i]}, \quad i = 1, \cdots, n, ~j = 1, \cdots, q, \label{eq:sim-formula} \end{align} where \(\mathbf{x}_i\) is a vector of fixed effects covariates for the \(i\)-th observation, \(\mathbf{z}_i\) is a vector of random effects variables (in our case, a binary vector representation of a high-cardinality categorical feature, so that \(\mathbf{z}_i^\top \mathbf{u} = u_{j[i]}\)), the random effects \({u}_{j} \stackrel{iid}{\sim} \mathcal{N}(0, \sigma_u^2)\) measure the deviation of individual categories from the population mean, and \(f(\cdot)\) is some complex non-linear function of the fixed effects. Here, we consider \begin{align} f(\mathbf{x}) &= 10 \sin \left(\pi x_{1} x_{2}\right)+20\left(x_{3}-0.5\right)^{2}+10 x_{4}+5 x_{5}, \label{eq:friedman-1}\\ x_i & \stackrel{iid}{\sim} \mathcal{U}(0,1), ~ i = 1, \cdots, 10, \notag \end{align} which was studied in \citet{friedman1991} and has become a standard simulation function for regression models (used in e.g. \citealt{kuss2005}; also available in the scikit-learn Python library by \citealt{sklearn}). In our application, we use the Friedman function to test the model's ability to filter out irrelevant predictors (note that \(x_6, \cdots, x_{10}\) are not used in \(f\)) as well as to detect interactions and non-linearities of input features, all in the presence of a high-cardinality grouping variable that is subject to random effects. We set up the desired simulation environments by adjusting the following parameters; refer to Appendix \ref{app:data-gen} for details. \begin{arcitem} \item \textit{Signal-to-noise ratio}: a three-dimensional vector that captures the relative ratio of signal strength (as measured by \(\mu_f\), the mean of \(f(\mathbf{x})\)), random effects variance (\(\sigma_u^2\)), and variability of the response (\(\sigma_\epsilon^2\); noise, or equivalently the irreducible error, as this component captures the unexplained inherent randomness in the response). \item \textit{Response distribution}: distributional assumption for the response, e.g.~Gaussian, gamma, or any other member of the ED family. \item \textit{Inverse link}: inverse of the link function, i.e.~the inverse function of \(g(\cdot)\) in Equation~\eqref{eq:sim-formula}. \item \textit{Distribution of categories}: whether to use a balanced or skewed distribution for the allocation of categories. A ``balanced'' distribution allocates an approximately equal number of observations to each category; a ``skewed'' distribution generates categories from a (scaled) beta distribution; see Appendix~\ref{app:data-gen}. \end{arcitem} Table \ref{tab:scenarios} lists the simulation environments considered in our experiment.
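For concreteness, the following minimal sketch (NumPy) generates one such dataset, assuming a gamma response with an exponential inverse link as in experiments 2 and 6; the parameter values and the scaling applied to \(f\) are illustrative only, with the exact configurations deferred to Appendix~\ref{app:data-gen}:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)
n, q, sigma_u, phi = 5000, 100, 0.3, 0.5    # illustrative values

def friedman(X):
    # Friedman (1991) test function; x_6, ..., x_10 are irrelevant by design
    return (10 * np.sin(np.pi * X[:, 0] * X[:, 1])
            + 20 * (X[:, 2] - 0.5) ** 2 + 10 * X[:, 3] + 5 * X[:, 4])

X = rng.uniform(size=(n, 10))
j = rng.integers(0, q, size=n)              # balanced allocation of categories
u = rng.normal(0.0, sigma_u, size=q)        # random effects

mu = np.exp(0.1 * friedman(X) + u[j])       # f scaled down to keep exp stable
shape = 1.0 / phi                           # gamma dispersion: Var = phi * mu^2
y = rng.gamma(shape=shape, scale=mu / shape)    # E[y | u] = mu
\end{verbatim}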
Each of the environments in experiments 2--5 has been configured to mimic one specific characteristic of practical insurance data, such as a low signal-to-noise ratio (the specification of $[8,1,4]$ was selected based on estimated parameters from the real insurance case study in Section \ref{sec:case-study}), an imbalanced distribution across categories, or a highly skewed response distribution for claim severity. We give a brief description of each environment below. \begin{arcitem} \item \textit{Experiment 1} simulates the base scenario that adheres to the assumptions of a Gaussian GLMMNet. \item \textit{Experiment 2} simulates a gamma-distributed response, which is often used to model claim severity in lines of business (LoB) such as auto insurance or general liability. \item \textit{Experiment 3} simulates a skewed distribution of categories, which is a common characteristic of high-cardinality categorical features (e.g. car make, injury code). \item \textit{Experiments 4--5} incrementally increase the level of noise in the data and simulate LoBs that are difficult to model by covariates, such as commercial building insurance, catastrophic events and cyber risks. \item \textit{Experiment 6} represents the most challenging scenario, incorporating the complexities of all the other experiments. \end{arcitem} \begin{table}[ht] {\centering \begin{small} \begin{tabular}{@{}cccccc@{}} \toprule Exp ID & \multicolumn{2}{c}{Signal-to-noise} & Response distribution & Inverse link & Distribution of categories \\ \midrule 1 (base) & $[4, 1, 1]$ & (high) & Gaussian & Identity & Balanced \\ 2 & $[4, 1, 1]$ & (high) & \textbf{Gamma} & \textbf{Exponential} & Balanced \\ 3 & $[4, 1, 1]$ & (high) & Gaussian & Identity & \textbf{Skewed} \\ 4 & $\mathbf{[4, 1, 2]}$ & (medium) & Gaussian & Identity & Balanced \\ 5 & $\mathbf{[8, 1, 4]}$ & (low) & Gaussian & Identity & Balanced \\ 6 & $\mathbf{[8, 1, 4]}$ & (low) & \textbf{Gamma} & \textbf{Exponential} & \textbf{Skewed} \\ \bottomrule \end{tabular} \end{small} \caption{Parameters used for the six different simulation environments.\\Bold face indicates changes from the base scenario (i.e. experiment 1).} \label{tab:scenarios} } \end{table} For each scenario, we generate 5,000 training observations and 2,500 testing observations. \subsection{Results} \label{ssec:sim-results} In all scenarios studied, the GLMMNet outperforms or at least performs on par with the target benchmark model of an entity embedded neural network. In environments with low to medium noise levels, the GLMMNet usually outperforms the second best model (i.e.~the entity embedded neural network) by a significant margin. Both models outperform the remaining candidates, including the other mixed models (GLMM and GPBoost). Figure \ref{fig:boxplots-1} compares the out-of-sample performance of the candidate models (listed in Section~\ref{ssec:candidates}) over 50 simulation runs (of 5,000 training and 2,500 testing observations each) under a base scenario (experiment 1 in Table \ref{tab:scenarios}); the proposed GLMMNet is highlighted in green. For all metrics considered, smaller values mean a better fit.
\begin{figure}[ht] {\centering \subfloat[Mean absolute error\label{fig:boxplots-1-1}]{\includegraphics[width=0.49\linewidth]{figure/experiment_1/repeats_MAE_1} }\subfloat[Root mean square error\label{fig:boxplots-1-2}]{\includegraphics[width=0.49\linewidth]{figure/experiment_1/repeats_RMSE_1} }\newline\subfloat[Continuous ranked probability score (CRPS)\label{fig:boxplots-1-3}]{\includegraphics[width=0.49\linewidth]{figure/experiment_1/repeats_CRPS_1} }\subfloat[RMSE of average prediction per category\label{fig:boxplots-1-4}]{\includegraphics[width=0.49\linewidth]{figure/experiment_1/repeats_RMSE_avg_1} } } \caption{Boxplots of out-of-sample performance metrics of the different models; GLMMNet highlighted in green. Each experiment is repeated 50 times, with 5,000 training observations and 2,500 testing observations each.}\label{fig:boxplots-1} \end{figure} From Figure \ref{fig:boxplots-1}, we see that \texttt{GLMMNet} is leading in all metrics. In particular, on average it performs better than the current state-of-the-art neural network with entity embeddings (\texttt{NN\_ee}), which is also the next best model. A paired-sample Wilcoxon signed-rank test on the out-of-sample MAE rejects the null hypothesis---that \texttt{GLMMNet} and \texttt{NN\_ee} perform the same---with strong evidence (\(p < 0.001\)), in favour of the alternative that \texttt{GLMMNet} produces a smaller error. The same conclusion holds for all metrics tested. Indeed, upon closer examination of the paired data, we find that for every single simulation run, the performance of \texttt{GLMMNet} surpasses that of \texttt{NN\_ee}. This suggests that optimising the networks via the (more suitable) mixed model likelihood provides a significant benefit to their predictive performance, in terms of both point predictions and (more notably) probabilistic predictions. Ignoring \texttt{GLMMNet}, the best performing models in panels (a)--(c) are \texttt{NN\_ee} and \texttt{GBM\_GLMM\_enc} (a boosting model with GLMM encoding). Both models significantly outperform the linear family of models (GLM or GLMM), showing that a flexible model structure is required to capture the non-linearities in the data. More importantly, without a suitable way of accounting for the high-cardinality categorical feature, a flexible model structure alone is not enough to achieve good performance. For example, the struggle of tree-based models in dealing with categorical features has been well documented \citep{prokhorenkova2018}. In this experiment, with the usual one-hot encoding, the GBM model (\texttt{GBM\_one\_hot}) performs even worse than its GLM counterpart (\texttt{GLM\_one\_hot}), although the reverse is true when both are fitted without the categorical variable altogether. This confirms the motivation for this research: more flexible ML models, when dealing with financial or insurance data, can often be constrained by their capability to model categorical variables with many levels. The ranking of models in Figure~\ref{fig:boxplots-1}(d)---which is designed to compare the models' ability to explain between-category variations---is slightly different. \texttt{GLM\_one\_hot} and \texttt{GLMM} perform better, moving up in the ranking to join the (consistently) top-performing \texttt{GLMMNet} and \texttt{NN\_ee}.
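The significance statements above rely on paired one-sided tests of the per-run metrics; a minimal sketch of such a test (using \texttt{scipy.stats}; the two arrays are simulated stand-ins for the 50 paired out-of-sample MAE values, not our actual results):
\begin{verbatim}
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

# Simulated stand-ins for paired out-of-sample MAE over 50 runs
mae_glmmnet = rng.normal(1.00, 0.05, size=50)
mae_nn_ee = mae_glmmnet + np.abs(rng.normal(0.03, 0.02, size=50))

# One-sided alternative: GLMMNet produces the smaller error
stat, p = stats.wilcoxon(mae_glmmnet, mae_nn_ee, alternative="less")
print(p)
\end{verbatim}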
The improved ranking of \texttt{GLM\_one\_hot} and \texttt{GLMM} in panel (d) is a result of the balance property (for GLMs with canonical links), which ensures that the sum of fitted values must equal exactly the sum of observed values for any level of the categorical features in the training set \citep{wuethrich2021}. In this case, the training RMSE\_avg will be zero by definition. In practice, it is remarkable that this quality often carries over to an out-of-sample set as well, as demonstrated by the performance of \texttt{GLM\_one\_hot} and \texttt{GLMM} here. Figure~\ref{fig:densities-1} gives a more insightful picture of per-category predictive accuracy by plotting the densities of predictions from selected models against the true underlying density that generated the observations (leftmost). The plot shows a selection of representative categories (top three with the highest response values, two in the middle, and bottom three with the lowest), but the conclusions drawn here hold generally for other categories too. The colours roughly correspond to the range of values under which the predictions fall; darker colours represent smaller values (relative to an average observation), and lighter represent larger values. Ideally, the predicted density shapes should match as closely as possible to the true densities, and the colours should also agree as much as possible. \begin{figure}[!htbp] \begin{center} \includegraphics[width=\linewidth]{figure/experiment_1/ridgeline_1} \caption{True \textit{versus} predicted densities for (selected) categories on the test set in experiment 1}\label{fig:densities-1} \end{center} \end{figure} In line with the previous observations, \texttt{GLMMNet} produces very accurate density estimates and models the variation in the data extremely well. \texttt{NN\_ee}, on the other hand, clearly overestimates the response for Category 9, likely due to an overfit to the training data points. Unsurprisingly, ignoring the categorical variable (\texttt{GLM\_ignore\_cat}) almost eliminates the between-category differences and leads to very poor estimates. When equipped with one-hot encoding, the GLM (\texttt{GLM\_one\_hot}) is able to detect the differences in the mean (as indicated by the colours), as the balanced distribution of categories provides sufficient data points for learning the between-category differences. However, \texttt{GLM\_one\_hot} tends to produce narrower densities than the truth. This indicates a failure to capture the remaining, or within-category, variation in the data, confirming that its linear structure is too restrictive for this dataset. The results from experiments 2--3 are mainly consistent with what we have seen in experiment 1 (see Appendix~\ref{app:sim23}). \texttt{GLMMNet} takes a strong lead and showcases a statistically significant advantage over its closest competitor \texttt{NN\_ee} in both cases (\(p < 0.01\) from the corresponding Wilcoxon signed-rank tests). The result in experiment 2 (gamma-distributed response) is well within expectation; the GLMMNet by design takes good care of the response distribution via its loss function. The result in experiment 3 (skewed distribution of categories) further suggests that the predictive strength of the GLMMNet is, at least to some degree, immune to the shape of the categorical distribution. As the noise level increases (experiments 4--6), the differences in model performance become smaller, and the predictive advantage of the GLMMNet starts to fade away (see Appendix~\ref{app:sim46}).
In experiments 5--6, the GBM family of models surpasses both \texttt{GLMMNet} and \texttt{NN\_ee} in CRPS measures. One possible explanation is that the variance of the random effects here is too small with respect to the error variance to warrant the need for accurately modelling the categorical variable. Although \texttt{GLMMNet} may not always provide a boost to the overall predictive performance, it remains one of the top performers in terms of its ability to capture the individual category means (as measured by RMSE\_avg). The latter may be of practical significance in many applications. The outperformance of GBM over deep learning models as observed in experiments 5 and 6 has already been discussed in the prior literature (see, e.g.~\citealt{borisov2021}), though it remains an open question what causes this gap in performance. It is possible that neural networks are more likely to overfit to the high noise in the background, or that they may get stuck in a suboptimal local minimum and cannot gain enough momentum from the data to escape. In general, the high level of noise in the environment prevents the network-based models (including the proposed GLMMNet) from fully realising their predictive power. Further investigations suggest that adding regularisation helps improve the overall performance of the GLMMNet to a great extent, as shown in panel (a) of Figure \ref{fig:boxplots-add}, though it seems to bias the point predictions (panel b). \begin{figure}[htbp!] {\centering \subfloat[Exp 5, CRPS\label{fig:boxplots-add-1}]{\includegraphics[width=0.49\linewidth]{figure/experiment_5/repeats_CRPS_l2} }\subfloat[Exp 5, RMSE of average prediction per category\label{fig:boxplots-add-2}]{\includegraphics[width=0.49\linewidth]{figure/experiment_5/repeats_RMSE_avg_l2} } } \caption{Boxplots of out-of-sample performance metrics of the different models in experiment 5 (high-noise Gaussian), \textbf{shown with an $\ell_2$-regularised GLMMNet}. Each experiment is repeated 50 times, with 5,000 training observations and 2,500 testing observations each.\vspace{2mm}}\label{fig:boxplots-add} \end{figure} Contrasting experiments 5 and 6 shows that the \texttt{GLMMNet} ranks higher when the response moves away from Gaussian (see Figure~\ref{fig:boxplots-4} in Appendix~\ref{app:sim46}). The predictive power that the \texttt{GLMMNet} regains in experiment 6 is likely a result of the change in the true distribution of the response. When the shape of the response looks less normal, the outperformance of GBM models over the GLMMNet becomes less distinct. This comparison highlights the advantage of the GLMMNet in using a loss function that is better aligned with the distribution of the response. We expect this effect to be more pronounced when we work with heavier-tailed distributions that deviate even more from the normal. Across all experiments, we find that the \texttt{GLMMNet}, \texttt{NN\_ee}, and \texttt{GBM\_GLMM\_enc} consistently outperform the other models in terms of predictive performance. In particular, we find that our GLMMNet outperforms or at least performs on par with the target benchmark model of an entity embedded neural network in all scenarios studied. That said, each model has its own strengths and limitations, and the choice of which model to use will depend on the specific needs of the modelling problem. Table~\ref{tab:strengths} lists some of these considerations.
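Relatedly, the \(\ell_2\)-regularised GLMMNet used above simply penalises the weights of the fixed effects module; a minimal sketch in Keras (the layer sizes and penalty strength are illustrative, not the tuned configuration):
\begin{verbatim}
import tensorflow as tf

# Illustrative l2-penalised fixed effects module; the penalty strength
# is a tuning parameter selected on the validation set
l2 = tf.keras.regularizers.l2(1e-4)
fixed_effects = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", kernel_regularizer=l2),
    tf.keras.layers.Dense(32, activation="relu", kernel_regularizer=l2),
    tf.keras.layers.Dense(1),
])
\end{verbatim}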
We should also note that \texttt{GBM\_ee} has similar properties to \texttt{GBM\_GLMM\_enc} (except that \texttt{GBM\_ee} is more complicated to implement, as it requires training an entity-embedded neural network beforehand) and is therefore not included in the table, for brevity of presentation. \begin{table}[ht] \centering \begin{footnotesize} \begin{tabular}{p{1.5cm}p{6.5cm}p{6.5cm}} \toprule & \multicolumn{1}{c}{Strengths} & \multicolumn{1}{c}{Limitations} \\ \midrule \texttt{NN\_ee} & \raggedright ~~\llap{\textbullet}~~ Strong predictive performance, particularly in low-medium noise environments & {\raggedright ~~\llap{\textbullet}~~ Compromised performance in high noise environments} \newline {\raggedright \indent ~~\llap{\textbullet}~~ Limited interpretability} \\ \midrule \texttt{GBM\_GLMM\_enc} & \raggedright ~~\llap{\textbullet}~~ Good predictive performance, particularly in high noise environments \newline \raggedright \indent ~~\llap{\textbullet}~~ Simpler structure and faster to train & {\raggedright ~~\llap{\textbullet}~~ Compromised performance in low-medium noise environments} \newline {\raggedright \indent ~~\llap{\textbullet}~~ Limited interpretability of the effects of the categories} \\ \midrule \texttt{GLMMNet} & \raggedright ~~\llap{\textbullet}~~ Consistently outstanding predictive performance, particularly in low-medium noise environments and/or when the response deviates from the Gaussian distribution \newline \raggedright \indent ~~\llap{\textbullet}~~ Transparency on the category effects (through random effects predictions) \newline \raggedright \indent ~~\llap{\textbullet}~~ Offers naturally probabilistic estimates & {\raggedright ~~\llap{\textbullet}~~ Compromised performance in high noise environments when used without additional regularisation} \newline {\raggedright \indent ~~\llap{\textbullet}~~ Limited interpretability of the fixed effects component} \\ \bottomrule \end{tabular} \caption{Model comparison: strengths and limitations of the top performing models} \label{tab:strengths} \end{footnotesize} \end{table} \section{Application to Real Insurance Data} \label{sec:case-study} In this section, we apply GLMMNet to a real (proprietary) insurance dataset (described in Section~\ref{ssec:data}), and discuss its performance relative to the other models we have considered (Section~\ref{ssec:real-results}). We also demonstrate, in Section~\ref{ssec:practical-insights}, how such an analysis can potentially inform practitioners in the context of technical pricing. \subsection{Description of Data} \label{ssec:data} The data for this analysis were provided by a major Australian insurer. The original data cover 27,351 reported commercial building and contents claims by small and medium-sized enterprises (SMEs) over the period between 2010 and 2015. The analysis seeks to construct a regression model to predict the ultimate costs of the reported claims based on other individual claim characteristics that can be observed early in the claims process (e.g.~policy-level details and claim causes). A description of the (post-engineered) input features is given in Appendix~\ref{app:predictors}. The response variable is the claim amount (claim severity), which has a very skewed distribution, as shown in Figure~\ref{fig:hist}: even on a log scale, we can still observe some degree of positive skewness.
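The distributional comparison that follows is based on marginal fits of the candidate severity models; a minimal sketch of how such fits can be produced (using \texttt{scipy.stats}; the sample is simulated for illustration and is not the confidential claims data):
\begin{verbatim}
import numpy as np
from scipy import stats

# Simulated stand-in for the positive, heavily skewed claim amounts
y = np.random.default_rng(5).lognormal(mean=8.0, sigma=1.5, size=1000)
log_y = np.log(y)

norm_params = stats.norm.fit(log_y)            # lognormal: Gaussian on log scale
gamma_params = stats.gamma.fit(log_y, floc=0)  # loggamma: gamma on log scale
\end{verbatim}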
In search of a suitable distribution to model the severity, we fit both lognormal and loggamma distributions to the (unlogged) marginal. Figure~\ref{fig:incurred} indicates that both models may be suitable, with loggamma being slightly more advantageous. We will test both lognormal and loggamma distributions in our experiments below (Section~\ref{ssec:real-results}); specifically, we fit Gaussian and gamma models to the log-transformed response. \begin{figure}[htbp!] \begin{center} \includegraphics[width=0.5\linewidth]{figure/case_study/histogram} \caption{Histogram of claim amounts (on log scale). The $x$-axis numbers have been deliberately removed for confidentiality reasons.}\label{fig:hist} \end{center} \end{figure} \begin{figure}[htbp!] \subfloat[P-P plot (lognormal)\label{fig:incurred-3}] {\includegraphics[width=0.5\linewidth]{figure/case_study/pp_lnorm} }\subfloat[P-P plot (loggamma)\label{fig:incurred-4}]{\includegraphics[width=0.5\linewidth]{figure/case_study/pp_lgamma} } \caption{Probability-probability (P-P) plot of empirical \textit{versus} fitted lognormal and loggamma (theoretical) distributions. The parametric distributions are fitted to the (unlogged) marginal, i.e. without any covariates.}\label{fig:incurred} \end{figure} The high-cardinality categorical variable of interest is the occupation of the business, which is coded through the Australian and New Zealand Standard Industrial Classification (ANZSIC) system. The ANZSIC system hierarchically classifies occupations into Divisions, Subdivisions, Groups and Classes (from broadest to finest). See Appendix~\ref{app:anzsic} for an example. We will look at occupations at the Class level.\footnote{We chose not to incorporate the hierarchical information. Instead, we directly modelled the lowest level (i.e. Class) as a flat categorical feature. The decision was made in light of (1) the increased complexity of estimation in hierarchical models, and (2) the lack of alignment between the hierarchical system built for general-purpose occupation classification and the factors that differentiate claim severities.} There are over 300 unique levels of such occupation classes in this data set. Panel (a) of Figure \ref{fig:cat} highlights the skewed distribution of occupations: the most common occupation has more than double the number of observations of the second most common, and the number of observations decays rapidly for the rarer classes. Panel (b) shows the heterogeneity in claims experiences between different occupation classes; calculations reveal a coefficient of variation of 160\% for the occupation means. One challenge in modelling the occupation variable lies in determining how much confidence we can have in the observed claims experiences; intuitively, we should trust the estimates more when there are more data points, and less when there are few data points. As explained in Section~\ref{ssec:glmm}, mixed effects models are one solution to this problem, which we will explore further in this section. \begin{figure}[ht!] \begin{center} \subfloat[Number of claims from the top 20 common occupation classes\label{fig:cat-1}] {\includegraphics[width=0.49\linewidth]{figure/case_study/claim_count_by_cat} } \subfloat[Average claim size for the (same) top 20 common occupation classes\label{fig:cat-2}]{\includegraphics[width=0.49\linewidth]{figure/case_study/avg_claim_by_cat}} \caption{Skewed distribution of occupation class and heterogeneity in experience.
The axis labels have been removed to preserve confidentiality.}\label{fig:cat} \end{center} \end{figure} The dataset under consideration displays many typical characteristics of insurance data that we studied in Section~\ref{sec:simulation} (see Section~\ref{ssec:sim-env} in particular). For example, it is characterised by an extremely skewed response distribution (Figure~\ref{fig:hist}), a low signal-to-noise ratio (there is little information that covariates can add to the marginal fits, as implied by Figure~\ref{fig:incurred}) and a highly disproportionate distribution of categories (Figure~\ref{fig:cat}). These features of the dataset best align with those of Experiment 6 in Section~\ref{sec:simulation}. If the conclusions generalise, we expect that a regularised version of our GLMMNet will show strong performance. \subsection{Modelling Results} \label{ssec:real-results} We consider the same candidate models as before; see Section \ref{ssec:candidates}. Having noted the underperformance of GLMMNet in the presence of high noise and the benefit of adding regularisation to it, we also consider an \(\ell_2\)-regularised GLMMNet (\texttt{GLMMNet\_l2}) in the hope that the regularisation would help improve performance. We follow the learning pipeline described at the start of Section~\ref{ssec:metrics} and split the data into a 90\% training set and a 10\% test set. Table~\ref{tab:cs-results} presents the out-of-sample metrics for the top-performing lognormal and loggamma models, compared against a one-hot encoded GLM as a baseline (full results in Appendix~\ref{app:case-study}).\footnote{In Table~\ref{tab:cs-results} and the relevant tables in Appendix~\ref{app:case-study}, the median absolute error (MedAE) and NLL are calculated on the original scale of the data (i.e. after the predictions are back-transformed). The CRPS is calculated on the log-transformed scale, because there is no closed-form formula for the loggamma CRPS. As a result of this, the CRPS as presented here will be more tolerant of wrong predictions for extreme observations than the NLL.} Note that we have selected slightly different metrics from the simulation experiments to better accommodate the heavier-tailed response. The RMSE, for example, is no longer a suitable measure as it penalises very heavily any predictions that are far from the observed values, which can happen very frequently when there are a few extreme observations in the data. \begin{table}[!htbp] \begin{center} \begin{small} \begin{tabular}{@{}lcccccc@{}} \toprule & \multicolumn{3}{c}{\textbf{Lognormal (out-of-sample)}} & \multicolumn{3}{c}{\textbf{Loggamma (out-of-sample)}} \\ & MedAE & CRPS & NLL & MedAE & CRPS & NLL \\ \midrule GLM\_one\_hot & 4108 & 0.7931 & 9.623 & 1946 & 0.8557 & 9.751 \\ GBM\_GLMM\_enc & 3870 & 0.7666 & 9.584 & \textbf{1536} & 0.7626 & 9.578 \\ NN\_ee & 4086 & 0.7666 & 9.584 & 1606 & \textbf{0.7612} & 9.578 \\ GBM\_ee & 3828 & 0.7665 & 9.584 & 1549 & 0.7629 & 9.579 \\ GLMM & 3864 & 0.7666 & 9.584 & 1570 & 0.7629 & \textbf{9.577} \\ GLMMNet & 3783 & 0.7751 & 9.595 & 1633 & 0.7662 & 9.583 \\ GLMMNet\_l2 & \textbf{3549} & \textbf{0.7634} & \textbf{9.580} & 1618 & 0.7626 & \textbf{9.577} \\ \bottomrule \end{tabular} \end{small} \end{center} \caption{Comparison of lognormal and loggamma model performance (median absolute error, CRPS, negative log-likelihood) on the test (out-of-sample) set. The best values are bolded.} \label{tab:cs-results} \end{table} The three metrics tell consistent stories most of the time.
The results for CRPS and NLL are almost perfectly aligned, which is not surprising given that both are measures of how far the observed values depart from the probabilistic predictions. In general, we consider the CRPS and NLL measures to be more reliable estimates of the goodness-of-fit of the models than the MedAE, which only takes into account the point predictions but not the uncertainty associated with them. The results mostly match our expectation. Among the lognormal models, the regularised \texttt{GLMMNet\_l2} takes the lead, followed by \texttt{GBM\_GLMM\_enc}, \texttt{NN\_ee}, \texttt{GBM\_ee} and \texttt{GLMM}, whose performances are so close together that it is impossible to distinguish between the four. The \texttt{GLM} family of models performs poorly on this data, much worse than its mixed model counterpart, i.e.~\texttt{GLMM}, confirming that the presence of the high-cardinality categorical variable interferes with its modelling capabilities. Among the loggamma models, although \texttt{NN\_ee} outperforms \texttt{GLMMNet\_l2} in CRPS, \texttt{GLMMNet\_l2} remains the most competitive model in terms of NLL (on par with \texttt{GLMM}). One practical consideration to note is that when the environment is highly noisy and the underlying model is uncertain, such as here, the interpretability of the model becomes even more important. In such cases, models that are more interpretable are preferred, which, in turn, highlights the advantages of \texttt{GLMMNet} over \texttt{NN\_ee}. An important observation is that adding the \(\ell_2\) regularisation term clearly helps GLMMNet navigate the data better. In the lognormal models, the regularised network represents a 1.5\% improvement in the CRPS and NLL over the original \texttt{GLMMNet}, which suggests that the original \texttt{GLMMNet} overfits to the training data (as can be confirmed from its deterioration in the test performance). Indeed, the improvement in the score proves statistically significant ($p < 0.001$) under the Diebold-Mariano test \citep{gneiting2014}. The results for the loggamma models lead to similar conclusions. Comparing the lognormal and loggamma models, we find that assuming a loggamma distribution for the response improves the model fit in almost all instances (except the GLM models) and across all three performance metrics. While the CRPS and NLL values are reasonably similar between the two tables, the median absolute error (MedAE) halves when moving from lognormal to loggamma. An intuitive explanation is that the lognormal models struggle more with fitting the extreme claims and thus tend to over-predict the middle-ranged values to compensate. The loggamma models, on the other hand, are more comfortable with the extremes, as those can be captured reasonably well by the long tail. These results confirm the need to consider alternative distributions beyond the Gaussian---one major motivation for our proposed extension of the LMMNN to the GLMMNet. Even within the Gaussian framework, as demonstrated by the lognormal models presented here, the GLMMNet can still be useful. Overall, as we have seen in the simulation experiments, the differences in model performance are relatively small due to the low signal-to-noise ratio. Only a limited number of risk factors were observed, and they are not necessarily the true drivers of the claims cost.
In the experiments above, a simpler structure like GLMM sometimes performs comparably with the more complex and theoretically more capable GLMMNet. That said, the results clearly demonstrate that the GLMMNet is a promising model in terms of predictive performance. We expect to see better results with less noisy data or a greater volume of data, e.g.~when more information becomes available as a policy progresses and claims develop, reducing the amount of noise in the system. This has been demonstrated in the simulation experiments (Section \ref{ssec:sim-results}). \subsection{GLMMNet: Decomposing Random Effects Per Individual Category} \label{ssec:practical-insights} One main advantage of mixed effects models in dealing with high-cardinality categorical features is the transparency they offer on the effects of individual categories. This is particularly important as it is often the case that the high-cardinality categorical feature is itself a key risk factor. For example, in this data set, there is a lot of heterogeneity in claims experience across different occupations, but the lack of data for some occupations makes it hard to determine how much trust can be given to the experience. The GLMMNet can provide useful insights in this regard. Figure \ref{fig:pr-1} shows the posterior predictions for the effects of individual occupations (ordered by decreasing $z$-scores), plotted with 95\% confidence intervals. Unsurprisingly, occupations with a larger number of observations generally have tighter intervals (i.e.~lower standard deviations); see Figure~\ref{fig:pr-2} of Appendix~\ref{app:case-study}. This prevents us from overtrusting extreme estimates, which may happen simply due to the small sample size. In the context of technical pricing, such an analysis can be helpful for identifying occupation classes that are statistically more or less risky. This is not possible with entity-embedded neural networks, which often perform no better. With the latter, we can analyse the embedding vectors to identify occupations that are ``similar'' to each other in the sense of producing similar output, but we cannot go any further. \begin{figure}[!htbp] \begin{center} \includegraphics[width=0.75\linewidth]{figure/case_study/posterior_estimates} \caption{Posterior predictions of (a randomly selected sample of) the random effects in 95\% confidence intervals from the loggamma GLMMNet; ordered by decreasing $z$-scores. Occupations that do not overlap with the zero line are highlighted in vermillion (if above zero) and blue (if below zero) respectively. Occupations that \textit{do} overlap with the zero line are in the shaded region. The \textit{x}-axis labels have been removed to preserve confidentiality.}\label{fig:pr-1} \end{center} \end{figure} \section{Conclusions} \label{sec:conclusion} High-cardinality categorical features are often encountered in real-world insurance applications. The high dimensionality creates a significant challenge for modelling. As discussed in Section~\ref{ssec:background}, the existing techniques prove inadequate in addressing this issue. Recent advancements in ML modelling hold promise for the development of more effective approaches. In this paper, we took inspiration from the latest developments in the ML literature and proposed a novel model architecture called GLMMNet that targets high-cardinality categorical features in insurance data.
The proposed GLMMNet fuses neural networks with GLMMs to enjoy both the predictive power of deep learning models and the transparency and statistical strength of GLMMs. GLMMNet is an extension of the LMMNN proposed by \citet{simchoni2022}. The LMMNN is limited to the case of a Gaussian response with an identity link function. GLMMNet, on the other hand, allows for the entire class of ED family distributions, which are better suited to the modelling of the skewed distributions that we see in insurance and financial data. Importantly, in making this extension, we have made several modifications to the original LMMNN architecture, including: \begin{arcitem} \item \emph{The removal of a non-linear transformation on the high-cardinality categorical features \(Z\)}. This is to ensure the interpretability of the random effect predictions, which carries practical significance (as discussed in Section~\ref{ssec:practical-insights}). \item \emph{The shift from optimising the exact likelihood to a variational inference approach}. As discussed in Section~\ref{ssec:glmmnet-training}, the integral in the likelihood function does not generally have an analytical solution (with only a few exceptions, the Gaussian case being one of them). The shift to a variational inference approach circumvents the complicated numerical approximations to the integral. This opens the door to a flexible range of alternative distributions for the response, including Bernoulli, Poisson, Gamma---any member of the ED family. This flexibility makes the GLMMNet widely applicable to many insurance contexts (and beyond). \end{arcitem} In our experiments, we found that the GLMMNet often performed better than, or at least comparably with, the benchmark model of an entity-embedded neural network, which is the most popular approach among actuarial researchers working with neural networks. Although both can be outperformed by boosting models in an environment of low signal-to-noise ratio, adding regularisation to the GLMMNet was sufficient to bring the model back to the top. The GLMMNet's outperformance over the entity-embedded neural network is particularly noteworthy, as it comes with the additional benefits of transparency on the random effects as well as the ability to produce probabilistic predictions (e.g.~uncertainty estimates). To sum up, the proposed GLMMNet has at least the following advantages over the existing approaches: \begin{arcitem} \item Improved predictive performance under most scenarios we have tested; \item Interpretability of the random effects; \item Flexibility with the form of the response distribution; \item Ability to handle large datasets (due to the computationally efficient implementation through variational inference). \end{arcitem} All these make GLMMNet a worthy addition to the actuaries' modelling toolbox for handling high-cardinality categorical features. \section*{Acknowledgements} This work was presented at the 2022 Australasian Actuarial Education and Research Symposium (AAERS) in November 2022 (Canberra, Australia). The authors are grateful for the constructive comments received from colleagues present at the event. This research was supported under Australian Research Council's Discovery Project DP200101859 funding scheme. Melantha Wang acknowledges financial support from UNSW Australia Business School. The views expressed herein are those of the authors and are not necessarily those of the supporting organisations. The authors declare that they have no conflicts of interest.
\section*{Data and Code} The code used in the numerical experiments, as well as the simulation datasets, is available at \url{https://github.com/agi-lab/glmmnet}. \bibliographystyle{elsarticle-harv}
\section{Introduction} Let $\mathbb{K}$ be the real scalar field $\mathbb{R}$ or the complex scalar field $\mathbb{C}$. As usual, $\ell _{\infty }=\left\{ (x_{n})_{n=1}^{\infty }\subset \mathbb{K}:(x_{n})_{n=1}^{\infty }\mbox{ bounded}\right\} $; for a positive integer $N$, $\ell _{\infty }^{N}$ denotes $\mathbb{K}^{N}$ with the $\sup $ norm. We also set $c_{0}=\left\{ \left( x_{n}\right) _{n=1}^{\infty }\subset \mathbb{K}:\lim x_{n}=0\right\} $, and $e_{j}$ represents the canonical vector of $c_{0}$ with $1$ in the $j$-th coordinate and $0$ elsewhere. Littlewood's $4/3$ inequality \cite{Litt}, proved in 1930, asserts that
\begin{equation*}
\left( \sum\limits_{i,j=1}^{\infty }\left\vert U(e_{i},e_{j})\right\vert ^{\frac{4}{3}}\right) ^{\frac{3}{4}}\leq \sqrt{2}\left\Vert U\right\Vert
\end{equation*}
for every continuous bilinear form $U:c_{0}\times c_{0}\rightarrow \mathbb{K}$ or, equivalently,
\begin{equation*}
\left( \sum\limits_{i,j=1}^{N}\left\vert U(e_{i},e_{j})\right\vert ^{\frac{4}{3}}\right) ^{\frac{3}{4}}\leq \sqrt{2}\left\Vert U\right\Vert
\end{equation*}
for every positive integer $N$ and all bilinear forms $U:\ell _{\infty }^{N}\times \ell _{\infty }^{N}\rightarrow \mathbb{K}$. It is well known that the exponent $4/3$ is optimal, and it was recently shown in \cite{diniz2} that the constant $\sqrt{2}$ is also optimal for real scalars. For complex scalars, the constant $\sqrt{2}$ can be improved to $2/\sqrt{\pi }$, although it seems to be unknown whether this value is optimal. The natural step further is to investigate sums
\begin{equation*}
\left( \sum\limits_{i_{1},\ldots ,i_{m}=1}^{N}\left\vert U(e_{i_{1}},\ldots ,e_{i_{m}})\right\vert ^{r}\right) ^{\frac{1}{r}}
\end{equation*}
for $m$-linear forms $U:\ell _{\infty }^{N}\times \cdots \times \ell _{\infty }^{N}\rightarrow \mathbb{K}$. The exponent $4/3$ needs to be increased to have a similar inequality for multilinear forms; this is what H.F. Bohnenblust and E. Hille discovered in 1931 (\cite{bh}; see also \cite{sur}).
More precisely, the Bohnenblust--Hille inequality asserts that for every positive integer $m$ there is a constant $C_{m}\geq 1$ so that
\begin{equation}
\left( \sum\limits_{i_{1},\ldots ,i_{m}=1}^{N}\left\vert U(e_{i_{1}},\ldots ,e_{i_{m}})\right\vert ^{\frac{2m}{m+1}}\right) ^{\frac{m+1}{2m}}\leq C_{m}\left\Vert U\right\Vert \label{lf}
\end{equation}
for all positive integers $N$ and all $m$-linear forms $U:\ell _{\infty }^{N}\times \cdots \times \ell _{\infty }^{N}\rightarrow \mathbb{K}$; moreover, the exponent $2m/\left( m+1\right) $ is sharp. Another natural question is:

\bigskip

\textit{Is it possible to obtain multilinear versions of Littlewood's $4/3$ inequality keeping the exponent $4/3$?}

\bigskip

This problem was treated in at least two recent papers (we state the results for complex scalars, but the case of real scalars is similar, with slightly different constants):

\begin{itemize}
\item (\cite{uni}) For all positive integers $N$ and all $m$-linear forms $U:\ell _{\infty }^{N}\times \cdots \times \ell _{\infty }^{N}\rightarrow \mathbb{C}$ we have
\begin{equation*}
\left( \sum\limits_{i,j=1}^{N}\left\vert U(e_{i},...,e_{i},e_{j},...,e_{j})\right\vert ^{\frac{4}{3}}\right) ^{\frac{3}{4}}\leq \frac{2}{\sqrt{\pi }}\left\Vert U\right\Vert \text{.}
\end{equation*}

\item (\cite{arapell}) For all positive integers $N$ and all $m$-linear forms $U:\ell _{\infty }^{N}\times \cdots \times \ell _{\infty }^{N}\rightarrow \mathbb{C}$ we have
\begin{equation*}
\left( \sum\limits_{i_{1},\ldots ,i_{m}=1}^{N}\left\vert U(e_{i_{1}},\ldots ,e_{i_{m}})\right\vert ^{\frac{4}{3}}\right) ^{\frac{3}{4}}\leq \left( \prod\limits_{j=2}^{m}\Gamma \left( 2-\frac{1}{j}\right) ^{\frac{j}{2-2j}}\right) N^{\frac{m-2}{4}}\left\Vert U\right\Vert
\end{equation*}
and the exponent $\frac{m-2}{4}$ is optimal.
\end{itemize}

In this paper we investigate this problem from a different point of view. More precisely, as a consequence of our main result we show that for all positive integers $m\geq 3$ and bijections $\sigma _{1},\ldots ,\sigma _{m-2}$ from $\mathbb{N}\times \mathbb{N}$ to $\mathbb{N}$ we have
\begin{equation*}
\left( \sum_{i,j=1}^{\infty }\left\vert U\left( e_{i},e_{j},e_{\sigma _{1}(i,j)},\ldots ,e_{\sigma _{m-2}(i,j)}\right) \right\vert ^{\frac{4}{3}}\right) ^{\frac{3}{4}}\leq \sqrt{2}\Vert U\Vert
\end{equation*}
for every continuous $m$-linear form $U:c_{0}\times \cdots \times c_{0}\rightarrow \mathbb{K}$. We prefer to begin with the theory of multiple summing operators and state our main result in this context; the above result (among others) will then be a simple consequence of the main result.

\section{Multiple summing operators}

Let $E,E_{1},...,E_{m}$ and $F$ denote Banach spaces over $\mathbb{K}$ and let $B_{E^{\ast }}$ denote the closed unit ball of the topological dual of $E$. If $1\leq q\leq \infty $, by $q^{\ast }$ we represent the conjugate of $q$. For $p\geq 1$, by $\ell _{p}(E)$ we mean the space of absolutely $p$-summable sequences in $E$; also $\ell _{p}^{w}(E)$ denotes the linear space of the sequences $\left( x_{j}\right) _{j=1}^{\infty }$ in $E$ such that $\left( \varphi \left( x_{j}\right) \right) _{j=1}^{\infty }\in \ell _{p}$ for every continuous linear functional $\varphi :E\rightarrow \mathbb{K}$. The function
\begin{equation*}
\left\Vert \left( x_{j}\right) _{j=1}^{\infty }\right\Vert _{w,p}=\sup_{\varphi \in B_{E^{\ast }}}\left\Vert \left( \varphi \left( x_{j}\right) \right) _{j=1}^{\infty }\right\Vert _{p}
\end{equation*}
defines a norm on $\ell _{p}^{w}(E)$.
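For later use, we note a standard computation: identifying $c_{0}^{\ast }$ with $\ell _{1}$, the sequence of canonical vectors $(e_{j})_{j=1}^{\infty }$ of $c_{0}$ is weakly $1$-summable with
\begin{equation*}
\left\Vert (e_{j})_{j=1}^{\infty }\right\Vert _{w,1}=\sup_{\varphi \in B_{\ell _{1}}}\sum_{j=1}^{\infty }\left\vert \varphi (e_{j})\right\vert =\sup_{\varphi \in B_{\ell _{1}}}\sum_{j=1}^{\infty }\left\vert \varphi _{j}\right\vert =1.
\end{equation*}
The same computation applies to any re-indexing of the canonical vectors, a fact used repeatedly in the applications below.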
The space of all continuous $m$-linear operators $T:E_{1}\times \cdots \times E_{m}\rightarrow F$, with the $\sup $ norm, is denoted by $\mathcal{L}\left( E_{1},...,E_{m};F\right) $. The notion of multiple summing operators, introduced independently by Matos and P\'{e}rez-Garc\'{\i}a (\cite{matos, per}), is a natural extension of the classical notion of absolutely summing linear operators (see \cite{Di}), and it is certainly one of the most fruitful such extensions (see \cite{popa2, popa, popa3} for recent papers). For different approaches we mention, for instance, \cite{b, dimant, comparing, PeRuSa, RuSa}.

\begin{definition}
Let $1\leq q_{1},...,q_{m}\leq p<\infty $. A multilinear operator $T\in \mathcal{L}\left( E_{1},...,E_{m};F\right) $ is multiple $\left( p;q_{1},...,q_{m}\right) $--summing if there exists a $C>0$ such that
\begin{equation*}
\left( \sum_{j_{1},...,j_{m}=1}^{\infty }\left\Vert T\left( x_{j_{1}}^{(1)},...,x_{j_{m}}^{(m)}\right) \right\Vert ^{p}\right) ^{\frac{1}{p}}\leq C\prod_{k=1}^{m}\left\Vert \left( x_{j_{k}}^{(k)}\right) _{j_{k}=1}^{\infty }\right\Vert _{w,q_{k}}
\end{equation*}
for all $\left( x_{j}^{(k)}\right) _{j=1}^{\infty }\in \ell _{q_{k}}^{w}\left( E_{k}\right) $, $k\in \{1,...,m\}$. We represent the class of all multiple $\left( p;q_{1},...,q_{m}\right) $--summing operators from $E_{1},...,E_{m}$ to $F$ by $\Pi _{\mathrm{mult}\left( p;q_{1},...,q_{m}\right) }\left( E_{1},...,E_{m};F\right) $, and $\pi _{\mathrm{mult}\left( p;q_{1},...,q_{m}\right) }\left( T\right) $ denotes the infimum over all $C$ as above.
\end{definition}

The main result of this section is the following theorem. Its proof is inspired by arguments from \cite{Ar, b}.

\begin{theorem}
\label{main}\label{iu} Let $n>m\geq 1$ be positive integers and $E_{1},...,E_{n},F$ Banach spaces. If
\begin{equation}
\Pi _{\mathrm{mult}\left( p;q_{1},...,q_{m}\right) }\left( E_{1},...,E_{m};F\right) =\mathcal{L}\left( E_{1},...,E_{m};F\right) , \label{hyp}
\end{equation}
then there is a constant $C>0$ (not depending on $n$) such that
\begin{equation*}
\begin{array}{l}
\displaystyle\left( \sum_{i_{1},...,i_{m}=1}^{\infty }\left\Vert U(x_{i_{1}}^{(1)},...,x_{i_{m}}^{(m)},x_{i_{1}\cdots i_{m}}^{(m+1)},...,x_{i_{1}\cdots i_{m}}^{(n)})\right\Vert ^{p}\right) ^{\frac{1}{p}}\vspace{0.2cm} \\
\textstyle\qquad \leq C\Vert U\Vert \prod_{k=1}^{m}\left\Vert \left( x_{i}^{(k)}\right) _{i=1}^{\infty }\right\Vert _{w,q_{k}}\prod_{k=m+1}^{n}\left\Vert \left( x_{i_{1}\cdots i_{m}}^{(k)}\right) _{i_{1},...,i_{m}=1}^{\infty }\right\Vert _{w,1}
\end{array}
\end{equation*}
for all continuous $n$-linear operators $U:E_{1}\times \cdots \times E_{n}\rightarrow F$.
\end{theorem}

\begin{proof}
The case $m=1$ is known (see \cite[Corollary 3.3]{b}). For $m\geq 2$ let us proceed by induction on $n$. First we will show that the result holds for $n=m+1$. Let $N$ be a positive integer and $x_{i_{1}\cdots i_{m}}^{(m+1)}\in E_{m+1}$. By the Hahn--Banach theorem we can choose norm one functionals $\varphi _{i_{1}\cdots i_{m}}$ such that
\begin{equation*}
\left\Vert U(x_{i_{1}}^{(1)},...,x_{i_{m}}^{(m)},x_{i_{1}\cdots i_{m}}^{(m+1)})\right\Vert =\varphi _{i_{1}\cdots i_{m}}\left( U(x_{i_{1}}^{(1)},...,x_{i_{m}}^{(m)},x_{i_{1}\cdots i_{m}}^{(m+1)})\right)
\end{equation*}
for all $i_{1},...,i_{m}=1,\ldots ,N$.
A duality argument gives us non-negative real numbers $\alpha _{i_{1}\cdots i_{m}}$ such that
\begin{equation*}
\sum_{i_{1},...,i_{m}=1}^{N}\alpha _{i_{1}\cdots i_{m}}^{p^{\ast }}=1,
\end{equation*}
where $p^{\ast }$ is the conjugate number of $p$, i.e., $\frac{1}{p}+\frac{1}{p^{\ast }}=1$, and
\begin{equation*}
\begin{array}{l}
\displaystyle\left( \sum_{i_{1},...,i_{m}=1}^{N}\left\Vert U(x_{i_{1}}^{(1)},...,x_{i_{m}}^{(m)},x_{i_{1}\cdots i_{m}}^{(m+1)})\right\Vert ^{p}\right) ^{\frac{1}{p}} \\
\displaystyle=\sum_{i_{1},...,i_{m}=1}^{N}\alpha _{i_{1}\cdots i_{m}}\left\Vert U(x_{i_{1}}^{(1)},...,x_{i_{m}}^{(m)},x_{i_{1}\cdots i_{m}}^{(m+1)})\right\Vert \\
\displaystyle=\sum_{i_{1},...,i_{m}=1}^{N}\alpha _{i_{1}\cdots i_{m}}\varphi _{i_{1}\cdots i_{m}}\left( U(x_{i_{1}}^{(1)},...,x_{i_{m}}^{(m)},x_{i_{1}\cdots i_{m}}^{(m+1)})\right) .
\end{array}
\end{equation*}
Let $r_{j_{1}\cdots j_{m}}$ be the Rademacher functions indexed on $\mathbb{N}\times \cdots \times \mathbb{N}$ (the order is not important). We have
\begin{align*}
& \int_{0}^{1}\sum_{i_{1},...,i_{m}=1}^{N}r_{i_{1}\cdots i_{m}}(t)\alpha _{i_{1}\cdots i_{m}}\varphi _{i_{1}\cdots i_{m}}\left( U\Big(x_{i_{1}}^{(1)},...,x_{i_{m}}^{(m)},\sum_{j_{1},...,j_{m}=1}^{N}r_{j_{1}\cdots j_{m}}(t)x_{j_{1}\cdots j_{m}}^{(m+1)}\Big)\right) \,dt \\
& =\sum_{i_{1},...,i_{m}=1}^{N}\sum_{j_{1},...,j_{m}=1}^{N}\alpha _{i_{1}\cdots i_{m}}\varphi _{i_{1}\cdots i_{m}}\left( U(x_{i_{1}}^{(1)},...,x_{i_{m}}^{(m)},x_{j_{1}\cdots j_{m}}^{(m+1)})\right) \int_{0}^{1}r_{i_{1}\cdots i_{m}}(t)r_{j_{1}\cdots j_{m}}(t)\,dt \\
& =\sum_{i_{1},...,i_{m}=1}^{N}\alpha _{i_{1}\cdots i_{m}}\varphi _{i_{1}\cdots i_{m}}\left( U(x_{i_{1}}^{(1)},...,x_{i_{m}}^{(m)},x_{i_{1}\cdots i_{m}}^{(m+1)})\right) \\
& =\left( \sum_{i_{1},...,i_{m}=1}^{N}\left\Vert U(x_{i_{1}}^{(1)},...,x_{i_{m}}^{(m)},x_{i_{1}\cdots i_{m}}^{(m+1)})\right\Vert ^{p}\right) ^{\frac{1}{p}}.
\end{align*}
Hence
\begin{align*}
& \left( \sum_{i_{1},...,i_{m}=1}^{N}\left\Vert U(x_{i_{1}}^{(1)},...,x_{i_{m}}^{(m)},x_{i_{1}\cdots i_{m}}^{(m+1)})\right\Vert ^{p}\right) ^{\frac{1}{p}} \\
& \leq \int_{0}^{1}\left\vert \sum_{i_{1},...,i_{m}=1}^{N}r_{i_{1}\cdots i_{m}}(t)\alpha _{i_{1}\cdots i_{m}}\varphi _{i_{1}\cdots i_{m}}\Big(U\Big(x_{i_{1}}^{(1)},...,x_{i_{m}}^{(m)},\sum_{j_{1},...,j_{m}=1}^{N}r_{j_{1}\cdots j_{m}}(t)x_{j_{1}\cdots j_{m}}^{(m+1)}\Big)\Big)\right\vert \,dt \\
& \leq \sup_{t\in \lbrack 0,1]}\left\vert \sum_{i_{1},...,i_{m}=1}^{N}r_{i_{1}\cdots i_{m}}(t)\alpha _{i_{1}\cdots i_{m}}\varphi _{i_{1}\cdots i_{m}}\Big(U\Big(x_{i_{1}}^{(1)},...,x_{i_{m}}^{(m)},\sum_{j_{1},...,j_{m}=1}^{N}r_{j_{1}\cdots j_{m}}(t)x_{j_{1}\cdots j_{m}}^{(m+1)}\Big)\Big)\right\vert \\
& \leq \sup_{t\in \lbrack 0,1]}\sum_{i_{1},...,i_{m}=1}^{N}\alpha _{i_{1}\cdots i_{m}}\Big\Vert U\Big(x_{i_{1}}^{(1)},...,x_{i_{m}}^{(m)},\sum_{j_{1},...,j_{m}=1}^{N}r_{j_{1}\cdots j_{m}}(t)x_{j_{1}\cdots j_{m}}^{(m+1)}\Big)\Big\Vert \\
& \leq \left( \sum_{i_{1},...,i_{m}=1}^{N}\alpha _{i_{1}\cdots i_{m}}^{p^{\ast }}\right) ^{\frac{1}{p^{\ast }}}\cdot \sup_{t\in \lbrack 0,1]}\left( \sum_{i_{1},...,i_{m}=1}^{N}\Big\Vert U\Big(x_{i_{1}}^{(1)},...,x_{i_{m}}^{(m)},\sum_{j_{1},...,j_{m}=1}^{N}r_{j_{1}\cdots j_{m}}(t)x_{j_{1}\cdots j_{m}}^{(m+1)}\Big)\Big\Vert ^{p}\right) ^{\frac{1}{p}} \\
& \leq \sup_{t\in \lbrack 0,1]}\pi _{(p;q_{1},...,q_{m})}\Big(U\Big(\cdot ,...,\cdot ,\sum_{j_{1},...,j_{m}=1}^{N}r_{j_{1}\cdots j_{m}}(t)x_{j_{1}\cdots j_{m}}^{(m+1)}\Big)\Big)\prod_{k=1}^{m}\left\Vert \left( x_{i}^{(k)}\right) _{i=1}^{N}\right\Vert _{w,q_{k}},
\end{align*}
where in the last inequality we have used \eqref{hyp}. From \eqref{hyp} and the Open Mapping Theorem it follows that there is a constant $C>0$ such that $\pi _{(p;q_{1},...,q_{m})}(\ \cdot \ )\leq C\Vert \cdot \Vert $. Then
\begin{equation*}
\begin{array}{l}
\displaystyle\left( \sum_{i_{1},...,i_{m}=1}^{N}\left\Vert U(x_{i_{1}}^{(1)},...,x_{i_{m}}^{(m)},x_{i_{1}\cdots i_{m}}^{(m+1)})\right\Vert ^{p}\right) ^{\frac{1}{p}} \\
\displaystyle\leq \sup_{t\in \lbrack 0,1]}C\left\Vert U\Big(\cdot ,...,\cdot ,\sum_{j_{1},...,j_{m}=1}^{N}r_{j_{1}\cdots j_{m}}(t)x_{j_{1}\cdots j_{m}}^{(m+1)}\Big)\right\Vert \prod_{k=1}^{m}\left\Vert \left( x_{i}^{(k)}\right) _{i=1}^{N}\right\Vert _{w,q_{k}} \\
\displaystyle\leq C\Vert U\Vert \sup_{t\in \lbrack 0,1]}\left\Vert \sum_{i_{1},...,i_{m}=1}^{N}r_{i_{1}\cdots i_{m}}(t)x_{i_{1}\cdots i_{m}}^{(m+1)}\right\Vert \prod_{k=1}^{m}\left\Vert \left( x_{i}^{(k)}\right) _{i=1}^{N}\right\Vert _{w,q_{k}} \\
\displaystyle\leq C\Vert U\Vert \left( \prod_{k=1}^{m}\left\Vert \left( x_{i}^{(k)}\right) _{i=1}^{N}\right\Vert _{w,q_{k}}\right) \left\Vert \left( x_{i_{1}\cdots i_{m}}^{(m+1)}\right) _{i_{1},...,i_{m}=1}^{N}\right\Vert _{w,1}.
\end{array}
\end{equation*}
The proof is completed by an induction argument, as follows. Suppose that the result is valid for a positive integer $n\geq m+1$. Let $N$ be a positive integer and $E_{n+1}$ a Banach space. Let $x_{i_{1}\cdots i_{m}}^{(n+1)}\in E_{n+1}$ and choose norm one functionals $\varphi _{i_{1}\cdots i_{m}}$ such that
\begin{equation*}
\begin{array}{l}
\displaystyle\left\Vert U(x_{i_{1}}^{(1)},...,x_{i_{m}}^{(m)},x_{i_{1}\cdots i_{m}}^{(m+1)},\ldots ,x_{i_{1}\cdots i_{m}}^{(n+1)})\right\Vert \\
\displaystyle=\varphi _{i_{1}\cdots i_{m}}\left( U(x_{i_{1}}^{(1)},...,x_{i_{m}}^{(m)},x_{i_{1}\cdots i_{m}}^{(m+1)},\ldots ,x_{i_{1}\cdots i_{m}}^{(n+1)})\right)
\end{array}
\end{equation*}
for all $i_{1},...,i_{m}=1,\ldots ,N$.
A duality argument gives us non-negative real numbers $\alpha _{i_{1}\cdots i_{m}}$ such that
\begin{equation*}
\sum_{i_{1},...,i_{m}=1}^{N}\alpha _{i_{1}\cdots i_{m}}^{p^{\ast }}=1
\end{equation*}
and
\begin{align*}
& \left( \sum_{i_{1},...,i_{m}=1}^{N}\left\Vert U(x_{i_{1}}^{(1)},...,x_{i_{m}}^{(m)},x_{i_{1}\cdots i_{m}}^{(m+1)},\ldots ,x_{i_{1}\cdots i_{m}}^{(n+1)})\right\Vert ^{p}\right) ^{\frac{1}{p}} \\
& =\sum_{i_{1},...,i_{m}=1}^{N}\alpha _{i_{1}\cdots i_{m}}\left\Vert U(x_{i_{1}}^{(1)},...,x_{i_{m}}^{(m)},x_{i_{1}\cdots i_{m}}^{(m+1)},\ldots ,x_{i_{1}\cdots i_{m}}^{(n+1)})\right\Vert \\
& =\sum_{i_{1},...,i_{m}=1}^{N}\alpha _{i_{1}\cdots i_{m}}\varphi _{i_{1}\cdots i_{m}}\left( U(x_{i_{1}}^{(1)},...,x_{i_{m}}^{(m)},x_{i_{1}\cdots i_{m}}^{(m+1)},\ldots ,x_{i_{1}\cdots i_{m}}^{(n+1)})\right) .
\end{align*}
We also have
\begin{align*}
& \int_{0}^{1}\sum_{i_{1},...,i_{m}=1}^{N}r_{i_{1}\cdots i_{m}}(t)\alpha _{i_{1}\cdots i_{m}} \\
& \quad \times \varphi _{i_{1}\cdots i_{m}}\left( U\Big(x_{i_{1}}^{(1)},...,x_{i_{m}}^{(m)},x_{i_{1}\cdots i_{m}}^{(m+1)},\ldots ,x_{i_{1}\cdots i_{m}}^{(n)},\sum_{j_{1},...,j_{m}=1}^{N}r_{j_{1}\cdots j_{m}}(t)x_{j_{1}\cdots j_{m}}^{(n+1)}\Big)\right) \,dt \\
& =\sum_{i_{1},...,i_{m}=1}^{N}\sum_{j_{1},...,j_{m}=1}^{N}\alpha _{i_{1}\cdots i_{m}}\varphi _{i_{1}\cdots i_{m}}\left( U(x_{i_{1}}^{(1)},...,x_{i_{m}}^{(m)},x_{i_{1}\cdots i_{m}}^{(m+1)},\ldots ,x_{i_{1}\cdots i_{m}}^{(n)},x_{j_{1}\cdots j_{m}}^{(n+1)})\right) \\
& \quad \times \int_{0}^{1}r_{i_{1}\cdots i_{m}}(t)r_{j_{1}\cdots j_{m}}(t)\,dt \\
& =\sum_{i_{1},...,i_{m}=1}^{N}\alpha _{i_{1}\cdots i_{m}}\varphi _{i_{1}\cdots i_{m}}\left( U(x_{i_{1}}^{(1)},...,x_{i_{m}}^{(m)},x_{i_{1}\cdots i_{m}}^{(m+1)},\ldots ,x_{i_{1}\cdots i_{m}}^{(n+1)})\right) \\
& =\left( \sum_{i_{1},...,i_{m}=1}^{N}\left\Vert U(x_{i_{1}}^{(1)},...,x_{i_{m}}^{(m)},x_{i_{1}\cdots i_{m}}^{(m+1)},\ldots ,x_{i_{1}\cdots i_{m}}^{(n+1)})\right\Vert ^{p}\right) ^{\frac{1}{p}}.
\end{align*}
Hence, using the induction hypothesis,
\begin{align*}
& \left( \sum_{i_{1},...,i_{m}=1}^{N}\left\Vert U(x_{i_{1}}^{(1)},...,x_{i_{m}}^{(m)},x_{i_{1}\cdots i_{m}}^{(m+1)},\ldots ,x_{i_{1}\cdots i_{m}}^{(n+1)})\right\Vert ^{p}\right) ^{\frac{1}{p}} \\
& \leq \int_{0}^{1}\left\vert \sum_{i_{1},...,i_{m}=1}^{N}r_{i_{1}\cdots i_{m}}(t)\alpha _{i_{1}\cdots i_{m}}\right. \\
& \quad \left. \times \varphi _{i_{1}\cdots i_{m}}\left( U\Big(x_{i_{1}}^{(1)},...,x_{i_{m}}^{(m)},x_{i_{1}\cdots i_{m}}^{(m+1)},\ldots ,x_{i_{1}\cdots i_{m}}^{(n)},\sum_{j_{1},...,j_{m}=1}^{N}r_{j_{1}\cdots j_{m}}(t)x_{j_{1}\cdots j_{m}}^{(n+1)}\Big)\right) \right\vert \,dt \\
& \leq \sup_{t\in \lbrack 0,1]}\sum_{i_{1},...,i_{m}=1}^{N}\alpha _{i_{1}\cdots i_{m}}\Big\Vert U\Big(x_{i_{1}}^{(1)},...,x_{i_{m}}^{(m)},x_{i_{1}\cdots i_{m}}^{(m+1)},\ldots ,x_{i_{1}\cdots i_{m}}^{(n)},\sum_{j_{1},...,j_{m}=1}^{N}r_{j_{1}\cdots j_{m}}(t)x_{j_{1}\cdots j_{m}}^{(n+1)}\Big)\Big\Vert \\
& \leq \left( \sum_{i_{1},...,i_{m}=1}^{N}\alpha _{i_{1}\cdots i_{m}}^{p^{\ast }}\right) ^{\frac{1}{p^{\ast }}}\cdot \sup_{t\in \lbrack 0,1]}\left( \sum_{i_{1},...,i_{m}=1}^{N}\Big\Vert U\Big(x_{i_{1}}^{(1)},...,x_{i_{m}}^{(m)},x_{i_{1}\cdots i_{m}}^{(m+1)},\ldots ,x_{i_{1}\cdots i_{m}}^{(n)},\right. \\
& \qquad \left.
\sum_{j_{1},...,j_{m}=1}^{N}r_{j_{1}\cdots j_{m}}(t)x_{j_{1}\cdots j_{m}}^{(n+1)}\Big)\Big\Vert ^{p}\right) ^{\frac{1}{p}} \\
& \leq \sup_{t\in \lbrack 0,1]}C\Big\Vert U\Big(\cdot ,...,\cdot ,\sum_{j_{1},...,j_{m}=1}^{N}r_{j_{1}\cdots j_{m}}(t)x_{j_{1}\cdots j_{m}}^{(n+1)}\Big)\Big\Vert \\
& \qquad \times \left( \prod_{k=1}^{m}\Big\Vert \Big(x_{i}^{(k)}\Big)_{i=1}^{N}\Big\Vert _{w,q_{k}}\right) \left( \prod_{k=m+1}^{n}\Big\Vert \Big(x_{i_{1}\cdots i_{m}}^{(k)}\Big)_{i_{1},...,i_{m}=1}^{N}\Big\Vert _{w,1}\right) \\
& \leq C\left\Vert U\right\Vert \left( \prod_{k=1}^{m}\left\Vert \left( x_{i}^{(k)}\right) _{i=1}^{N}\right\Vert _{w,q_{k}}\right) \left( \prod_{k=m+1}^{n+1}\left\Vert \left( x_{i_{1}\cdots i_{m}}^{(k)}\right) _{i_{1},...,i_{m}=1}^{N}\right\Vert _{w,1}\right) .
\end{align*}
Now we just let $N\rightarrow \infty $.
\end{proof}

\begin{example}
If $F$ is a Banach space with cotype $2$, it is well known that $\Pi _{\mathrm{mult}\left( 2;2,...,2\right) }\left( {}^{m}c_{0};F\right) =\mathcal{L}\left( {}^{m}c_{0};F\right) $. From the above theorem we conclude that
\begin{equation*}
\begin{array}{l}
\displaystyle\left( \sum_{i_{1},...,i_{m}=1}^{\infty }\left\Vert U(x_{i_{1}}^{(1)},...,x_{i_{m}}^{(m)},x_{i_{1}\cdots i_{m}}^{(m+1)},...,x_{i_{1}\cdots i_{m}}^{(n)})\right\Vert ^{2}\right) ^{\frac{1}{2}}\vspace{0.2cm} \\
\textstyle\qquad \leq C\Vert U\Vert \prod_{k=1}^{m}\left\Vert \left( x_{i}^{(k)}\right) _{i=1}^{\infty }\right\Vert _{w,2}\prod_{k=m+1}^{n}\left\Vert \left( x_{i_{1}\cdots i_{m}}^{(k)}\right) _{i_{1},...,i_{m}=1}^{\infty }\right\Vert _{w,1}
\end{array}
\end{equation*}
regardless of the Banach space $E$ and regardless of the $n$-linear operator $U:c_{0}\times \overset{m\text{ times}}{\cdots }\times c_{0}\times E\times \cdots \times E\rightarrow F$.
\end{example}

\begin{remark}
The constant $C$ that appears in the above theorem can be chosen as the constant from the Open Mapping Theorem used in the coincidence (\ref{hyp}).
\end{remark}

\begin{remark}
The case $F=\mathbb{K}$ and $m=1$ with $q=p=1$ recovers the Defant--Voigt Theorem (see \cite{alencar}).
\end{remark}

\begin{remark}
Theorem \ref{iu} is in some sense optimal. In fact, it was recently proved in \cite{aaa} that the Defant--Voigt Theorem is optimal in the following sense: every continuous $m$-linear form is absolutely $(1;1,...,1)$-summing, and this result cannot be improved to $(p;1,...,1)$-summing with $p<1$.
\end{remark}

\section{Some applications}

In this section we show how the result proved in the previous section is connected to the problem stated in the introduction of this note.

\subsection{Variations of Littlewood's $4/3$ theorem and the Bohnenblust--Hille inequality}

We begin by proving the result stated in the Introduction:

\begin{theorem}
\label{89} Let $m\geq 3$ be an integer and $\sigma _{1},\ldots ,\sigma _{m-2}$ be bijections from $\mathbb{N}\times \mathbb{N}$ to $\mathbb{N}$. Then
\begin{equation}
\left( \sum_{i,j=1}^{\infty }\left\vert U\left( e_{i},e_{j},e_{\sigma _{1}(i,j)},\ldots ,e_{\sigma _{m-2}(i,j)}\right) \right\vert ^{\frac{4}{3}}\right) ^{\frac{3}{4}}\leq \sqrt{2}\Vert U\Vert \label{estr}
\end{equation}
for all continuous $m$-linear forms $U:c_{0}\times \cdots \times c_{0}\rightarrow \mathbb{K}$.
\end{theorem}

\begin{proof}
From Littlewood's $4/3$ theorem we know that $\Pi _{\mathrm{mult}\left( 4/3;1,1\right) }\left( {}^{2}c_{0};\mathbb{K}\right) =\mathcal{L}\left( {}^{2}c_{0};\mathbb{K}\right) $ and the constant involved is $\sqrt{2}$ (or $2/\sqrt{\pi }$ for complex scalars). By choosing $x_{ij}^{(k)}=e_{\sigma _{k}(i,j)}$, since $\left\Vert \left( e_{\sigma _{k}(i,j)}\right) _{i,j=1}^{\infty }\right\Vert _{w,1}=1$, the proof is done.
\end{proof}

The same argument of the previous theorem can be used to prove the following more general result:

\begin{theorem}
Let $n>m\geq 1$ be positive integers and $\sigma _{k}:\mathbb{N}^{m}\rightarrow \mathbb{N}$ be bijections for all $k=1,...,n-m$. Then there is a constant $L_{m}^{\mathbb{K}}\geq 1$ such that
\begin{equation*}
\left( \sum\limits_{i_{1},...,i_{m}=1}^{\infty }\left\vert U(e_{i_{1}},...,e_{i_{m}},e_{\sigma _{1}(i_{1},...,i_{m})},\ldots ,e_{\sigma _{n-m}(i_{1},...,i_{m})})\right\vert ^{\frac{2m}{m+1}}\right) ^{\frac{m+1}{2m}}\leq L_{m}^{\mathbb{K}}\left\Vert U\right\Vert
\end{equation*}
for every bounded $n$-linear form $U:c_{0}\times \cdots \times c_{0}\rightarrow \mathbb{K}$.
\end{theorem}

\begin{remark}
As a matter of fact, the constants $L_{m}^{\mathbb{K}}$ can be estimated. From the proof of Theorem \ref{iu} it is simple to see that $L_{m}^{\mathbb{K}}$ can be chosen as the best known constants of the Bohnenblust--Hille inequality. So, using the estimates of \cite{bohr}, we know that
\begin{equation}
\begin{array}{llll}
L_{m}^{\mathbb{C}} & \leq & \displaystyle\prod\limits_{j=2}^{m}\Gamma \left( 2-\frac{1}{j}\right) ^{\frac{j}{2-2j}}, & \vspace{0.2cm} \\
L_{m}^{\mathbb{R}} & \leq & 2^{\frac{446381}{55440}-\frac{m}{2}}\displaystyle\prod\limits_{j=14}^{m}\left( \frac{\Gamma \left( \frac{3}{2}-\frac{1}{j}\right) }{\sqrt{\pi }}\right) ^{\frac{j}{2-2j}}, & \text{for }m\geq 14,\vspace{0.2cm} \\
L_{m}^{\mathbb{R}} & \leq & \displaystyle\left( \sqrt{2}\right) ^{\sum_{j=1}^{m-1}\frac{1}{j}}, & \text{for }2\leq m\leq 13.
\end{array}
\label{bkecbh}
\end{equation}
\end{remark}

The above estimates can be rewritten as (see \cite{bohr})
\begin{equation*}
\begin{array}{rcl}
L_{m}^{\mathbb{C}} & < & \displaystyle m^{0.21139},\vspace{0.2cm} \\
L_{m}^{\mathbb{R}} & < & 1.3\times m^{0.36482}.
\end{array}
\end{equation*}

The extension of the Bohnenblust--Hille inequality to $\ell _{p}$ spaces in the place of $\ell _{\infty }$ spaces is divided into two cases: $m<p\leq 2m$ and $p\geq 2m$. The case $p\geq 2m$, sometimes called the Hardy--Littlewood/Praciano-Pereira inequality (see \cite{hardy,pra}), states that there exists an (optimal) constant $C_{m,p}^{\mathbb{K}}\geq 1$ such that, for all positive integers $N$ and all $m$-linear forms $T:\ell _{p}^{N}\times \cdots \times \ell _{p}^{N}\rightarrow \mathbb{K}$,
\begin{equation}
\left( \sum_{i_{1},\ldots ,i_{m}=1}^{N}\left\vert T(e_{i_{1}},\ldots ,e_{i_{m}})\right\vert ^{\frac{2mp}{mp+p-2m}}\right) ^{\frac{mp+p-2m}{2mp}}\leq C_{m,p}^{\mathbb{K}}\left\Vert T\right\Vert . \label{i99}
\end{equation}

\section{Final remark}

When $m<p\leq 2m$ the Hardy--Littlewood inequality is also known as the Hardy--Littlewood/Dimant--Sevilla-Peris inequality (\cite{dimant2,hardy}).
It reads as follows:

\begin{theorem}[Hardy--Littlewood/Dimant--Sevilla-Peris]
\label{yytt} For $m<p\leq 2m$, there is a constant $C_{\mathbb{K},m,p}\geq 1$ such that
\begin{equation*}
\left( \sum_{i_{1},...,i_{m}=1}^{N}\left\vert T(e_{i_{1}},\ldots ,e_{i_{m}})\right\vert ^{\frac{p}{p-m}}\right) ^{\frac{p-m}{p}}\leq C_{\mathbb{K},m,p}\left\Vert T\right\Vert
\end{equation*}
for all positive integers $N$ and all $m$-linear forms $T:\ell _{p}^{N}\times \cdots \times \ell _{p}^{N}\rightarrow \mathbb{K}$. Moreover, the exponent $\frac{p}{p-m}$ is optimal.
\end{theorem}

In this case we can prove the following (optimal) result, which does not depend on the results developed in the previous sections:

\begin{proposition}
\label{90} Let $m>n\geq 1$ be positive integers, let $m<p\leq 2m$, and let $\sigma _{k}:\mathbb{N}^{n}\rightarrow \mathbb{N}$ be bijections for all $k=1,\ldots ,m-n$. Then there is a constant $C_{\mathbb{K},m,n,p}\geq 1$ such that
\begin{equation*}
\left( \sum\limits_{i_{1},...,i_{n}=1}^{N}\left\vert U(e_{i_{1}},...,e_{i_{n}},e_{\sigma _{1}(i_{1},...,i_{n})},\ldots ,e_{\sigma _{m-n}(i_{1},...,i_{n})})\right\vert ^{\frac{p}{p-m}}\right) ^{\frac{p-m}{p}}\leq C_{\mathbb{K},m,n,p}\left\Vert U\right\Vert
\end{equation*}
for all positive integers $N\geq 1$ and all continuous $m$-linear forms $U:\ell _{p}^{N}\times \cdots \times \ell _{p}^{N}\rightarrow \mathbb{K}$. Moreover, the exponent $p/(p-m)$ is optimal.
\end{proposition}

\begin{proof}
For the sake of simplicity we suppose $n=2$. The general case is similar. Note that, using Theorem \ref{yytt}, we have
\begin{align*}
& \left( \sum_{i,j=1}^{N}\left\vert U(e_{i},e_{j},e_{\sigma _{1}(i,j)},\ldots ,e_{\sigma _{m-2}(i,j)})\right\vert ^{\frac{p}{p-m}}\right) ^{\frac{p-m}{p}} \\
& \leq \left( \sum_{i_{1},...,i_{m}=1}^{N}\left\vert U(e_{i_{1}},\ldots ,e_{i_{m}})\right\vert ^{\frac{p}{p-m}}\right) ^{\frac{p-m}{p}}\leq C_{\mathbb{K},m,p}\left\Vert U\right\Vert .
\end{align*}
The optimality of the exponent $\frac{p}{p-m}$ is proved using the same argument as in the proof of the Hardy--Littlewood/Dimant--Sevilla-Peris theorem (see \cite{dimant2,hardy}). Consider the $m$-linear form
\begin{equation*}
U:\ell _{p}\times \cdots \times \ell _{p}\rightarrow \mathbb{K}
\end{equation*}
given by
\begin{equation*}
U(x^{(1)},...,x^{(m)})=\textstyle\sum\limits_{i=1}^{N}x_{i}^{(1)}x_{i}^{(2)}x_{\sigma _{1}(i,i)}^{(3)}\cdots x_{\sigma _{m-2}(i,i)}^{(m)}.
\end{equation*}
From H\"{o}lder's inequality we have
\begin{equation*}
\left\Vert U\right\Vert \leq N^{\frac{p-m}{p}}.
\end{equation*}
If the theorem is valid for a power $s$, then
\begin{align*}
& \left( \sum_{i=1}^{N}\left\vert U\left( e_{i},e_{i},e_{\sigma _{1}(i,i)},\ldots ,e_{\sigma _{m-2}(i,i)}\right) \right\vert ^{s}\right) ^{\frac{1}{s}} \\
& =\left( \sum_{i,j=1}^{N}\left\vert U\left( e_{i},e_{j},e_{\sigma _{1}(i,j)},\ldots ,e_{\sigma _{m-2}(i,j)}\right) \right\vert ^{s}\right) ^{\frac{1}{s}} \\
& \leq C_{\mathbb{K},m,p}\left\Vert U\right\Vert \leq C_{\mathbb{K},m,p}N^{\frac{p-m}{p}}
\end{align*}
and thus
\begin{equation*}
N^{\frac{1}{s}}\leq C_{\mathbb{K},m,p}N^{\frac{p-m}{p}}
\end{equation*}
and hence
\begin{equation*}
s\geq \frac{p}{p-m}.
\end{equation*}
\end{proof}

It is important to recall that a somewhat similar inequality due to Zalduendo asserts that
\begin{equation*}
\left( \sum_{i=1}^{n}\left\vert T(e_{i},\ldots ,e_{i})\right\vert ^{\frac{p}{p-m}}\right) ^{\frac{p-m}{p}}\leq \left\Vert T\right\Vert
\end{equation*}
for all positive integers $n$ and all $m$-linear forms $T:\ell _{p}^{n}\times \cdots \times \ell _{p}^{n}\rightarrow \mathbb{K}$, and the exponent $\frac{p}{p-m}$ is optimal. Note that Zalduendo's result and Proposition \ref{90} are slightly different.

\section{Appendix: a variation of the Kahane--Salem--Zygmund inequality and applications}

In this section we follow a method of \cite{BAYART} to prove this variation. Let $\psi _{2}(x):=\exp (x^{2})-1$ for $x\geq 0$. Let $(\Omega ,\mathcal{A},\mathbb{P})$ be a probability measure space and let us consider the Orlicz space $L_{\psi _{2}}=L_{\psi _{2}}(\Omega ,\mathcal{A},\mathbb{P})$ associated to $\psi _{2}$, formed by all real-valued random variables $X$ on $(\Omega ,\mathcal{A},\mathbb{P})$ such that $\mathbb{E}\left( \psi _{2}\left( |X|/c\right) \right) <\infty $ for some $c>0$. The associated Orlicz norm $\Vert \cdot \Vert _{\psi _{2}}$ is given by
\begin{equation*}
\Vert X\Vert _{\psi _{2}}:=\inf \{c>0\,;\,\mathbb{E}\left( \psi _{2}\left( |X|/c\right) \right) \leq 1\},
\end{equation*}
and $\left( L_{\psi _{2}},\Vert \cdot \Vert _{\psi _{2}}\right) $ is a Banach space. We shall use the following lemma, which was suggested to us by F. Bayart.

\begin{lemma}
Let $M$ be a metric space and let $(X(\omega ,x))$ be a family of random variables defined on $(\Omega ,\mathcal{A},\mathbb{P})$ and indexed by $M$. Assume that there exist $A>0$ and a finite set $F\subset M$ such that
\begin{itemize}
\item[i.] For any $x\in M$, $\Vert X(\cdot ,x)\Vert _{\psi _{2}}\leq A$;
\item[ii.] For any $x\in M$, there exists $y\in F$ such that
\begin{equation*}
\sup_{\omega \in \Omega }|X(\omega ,x)-X(\omega ,y)|\leq \frac{1}{2}\sup_{z\in M}|X(\omega ,z)|.
\end{equation*}
\end{itemize}
Then for any $R>0$ with $\frac{\text{card}(F)}{\psi _{2}(R/A)}<1$, there exists $\omega \in \Omega $ satisfying
\begin{equation*}
\sup_{x\in M}|X(\omega ,x)|\leq 2R.
\end{equation*}
\end{lemma}

\begin{proof}
This is exactly what is done in \cite{BAYART}, Step 2 and Step 3 of the proof of Theorem 3.1, in an abstract context. For the sake of completeness, we give the details. Given $x\in M$, condition $(ii)$ provides $y\in F$ such that
\begin{equation*}
\sup_{\omega \in \Omega }|X(\omega ,x)-X(\omega ,y)|\leq \frac{1}{2}\sup_{z\in M}|X(\omega ,z)|.
\end{equation*}
From
\begin{equation*}
\left\vert X(\omega ,x)\right\vert \leq \left\vert X(\omega ,x)-X(\omega ,y)\right\vert +\left\vert X(\omega ,y)\right\vert \leq \frac{1}{2}\sup_{z\in M}|X(\omega ,z)|+\sup_{w\in F}|X(\omega ,w)|
\end{equation*}
we get that, for any $\omega \in \Omega $,
\begin{equation}
\sup_{x\in M}|X(\omega ,x)|\leq 2\sup_{w\in F}|X(\omega ,w)|. \label{prob_lemma}
\end{equation}
Let us fix $R>0$.
As in Step (3) of \cite[Theorem 3.1]{BAYART} we have
\begin{equation*}
\mathbb{P}\left( \left\{ \omega \in \Omega \,;\,|X(\omega ,x)|>R\right\} \right) =\mathbb{P}\left( \left\{ \omega \in \Omega \,;\,\psi _{2}\left( \frac{|X(\omega ,x)|}{A}\right) >\psi _{2}\left( \frac{R}{A}\right) \right\} \right) .
\end{equation*}
The Markov inequality leads us to
\begin{equation*}
\mathbb{P}\left( \left\{ \omega \in \Omega \,;\,|X(\omega ,x)|>R\right\} \right) \leq \frac{\mathbb{E}\left( \psi _{2}\left( |X(\omega ,x)|/A\right) \right) }{\psi _{2}(R/A)}.
\end{equation*}
Condition $(i)$ provides $\Vert X(\cdot ,x)\Vert _{\psi _{2}}\leq A$, thus the definition of $\Vert \cdot \Vert _{\psi _{2}}$ assures that $\mathbb{E}\left( \psi _{2}\left( |X(\omega ,x)|/A\right) \right) \leq 1$. Consequently, we get that for any $x\in M$,
\begin{equation*}
\mathbb{P}\left( \left\{ \omega \in \Omega \,;\,|X(\omega ,x)|>R\right\} \right) \leq \frac{1}{\psi _{2}(R/A)}.
\end{equation*}
Since $F\subset M$ is finite,
\begin{equation*}
\mathbb{P}\left( \left\{ \omega \in \Omega \,;\,\sup_{w\in F}|X(\omega ,w)|>R\right\} \right) \leq \frac{\text{card}\,F}{\psi _{2}(R/A)}.
\end{equation*}
Combining this with \eqref{prob_lemma} we get
\begin{equation*}
\mathbb{P}\left( \left\{ \omega \in \Omega \,;\,\sup_{x\in M}|X(\omega ,x)|>2R\right\} \right) \leq \frac{\text{card}\,F}{\psi _{2}(R/A)}.
\end{equation*}
Thus, if we take $R>0$ such that $\displaystyle\frac{\text{card}(F)}{\psi _{2}(R/A)}<1$, then
\begin{equation*}
\mathbb{P}\left( \left\{ \omega \in \Omega \,;\,\sup_{x\in M}|X(\omega ,x)|\leq 2R\right\} \right) >0.
\end{equation*}
Therefore, there exists $\omega \in \Omega $ satisfying
\begin{equation*}
\sup_{x\in M}|X(\omega ,x)|\leq 2R.
\end{equation*}
\end{proof}

The previous approach can be applied in the following situation: let $N\geq 1$ and let $(\varepsilon _{i})_{i\in \{1,\dots ,N\}^{k}}$ be a sequence of independent Bernoulli variables defined on the same probability space $(\Omega ,\mathcal{A},\mathbb{P})$. Let $M$ be the unit ball of $(\ell _{\infty }^{N})^{n}$ (endowed with the sup norm). For $x=(x^{(1)},\dots ,x^{(n)})$ in $M$ and positive integers $n_{1}+\dots +n_{k}=n$ with $j_{l}=n_{1}+\cdots +n_{l}$, $l=1,\dots ,k$, we define
\begin{equation*}
X(\omega ,x)=\sum_{i\in \{1,\dots ,N\}^{k}}\varepsilon _{i}(\omega )x_{i_{1}}^{(1)}\cdots x_{i_{1}}^{(j_{1})}x_{i_{2}}^{(j_{1}+1)}\cdots x_{i_{2}}^{(j_{2})}\cdots x_{i_{k}}^{(j_{k-1}+1)}\cdots x_{i_{k}}^{(j_{k})}.
\end{equation*}
For a fixed value of $x$, the $L^{2}$-norm of this random process can be majorized using the Khinchin inequality:
\begin{align*}
\Vert X(\cdot ,x)\Vert _{2}& =\left( \int\limits_{\Omega }\left\vert X(\omega ,x)\right\vert ^{2}\,d\mathbb{P}\right) ^{1/2} \\
& =\left( \int\limits_{\Omega }\left\vert \sum_{i\in \{1,\dots ,N\}^{k}}\varepsilon _{i}(\omega )x_{i_{1}}^{(1)}\cdots x_{i_{1}}^{(j_{1})}x_{i_{2}}^{(j_{1}+1)}\cdots x_{i_{2}}^{(j_{2})}\cdots x_{i_{k}}^{(j_{k-1}+1)}\cdots x_{i_{k}}^{(j_{k})}\right\vert ^{2}\,d\mathbb{P}\right) ^{1/2} \\
& \leq \left( \sum_{i\in \{1,\dots ,N\}^{k}}\left\vert x_{i_{1}}^{(1)}\cdots x_{i_{1}}^{(j_{1})}x_{i_{2}}^{(j_{1}+1)}\cdots x_{i_{2}}^{(j_{2})}\cdots x_{i_{k}}^{(j_{k-1}+1)}\cdots x_{i_{k}}^{(j_{k})}\right\vert ^{2}\right) ^{1/2} \\
& \leq \Vert \left( x_{i_{1}}^{(j_{1})}\right) _{i_{1}=1}^{N}\Vert _{2}\cdots \Vert \left( x_{i_{k}}^{(j_{k})}\right) _{i_{k}=1}^{N}\Vert _{2} \\
& \leq N^{k/2}.
\end{align*}
Since the $\psi _{2}$-norm of a Rademacher process is dominated by its $L^{2}$-norm, we get
\begin{equation*}
\Vert X(\cdot ,x)\Vert _{\psi _{2}}\leq CN^{k/2}:=A
\end{equation*}
for some absolute constant $C>0$. Now, let $\delta >0$. For a fixed value of $\omega $, for any $x,y$ in $M$ with $\Vert x-y\Vert <\delta $, the multilinearity of $X(\omega ,\cdot )$ ensures that
\begin{equation*}
|X(\omega ,x)-X(\omega ,y)|\leq n\delta \sup_{z\in M}|X(\omega ,z)|.
\end{equation*}
We set $\delta =\frac{1}{2n}$, so that
\begin{equation*}
|X(\omega ,x)-X(\omega ,y)|\leq \frac{1}{2}\sup_{z\in M}|X(\omega ,z)|.
\end{equation*}
Repeating the previous argument, we observe that there exists a $\delta$-net $F$ of $M$ with cardinality less than $\left( 1+\frac{2}{\delta }\right) ^{2nN}=(1+4n)^{2nN}$ (the product $nN$ is the dimension of $(\ell _{\infty }^{N})^{n}$). Setting $R=\lambda N^{(k+1)/2}$ for some large $\lambda $ (not depending on $N$ but eventually depending on $n$), we obtain
\begin{equation*}
\frac{\text{card}(F)}{\psi _{2}(R/A)}=\frac{(1+4n)^{2nN}}{e^{\left( \frac{\lambda N^{(k+1)/2}}{CN^{k/2}}\right) ^{2}}-1}=\frac{(1+4n)^{2nN}}{e^{\frac{\lambda ^{2}N}{C^{2}}}-1}<1,
\end{equation*}
and from the lemma there exists $\omega _{0}\in \Omega $ such that, for any $x\in M$,
\begin{equation*}
|X(\omega _{0},x)|\leq 2R=2\lambda N^{\frac{k+1}{2}},
\end{equation*}
i.e.,
\begin{equation*}
\left\Vert X(\omega _{0},\cdot )\right\Vert \leq 2\lambda N^{\frac{k+1}{2}}.
\end{equation*}
Now consider an $n$-linear form $U:c_{0}\times \cdots \times c_{0}\rightarrow \mathbb{K}$ and suppose that
\begin{equation*}
\left( \sum\limits_{i_{1},\dots ,i_{k}=1}^{N}\left\vert U\left( e_{i_{1}}^{n_{1}},\dots ,e_{i_{k}}^{n_{k}}\right) \right\vert ^{r}\right) ^{\frac{1}{r}}\leq C\left\Vert U\right\Vert
\end{equation*}
for all $U$ and all positive integers $N$, where $\left( e_{i_{1}}^{n_{1}},\dots ,e_{i_{k}}^{n_{k}}\right) $ means $(e_{i_{1}},\overset{\text{{\tiny $n_{1}$ times}}}{\dots },e_{i_{1}},\dots ,e_{i_{k}},\overset{\text{{\tiny $n_{k}$ times}}}{\dots },e_{i_{k}})$. Applying this estimate to the $n$-linear form $X(\omega _{0},\cdot )$ constructed above, whose coefficients on these canonical vectors are exactly the signs $\varepsilon _{i}(\omega _{0})$, we have
\begin{align*}
N^{\frac{k}{r}}& =\left( \sum_{i\in \{1,\dots ,N\}^{k}}\left\vert \varepsilon _{i}(\omega _{0})\right\vert ^{r}\right) ^{\frac{1}{r}} \\
& \leq C\left\Vert X(\omega _{0},\cdot )\right\Vert \\
& \leq 2C\lambda N^{\frac{k+1}{2}}.
\end{align*}
Letting $N\rightarrow \infty $ we conclude that $r\geq 2k/(k+1)$. This result provides the optimality, for instance, of \cite[Corollary 2.5]{uni}. We also recall that the case $k=n$ recovers the classical Kahane--Salem--Zygmund inequality.
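To illustrate the scope of this lower bound, specializing to $k=2$ gives
\begin{equation*}
r\geq \frac{2\cdot 2}{2+1}=\frac{4}{3},
\end{equation*}
which is consistent with the optimality of the exponent $4/3$ in Littlewood's inequality and in \eqref{estr}; likewise, the case $k=m$ with $n_{1}=\cdots =n_{k}=1$ gives $r\geq 2m/(m+1)$, matching the sharpness of the exponent in the Bohnenblust--Hille inequality \eqref{lf}.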
\section{Introduction} Throughout this note let $\Lambda$ be a finite dimensional $k$-algebra, where $k$ is an algebraically closed field with characteristic $p \geqslant 0$, and let $G$ be a finite group whose elements act on $\Lambda$ as algebra automorphisms. The \textit{skew group algebra} $\Lambda G$ is the vector space $\Lambda \otimes_k kG$ equipped with a bilinear product $\cdot$ determined by the following rule: for $\lambda, \mu \in \Lambda$, $g, h \in G$, $(\lambda \otimes g) \cdot (\mu \otimes h) = \lambda g(\mu) \otimes gh$, where $g(\mu)$ is the image of $\mu$ under the action of $g$. We write $\lambda g$ rather than $\lambda \otimes g$ to simplify the notation. Correspondingly, the product can be written as $\lambda g \cdot \mu h = \lambda g(\mu) gh$. Denote the identity of $\Lambda$ and the identity of $G$ by $1_{\Lambda}$ and $1_G$ respectively. It has been observed that when $|G|$, the order of $G$, is invertible in $k$, $\Lambda G$ and $\Lambda$ share many common properties \cite{Dionne, Martinez, Reiten}. We wonder under what conditions this phenomenon still happens for arbitrary groups $G$. This problem is considered in \cite{Li}, where, under the hypothesis that $\Lambda$ has a complete set $E$ of primitive orthogonal idempotents closed under the action of a Sylow $p$-subgroup $S \leqslant G$, we show that $\Lambda$ and $\Lambda G$ share certain properties such as finite global dimension, finite representation type, etc., if and only if the action of $S$ on $E$ is free. Clearly, this answer generalizes results in \cite{Reiten} since if $|G|$ is invertible in $k$, the only Sylow $p$-subgroup of $G$ is the trivial group. In this note we continue to study representations and homological properties of modular skew group algebras. Using the ideas and techniques described in \cite{Li}, we show that $\Lambda$ and $\Lambda G$ share more common properties under the same hypothesis and condition. Explicitly, we have: \begin{theorem} Let $\Lambda$ and $G$ be as above, let $S \leqslant G$ be a Sylow $p$-subgroup, and suppose that $\Lambda$ has a complete set $E$ of primitive orthogonal idempotents closed under the action of $S$. Then: \begin{enumerate} \item If the action of $S$ on $E$ is free, then $\Lambda G$, $\Lambda$, and $\Lambda^S$ (the fixed algebra by $S$) have the same finitistic dimension. \item $\Lambda G$ has finite strong global dimension if and only if $\Lambda ^S$ has finite strong global dimension and $S$ acts freely on $E$. In this situation, $\Lambda G$ and $\Lambda ^S$ have the same strong global dimension; moreover, if $\Lambda \cong \Lambda^S \oplus B$ as $\Lambda^S$-bimodules, then $\Lambda S$ and $\Lambda$ have the same strong global dimension. \end{enumerate} \end{theorem} In \cite{Dionne} it has been proved that if $\Lambda$ is a piecewise hereditary algebra (defined in Section 3) and $|G|$ is invertible in $k$, then $\Lambda G$ is piecewise hereditary as well. The second part of the above theorem generalizes this result by using the homological characterization of piecewise hereditary algebras by Happel and Zacharia in \cite{Happel2}. We introduce some notations and conventions here. Throughout this note all modules are finitely generated left modules. Composition of maps and morphisms is from right to left. For an algebra $A$, $\gldim A$, $\fdim A$, and $\sgldim A$ are the global dimension, finitistic dimension, and strong global dimension (defined in Section 3) of $A$ respectively. For an $A$-module $M$, $\pd_A M$ is the projective dimension of $M$.
We use $A$-mod to denote the category of finitely generated $A$-modules. Its bounded homotopy category and bounded derived category are denoted by $K^b(A)$ and $D^b(A)$ respectively. \section{projective dimensions and finitistic dimensions} We first describe some background knowledge and elementary results. Most of them can be found in the literature. We suggest the reader refer to \cite{Auslander, Boisen, Cohen, Li, Marcus, Martinez, Passman, Reiten, Zhong} for more details. For every subgroup $H \leqslant G$, elements in $H$ also act on $\Lambda$ as algebra automorphisms, so we can define a skew group algebra $\Lambda H$, which is a subalgebra of $\Lambda G$. The induction functor and the restriction functor are defined in the usual way. For a $\Lambda H$-module $V$, the induced module is $V \uparrow _H^G = \Lambda G \otimes _{\Lambda H} V$, where $\Lambda G$ acts on the left side. Every $\Lambda G$-module $M$ can be viewed as a $\Lambda H$-module, denoted by $M \downarrow _H^G$. Observe that $\Lambda G$ is a free (both left and right) $\Lambda H$-module. Therefore, these two functors are exact, and preserve projective modules. \begin{proposition} Let $H \leqslant G$ be a subgroup. Then: \begin{enumerate} \item Every $\Lambda H$-module $V$ is isomorphic to a summand of $V \uparrow _H^G \downarrow _H^G$. \item If $|G:H|$ is invertible in $k$, then every $\Lambda G$-module $M$ is isomorphic to a summand of $M \downarrow _H^G \uparrow _H^G$. \end{enumerate} \end{proposition} \begin{proof} This is Proposition 2.1 in \cite{Li}. \end{proof} The above proposition immediately implies: \begin{corollary} Let $H \leqslant G$ be a subgroup. For every $M \in \Lambda G$-mod, $\pd _{\Lambda H} M \downarrow _H^G \leqslant \pd_{\Lambda G} M$. If $|G: H|$ is invertible in $k$, then the equality holds. \end{corollary} \begin{proof} Take a minimal projective resolution $P^{\bullet} \rightarrow M$ of $\Lambda G$-modules and apply the restriction functor termwise. Since this functor is exact and preserves projective modules, we get a projective resolution $P^{\bullet} \downarrow _H^G \rightarrow M \downarrow _H^G$ of $\Lambda H$-modules, which might not be minimal. Thus $\pd _{\Lambda H} M \downarrow _H^G \leqslant \pd_{\Lambda G} M$, which holds even when either $\pd _{\Lambda H} M \downarrow _H^G = \infty$ or $\pd_{\Lambda G} M = \infty$. Now suppose that $|G:H|$ is invertible in $k$. Take a minimal projective resolution $Q^{\bullet} \rightarrow M \downarrow _H^G$ of $\Lambda H$-modules and apply the induction functor termwise. Since it is exact and preserves projective modules, we get a projective resolution $Q^{\bullet} \uparrow _H^G \rightarrow M \downarrow _H^G \uparrow _H^G$ of $\Lambda G$-modules, so $\pd _{\Lambda G} M \downarrow _H^G \uparrow _H^G \leqslant \pd_{\Lambda H} M \downarrow _H^G$. But $M$ is isomorphic to a summand of $M \downarrow _H^G \uparrow _H^G$, so $\pd _{\Lambda G} M \leqslant \pd_{\Lambda H} M \downarrow _H^G$. Putting the two inequalities together, we get the equality. \end{proof} Let $E = \{ e_i \} _{i \in [n]}$ be a set of primitive orthogonal idempotents in $\Lambda$. We say it is \textit{complete} if $\sum _{i \in [n]} e_i = 1_{\Lambda}$. Throughout this note we assume that $G$ has a Sylow $p$-subgroup $S$ such that $E$ is closed under the action of $S$. That is, $g(e_i) \in E$ for all $i \in [n]$ and $g \in S$. In practice this condition is usually satisfied. A trivial case is that $|G|$ is invertible in $k$, and hence $S$ is the trivial subgroup.
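A small worked example (given here only for illustration) may help fix ideas. Suppose $p = 2$, let $\Lambda = k \times k$ with $E = \{e_1, e_2\}$ the two coordinate idempotents, and let $G = S = \mathbb{Z}/2$ act by swapping the factors, so that $E$ is closed under the action of $S$ and the action is free. Writing $g$ for the generator, the product rule gives, e.g., $e_1 g \cdot e_2 g = e_1 g(e_2) g^2 = e_1$, and the assignment
\begin{equation*}
e_1 \mapsto \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}, \quad e_2 \mapsto \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix}, \quad e_1 g \mapsto \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}, \quad e_2 g \mapsto \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}
\end{equation*}
extends to an algebra isomorphism $\Lambda G \cong M_2(k)$. Here $\Lambda^S = \{(a,a) \mid a \in k\} \cong k$, so $\Lambda G$ is a matrix algebra over $\Lambda^S$, in line with (4) of Proposition 2.3 below.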
We introduce some notations here. Let $\Lambda ^S$ be the space consisting of all elements in $\Lambda$ fixed by $S$. Clearly, $\Lambda ^S$ is a subalgebra of $\Lambda$, and $S$ acts on it trivially. For every $M \in \Lambda S$-mod, the elements $v \in M$ satisfying $g (v) = v$ for every $g \in S$ form a $\Lambda ^S$-module, which is denoted by $M^S$. Let $F^S$ be the functor from $\Lambda S$-mod to $\Lambda ^S$-mod, sending $M$ to $M^S$. This is indeed a functor since for every $\Lambda S$-module homomorphism $f: M \rightarrow N$, $M^S$ is mapped into $N^S$ by $f$. We collect in the following proposition some results taken from \cite{Li}. \begin{proposition} Let $S \leqslant G$ and $E$ be as above and suppose that $E$ is closed under the action of $S$. Then: \begin{enumerate} \item The set $E$ is also a complete set of primitive orthogonal idempotents of $\Lambda S$, where we identify $e_i$ with $e_i 1_S$. \item $\Lambda ^S = \{ \sum_{g \in S} g(\mu) \mid \mu \in \Lambda \}$. \item The global dimension $\gldim \Lambda S < \infty$ if and only if $\gldim \Lambda < \infty$ and the action of $S$ on $E$ is free. \end{enumerate} Moreover, if the action of $S$ on $E$ is free, then \begin{enumerate} \setcounter{enumi}{3} \item $\Lambda S$ is a matrix algebra over $\Lambda^S$, and hence is Morita equivalent to $\Lambda^S$. \item The functor $F^S$ is exact. \item The regular representation $_{\Lambda S} \Lambda S \cong \Lambda ^{|S|}$, where $\Lambda$ is the trivial $\Lambda S$-module. \item A $\Lambda S$-module $M$ is projective (resp., injective) if and only if the $\Lambda$-module $_{\Lambda} M$ is projective (resp., injective). \end{enumerate} \end{proposition} \begin{proof} See Lemmas 3.1, 3.2, 3.6 and Propositions 3.3, 3.5 in \cite{Li}. \end{proof} Recall that for a finite dimensional algebra $A$, its \textit{finitistic dimension}, denoted by $\fdim A$, is the supremum of projective dimensions of finitely generated indecomposable $A$-modules $M$ with $\pd_A M < \infty$. If $A$ has finite global dimension, then $\fdim A = \gldim A$. The famous finitistic dimension conjecture asserts that the finitistic dimension of every finite dimensional algebra is finite. For more details, see \cite{Zimmermann}. An immediate consequence of Proposition 2.1 and Corollary 2.2 is: \begin{lemma} Let $H \leqslant G$ be a subgroup of $G$. Then $\fdim \Lambda H \leqslant \fdim \Lambda G$. If $|G:H|$ is invertible in $k$, the equality holds. \end{lemma} \begin{proof} Take an arbitrary indecomposable $V \in \Lambda H$-mod with $\pd _{\Lambda H} V < \infty$ and consider $\tilde{V} = V \uparrow _H^G$. We claim that $\pd _{\Lambda G} \tilde{V} < \infty$. Indeed, by applying the induction functor to a minimal projective resolution $P^{\bullet} \rightarrow V$ termwise, we get a projective resolution $\Lambda G \otimes _{\Lambda H} P^{\bullet} \rightarrow \tilde{V}$. The finite length of the first resolution implies the finite length of the second one. Therefore, $\pd _{\Lambda G} \tilde{V} < \infty$. Consequently, $\pd _{\Lambda G} \tilde{V} \leqslant \fdim \Lambda G$. By Corollary 2.2, $\pd_{\Lambda G} \tilde{V} \geqslant \pd_{\Lambda H} \tilde{V} \downarrow _H^G$. Since $V$ is isomorphic to a summand of $\tilde{V} \downarrow _H^G$ by Proposition 2.1, we get $\fdim \Lambda G \geqslant \pd _{\Lambda G} \tilde{V} \geqslant \pd_{\Lambda H} \tilde{V} \downarrow _H^G \geqslant \pd _{\Lambda H} V$. In conclusion, $\fdim \Lambda G \geqslant \fdim \Lambda H$. Now suppose that $|G : H|$ is invertible in $k$.
Take an arbitrary indecomposable $M \in \Lambda G$-mod with $\pd _{\Lambda G} M < \infty$. By applying the restriction functor $\downarrow _H^G$ to a minimal projective resolution $Q^{\bullet} \rightarrow M$ termwise, we get a projective resolution of $M \downarrow _H^G$, which is of finite length. Therefore, $\pd _{\Lambda H} M \downarrow _H^G \leqslant \fdim \Lambda H$. By Corollary 2.2, we have $\pd _{\Lambda G} M = \pd _{\Lambda H} M \downarrow _H^G \leqslant \fdim \Lambda H$. In conclusion, $\fdim \Lambda G \leqslant \fdim \Lambda H$. Combining this with the inequality in the previous paragraph, we have $\fdim \Lambda G = \fdim \Lambda H$. \end{proof} \begin{lemma} If a Sylow $p$-subgroup $S \leqslant G$ acts freely on $E$, then $\fdim \Lambda S \leqslant \fdim \Lambda$. \end{lemma} \begin{proof} Take an arbitrary indecomposable $M \in \Lambda S$-mod with $\pd _{\Lambda S} M = n < \infty$ and a minimal projective resolution: \begin{equation*} \ldots \rightarrow 0 \rightarrow P^n \rightarrow \ldots \rightarrow P^1 \rightarrow P^0 \rightarrow M \rightarrow 0. \end{equation*} Regarded as $\Lambda$-modules, we get a projective resolution: \begin{equation*} \ldots \rightarrow 0 \rightarrow _{\Lambda} P^n \rightarrow \ldots \rightarrow _{\Lambda} P^1 \rightarrow _{\Lambda} P^0 \rightarrow _{\Lambda}M \rightarrow 0. \end{equation*} Clearly, for every $0 \leqslant s \leqslant n$, the syzygy $\Omega^s (M)$ viewed as a $\Lambda$-module is a direct sum of $\Omega^s (_{\Lambda} M)$ and a projective $\Lambda$-module. We claim that for each $0 \leqslant s \leqslant n$, the syzygy $\Omega^s (_{\Lambda} M) \neq 0$. Otherwise, $\Omega^s (M)$ viewed as a $\Lambda$-module is projective. By (7) of Proposition 2.3, $\Omega^s (M)$ is a projective $\Lambda S$-module, which must be 0. But this implies $\pd _{\Lambda S} M < n$, contradicting our choice of $M$. Therefore, for each $0 \leqslant s \leqslant n$, $\Omega^s (_{\Lambda} M) \neq 0$. Consequently, $_{\Lambda} M$ has a summand with projective dimension $n$, so $n \leqslant \fdim \Lambda$. In conclusion, $\fdim \Lambda S \leqslant \fdim \Lambda$. \end{proof} Now we can prove the first statement of Theorem 1.1. \begin{proposition} If $G$ has a Sylow $p$-subgroup $S$ acting freely on $E$, then $\Lambda G$ and $\Lambda$ have the same finitistic dimension. \end{proposition} \begin{proof} Since $|G : S|$ is invertible in $k$, by Lemma 2.4, $\fdim \Lambda G = \fdim \Lambda S$. Also by that lemma, $\fdim \Lambda S \geqslant \fdim \Lambda$ since $S$ contains the trivial group. The previous lemma tells us $\fdim \Lambda S \leqslant \fdim \Lambda$. Putting all these pieces of information together, we get $\fdim \Lambda G = \fdim \Lambda$ as claimed. \end{proof} From the previous proposition we conclude immediately that if the action of $S$ on $E$ is free, then the finiteness of the finitistic dimension of $\Lambda$ implies the finiteness of the finitistic dimension of $\Lambda G$. We wonder whether this conclusion is true in general. That is: \begin{conjecture} Let $\Lambda, G, E$ and $S$ be as before and suppose that $E$ is closed under the action of $S$. If $\fdim \Lambda < \infty$ (or, even stronger, $\gldim \Lambda < \infty$), then $\fdim \Lambda G < \infty$. \end{conjecture} Hopefully a proof of this conjecture can shed light on the final proof of the finitistic dimension conjecture.
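As a simple consistency check for the conjecture (a standard example, included here only for illustration), consider the degenerate case where the action is maximally non-free: $\Lambda = k$ with the trivial $G$-action and $G = S = \mathbb{Z}/p$, so that $E = \{1_{\Lambda}\}$ is fixed pointwise by $S$. Then $\Lambda G = kG \cong k[x]/(x^p)$ is a local self-injective algebra, whence $\gldim \Lambda G = \infty$ but $\fdim \Lambda G = 0 < \infty$, as the conjecture predicts.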
\section{strong global dimensions and piecewise hereditary algebras} A finite dimensional $k$-algebra $A$ is called \textit{piecewise hereditary} if the derived category $D^b(A)$ is equivalent to the derived category $D^b(\mathcal{H})$ of a hereditary abelian category $\mathcal{H}$ as triangulated categories; see \cite{Happel1, Happel2, Happel3, HRS}. If $A$ is piecewise hereditary, then $\gldim A < \infty$ since the property of having finite global dimension is invariant under derived equivalence \cite{Happel1}. Therefore, $D^b(A)$ is triangulated equivalent to the homotopy category $K^b (_A \mathcal{P})$, where $_A \mathcal{P}$ is the full subcategory of $A$-mod consisting of all finitely generated projective $A$-modules. Now let $A$ be a finite dimensional algebra with $\gldim A < \infty$. Since $D^b(A) \cong K^b (_A\mathcal{P})$, we identify these two triangulated categories. For every indecomposable object $0 \neq P^{\bullet} \in K^b(_A \mathcal{P})$, consider its preimages $\tilde{P}^{\bullet} \in C^b (_A\mathcal{P})$, the category of so-called \textit{perfect complexes}, i.e., each term of $\tilde{P} ^{\bullet}$ is a finitely generated projective $A$-module, and all but finitely many terms of $\tilde{P} ^{\bullet}$ are 0. We can choose $\tilde{P} ^{\bullet}$ such that it is \textit{minimal}. That is, $\tilde{P} ^{\bullet}$ has no direct summands isomorphic to \begin{equation*} \xymatrix{ \ldots \ar[r] & 0 \ar[r] & P^s \ar[r]^{id} & P^{s+1} \ar[r] & 0 \ar[r] & \ldots}, \end{equation*} or equivalently, each differential map in $\tilde{P} ^{\bullet}$ sends its source into the radical of the subsequent term. Clearly, this choice is unique for each $P^{\bullet} \in K^b( _A \mathcal{P})$ up to isomorphism, so in the rest of this note we identify $P^{\bullet}$ and $\tilde{P}^{\bullet}$. Hopefully this identification will not cause trouble to the reader. Take an indecomposable $P^{\bullet} \in K^b(_A \mathcal{P})$ and identify it with $\tilde{P}^{\bullet}$. Then there exist $r \leqslant s \in \mathbb{Z}$ such that $P^r \neq 0 \neq P^s$, and $P^n = 0$ for $n > s$ or $n < r$. We define its length $l(P^{\bullet})$ to be $s - r$. \footnote{By this definition, the length of a minimal object $X^{\bullet} \in K^b (_A\mathcal{P})$ counts the number of arrows between the first nonzero term and the last nonzero term in $X^{\bullet}$, rather than the number of nonzero terms in $X^{\bullet}$. In particular, a projective $\Lambda$-module, when viewed as a stalk complex in $K^b (_A\mathcal{P})$, has length 0.} The \textit{strong global dimension} of $A$, denoted by $\sgldim A$, is defined as $\sup \{ l(P^{\bullet}) \mid P^{\bullet} \in K^b (_A \mathcal{P}) \text{ is indecomposable} \}$. By taking minimal projective resolutions of simple modules, it is easy to see that $\sgldim A \geqslant \gldim A$. Moreover, if $A$ is hereditary, then $\sgldim A = \gldim A \leqslant 1$ (see \cite{Happel1}). It is not clear for which algebras of finite global dimension we have $\sgldim A = \gldim A$. Happel and Zacharia proved the following result, characterizing piecewise hereditary algebras. \begin{theorem} (Theorem 3.2 in \cite{Happel3}) A finite dimensional $k$-algebra $A$ is piecewise hereditary if and only if $\sgldim A < \infty$. \end{theorem} In \cite{Dionne} Dionne, Lanzilotta, and Smith show that if $\Lambda$ is a piecewise hereditary algebra, and $|G|$ is invertible in $k$, then the skew group algebra $\Lambda G$ is also piecewise hereditary. This result motivates us to characterize general piecewise hereditary skew group algebras.
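Before doing so, we illustrate the definition in the smallest non-trivial case (a standard example). Let $A = kQ$ be the path algebra of the quiver $Q: 1 \rightarrow 2$. The simple module $S_1$ has minimal projective resolution $0 \rightarrow P_2 \rightarrow P_1 \rightarrow S_1 \rightarrow 0$, so the minimal perfect complex
\begin{equation*}
\ldots \rightarrow 0 \rightarrow P_2 \rightarrow P_1 \rightarrow 0 \rightarrow \ldots
\end{equation*}
is indecomposable of length $1$; since $A$ is hereditary, $\sgldim A = \gldim A = 1$.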
Using the above characterization, we take a different approach by showing that $\sgldim \Lambda G = \sgldim \Lambda$ under suitable conditions. We first prove a result similar to Lemma 2.4. \begin{lemma} Let $H$ be a subgroup of $G$ and suppose that $\gldim \Lambda G < \infty$. Then $\sgldim \Lambda H \leqslant \sgldim \Lambda G$. The equality holds if $|G : H|$ is invertible in $k$. \end{lemma} \begin{proof} Note that $\gldim \Lambda H \leqslant \gldim \Lambda G < \infty$ by (2) of Corollary 2.2 in \cite{Li}. Take an indecomposable object $P^{\bullet} \in K^b (_{\Lambda H} \mathcal{P})$. Up to isomorphism, its minimal preimage in $C^b (_{\Lambda H} \mathcal{P})$ can be written as: \begin{equation*} \xymatrix{ \ldots \rightarrow 0 \rightarrow P^r \ar[r]^-{d^r} & \ldots \ar[r] & P^{s-1} \ar[r]^-{d^{s-1}} & P^s \rightarrow 0 \rightarrow \ldots} \end{equation*} Applying the exact functor $\Lambda G \otimes _{\Lambda H} -$ termwise to the above complex, we get an object $\tilde{P} ^{\bullet} = P^{\bullet} \uparrow _H^G \in C^b (_{\Lambda G} \mathcal{P})$ as follows: \begin{equation*} \xymatrix{ \ldots \rightarrow 0 \rightarrow \Lambda G \otimes P^r \ar[r]^-{1 \otimes d^r} & \ldots \ar[r] & \Lambda G \otimes P^{s-1} \ar[r]^-{1 \otimes d^{s-1}} & \Lambda G \otimes P^s \rightarrow 0 \rightarrow \ldots} \end{equation*} Applying the restriction functor termwise, the second complex gives an object $\tilde{P}^{\bullet} \downarrow _H^G \in C^b (_{\Lambda H} \mathcal{P})$. We claim that $P^{\bullet}$ is isomorphic to a direct summand of $\tilde{P} ^{\bullet} \downarrow _H^G$. Indeed, there is a family of maps $(\iota ^i) _{i \in \mathbb{Z}}$ defined as follows: for $i > s$ or $i < r$, $\iota^i = 0$; for $r \leqslant i \leqslant s$, $\iota^i$ sends $v \in P^i$ to $1 \otimes v \in \Lambda G \otimes _{\Lambda H} P^i$. The reader can check that $(\iota^i) _{i \in \mathbb{Z}}$ defined in this way indeed gives rise to a chain map $\iota^{\bullet}: P^{\bullet} \rightarrow \tilde{P} ^{\bullet} \downarrow _H^G$. We define another family of maps $(\delta ^i) _{i \in \mathbb{Z}}$ as follows: for $i > s$ or $i < r$, $\delta^i = 0$; for $r \leqslant i \leqslant s$, $\delta^i$ sends $h \otimes v \in \Lambda H \otimes _{\Lambda H} P^i$ to $hv \in P^i$ and sends every element $g \otimes v \in \bigoplus _{1 \neq g \in G/H} g \otimes P^i$ to 0. Again, the reader can check that $(\delta ^i) _{i \in \mathbb{Z}}$ gives rise to a chain map $\delta^{\bullet}: \tilde{P} ^{\bullet} \downarrow _H^G \rightarrow P^{\bullet}$. Moreover, $\delta^{\bullet} \circ \iota^{\bullet}$ is the identity map. Therefore, $P^{\bullet}$ is isomorphic to a direct summand of $\tilde{P} ^{\bullet} \downarrow _H^G$. This claim has the following consequence: for every indecomposable object $X^{\bullet} \in K^b (_{\Lambda H} \mathcal{P})$ (identified with a minimal perfect complex in $C^b (_{\Lambda H} \mathcal{P})$), there is an indecomposable object $\tilde{X}^{\bullet} \in K^b (_{\Lambda G} \mathcal{P})$ such that $X^{\bullet}$ is isomorphic to a direct summand of $\tilde{X} ^{\bullet} \downarrow _H^G$. Clearly, we have $l( \tilde{X} ^{\bullet}) \geqslant l(X^{\bullet})$. Therefore, by definition, $\sgldim \Lambda G \geqslant \sgldim \Lambda H$. Now suppose that $|G : H|$ is invertible in $k$. Take an indecomposable object $P^{\bullet} \in K^b (_{\Lambda G} \mathcal{P})$. 
Up to isomorphism, its minimal preimage in $C^b (_{\Lambda G} \mathcal{P})$ can be written as: \begin{equation*} \xymatrix{ \ldots \rightarrow 0 \rightarrow P^r \ar[r]^-{d^r} & \ldots \ar[r] & P^{s-1} \ar[r]^-{d^{s-1}} & P^s \rightarrow 0 \rightarrow \ldots} \end{equation*} Applying the restriction functor and the induction functor termwise to the above complex, we get another object $\tilde{P} ^{\bullet} = P^{\bullet} \downarrow _H^G \uparrow _H^G \in C^b (_{\Lambda G} \mathcal{P})$ as follows: \begin{equation*} \xymatrix{ \ldots \rightarrow 0 \rightarrow \Lambda G \otimes P^r \ar[r]^-{1 \otimes d^r} & \ldots \ar[r] & \Lambda G \otimes P^{s-1} \ar[r]^-{1 \otimes d^{s-1}} & \Lambda G \otimes P^s \rightarrow 0 \rightarrow \ldots} \end{equation*} We claim that $P^{\bullet}$ is isomorphic to a direct summand of $\tilde{P} ^{\bullet}$. Define a family of maps $(\theta ^i) _{i \in \mathbb{Z}}$ as follows: for $i > s$ or $i < r$, $\theta^i = 0$; for $r \leqslant i \leqslant s$, $\theta^i$ sends $v \in P^i$ to $\frac{1} {|G: H|} \sum _{g \in G/H} g \otimes g^{-1}v \in \Lambda G \otimes _{\Lambda H} P^i$. We check that $(\theta^i) _{i \in \mathbb{Z}}$ defined in this way gives rise to a chain map $\theta^{\bullet}: P^{\bullet} \rightarrow \tilde{P} ^{\bullet}$. Indeed, for $r \leqslant i \leqslant s-1$ and $v \in P^i$, we have \begin{align*} (\theta^{i+1} \circ d^i)(v) & = \theta^{i+1} (d^i(v)) = \frac{1} {|G: H|} \sum _{g \in G/H} g \otimes g^{-1} d^i(v) \\ & = \frac{1} {|G: H|} \sum _{g \in G/H} g \otimes d^i(g^{-1}v) = (1 \otimes d^i) (\theta^{i} (v)). \end{align*} Define another family of maps $(\rho^i) _{i \in \mathbb{Z}}$ as follows: for $i > s$ or $i < r$, $\rho^i = 0$; for $r \leqslant i \leqslant s$, $\rho^i$ sends $g \otimes v \in \Lambda G \otimes _{\Lambda H} P^i$ to $gv \in P^i$. Again, we check that $(\rho^i) _{i \in \mathbb{Z}}$ gives rise to a chain map $\rho^{\bullet}: \tilde{P} ^{\bullet} \rightarrow P^{\bullet}$ as shown by: \begin{equation*} (\rho^{i+1} \circ (1 \otimes d^i))(g \otimes v) = \rho^{i+1} (g \otimes d^i(v)) = g d^i (v) = d^i (gv) = d^i (\rho^i (g \otimes v)). \end{equation*} Moreover, $\rho^{\bullet} \circ \theta^{\bullet}$ is the identity map. Therefore, as claimed, $P^{\bullet}$ is isomorphic to a direct summand of $\tilde{P} ^{\bullet}$. This claim tells us that for every indecomposable object $\tilde{X} ^{\bullet} \in K^b (_{\Lambda G} \mathcal{P})$ (identified with a minimal perfect complex in $C^b (_{\Lambda G} \mathcal{P})$), there is an indecomposable object $X^{\bullet} \in K^b (_{\Lambda H} \mathcal{P})$ such that $\tilde{X} ^{\bullet}$ is isomorphic to a direct summand of $X^{\bullet} \uparrow _H^G$. Therefore, $l(X^{\bullet}) \geqslant l( \tilde{X} ^{\bullet})$, and $\sgldim \Lambda H \geqslant \sgldim \Lambda G$. The two inequalities force $\sgldim \Lambda G = \sgldim \Lambda H$. \end{proof} \begin{remark}\normalfont In this lemma we actually proved a stronger conclusion. That is, the induction and restriction functors induce an `induction' functor and a `restriction' functor between the homotopy categories of perfect complexes. Moreover, every indecomposable object $X^{\bullet} \in K^b (_{\Lambda H} \mathcal{P})$ can be obtained by applying the `restriction' functor to an indecomposable object in $K^b (_{\Lambda G} \mathcal{P})$ and taking a direct summand. 
When $|G:H|$ is invertible, every indecomposable object $\tilde{X} ^{\bullet} \in K^b (_{\Lambda G} \mathcal{P})$ can be obtained by applying the `induction' functor to an indecomposable object in $K^b (_{\Lambda H} \mathcal{P})$ and taking a direct summand. \end{remark} Now we are ready to prove the second part of our main theorem. \begin{proposition} Let $\Lambda$, $G$, $S$, and $E$ be as before. Then $\Lambda G$ has finite strong global dimension if and only if $\Lambda ^S$ has finite strong global dimension and $S$ acts freely on $E$. In this situation, $\Lambda G$ and $\Lambda ^S$ have the same strong global dimension. \end{proposition} \begin{proof} Since the strong global dimension of $\Lambda G$ equals that of $\Lambda S$ by Lemma 3.2, without loss of generality we may assume that $G = S$. Note that the strong global dimension is always greater than or equal to the global dimension. Therefore, if $\Lambda S$ has finite strong global dimension, then it has finite global dimension, so $S$ must act freely on $E$ by (3) of Proposition 2.3. Then by (4) of this proposition, $\Lambda S$ is Morita equivalent to $\Lambda ^S$, so they have the same finite strong global dimension. Conversely, if the action of $S$ on $E$ is free, then again $\Lambda S$ and $\Lambda^S$ are Morita equivalent, so they have the same strong global dimension. \end{proof} An immediate corollary of this proposition and Theorem 3.1 is: \begin{corollary} Let $\Lambda$, $G$, $S$, and $E$ be as before. Then $\Lambda G$ is piecewise hereditary if and only if the action of $S$ on $E$ is free and $\Lambda^S$ is piecewise hereditary. \end{corollary} \begin{proof} By Theorem 3.1, a finite dimensional algebra is piecewise hereditary if and only if its strong global dimension is finite. The conclusion follows from the above proposition. \end{proof} In the rest of this section we try to obtain a criterion under which the strong global dimension of $\Lambda$ equals that of $\Lambda S$ when the action of $S$ on $E$ is free. Since we already know that $\Lambda S$ is Morita equivalent to $\Lambda^S$ (Proposition 2.3), we instead show that $\sgldim \Lambda = \sgldim \Lambda ^S$ under a certain condition. Note that $\Lambda^S$ is a subalgebra of $\Lambda$, so we can define the corresponding restriction functor from $\Lambda$-mod to $\Lambda^S$-mod and the induction functor $\Lambda \otimes _{\Lambda^S} -$ from $\Lambda^S$-mod to $\Lambda$-mod. By \cite{Li}, $\Lambda$ is free both as a left and as a right $\Lambda^S$-module, but it might not be a free bimodule; see Example 3.6 in that paper. Now assume that $\Lambda = \Lambda^S \oplus B$ as $\Lambda^S$-bimodules. For instance, if $\Lambda^S$ is commutative, this condition is satisfied. Under this assumption, the projection $\zeta: \Lambda \to \Lambda^S$ with kernel $B$ is a homomorphism of $\Lambda^S$-bimodules. For $M \in \Lambda^S$-mod, we define two linear maps: \begin{align*} \psi: M \rightarrow (\Lambda \otimes _{\Lambda ^S} M) \downarrow _{\Lambda ^S} ^{\Lambda}, \quad v \mapsto 1 \otimes v, \quad v \in M;\\ \varphi: (\Lambda \otimes _{\Lambda ^S} M) \downarrow _{\Lambda ^S} ^{\Lambda} \rightarrow M, \quad \lambda \otimes v \mapsto \zeta(\lambda) v, \quad v \in M, \lambda \in \Lambda. \end{align*} These two maps are well-defined $\Lambda^S$-module homomorphisms (to check this, we need the assumption that $\Lambda^S$ is a direct summand of $\Lambda$ as a $\Lambda^S$-bimodule). We also observe that when $\lambda \in \Lambda ^S$, \begin{equation*} \varphi (\lambda \otimes v) = \zeta (\lambda) v = \lambda v. \end{equation*} Therefore, $\varphi \circ \psi$ is the identity map. 
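For completeness, we sketch the verification that $\varphi$ is a homomorphism of left $\Lambda^S$-modules; this is precisely where the bimodule decomposition is used. For $\mu \in \Lambda^S$, $\lambda \in \Lambda$ and $v \in M$, \begin{equation*} \varphi (\mu \cdot (\lambda \otimes v)) = \varphi (\mu\lambda \otimes v) = \zeta(\mu\lambda) v = \mu \zeta(\lambda) v = \mu \varphi(\lambda \otimes v), \end{equation*} where the third equality holds because $\zeta$, being the projection associated to the decomposition $\Lambda = \Lambda^S \oplus B$ of $\Lambda^S$-bimodules, satisfies $\zeta(\mu\lambda) = \mu \zeta(\lambda)$.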
\begin{proposition} Suppose that a Sylow $p$-subgroup $S \leqslant G$ acts freely on $E$, and $\Lambda \cong \Lambda^S \oplus B$ as $\Lambda^S$-bimodules. Then $\sgldim \Lambda = \sgldim \Lambda S$. \end{proposition} \begin{proof} By Lemma 3.2 and Proposition 3.4, $\sgldim \Lambda \leqslant \sgldim \Lambda S = \sgldim \Lambda^S$. Therefore, it suffices to show $\sgldim \Lambda \geqslant \sgldim \Lambda^S$. The proof is similar to that of Lemma 3.2, using the corresponding induction and restriction functors for the pair $(\Lambda^S, \Lambda)$ and the two maps $\psi, \varphi$ defined above. Take an indecomposable object $P^{\bullet} \in K^b (_{\Lambda^S} \mathcal{P})$ and write its minimal preimage in $C^b (_{\Lambda^S} \mathcal{P})$ as follows: \begin{equation*} \xymatrix{ \ldots \rightarrow 0 \rightarrow P^r \ar[r]^-{d^r} & \ldots \ar[r] & P^{s-1} \ar[r]^-{d^{s-1}} & P^s \rightarrow 0 \rightarrow \ldots} \end{equation*} Applying $\Lambda \otimes _{\Lambda^S} -$ and the restriction functor termwise, we get another object $\tilde{P} ^{\bullet} = P^{\bullet} \uparrow _{\Lambda^S} ^{\Lambda} \downarrow _{\Lambda^S} ^{\Lambda} \in C^b (_{\Lambda^S} \mathcal{P})$: \begin{equation*} \xymatrix{ \ldots \rightarrow 0 \rightarrow \Lambda \otimes P^r \ar[r]^-{1 \otimes d^r} & \ldots \ar[r] & \Lambda \otimes P^{s-1} \ar[r]^-{1 \otimes d^{s-1}} & \Lambda \otimes P^s \rightarrow 0 \rightarrow \ldots} \end{equation*} We claim that $P^{\bullet}$ is isomorphic to a direct summand of $\tilde{P} ^{\bullet}$. As in the proof of the previous lemma, the family $(\psi^i) _{i \in \mathbb{Z}}$, where $\psi^i$ denotes the map $\psi$ defined above for $M = P^i$, gives rise to a chain map $\psi^{\bullet}: P^{\bullet} \rightarrow \tilde{P} ^{\bullet}$. We check the commutativity: for $r \leqslant i \leqslant s-1$ and $v \in P^i$, \begin{equation*} (\psi^{i+1} \circ d^i)(v) = \psi^{i+1} (d^i(v)) = 1 \otimes d^i(v) = (1 \otimes d^i) (1 \otimes v) = (1 \otimes d^i) (\psi^i (v)). \end{equation*} Similarly, the family of maps $(\varphi^i) _{i \in \mathbb{Z}}$ gives rise to a chain map $\varphi^{\bullet}: \tilde{P} ^{\bullet} \rightarrow P^{\bullet}$ as shown by: \begin{align*} (\varphi^{i+1} \circ (1 \otimes d^i))(\lambda \otimes v) & = \varphi^{i+1} (\lambda \otimes d^i(v)) = \zeta(\lambda) d^i(v)\\ & = d^i(\zeta (\lambda) v) = (d^i \circ \varphi^i)(\lambda \otimes v). \end{align*} Moreover, $\varphi ^{\bullet} \circ \psi ^{\bullet}$ is the identity map, so as claimed, $P^{\bullet}$ is isomorphic to a direct summand of $\tilde{P} ^{\bullet}$. Therefore, for every indecomposable object $X^{\bullet} \in K^b (_{\Lambda^S} \mathcal{P})$, there is an indecomposable object $\tilde{X} ^{\bullet} \in K^b (_{\Lambda} \mathcal{P})$ such that $X^{\bullet}$ is isomorphic to a direct summand of $\tilde{X} ^{\bullet} \downarrow _{\Lambda^S} ^{\Lambda}$. Consequently, $l(X^{\bullet}) \leqslant l( \tilde{X} ^{\bullet})$, and $\sgldim \Lambda^S \leqslant \sgldim \Lambda$. \end{proof}
\section{First identification of massive AGB stars and previous spectroscopic surveys} The first identification of truly massive ($>$ 3$-$4 M$_\odot$) asymptotic giant branch (AGB) stars dates back about 30 years (Wood, Bessel \& Fox 1993). Photometric surveys of AGB stars in the Magellanic Clouds (MCs) uncovered the first examples of very luminous O-rich AGB stars, while the C-rich AGB stars are generally found to be much fainter. These stars were found to be long-period variables (of Mira type) with periods between $\sim$500 and 800 days and enriched in heavy neutron-rich s-process elements. They were found to be concentrated in a narrow luminosity range, with bolometric magnitudes (M$_{bol}$) between $-$7 and $-$6, consistent with relatively high-mass progenitors (see Wood, Bessel \& Fox 1993 for more details). Subsequent high-resolution optical spectroscopic surveys of visually bright AGB stars in both MCs (LMC and SMC) discovered that these stars are rich in Li (see Figure 1), which confirmed the activation of the hot bottom burning (HBB) process in these stars (see e.g., Smith \& Lambert 1989, 1990; Plez, Smith \& Lambert 1993; Smith et al. 1995). \begin{figure}[] \resizebox{\hsize}{!}{\includegraphics[clip=true]{Fig1.eps}} \caption{\footnotesize Li abundances (derived by Smith et al. 1995) as a function of M$_{bol}$ in O-rich HBB AGB stars in both Magellanic Clouds. Solar metallicity model abundances for HBB from Sackmann \& Boothroyd (1992) are shown for two stellar masses. Adapted from Smith et al. (1995).\label{eta} } \end{figure} A more detailed chemical abundance analysis was carried out by Plez, Smith \& Lambert in the early nineties. This study showed that the Li-rich HBB stars in the SMC display very low C isotopic ratios (very near to the equilibrium values), as expected from HBB models. However, these stars are not rich in Rb but are rich in other s-process elements like Zr and Nd (see Table 5 in Plez, Smith \& Lambert 1993). This suggests that these low-metallicity HBB stars produce s-process elements via the $^{13}$C neutron source (see also Abia et al. 2001). More recently, HBB stars have been identified in the very low-metallicity dwarf galaxy IC 1613 (Menzies, Whitelock \& Feast 2015). One of these stars, G3011, displays a strong Li line and is very likely Li-rich (see Fig. 2 in Menzies, Whitelock \& Feast 2015). The average metallicity of IC 1613 is even lower than that of the SMC, down to [Fe/H]$\sim$$-$1.6 dex, but the HBB stars are likely younger and more metal-rich. In our own Galaxy, high-resolution optical spectroscopic surveys of very luminous OH/IR stars were carried out about 10 years ago (Garc\'{\i}a-Hern\'andez et al. 2006, 2007). Most of the stars with periods longer than 400 days and OH expansion velocities (V$_{exp}$(OH)) higher than 6 kms$^{-1}$ were found to be Li-rich, with Li abundances (log$\varepsilon$(Li)) ranging from 1 to 3 dex, confirming them as massive HBB AGB stars. These massive Galactic AGB stars, however, are not rich in the s-element Zr (Garc\'{\i}a-Hern\'andez et al. 2007). In strong contrast with the SMC HBB stars, these Galactic stars displayed strong Rb overabundances, with [Rb/Fe] ranging from 0 to 2.6 dex (see Garc\'{\i}a-Hern\'andez et al. 2006). In short, the more massive O-rich AGB stars of our Galaxy display strong Rb overabundances with only mild Zr enhancements, as expected from the strong activation of the $^{22}$Ne neutron source (Garc\'{\i}a-Hern\'andez et al. 2006). 
Figure 2 displays the Rb abundances obtained by Garc\'{\i}a-Hern\'andez et al. (2006) versus the V$_{exp}$(OH). The V$_{exp}$(OH) can be used as a distance-independent mass indicator in OH/IR stars. The strong Rb enhancement (up to $\sim$100 times solar) confirms the activation of the $^{22}$Ne neutron source in massive AGBs. The AGB nucleosynthesis models can reproduce the observed correlation between the Rb abundances and the stellar mass. However, they cannot explain the extremely high Rb abundances seen in the more extreme stars (Garc\'{\i}a-Hern\'andez et al. 2006). \begin{figure}[] \resizebox{\hsize}{!}{\includegraphics[clip=true,angle=-90]{Fig2.eps}} \caption{\footnotesize Rb abundances in massive Galactic AGB stars, as obtained by Garc\'{\i}a-Hern\'andez et al. (2006), versus OH expansion velocity (V$_{exp}$(OH)). The abundance estimates that correspond to the photospheric abundance needed to fit the stellar component are shown with red triangles. A maximum error bar of $\pm$0.8 dex is also shown for comparison. The star with high V$_{exp}$(OH) and no Rb, which does not follow the observed correlation, is TZ Cyg; it is possibly a non-AGB star, as it is not a long-period, Mira-like variable. Adapted from Garc\'{\i}a-Hern\'andez et al. (2006). \label{eta2}} \end{figure} More recently, a few massive Galactic AGB stars at the beginning of the thermally pulsing phase have been identified, permitting us to study the nucleosynthesis at the early AGB stages (Garc\'{\i}a-Hern\'andez et al. 2013). These stars are super Li-rich (log$\varepsilon$(Li) up to $\sim$4 dex) and the strong Li enrichment is seen together with a complete lack of the s-process elements Rb, Zr, and Tc, as predicted by the theoretical models. This confirms that HBB is strongly activated at the early AGB stages and that the s-process is dominated by the $^{22}$Ne neutron source. \section{Metallicity effects} Metallicity has a strong impact on the chemical evolution of massive AGB stars. Table 1 summarizes the observational properties of massive AGB stars in the Galaxy and the MCs. There are differences in the dust production, dredge-up efficiency and AGB lifetime with metallicity. Also, HBB is activated at lower initial masses at lower metallicity. The Galactic stars are s-process poor with very high Rb/Zr ratios, typical of high neutron density and the $^{22}$Ne neutron source. On the other hand, the MCs stars are rich in s-elements but Rb-poor, with low Rb/Zr ratios characteristic of the lower neutron density of the $^{13}$C neutron source. \begin{table*} \caption{Main observational properties of Galactic HBB AGB stars compared to those of Magellanic Clouds (MCs) HBB AGB stars. Differences are attributed to metallicity effects.} \label{abun} \begin{center} \begin{tabular}{lccccccc} \hline \\ & Dust & & AGB & HBB & & Neutron & Neutron \\ & production & Dredge-up & lifetime & activation & [s/Fe] & density & source \\ \hline \\ Galaxy & efficient & non-efficient & small \# of TPs & for M$>$4M$_\odot$ & $<$0.5 dex & high & $^{22}$Ne \\ MCs & non-efficient & efficient & large \# of TPs & for M$>$3M$_\odot$ & $>$0.5 dex & low & $^{13}$C \\ \hline \end{tabular} \end{center} \end{table*} But where are the low-metallicity Rb-rich AGB counterparts? In 2009, we carried out a high-resolution optical spectroscopic survey of luminous obscured O-rich stars (including most of the known OH/IR stars) in the Magellanic Clouds and we found the low-metallicity Rb-rich massive AGB counterparts (Garc\'{\i}a-Hern\'andez et al. 2009). 
These stars display extremely high Rb abundances (from $\sim$10$^{3}$ to 10$^{5}$ times solar). Interestingly, the Rb abundance jumps at an M$_{bol}$ of $-$7 (see Fig. 3 in Garc\'{\i}a-Hern\'andez et al. 2009); a result that is very useful to distinguish such stars in other Local Group galaxies. \section{Massive AGB or super-AGB stars?} AGB evolutionary models for the LMC predict the very luminous massive AGB stars mentioned above, and the contribution of HBB to the luminosity explains their luminosities in excess of the AGB theoretical limit. According to these models, the Rb-rich AGBs in the LMC could have progenitor masses of at least 6$-$7 M$_\odot$ (see Fig. 7 in Ventura, D'Antona \& Mazzitelli 2000), so it is actually not clear if they are massive AGB or super-AGB stars. Indeed, both massive AGB and super-AGB stars can be easily separated from massive red supergiants (RSGs) by using the photometric variability or the chemical composition from high-resolution spectroscopy. However, the theoretical nucleosynthesis models predict massive AGB and super-AGB stars to be chemically identical. In both types of stars, HBB dominates and the s-process nucleosynthesis is mainly driven by the $^{22}$Ne neutron source. Doherty et al. (2017) present the s-process predictions for super-AGB stars at several metallicities. It is clear that the s-process pattern of super-AGB stars is identical to the one expected in massive AGB stars, with Rb being by far the most abundant s-process element. \section{The Rb problem} The detection of Rb-rich AGB stars in the MCs (Garc\'{\i}a-Hern\'andez et al. 2009) uncovered an Rb problem that has two parts: i) the extremely high Rb overabundances observed ([Rb/Fe]$\sim$3$-$5 dex); and ii) the large [Rb/Zr] ($\geq$3$-$4) ratios. The standard theoretical models (e.g., van Raai et al. 2012) are far from matching the observational results, and within the framework of the s-process it is not possible to overproduce Rb without co-producing Zr at similar levels. These standard theoretical models qualitatively describe the observations of Rb-rich AGB stars in both the MCs and our Galaxy, in the sense that increasing Rb abundances with increasing stellar mass and decreasing metallicity are theoretically predicted (see Table 2 in Garc\'{\i}a-Hern\'andez et al. 2009). However, the predicted Rb abundances and [Rb/Zr] ratios do not reach such extreme values. The extremely high Rb enhancements and [Rb/Zr] ratios are thus likely artefacts of the abundance analysis: the adopted hydrostatic model atmospheres likely fail to represent the real stars. More realistic model atmospheres for extreme AGB stars have been developed, i.e., taking into account the presence of a circumstellar envelope with a radial wind (Zamora et al. 2014). The main result of these new pseudo-dynamical (extended atmosphere) models is that the effect of the circumstellar envelope is dramatic and the new Rb abundances are much lower (sometimes by orders of magnitude) than those obtained with classical hydrostatic models (e.g., Garc\'{\i}a-Hern\'andez et al. 2006, 2009). On the other hand, Zr is practically unaffected by the presence of the circumstellar envelope, and the Zr abundances with pseudo-dynamical models are nearly solar, i.e., very similar to those from the hydrostatic ones. This is because the ZrO bandhead used in the chemical analysis is formed deeper in the atmosphere and is much less affected than Rb (see Zamora et al. 2014). 
The Rb abundances in the full sample of massive AGB stars in our Galaxy have recently been re-calculated with these extended atmosphere models by P\'erez-Mesa et al. (2017). The results are shown in Figure 3, which compares the abundances obtained with hydrostatic models to the new ones obtained with pseudo-dynamical models. An important result is that the Rb abundances strongly depend on the mass-loss rate, which is unknown for these stars. Thus, in order to break the degeneracy in the model fits, accurate mass-loss rate estimates for these stars (e.g., by observing the CO rotational lines in the radio domain) would be needed. \begin{figure*}[tbh] \resizebox{0.90\hsize}{!}{\includegraphics[clip=true]{Fig3.eps}} \caption{\footnotesize Rb abundances vs. the expansion velocity (v$_{exp}$(OH)) for extended model atmospheres with $\dot{M}$ = 10$^{-8}$, 10$^{-7}$ and 10$^{-6}$ M$_{\odot}yr^{-1}$ (green triangles, yellow squares and cyan diamonds, respectively), as obtained by P\'erez-Mesa et al. (2017), in comparison with those obtained by Garc\'{\i}a-Hern\'andez et al. (2006) from hydrostatic models (blue dots). Adapted from P\'erez-Mesa et al. (2017). \label{eta3}} \end{figure*} In short, the new Rb abundances and [Rb/Zr] ratios derived with more realistic AGB model atmospheres largely resolve the mismatch between the observations of massive Rb-rich AGB stars and the theoretical predictions. The new [Rb/Fe] abundances and [Rb/Zr] ratios range from 0.0 to 1.3 dex and from $-$0.3 to $+$0.7, respectively (see Fig. 11 in P\'erez-Mesa et al. 2017). These ranges are in good agreement with the synthetic results of the Monash models, both the standard (van Raai et al. 2012; Karakas \& Lugaro 2016) and the delayed superwind models (Karakas, Garc\'{\i}a-Hern\'andez \& Lugaro 2012). However, the FRUITY\footnote{FUll-Network Repository of Updated Isotopic Tables and Yields: http://fruity.oa-teramo.inaf.it/.} AGB models predict too low Rb abundances, at odds with the observational evidence. Also, we have very recently re-calculated the Li abundances using these pseudo-dynamical models (P\'erez-Mesa et al.; these proceedings). The main finding is that the Li abundances are only slightly affected by circumstellar effects and are practically identical to those previously obtained with hydrostatic models, which further confirms the HBB activation in massive AGBs of our Galaxy. In this case, the much lower Li column density as compared to Rb likely explains the results with extended atmosphere models. Finally, we note that the far-IR Herschel observations of extreme Galactic OH/IR stars also confirm the HBB nature of these stars. The O and C isotopic ratios obtained from the observed water and CO lines are consistent with HBB (see e.g., Justtanont et al.; these proceedings). \section{On-going spectroscopic surveys and future facilities} The SDSS-IV/APOGEE-2 is an on-going massive spectroscopic survey of Galactic giant stars in the near-IR H-band (see Blanton et al. 2017 and references therein). The resolution is about 20,000, and the H-band permits the abundances of up to 15 elements to be obtained by using molecular (e.g., OH, CO and CN) and atomic lines. The abundances of some s-process elements like Nd and Ce, as well as the C isotopic ratios, can also be obtained for an important fraction of the observed stars. The APOGEE-2 survey is the continuation of APOGEE-1, which already observed more than 150,000 stars. It is composed of two identical instruments, one in each hemisphere. 
APOGEE-2 will observe more than 300,000 stars (mainly RGB and AGB stars). Only a few detailed chemical analyses (from high-resolution near-IR spectroscopic data) of massive AGB stars have been carried out so far (e.g., McSaveney et al. 2007). Thus, APOGEE-2 will permit better studies of HBB, the third dredge-up and the s-process in massive AGB stars by observing a larger number of stars, especially in the inner Galaxy and the MCs. New discoveries are also expected; e.g., the very recent identification of very low-metallicity HBB-AGB stars towards the Galactic bulge (Zamora et al., in preparation). In addition, we are now in the Gaia era. Gaia is expected to provide the distances (and so the luminosities) for all types of Galactic AGB stars, giving strong constraints on the AGB theoretical models. For example, the luminosities would help to better distinguish massive AGBs from the more luminous super-AGB stars. However, we anticipate that this is going to be rather difficult because these stars are very faint (and strongly variable) in the optical Gaia bands. Finally, the future James Webb Space Telescope (JWST) will be much more sensitive than Spitzer, with access to the near-IR range. This mission will permit near- and mid-IR spectroscopic studies of individual massive AGB and super-AGB stars in Local Group galaxies. Such observations would permit the study of their dust chemical composition, mass-loss rates, dust production, as well as their temperatures and possibly abundances (see Boyer; these proceedings). \section{Summary} In summary, multiwavelength (from the optical to the far-IR) spectroscopic observations confirm the HBB and $^{22}$Ne activation in massive AGBs (and super-AGB stars). Accurate mass-loss rates for these stars can break the degeneracy of the more realistic extended model atmospheres developed for them, which will permit more reliable Rb abundances. At present, we can efficiently distinguish between massive AGBs and RSGs by using the photometric variability information and the chemical composition from high-resolution spectroscopic observations. However, massive AGB stars are chemically identical to super-AGBs, and current observational samples may contain super-AGB stars. On-going massive spectroscopic (and photometric) surveys like SDSS-IV/APOGEE-2 and Gaia, as well as future facilities like the JWST, will contribute to a better understanding of massive AGB stars and perhaps to the first unambiguous detection of a super-AGB star. \begin{acknowledgements} DAGH acknowledges support provided by the Spanish Ministry of Economy and Competitiveness (MINECO) under grant AYA$-$2014$-$58082$-$P. DAGH was also funded by the Ram\'on y Cajal fellowship number RYC$-$2013$-$14182. \end{acknowledgements} \bibliographystyle{aa}
\section{Introduction} Part-of-speech (POS) tagging is the process of tagging tokens in a string with their corresponding part of speech (e.g., noun, verb, etc). POS tagging is considered one of the most basic tasks in NLP, as it is usually the first component in an NLP pipeline. This is because POS tags are shown to be useful features in various NLP tasks, such as named entity recognition~\cite{luo2015joint, aguilar2017multi}, machine translation~\cite{sennrich-haddow:2016:WMT, niehues2017exploiting} and constituency parsing~\cite{dyer2016}. Therefore, for any language, building a successful NLP system usually requires a well-performing POS tagger. There is quite a number of studies on Indonesian POS tagging~\cite{pisceldo2009,wicaksono2010,rashel2014,abka2016}. However, almost all of them are not evaluated on a common dataset. Even when they are, their train-test splits are not the same. This lack of a common benchmark dataset makes a fair comparison among these works difficult. Moreover, despite the success of neural network models for English POS tagging~\cite{ma2016,rei2016}, the use of neural networks is generally unexplored for Indonesian. As a result, published results may not reflect the actual state-of-the-art performance of Indonesian POS taggers. In this work, we explored different neural network architectures for Indonesian POS tagging. We evaluated our experiments on the IDN Tagged Corpus~\cite{dinakaramani2014}. Our best model achieves 97.47 $F_1$ score, a new state-of-the-art result for Indonesian POS tagging on the dataset. We release the dataset split that we used to serve as a benchmark for future work. \section{Related Work} Pisceldo et al.~\cite{pisceldo2009} built an Indonesian POS tagger by employing a conditional random field (CRF)~\cite{lafferty2001} and a maximum entropy model. They used contextual unigram and bigram features and achieved accuracy scores of 80-90\% on the PANL10N\footnote{http://www.panl10n.net} dataset, which was tagged manually using their proposed tagset. The dataset consists of 15K sentences. Another work used a hidden Markov model enhanced with an affix tree to better handle out-of-vocabulary (OOV) words~\cite{wicaksono2010}. They evaluated their models on the same PANL10N dataset and achieved more than 90\% overall accuracy and roughly 70\% accuracy for the OOV cases. We note that while the datasets are the same, the splits could be different. Thus, making a fair comparison between them is difficult. Dinakaramani et al.~\cite{dinakaramani2014} proposed the IDN Tagged Corpus, a new manually annotated POS tagging corpus for Indonesian. The corpus consists of 10K sentences and 250K tokens, and its tagset is different than that of the PANL10N dataset. The corpus is available online.\footnote{https://github.com/famrashel/idn-tagged-corpus} A rule-based tagger was developed in~\cite{rashel2014} using the aforementioned dataset, achieving an accuracy of 80\%. One of the neural network-based POS taggers for Indonesian was proposed in~\cite{abka2016}. They used a feedforward neural network with an architecture similar to that proposed in~\cite{collobert2011}. They evaluated their methods on the new POS tagging corpus~\cite{dinakaramani2014} and separated the evaluation of multi- and single-word expressions. They experimented with several word embedding algorithms trained on Indonesian Wikipedia data and reported macro-averaged $F_1$ scores of 91 and 73 for the single- and multi-word expression cases respectively. 
We remark that the choice of macro-averaged $F_1$ score is more suitable than accuracy for POS tagging because of the class imbalance in the dataset. There are too many words with NN as the true POS tag, so accuracy is not the best metric in such a case. \section{Methodology} \subsection{Dataset} We used the IDN Tagged Corpus proposed in~\cite{dinakaramani2014}. The corpus contains 10K sentences and 250K tokens that are tagged manually. Due to the small size,\footnote{As a comparison, the Penn Treebank corpus for English has 40K sentences.} we used 5-fold cross-validation to split the corpus into training, development, and test sets. We did not split multi-word expressions but treated them as if they were a single token. All 5 folds of the dataset are available publicly\footnote{https://github.com/kmkurn/id-pos-tagging/blob/master/data/dataset.tar.gz} to serve as a benchmark for future work. \subsection{Baselines} We used two simple baselines: majority tag (\textsc{Major}) and memorization (\textsc{Memo}). \textsc{Major} simply predicts the majority POS tag found in the training set for all words. \textsc{Memo} remembers the word-tag assignments from the training set and uses them to predict the tags on the test set. If there is an unknown word, it simply outputs the majority tag found in the training set. \subsection{Comparisons} \subsubsection{Rule-based tagger} We adopted a rule-based tagger designed by Rashel et al.~\cite{rashel2014building} as one of our comparisons. Firstly, the tagger tags named entities and multi-word expressions based on a dictionary. Then, it uses MorphInd~\cite{larasati2011indonesian} to tag the rest of the words. Finally, it employs 15 hand-crafted rules to resolve ambiguous tags in the post-processing step. We want to note that we did not use their provided tokenizer since the IDN Tagged Corpus dataset is already tokenized. Their implementation is available online.\footnote{https://github.com/andryluthfi/indonesian-postag} \subsubsection{Conditional random field (CRF)} We used a CRF~\cite{lafferty2001} as another comparison since it is the most common non-neural model for sequence labeling tasks. We employed contextual words as well as affixes as features. For some context window size $d$, the complete list of features is: \begin{enumerate} \item the current word, as well as the $d$ preceding and succeeding words; \item two and three leading characters of the current word and of the $d$ preceding and succeeding words; \item two and three trailing characters of the current word and of the $d$ preceding and succeeding words. \end{enumerate} The last two features are meant to capture prefixes and suffixes in Indonesian, which usually consist of two or three characters. One advantage of this feature extraction approach is that it does not require language-specific tools such as a stemmer or a morphological segmenter. This advantage is particularly useful for Indonesian, which does not have well-established tools for such purposes. We padded the input sentence with padding tokens to ensure that every token has enough preceding and succeeding words for context window size $d$. For the implementation, we used \texttt{pycrfsuite}.\footnote{https://github.com/scrapinghub/python-crfsuite} \subsubsection{Neural network-based tagger} Our neural network-based POS tagger can be divided into three steps: embedding, encoding, and prediction. First, the tagger embeds the words and, optionally, additional features of those words (e.g., affixes). 
From this embedding process, we get vector representations of the words and the features. Next, the tagger learns contextual information in the encoding step via either a feedforward network with a context window or a bidirectional LSTM~\cite{hochreiter1997}. Finally, in the prediction step, the tagger predicts the POS tags from the output of the encoding step using either a softmax or a CRF layer. \textbf{Embedding}.~~In the embedding step, the tagger obtains vector representations of each word and additional features. We experimented with several additional features: prefixes, suffixes, and characters. Prefix features are the first 2 and 3 characters of the word. Likewise, suffix features are the last 2 and 3 characters of the word.\footnote{If the word has fewer than 3 characters, then all the prefixes and suffixes are equal to the word itself.} For the character features, we followed~\cite{ma2016} by embedding each character and composing the resulting vectors with a max-pooled CNN. The final embedding of a word is then the concatenation of all these vectors. Fig.~\ref{fig:embed} shows an illustration of the process. \begin{figure}[!t] \includegraphics[width=\linewidth]{embed.png} \caption{Illustration of the embedding step. The word and its affixes are embedded to obtain their vector representations. Character embeddings of the word are composed with a max-pooled CNN. The final word embedding is the concatenation of all the result vectors.}\label{fig:embed} \end{figure} \textbf{Encoding}.~~In the encoding step, the tagger learns contextual information by using either a feedforward network with a context window or a bidirectional LSTM (biLSTM). The feedforward network accepts as input the concatenation of the embedding of the current word and the $d$ preceding and succeeding words for some context window size $d$. Formally, given a sequence of word embeddings $\textbf{x}_1, \textbf{x}_2, \ldots, \textbf{x}_n$, the input of the feedforward network at timestep $t$ is \begin{equation} \textbf{z}_t = \textbf{x}_{t-d} \oplus \ldots \oplus \textbf{x}_t \oplus \ldots \oplus \textbf{x}_{t+d} \end{equation} where $\oplus$ denotes concatenation. The feedforward network then computes \begin{align} \textbf{o}_t &= \text{FF}(\textbf{z}_t) \\ &= W^{(2)} (\tanh(W^{(1)} \textbf{z}_t) \ast \textbf{r}_t) \end{align} where $\textbf{o}_t$ is the output vector, $\textbf{r}_t$ is a dropout mask vector, $\ast$ denotes element-wise multiplication, and $W^{(1)}, W^{(2)}$ are parameters.\footnote{There are bias vectors included as parameters, but we omit them from the equation for brevity.} The output vector $\textbf{o}_t$ has length equal to the number of possible tags. Its $j$-th component defines the (unnormalized) log probability of the $t$-th word having tag $j$. On the other hand, the biLSTM accepts as input the sequence of word embeddings, and for each timestep, the outputs from the forward and backward LSTMs are concatenated to form the final output. Formally, the output at each timestep $t$ can be expressed as \begin{equation} \textbf{h}_t = \overrightarrow{\textbf{h}}_t \oplus \overleftarrow{\textbf{h}}_t \end{equation} where \begin{align} \overrightarrow{\textbf{h}}_t &= \overrightarrow{\text{LSTM}}(\textbf{x}_t, \overrightarrow{\textbf{h}}_{t-1}) \\ \overleftarrow{\textbf{h}}_t &= \overleftarrow{\text{LSTM}}(\textbf{x}_t, \overleftarrow{\textbf{h}}_{t-1}) \end{align} The vector $\textbf{h}_t$ is then passed through $\text{FF}(\cdot)$ as before to obtain $\textbf{o}_t$. 
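To make the encoding step concrete, below is a minimal sketch of the biLSTM variant in PyTorch. This is not the code we used for our experiments; all names and the illustrative hyperparameter values (e.g., the tag count) are our own assumptions. \begin{verbatim} import torch import torch.nn as nn class BiLSTMEncoder(nn.Module): def __init__(self, emb_dim, hidden_size, num_tags, dropout=0.5): super().__init__() # dropout layer placed before the biLSTM input, as in the text self.dropout = nn.Dropout(dropout) self.bilstm = nn.LSTM(emb_dim, hidden_size, batch_first=True, bidirectional=True) # FF(.) maps each h_t (forward and backward states # concatenated) to a vector o_t of tag scores self.ff = nn.Sequential( nn.Linear(2 * hidden_size, hidden_size), nn.Tanh(), nn.Dropout(dropout), # plays the role of the mask r_t nn.Linear(hidden_size, num_tags), ) def forward(self, embeddings): # (batch, seq_len, emb_dim) h, _ = self.bilstm(self.dropout(embeddings)) return self.ff(h) # (batch, seq_len, num_tags) # Usage: tag scores o_t for a batch of 8 sentences of length 20, # assuming 150-dim word embeddings and an illustrative 23-tag tagset. encoder = BiLSTMEncoder(emb_dim=150, hidden_size=100, num_tags=23) scores = encoder(torch.randn(8, 20, 150)) \end{verbatim} The scores can be fed either to a softmax layer or to a CRF layer, as described next.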
\textbf{Prediction}.~~In the prediction step, the tagger predicts the POS tag of the $t$-th word based on the output vector $\textbf{o}_t$. We tested two approaches: a softmax layer with greedy decoding and a CRF layer with Viterbi decoding. With a softmax layer, the tagger simply normalizes $\textbf{o}_t$ and predicts using greedy decoding, i.e. picking the tag with the highest probability. In contrast, with a CRF layer, the tagger treats $\textbf{o}_t$ as emission probability scores, models the tag-to-tag transition probability scores, and uses the Viterbi algorithm to select the most probable tag sequence as the prediction. We refer readers to~\cite{lample2016} to read more about how the CRF layer and Viterbi decoding work. We want to note that when we only embed words, encode using a feedforward network, and predict using greedy decoding, the tagger is effectively the same as that in~\cite{abka2016}. Also, when only the word and character features are used, with a biLSTM and a CRF layer, the tagger is effectively the same as that in~\cite{ma2016}. Our implementation code is available online.\footnote{https://github.com/kmkurn/id-pos-tagging} \subsection{Experiments Setup} For all models, we preprocessed the dataset by lowercasing all words, except when the characters were embedded. For the CRF model, we used L2 regularization whose coefficient was tuned on the development set. As we mentioned previously, we tuned the context window size $d$ on the development set as well. For the neural tagger, we set the size of the word, affix, and character embedding to 100, 20, and 30 respectively. We applied dropout regularization to the embedding layers. The max-pooled CNN has 30 filters for each filter width. We set the feedforward network and the biLSTM to have 100 hidden units. We put a dropout layer before the biLSTM input layer. We tuned the learning rate, dropout rate, context window size, and CNN filter width on the development set. As we said earlier, we experimented with different configurations in the embedding, encoding, and prediction steps. We evaluated each configuration on the development set as well. At training time, we used a batch size of 8, decayed the learning rate by half if the $F_1$ score on the development set did not improve after 2 epochs, and stopped the training early if the score still did not improve after decaying the learning rate 5 times. To address the exploding gradient problem, we normalized the gradient norm at 1, following the suggestion in~\cite{reimers2017}. To handle the out-of-vocabulary problem, we converted singleton words and affixes occurring fewer than 5 times in the training data into a special token for unknown words/affixes. \subsection{Evaluation} Since the dataset is highly imbalanced (the majority of words are nouns), using the accuracy score as the evaluation metric is not appropriate, as it gives a high score to a model that always predicts nouns regardless of input. Therefore, we decided to use the $F_1$ score, which considers both precision and recall of the predictions. Since there are multiple tags, there are two flavors of computing an overall $F_1$ score: micro and macro average. For a POS tagging task where the tags do not span multiple words, the micro-average $F_1$ score is exactly the same as the accuracy score. Thus, the macro-average $F_1$ score is our only option. However, there is still an issue. The macro-average $F_1$ score computes the overall $F_1$ score by averaging the $F_1$ scores of the individual tags. 
This approach means that when the model wrongly predicts a rarely occurring tag (e.g., foreign word), it is penalized as heavily as when it wrongly predicts a frequent tag. To address this problem, we used the weighted macro-average $F_1$ score, which takes into account the tag proportion imbalance. It computes the weighted average of the scores, where each weight is equal to the corresponding tag's proportion in the dataset. This functionality is available in the \texttt{scikit-learn} library.\footnote{http://scikit-learn.org/stable/modules/generated/sklearn.metrics.f1\_score.html} \section{Results and Discussion} \begin{table}[!t] \renewcommand{\arraystretch}{1.3} \caption{Dev $F_1$ score of each neural tagger architecture}\label{tbl:dev-neural} \centering \begin{tabular}{|l|r|} \hline \multicolumn{1}{|c}{\textbf{Architecture}} & \multicolumn{1}{|c|}{\textbf{Mean} $\mathbf{F_1}$} \\ \hline Feedforward + softmax & 97.25 \\ Feedforward + CRF & 97.46 \\ biLSTM + softmax & 97.57 \\ biLSTM + CRF & \textbf{97.60} \\ \hline \end{tabular} \end{table} \begin{table}[!t] \renewcommand{\arraystretch}{1.3} \caption{Test $F_1$ score of each method, averaged over 5 folds}\label{tbl:results} \centering \begin{tabular}{|l|r|} \hline \multicolumn{1}{|c}{\textbf{Method}} & \multicolumn{1}{|c|}{$\mathbf{F_1}$} \\ \hline \textsc{Major} & 9.39 (0.21) \\ \textsc{Memo} & 90.62 (0.82) \\ Rashel et al.~\cite{rashel2014building} & 85.77 (0.22)\\ CRF & 96.22 (0.22) \\ biLSTM + CRF & \textbf{97.47 (0.11)} \\ \hline \end{tabular} \end{table} \begin{table}[!t] \renewcommand{\arraystretch}{1.3} \caption{Feature ablation of the best neural tagger}\label{tbl:ablation} \centering \begin{tabular}{|l|r|} \hline \multicolumn{1}{|c}{\textbf{Neural tagger}} & \multicolumn{1}{|c|}{$\mathbf{F_1}$} \\ \hline biLSTM + CRF & 96.06 (+0.00) \\ biLSTM + CRF + chars & 97.42 (+1.36) \\ biLSTM + CRF + chars + prefix & 97.50 (+0.08) \\ biLSTM + CRF + chars + prefix + suffix & 97.60 (+0.10) \\ \hline \end{tabular} \end{table} Firstly, we report on our tuning experiments for the neural tagger. Table~\ref{tbl:dev-neural} shows the evaluation results of the many configurations of our neural tagger on the development set. We group the results by the encoding and prediction step configuration. For each group, we show the highest $F_1$ score among the many embedding configurations. As we can see, the biLSTM with a CRF layer achieves 97.60 $F_1$ score, the best score on the development set. This result agrees with much previous work in neural sequence labeling showing that a bidirectional LSTM with a CRF layer performs best~\cite{rei2016,lample2016,ma2016}. Therefore, we will use this tagger to represent the neural model hereinafter. \begin{figure}[!t] \includegraphics[width=\linewidth]{conf_mat1.pdf} \caption{Confusion matrix of the best biLSTM with CRF tagger from the development set of the first fold. The tagger seems to have difficulties dealing with words annotated as X and confuses WH with SC.}\label{fig:cf} \end{figure} To understand the performance of the neural model for each tag, we plot the confusion matrix from the development set of the first fold in Fig.~\ref{fig:cf}. The figure shows that the model can predict most tags almost perfectly, except for the X and WH tags. The X tag is described as "a word or part of a sentence which its category is unknown or uncertain".\footnote{http://bahasa.cs.ui.ac.id/postag/downloads/Tagset.pdf} The X tag is rather rare, as it only appears 397 times out of over 250K tokens. Some words annotated as X are typos and slang words. 
Some foreign terms and abbreviations are also annotated with X. The model might get confused, as such words are usually tagged with a noun tag (NN or NNP). We also see that the model seems to confuse question words (WH) such as \textit{apa} (what) or \textit{siapa} (who) with SC, since these words may be used in subordinate clauses as well. Looking at the data closely, we found that the tagging of such words is inconsistent. This inconsistency contributes to the inability of the model to distinguish the two tags well. Next, we present the result of evaluating the baselines and other comparisons on the test set in Table~\ref{tbl:results}. The $F_1$ scores are averaged over the 5 cross-validation folds. We see that the \textsc{Major} baseline performs very poorly compared to the \textsc{Memo} baseline, which surprisingly achieves over 90 $F_1$ points. This result suggests that \textsc{Memo} is a more suitable baseline for this dataset than \textsc{Major}. The result also provides evidence of the usefulness of our evaluation metric, which heavily penalizes a simple majority vote model. Furthermore, we notice that the rule-based tagger by Rashel et al.~\cite{rashel2014} performs worse than \textsc{Memo}, indicating that \textsc{Memo} is not just suitable but also quite a strong baseline. Moving on, we observe that the CRF has a 6-point advantage over \textsc{Memo}, signaling that incorporating contextual features and modeling tag-to-tag transitions are useful. Lastly, the biLSTM with CRF tagger performs the best with 97.47 $F_1$ score. To understand how each feature in the embedding step affects the neural tagger, we performed a feature ablation on the development set and put the results in Table~\ref{tbl:ablation}. We see that with only words as features (first row), the neural tagger only achieves 96.06 $F_1$ score. Employing character features boosts the score up to 97.42, a gain of 1.36 points. Adding prefix and suffix features improves the performance further by 0.08 and 0.10 points respectively. From this result, we see that it is the character features that positively affect the neural tagger the most. \section{Conclusion} We experimented with several baselines and comparisons for the Indonesian POS tagging task. Our comparisons include a rule-based tagger, a well-established probabilistic model for sequence labeling (CRF), and a neural model. We tested many configurations for our neural model: the features (words, affixes, characters), the architecture (feedforward, biLSTM), and the output layer (softmax, CRF). We evaluated all our models on the IDN Tagged Corpus~\cite{dinakaramani2014}, a manually annotated and publicly available Indonesian POS tagging dataset. Our best model achieves 97.47 $F_1$ score, a new state-of-the-art result on the dataset. We make our cross-validation split available publicly to serve as a benchmark for future work. \bibliographystyle{IEEEtranBST/IEEEtran}
\section{Introduction} \label{sec:intro} Absolute luminosity measurements are of general interest to colliding-beam experiments at storage rings. Such measurements are necessary to determine the absolute cross-sections of reaction processes and to quantify the performance of the accelerator. The required accuracy on the value of the cross-section depends on both the process of interest and the precision of the theoretical predictions. At the LHC, the required precision on the cross-section is expected to be of order 1--2\%. This estimate is motivated by the accuracy of theoretical predictions for the production of vector bosons and for the two-photon production of muon pairs~\cite{ref:ws-precision-mangano,ref:ws-precision-anderson,ref:thorne,ref:delorenzi}. In a cyclical collider, such as the LHC, the average instantaneous luminosity of one pair of colliding bunches can be expressed as~\cite{ref:moller} \begin{equation} \label{eq:luminosity} L = N_1 \, N_2 \, f \sqrt{(\vec{v}_1-\vec{v}_2)^2-\frac{(\vec{v}_1\times \vec{v}_2)^2}{c^2}}\int{\rho_1(x,y,z,t)\rho_2(x,y,z,t) \ {\rm d}x\,{\rm d}y\,{\rm d}z\,{\rm d}t} \ , \end{equation} where we have introduced the revolution frequency $f$ (11245 Hz at the LHC), the numbers of protons $N_1$ and $N_2$ in the two bunches, the corresponding velocities $\vec{v}_1$ and $\vec{v}_2$ of the particles,\footnote{In the approximation of zero emittance the velocities are the same within one bunch.} and the particle densities for beam~1 and beam~2, $\rho_{1,2}(x,y,z,t)$. The particle densities are normalized such that their individual integrals over all space are unity. For highly relativistic beams colliding with a very small half crossing-angle \ensuremath{\alpha}\xspace, the M{\o}ller factor $\sqrt{(\vec{v}_1-\vec{v}_2)^2-{(\vec{v}_1\times \vec{v}_2)^2}/{c^2}}$ reduces to $2c \cos^2{\ensuremath{\alpha}\xspace} \simeq 2c$. The integral in Eq.~\ref{eq:luminosity} is known as the beam overlap integral. Methods for absolute luminosity determination are generally classified as either direct or indirect. Indirect methods include, {\em e.g.}, the use of the optical theorem to make a simultaneous measurement of the elastic and total cross-sections~\cite{ref:totem,ref:alfa}, or the comparison with a process whose absolute cross-section is known, either from theory or by a previous direct measurement. Direct measurements make use of Eq.~\ref{eq:luminosity} and employ several strategies to measure the various parameters in the equation. The analysis described in this paper relies on two direct methods to determine the absolute luminosity calibration: the ``van der Meer scan'' method (VDM)~\cite{ref:vdm-method,ref:vdm-LHC} and the ``beam-gas imaging'' method (BGI)~\cite{ref:massi-bgmethod}. The BGI method is based on reconstructing beam-gas interaction vertices to measure the beam angles, positions and shapes. It was applied for the first time in LHCb (see Refs.~\cite{ref:lhcb-kshort,ref:balagura-moriond,ref:plamen-moriond}) using the first LHC data collected at the end of 2009 at $\sqrt{s} = 900~\ensuremath{\mbox{GeV}}\xspace$. The BGI method relies on the high precision of the measurement of interaction vertices obtained with the LHCb vertex detector. The VDM method exploits the ability to move the beams in both transverse coordinates with high precision and to thus scan the colliding beams with respect to each other. This method is also being used by other LHC experiments~\cite{ref:otherlhc}. The method was first applied at the CERN ISR~\cite{ref:vdm-method}. 
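As an illustration of Eq.~\ref{eq:luminosity}, consider the idealized case of head-on collisions ($\ensuremath{\alpha}\xspace = 0$) of bunches with uncorrelated Gaussian density profiles of transverse widths $\sigma_{ix}$ and $\sigma_{iy}$ ($i = 1,2$). The overlap integral can then be evaluated analytically, and the luminosity of the bunch pair reduces to the familiar expression \begin{equation*} L = \frac{N_1 \, N_2 \, f}{2\pi \sqrt{\sigma_{1x}^2+\sigma_{2x}^2}\,\sqrt{\sigma_{1y}^2+\sigma_{2y}^2}} \ . \end{equation*} For purely illustrative numbers (the bunch population value is an assumption typical of this running period, not a measured input used here), $N_1 = N_2 = 10^{11}$ protons and $\sigma_{ix} = \sigma_{iy} = 50~\ensuremath{\upmu \mbox{m}}\xspace$ give $L \simeq 3.6 \times 10^{29}~{\rm cm}^{-2}{\rm s}^{-1}$ for a single colliding bunch pair.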
Recently it was demonstrated that additional information can be extracted when the two beams probe each other, as during a VDM scan, allowing the individual beam profiles to be determined by using vertex measurements of $pp$ interactions in beam-beam collisions (beam-beam imaging)~\cite{ref:vdmimaging}. In principle, beam profiles can also be obtained by scanning wires across the beams~\cite{ref:wire-method} or by inferring the beam properties by theoretical calculation from the beam optics. Both methods lack precision, however, as they rely on detailed knowledge of the beam optics. The wire-scan method is limited by the achievable proximity of the wire to the interaction region, which introduces a dependence on the beam-optics model. The LHC operated with a $pp$ centre-of-mass energy of 7~TeV (3.5~TeV per beam). Typical observed values are close to 50~\ensuremath{\upmu \mbox{m}}\xspace for the transverse beam sizes and 55~mm for the bunch length. The half-crossing angle was typically 0.2~\ensuremath{\mbox{mrad}}\xspace. Data taken with the LHCb detector, located at interaction point (IP) 8, are used in conjunction with data from the LHC beam instrumentation. The measurements obtained with the VDM and BGI methods are found to be consistent, and an average is made for the final result. The limiting systematics in both measurements come from the knowledge of the bunch populations $N_1$ and $N_2$. All other sources of systematics are specific to the analysis method. Therefore, the comparison of both methods provides an important cross check of the results. The beam-beam imaging method is applied to the data taken during the VDM scan as an overall cross check of the absolute luminosity measurement. Since the absolute calibration can only be performed during specific running periods, a relative normalization method is needed to transport the results of the absolute calibration of the luminosity to the complete data-taking period. To this end we defined a class of visible interactions. The cross-section for these interactions is determined using the measurements of the absolute luminosity during specific data-taking periods. Once this visible cross-section is determined, the integrated luminosity for a period of data-taking is obtained by accumulating the count rate of the corresponding visible interactions over this period. Thus, the calibration of the absolute luminosity is translated into a determination of a well defined visible cross-section. In the present paper we first describe briefly the LHCb detector in Sect.~\ref{sec:lhcb}, and in particular those aspects relevant to the analysis presented here. In Sect.~\ref{sec:relative} the methods used for the relative normalization technique are given. The determination of the number of protons in the LHC bunches is detailed in Sect.~\ref{sec:bct}. The two methods which are used to determine the absolute scale are described in Sects.~\ref{sec:vdm} and \ref{sec:beamgas}, respectively. The cross checks made with the beam-beam imaging method are shown in Sect.~\ref{sec:vdmimagingsigma}. Finally, the results are combined in Sect.~\ref{sec:results-conclusions}. \section{The LHCb detector} \label{sec:lhcb} The LHCb detector is a magnetic dipole spectrometer with a polar angular coverage of approximately 10 to 300~mrad in the horizontal (bending) plane, and 10 to 250~mrad in the vertical plane. It is described in detail elsewhere~\cite{ref:lhcb}. 
A right-handed coordinate system is defined with its origin at the nominal $pp$ interaction point, the $z$ axis along the average nominal beam line and pointing towards the magnet, and the $y$ axis pointing upwards. Beam~1 (beam~2) travels in the direction of positive (negative) $z$. The apparatus contains tracking detectors, ring-imaging Cherenkov detectors, calorimeters, and a muon system. The tracking system comprises the vertex locator (VELO) surrounding the $pp$ interaction region, a tracking station upstream of the dipole magnet and three tracking stations located downstream of the magnet. Particles traversing the spectrometer experience a bending-field integral of around 4~Tm. The VELO plays an essential role in the application of the beam-gas imaging method at LHCb. It consists of two retractable halves, each having 21 modules of radial and azimuthal silicon-strip sensors in a half-circle shape (see Fig.~\ref{fig:velo-sketch}). Two additional stations ({\it Pile-Up System}, PU) upstream of the VELO tracking stations are mainly used in the hardware trigger. The VELO has a large acceptance for beam-beam interactions owing to its many layers of silicon sensors and their close proximity to the beam line. During nominal operation, the distance between sensor and beam is only 8~mm. During injection and beam adjustments, the two VELO halves are moved apart to a retracted position away from the beams. They are brought to their nominal position close to the beams during stable beam periods only. \begin{figure}[tbp] \centering \includegraphics*[width=0.9\textwidth]{figs/Fig01.pdf} \caption{A sketch of the VELO, including the two Pile-Up stations on the left. The VELO sensors are drawn as double lines while the PU sensors are indicated with single lines. The thick arrows indicate the direction of the LHC beams (beam~1 going from left to right), while the thin ones show example directions of flight of the products of the beam-gas and beam-beam interactions. \label{fig:velo-sketch} } \end{figure} The LHCb trigger system consists of two separate levels: a hardware trigger (L0), which is implemented in custom electronics, and a software High Level Trigger (HLT), which is executed on a farm of commercial processors. The L0 trigger system is designed to run at 1~MHz and uses information from the Pile-Up sensors of the VELO, the calorimeters and the muon system. These sub-detectors send information to the L0 decision unit (L0DU), where selection algorithms are run synchronously with the 40~MHz LHC bunch-crossing signal. For every nominal bunch-crossing slot ({\em i.e.} each 25 ns) the L0DU sends decisions to the LHCb readout supervisor. The full event information of all sub-detectors is available to the HLT algorithms. A trigger strategy is adopted to select $pp$ inelastic interactions and collisions of the beam with the residual gas in the vacuum chamber. Events are collected for the four bunch-crossing types: two colliding bunches (\bx{bb}), one beam~1 bunch with no beam~2 bunch (\bx{be}), one beam~2 bunch with no beam~1 bunch (\bx{eb}) and nominally empty bunch slots (\bx{ee}). Here ``\bx{b}'' stands for ``beam'' and ``\bx{e}'' stands for ``empty''. The first two categories of crossings, which produce particles in the forward direction, are triggered using calorimeter information. An additional PU veto is applied for \bx{be} crossings.
Crossings of the type \bx{eb}, which produce particles in the backward direction, are triggered by demanding a minimal hit multiplicity in the PU, and vetoed by calorimeter activity. The trigger for \bx{ee} crossings is defined as the logical OR of the conditions used for the \bx{be} and \bx{eb} crossings in order to be sensitive to background from both beams. During VDM scans specialized trigger conditions are defined which optimize the data taking for these measurements (see Sect.~\ref{sec:vdm:conditions}). The precise reconstruction of interaction vertices (``primary vertices'', PV) is an essential ingredient in the analysis described in this paper. The initial estimate of the PV position is based on an iterative clustering of tracks (``seeding''). Only tracks with hits in the VELO are considered. For each track the distance of closest approach (DOCA) with all other tracks is calculated and tracks are clustered into a seed if their DOCA is less than 1~mm. The position of the seed is then obtained using an iterative procedure. The point of closest approach between all track pairs is calculated and its coordinates are used to discard outliers and to determine the weighted average position. The final PV coordinates are determined by iteratively improving the seed position with an adaptive, weighted, least-squares fit. In each iteration a new PV position is evaluated. Participating tracks are extrapolated to the $z$ coordinate of the PV and assigned weights depending on their impact parameter with respect to the PV. The procedure is repeated for all seeds, excluding tracks from previously reconstructed primary vertices, retaining only PVs with at least five tracks. For this analysis only PVs with a larger number of tracks are used since they have better resolution. For the study of beam-gas interactions only PVs with at least ten tracks are used and at least 25 tracks are required for the study of $pp$ interactions. \section{Relative normalization method} \label{sec:relative} The absolute luminosity is obtained only for short periods of data-taking. To be able to perform cross-section measurements on any selected data sample, the relative luminosity must be measured consistently during the full period of data taking. The systematic relative normalization of all data-taking periods requires specific procedures to be applied in the trigger, data-acquisition, processing and final analysis. The basic principle is to acquire luminosity data together with the physics data and to store it in the same files as the physics event data. During further processing of the physics data the relevant luminosity data is kept together in the same storage entity. In this way, it remains possible to select only part of the full data-set for analysis and still keep the capability to determine the corresponding integrated luminosity. The luminosity is proportional to the average number of visible proton-proton interactions in a beam-beam crossing, \mueff{\ensuremath{\mathrm{vis}}}. The subscript ``\ensuremath{\mathrm{vis}}'' is used to indicate that this holds for an arbitrary definition of the visible cross-section. Any stable interaction rate can be used as relative luminosity monitor. For a given period of data-taking, the integrated interaction rate can be used to determine the integrated luminosity if the cross-section for these visible interactions is known. 
The determination of the cross-section corresponding to these visible interactions is achieved by calibrating the absolute luminosity during specific periods and simultaneously counting the visible interactions. Triggers which initiate the full readout of the LHCb detector are created for random beam crossings. These are called ``luminosity triggers''. During normal physics data-taking, the overall rate is chosen to be 997 Hz, with 70\% assigned to \bx{bb}, 15\% to \bx{be}, 10\% to \bx{eb} and the remaining 5\% to \bx{ee} crossings. The events taken for crossing types other than \bx{bb} are used for background subtraction and beam monitoring. After a processing step in the HLT a small number of ``luminosity counters'' are stored for each of these random luminosity triggers. The set of luminosity counters comprises the number of vertices and tracks in the VELO, the number of hits in the PU and in the scintillator pad detector (SPD) in front of the calorimeters, and the transverse energy deposition in the calorimeters. Some of these counters are obtained directly from the L0, while others are the result of partial event-reconstruction in the HLT. During the final analysis stage the event data and luminosity data are available in the same files. The luminosity counters are summed (when necessary after time-dependent calibration) and an absolute calibration factor is applied to obtain the absolute integrated luminosity. The absolute calibration factor is universal and is the result of the luminosity calibration procedure described in this paper. The relative luminosity can be determined by summing the values of any counter which is linear with the instantaneous luminosity. Alternatively, one may determine the relative luminosity from the fraction of ``empty'' or invisible events in \bx{bb} crossings, which we denote by \PZ. An invisible event is defined by applying a counter-specific threshold below which it is considered that no $pp$ interaction was seen in the corresponding bunch crossing. Since the number of visible interactions per bunch crossing follows a Poisson distribution with mean value proportional to the luminosity, the luminosity is proportional to $-\ln \PZ$. This ``zero count'' method is both robust and easy to implement~\cite{ref:zaitsev}. In the absence of backgrounds, the average number of visible $pp$ interactions per crossing can be obtained from the fraction of empty \bx{bb} crossings by $\mueff{\ensuremath{\mathrm{vis}}} = -\ln{\PZ[\bx{bb}]}$. Backgrounds are subtracted using \begin{equation} \label{eq:mu} \mueff{\ensuremath{\mathrm{vis}}} = -\left(\ln{\PZ[\bx{bb}]} - \ln{\PZ[\bx{be}]} - \ln{\PZ[\bx{eb}]} + \ln{\PZ[\bx{ee}]}\right) \, , \end{equation} where $\PZ[i] (i=\bx{bb},\bx{ee},\bx{be},\bx{eb})$ are the probabilities to find an empty event in a bunch-crossing slot for the four different bunch-crossing types. The $\PZ[\bx{ee}]$ contribution is added because it is also contained in the $\PZ[\bx{be}]$ and $\PZ[\bx{eb}]$ terms. The purpose of the background subtraction, Eq.~\ref{eq:mu}, is to correct the count rate in the \bx{bb} crossings for the detector response which is due to beam-gas interactions and detector noise. In principle, the noise background is measured during \bx{ee} crossings. In the presence of parasitic beam protons in \bx{ee} bunch positions, as will be discussed below, it is not correct to evaluate the noise from $\PZ[\bx{ee}]$. In addition, the detector signals are not fully confined within one 25~ns bunch-crossing slot.
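As a purely numerical illustration of the zero-count relation, Eq.~\ref{eq:mu}, the following Python sketch computes \mueff{\ensuremath{\mathrm{vis}}} from hypothetical empty-event fractions; all event counts are invented for the example and are not LHCb data.
\begin{verbatim}
import math

# Hypothetical empty-event statistics per bunch-crossing type
# (invented numbers, chosen only to illustrate the formula above)
n_total = {"bb": 1000000, "be": 150000, "eb": 100000, "ee": 50000}
n_empty = {"bb":  600000, "be": 148000, "eb":  98500, "ee":  49950}

# Probability of an empty event for each crossing type
P0 = {k: n_empty[k] / n_total[k] for k in n_total}

# Zero-count method with background subtraction:
# mu_vis = -(ln P_bb - ln P_be - ln P_eb + ln P_ee)
mu_vis = -(math.log(P0["bb"]) - math.log(P0["be"])
           - math.log(P0["eb"]) + math.log(P0["ee"]))
print("mu_vis = %.4f" % mu_vis)   # ~0.48 for these inputs
\end{verbatim}
As discussed next, the $\ln{\PZ[\bx{ee}]}$ term is in practice neglected in the analysis.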
The empty (\bx{ee}) bunch-crossing slots immediately following a \bx{bb}, \bx{be} or \bx{eb} crossing slot contain detector signals from interactions occurring in the preceding slot (``spill-over''). The spill-over background is not present in the \bx{bb}, \bx{be} and \bx{eb} crossings. Therefore, since the detector noise for the selected counters is small ($< \, 10^{-5}$ relative to the typical values measured during \bx{bb} crossings), the term $\ln{\PZ[\bx{ee}]}$ in Eq.~\ref{eq:mu} is neglected. Equation~\ref{eq:mu} assumes that the proton populations in the \bx{be} and \bx{eb} crossings are the same as in the \bx{bb} crossings. With a population spread of typically 10\% and a beam-gas background fraction $< \, 10^{-4}$ compared to the $pp$ interactions, the effect of the spread is negligible and is not taken into account. The results of the zero-count method based on the number of hits in the PU and on the number of tracks in the VELO are found to be the most stable ones. An empty event is defined to have $< 2$~hits when the PU is considered or $< 2$~tracks when the VELO is considered. A VELO track is defined by at least three hits on a straight line in the radial strips of the silicon detectors of the VELO. The number of tracks reconstructed in the VELO is chosen as the most stable counter. In the following we will use the notation $\sigeff{\ensuremath{\mathrm{vis}}}$ ($=\sigeff{VELO}$) for the visible cross-section measured using this method, except when explicitly stated otherwise. Modifications and alignment variations of the VELO also have negligible impact on the method, since the efficiency for reconstructing at least two tracks in an inelastic event is very stable against detector effects. Therefore, the systematic uncertainty associated with this choice of threshold is negligible. The stability of the counter is demonstrated in Fig.~\ref{fig:muratio}, which shows the ratio of the relative luminosities determined with the zero-count method from the multiplicity of hits in the PU and from the number of VELO tracks. Apart from a few threshold updates in its configuration, the PU was also stable throughout the 2010 LHCb running period and was used as a cross check. Figure~\ref{fig:countersratio} covers the whole period of LHCb operation in 2010, with both low and high numbers of interactions per crossing. Similar cross checks have been made with the counter based on the number of reconstructed vertices. These three counters have different systematics, and by comparing their ratios as a function of time and instantaneous luminosity we conclude that the relative luminosity measurement has a systematic error of 0.5\%. \begin{figure}[tbp] \centering \includegraphics*[width=0.48\textwidth]{figs/Fig02a.pdf} \includegraphics*[width=0.48\textwidth]{figs/Fig02b.pdf} \caption{ Ratio between \mueff{\ensuremath{\mathrm{vis}}} values obtained with the zero-count method using the number of hits in the PU and the track count in the VELO versus $\mueff{VELO}$. The deviation from unity is due to the difference in acceptance. The left (right) panel uses runs from the beginning (end) of the 2010 running period with lower (higher) values of \mueff{VELO}. The horizontal lines indicate a $\pm1$\% variation.
\label{fig:muratio} } \end{figure} \begin{figure}[tbp] \centering \includegraphics[width=0.8\textwidth, clip, trim=0.cm 0.cm 0.3cm 0cm]{figs/Fig03.pdf} \caption{ \label{fig:countersratio} Ratio between \mueff{\ensuremath{\mathrm{vis}}} values obtained with the zero-count method using the number of hits in the PU and the track count in the VELO as a function of time in seconds relative to the first run of LHCb in 2010. The period spans the full 2010 data taking period (about half a year). The dashed lines show the average value of the starting and ending periods (the first and last 25 runs, respectively) and differ by $\approx1$\%. The changes in the average values between the three main groups ($t < 0.4\times 10^7$~s, $0.4\times 10^7 < t < 1.2\times 10^7$~s, $t > 1.2\times 10^7$~s) coincide with known maintenance changes to the PU system. The upward excursion near $1.05 \times 10^7$~s is due to background introduced by parasitic collisions located at 37.5~m from the nominal IP present in the bunch filling scheme used for these fills to which the two counters have different sensitivity. The downward excursion near $0.25 \times 10^7$~s is due to known hardware failures in the PU (recovered after maintenance). The statistical errors are smaller than the symbol size of the data points. } \end{figure} The number of protons, beam sizes and transverse offsets at the interaction point vary across bunches. Thus, the \mueff{\ensuremath{\mathrm{vis}}} value varies across \bx{bb} bunch crossings. The spread in \mueff{\ensuremath{\mathrm{vis}}} is about 10\% of the mean value for typical runs. Due to the non-linearity of the logarithm function one first needs to compute \mueff{\ensuremath{\mathrm{vis}}} values for different bunch crossings and then to take the average. However, for short time intervals the statistics are insufficient to distinguish between bunch-crossing IDs, while one cannot assume \mueff{\ensuremath{\mathrm{vis}}} to be constant when the intervals are too long due to {\em e.g.} loss of bunch population and emittance growth. If the spread in instantaneous \mueff{\ensuremath{\mathrm{vis}}} is known, the effect of neglecting it in calculating an average value of \mueff{\ensuremath{\mathrm{vis}}} can be estimated. The difference between the naively computed \mueff{\ensuremath{\mathrm{vis}}} value and the true one is then \begin{equation} \mueff[\mathrm{biased}]{\ensuremath{\mathrm{vis}}} - \mueff[{\mathrm{true}}]{\ensuremath{\mathrm{vis}}} = -\ln\langle \PZ[i]\rangle - (-\langle\, \ln \PZ[i]\,\rangle) = \langle\, \ln(\frac{\PZ[i]}{\,\langle \PZ[i]\,\rangle})\rangle \, , \end{equation} where the average is taken over all beam-beam crossing slots $i$. Therefore, the biased \mueff{\ensuremath{\mathrm{vis}}} value can be calculated over short time intervals and a correction for the spread of \mueff{\ensuremath{\mathrm{vis}}} can in principle be applied by computing $\PZ[i]/\langle\, \PZ[i]\, \rangle $ over long time intervals. At the present level of accuracy, this correction is not required.\footnote{The relative luminosity increases by $0.5\%$ when the correction is applied.} The effect is only weakly dependent on the luminosity counter used. \section{Bunch population measurements} \label{sec:bct} To measure the number of particles in the LHC beams two types of beam current transformers are installed in each ring~\cite{ref:bct}. One type, the DCCT (DC Current Transformer), measures the total current of the beams. 
The other type, the FBCT (Fast Beam Current Transformer), is gated in 25~ns intervals and is used to measure the relative charges of the individual bunches. The DCCT is absolutely calibrated and is therefore used to constrain the total number of particles, while the FBCT defines the relative bunch populations. The procedure is described in detail in Ref.~\cite{ref:note-bcnwg}. All devices have two independent readout systems. For the DCCT both systems provide reliable information and their average is used in the analysis, while for the FBCT one of the two systems is dedicated to tests and cannot be used. The absolute calibration of the DCCT is determined using a high-precision current source. At low intensity (early data) the noise in the DCCT readings is relatively large, while at the higher intensities of the data taken in October 2010 this effect is negligible. The noise level and its variation are determined by interpolating the average DCCT readings over long periods of time without beam before and after the relevant fills. In addition to the absolute calibration of the DCCTs, a deviation from the proportionality of the FBCT readings to the individual bunch charges is a potential source of systematic uncertainty. The FBCT charge offsets are cross checked using the ATLAS BPTX (timing) system~\cite{ref:ATLAS-BPTX}. This comparison shows small discrepancies between their offsets. These deviations are used as an estimate of the uncertainties. Since the FBCT equipment is readjusted at regular intervals, the offsets can vary on a fill-by-fill basis. Following the discussion in Ref.~\cite{ref:note-bcnwg}, an estimate of 2.9\% is used for the uncertainty of an individual bunch population product of a colliding bunch pair. Owing to the DCCT constraint on the total beam current, the overall uncertainty is reduced when averaging results of different bunch pairs within a single fill. As will be discussed in Sect.~\ref{sec:vdm}, for the analysis of the VDM data a method can be used which needs only the assumption of the linearity of the FBCT response. The LHC radio frequency (RF) system operates at 400 MHz, compared to the nominal 40 MHz bunch frequency. If protons circulate in the ring outside the nominal RF buckets, the readings of the DCCT need to be corrected before they are used to normalize the sum of the FBCT signals. We define ``satellite'' bunches as charge captured in RF buckets neighbouring the nominal bucket. Satellite bunches inside the nominally filled bunch slots can be detected by the LHC experiments when there is no (or a very small) crossing angle between the two beams. The satellites would be observed as interactions displaced by a multiple of 37.5~cm from the nominal intersection point. For a part of the 2010 run the ATLAS and CMS experiments were operating with zero crossing angle, and displaced interactions were indeed observed~\cite{ref:note-bcnwg}. The ``ghost charge'' is defined as the charge outside the nominally filled bunch slots. The rates of beam-gas events produced by ``ghost'' and nominal protons are measured using the beam-gas trigger. The ghost fraction is determined by comparing the number of beam-gas interactions during \bx{ee} crossings with the numbers observed in \bx{be} and \bx{eb} crossings. The timing of the LHCb trigger is optimized for interactions in the nominal RF buckets. The trigger efficiency depends on the time of the interaction with respect to the phase of the clock (modulo 25~\ensuremath{\mbox{ns}}\xspace).
A measurement of the trigger efficiency was performed by shifting the clock, which is usually synchronized with the LHC bunch-crossing time, by 5, 10 and 12.5~ns, and by comparing the total beam-gas rates in the nominal crossings. From these data the average efficiency for ghost charge is obtained to be $\epsilon_{\mathrm{average}} = 0.86 \,\pm\, 0.14$ ($0.84 \,\pm\, 0.16$) for beam~1 (beam~2). The ghost charge is measured for each fill during which an absolute luminosity measurement is performed and is typically 1\% of the total beam charge or less. The contribution of ``ghost'' protons to the total LHC beam current is subtracted from the DCCT value before the sum of the FBCT bunch populations is constrained by the DCCT measurement of the total current. The uncertainty assigned to the subtraction of ghost charge varies per fill and is due to the trigger efficiency uncertainty and the limited statistical accuracy. These two error components are of comparable size. \section{The van der Meer scan (VDM) method} \label{sec:vdm} The beam position scanning method, invented by van der Meer, provides a direct determination of an effective cross-section \sigeff{\ensuremath{\mathrm{vis}}} by measuring the corresponding counting rate as a function of the position offsets of two colliding beams~\cite{ref:vdm-method}. At the ISR only vertical displacements were needed owing to the crossing angle between the beams in the horizontal plane and to the fact that the beams were not bunched. For the LHC the beams have to be scanned in both transverse directions because the beams are bunched~\cite{ref:vdm-LHC}. The cross-section \sigeff{\ensuremath{\mathrm{vis}}} can be measured for two colliding bunches using the equation~\cite{ref:vdmimaging} \begin{equation} \label{eq:vdm} \sigeff{\ensuremath{\mathrm{vis}}} = \frac{\int\!\mueff{\ensuremath{\mathrm{vis}}}(\ensuremath{\Delta_x}\xspace, \ensuremath{\Delta_{y_0}}\xspace)\,d \ensuremath{\Delta_x}\xspace\, \int\!\mueff{\ensuremath{\mathrm{vis}}}(\ensuremath{\Delta_{x_0}}\xspace, \ensuremath{\Delta_y}\xspace)\,d\ensuremath{\Delta_y}\xspace}{N_1 N_2 \mueff{\ensuremath{\mathrm{vis}}}(\ensuremath{\Delta_{x_0}}\xspace, \ensuremath{\Delta_{y_0}}\xspace)\cos{\ensuremath{\alpha}\xspace}} \, , \end{equation} where $\mueff{\ensuremath{\mathrm{vis}}}(\ensuremath{\Delta_x}\xspace, \ensuremath{\Delta_y}\xspace)$ is the average number of interactions per crossing at offset $(\ensuremath{\Delta_x}\xspace, \ensuremath{\Delta_y}\xspace)$ corresponding to the cross-section \sigeff{\ensuremath{\mathrm{vis}}}. The interaction rates $R(\ensuremath{\Delta_x}\xspace$,$\ensuremath{\Delta_y}\xspace)$ are related to $\mueff{\ensuremath{\mathrm{vis}}}(\ensuremath{\Delta_x}\xspace, \ensuremath{\Delta_y}\xspace)$ by the revolution frequency, $R(\ensuremath{\Delta_x}\xspace$,$\ensuremath{\Delta_y}\xspace) = f\, \mueff{\ensuremath{\mathrm{vis}}}(\ensuremath{\Delta_x}\xspace$,$\ensuremath{\Delta_y}\xspace)$. These rates are measured at offsets \ensuremath{\Delta_x}\xspace and \ensuremath{\Delta_y}\xspace with respect to the nominal beam positions, denoted by $(\ensuremath{\Delta_{x_0}}\xspace,\ensuremath{\Delta_{y_0}}\xspace)$. The scans consist of creating offsets \ensuremath{\Delta_x}\xspace and \ensuremath{\Delta_y}\xspace such that practically the full profiles of the beams are explored. The measured rate integrated over the displacements gives the cross-section. The main assumption is that the density distributions in the orthogonal coordinates $x$ and $y$ can be factorized.
In that case, two scans are sufficient to obtain the cross-section: one along a constant $y$-displacement $\ensuremath{\Delta_{y_0}}\xspace$ and one along a constant $x$-displacement $\ensuremath{\Delta_{x_0}}\xspace$. It is also assumed that effects due to bunch evolution during the scans (shape distortions or transverse kicks due to beam-beam effects, emittance growth, bunch current decay), effects due to the tails of the bunch density distribution in the transverse plane and effects of the absolute length scale calibration against magnet current trims are either negligible or can be corrected for. \subsection{Experimental conditions during the van der Meer scan} \label{sec:vdm:conditions} VDM scans were performed in LHCb during dedicated LHC fills at the beginning and at the end of the 2010 running period, one in April and one in October. The characteristics of the beams are summarized in Table~\ref{tbl1}. In both fills there was one scan where both beams moved symmetrically and one scan where only one beam moved at a time. Precise beam positions are calculated from the LHC magnet currents and cross checked with vertex measurements using the LHCb VELO, as described below. \begin{table}[tb] \centering \caption{Parameters of LHCb van der Meer scans. $N_{1,2}$ is the typical number of protons per bunch, $\beta^{\star}$ characterizes the beam optics near the IP, \nb{tot} (\nb{coll}) is the total number of (colliding) bunches per beam, $\mueff[{\mathrm{max}}]{\ensuremath{\mathrm{vis}}}$ is the average number of visible interactions per crossing at the beam positions with maximal rate. $\tau_{N_1\, N_2}$ is the decay time of the product of the bunch populations and $\tau_L$ is the decay time of the luminosity. \label{tbl1} } \vskip 1mm \begin{tabular}{lcc} & \textbf{25 Apr} & \textbf{15 Oct} \\ \hline LHC fill number & 1059 & 1422 \\ $N_{1,2}$ ($10^{10}$ protons)& 1 & 7--8 \\ $\beta^{\star}$ (m) & 2 & 3.5\\ \nb{coll}/\nb{tot} & 1/2 & 12/16\\ $\mueff[{\mathrm{max}}]{\ensuremath{\mathrm{vis}}}$ & 0.03 & 1\\ Trigger & minimum bias & 22.5~kHz random \\ & & $\sim$130~Hz minimum bias \\ & & beam-gas \\ $\tau_{N_1\, N_2}$ (h) & 950 & 700 \\ $\tau_L$ (h) & 30 & 46 \\ \end{tabular} \end{table} In April the maximal beam movement of $\pm 3\sigma$ was achieved only in the first scan, as in the second only the first beam was allowed to move.\footnote{We refer here to $1\sigma$ as the average of the approximate widths of the beams.} During the second October scan, both beams moved one after the other, covering the whole separation range of $\approx 6\sigma$ to both sides. However, the beam steering procedure was such that in the middle of the scan the first beam jumped to an opposite end point and then returned, so that the beam movement was not continuous. This potentially increases hysteresis effects in the LHC magnets. In addition, the second scan in October had half as many data points, so it was used only as a cross check to estimate systematic errors. During the April scans the event rate was low and it was possible to record all events containing visible interactions. A loose minimum bias trigger was used with minimal requirements on the number of SPD hits ($\ge 3$) and the transverse energy deposition in the calorimeters ($\ge 240$~MeV). In October the bunch populations were higher by a factor $\sim7.5$, therefore, in spite of slightly broader beams (the optics defined a $\beta^\star$ value of 3.5~m instead of 2~m in April), the rate per colliding bunch pair was higher by a factor of $\sim30$.
There were twelve colliding bunch pairs instead of one in April. Therefore, a selective trigger was used composed of the logical OR of three independent criteria. The first decision accepted random bunch crossings at 22.5~kHz (20~kHz were devoted to the twelve crossings with collisions, 2~kHz to the crossings where only one of two beams was present, and 0.5~kHz to the empty crossings). The second decision used the same loose minimum bias trigger as the one used in April but its rate was limited to 130~Hz. The third decision collected events for the beam-gas analysis. For both the April and October data the systematic error is dominated by uncertainties in the bunch populations. In April this uncertainty is higher (5.6\%) due to a larger contribution from the offset uncertainty at lower bunch populations~\cite{ref:note-bcnwg}. In October the measurement of the bunch populations was more precise, but its uncertainty (2.7\%) is still dominant in the cross-section determination~\cite{ref:bcnwg2}. Since the dominant uncertainties are systematic and correlated between the two scans, we use the less precise April scan only as a cross check. The scans give consistent results, and in the following we concentrate on the scan taken in October which gives about a factor two better overall precision in the measurement. The LHC filling scheme was chosen in such a way that all bunches collided only in one experiment (except for ATLAS and CMS where the bunches are always shared), namely twelve bunch pairs in LHCb, three in ATLAS/CMS and one in ALICE. The populations of the bunches colliding in LHCb changed during the two LHCb scans by less than 0.1\%. Therefore, the rates are not normalized by the bunch population product $N_1\, N_2$ of each colliding bunch pair at every scan point, but instead only the average of the product over the scan duration is used. This is done to avoid the noise associated with the $N_{1,2}$ measurement. The averaged bunch populations are given in Table~\ref{tbl2}. The same procedure is applied for the April scan, when the decay time of ${N_1\, N_2}$ was longer, 950 instead of 700 hours in October. \begin{table}[bt] \centering \caption{ Bunch populations (in $10^{10}$ particles) averaged over the two scan periods in October separately. The bottom line is the DCCT measurement, all other values are given by the FBCT. The first 12 rows are the measurements in bunch crossings (BX) with collisions at LHCb, and the last two lines are the sums over all 16 bunches. \label{tbl2} } \vskip 1mm \begin{tabular}{ccccccccc} & \multicolumn{2}{c}{\bf Scan 1} & \multicolumn{2}{c}{\bf Scan 2} \\ {BX} & $N_1$ & $N_2$ & $N_1$ & $N_2$ \\ \hline 2027 & 8.425 & 7.954 & 8.421 & 7.951 \\ 2077 & 7.949 & 7.959 & 7.944 & 7.957 \\ 2127 & 7.457 & 7.563 & 7.452 & 7.561 \\ 2177 & 6.589 & 7.024 & 6.584 & 7.021 \\ 2237 & 7.315 & 8.257 & 7.311 & 8.255 \\ 2287 & 7.451 & 7.280 & 7.446 & 7.278 \\ 2337 & 7.016 & 7.219 & 7.012 & 7.217 \\ 2387 & 7.803 & 6.808 & 7.798 & 6.805 \\ 2447 & 7.585 & 7.744 & 7.580 & 7.742 \\ 2497 & 7.878 & 7.747 & 7.874 & 7.745 \\ 2547 & 6.960 & 6.244 & 6.955 & 6.243 \\ 2597 & 7.476 & 7.411 & 7.472 & 7.409 \\ \hline All, FBCT & 120.32 & 119.07 & 120.18 & 118.99 \\ \hline DCCT & 120.26 & 119.08 & 120.10 & 118.98 \\ \end{tabular} \end{table} In addition to the bunch population changes, the luminosity stability may be limited by the changes in the bunch profiles, {\em e.g.} by emittance growth. 
The luminosity stability is checked several times during the scans, when the beams are brought back to their nominal positions. The average number of interactions per crossing is shown in Fig.~\ref{fig:evolution} for the October scans. The luminosity decay time is measured to be 46 hours (30 hours in April). This corresponds to a 0.7\% luminosity drop during the first, longer, scan along either \ensuremath{\Delta_x}\xspace or \ensuremath{\Delta_y}\xspace (0.9\% in April). The scan points were taken from lower to higher \ensuremath{\Delta_x}\xspace, \ensuremath{\Delta_y}\xspace values; therefore, the luminosity drop effectively enhances the left part of the integral and reduces its right part, so that, since the curve is symmetric, the net effect cancels to first order. The count rate $R(\ensuremath{\Delta_{x_0}}\xspace,\ensuremath{\Delta_{y_0}}\xspace)$ at the nominal position entering Eq.~\ref{eq:vdm} is measured at the beginning, in the middle and at the end of every scan, so that the luminosity drop also cancels to first order. Therefore, the systematic error due to the luminosity drop is much less than 0.7\% and is neglected. \begin{figure}[tbp] \centering \includegraphics*[width=0.45\textwidth]{figs/Fig04.pdf} \caption{Evolution of the average number of interactions per crossing at the nominal beam position during the October scans. In the first (second) scan the parameters at the nominal beam position were measured three (four) times both during the \ensuremath{\Delta_x}\xspace scan and the \ensuremath{\Delta_y}\xspace scan. The straight line is a fit to the data. The luminosity decay time is 46~hours. \label{fig:evolution} } \end{figure} The widths of the profiles of the luminous region did not change within the statistical uncertainties when the beams were brought back to their nominal positions during the first and the second scans in \ensuremath{\Delta_x}\xspace and \ensuremath{\Delta_y}\xspace. In addition, the widths of the profiles measured in the two VDM scans did not change. These facts also indicate that the effect of the emittance growth on the cross-section measurement is negligible. \subsection{Cross-section determination} In accordance with the definition of the most stable relative luminosity counter, a visible event is defined as a $pp$ interaction with at least two VELO tracks. The twelve colliding bunch pairs of the VDM scan in October are analysed individually. The dependence on the separation \ensuremath{\Delta_x}\xspace and \ensuremath{\Delta_y}\xspace of \mueff{\ensuremath{\mathrm{vis}}} summed over all bunches is shown in Fig.~\ref{fig:vdmprofiles}. Two scans are overlaid; the second was taken over the same range of \ensuremath{\Delta_x}\xspace and \ensuremath{\Delta_y}\xspace but with twice as large a step size and different absolute beam positions. One can see that the \ensuremath{\Delta_y}\xspace curves are not well reproduced in the two scans. The reason for this apparent non-reproducibility is not understood. It may be attributed to hysteresis effects or imperfections in the description of the optics.\footnote{Imperfections in the description of the optics can manifest themselves as second order effects in the translation of magnet settings into beam positions or beam angles.} \begin{figure}[tbp] \centering \includegraphics*[width=0.7\textwidth]{figs/Fig05.pdf} \caption{Number of interactions per crossing summed over the twelve colliding bunches versus the separations \ensuremath{\Delta_x}\xspace (top), \ensuremath{\Delta_y}\xspace (bottom) in October.
The first (second) scan is represented by the dark/blue (shaded/red) points and the solid (dashed) lines. The spreads of the mean values and widths of the distributions obtained individually for each colliding pair are small compared to the widths of the VDM profiles, so that the sum gives a good illustration of the shape. The curves represent the single Gaussian fits to the data points described in the text. \label{fig:vdmprofiles} } \end{figure} \begin{table}[bt] \centering \caption{ Mean and RMS of the VDM count-rate profiles summed over the twelve colliding bunch pairs obtained from data in the two October scans (scan 1 and scan 2). The statistical errors are 0.05~\ensuremath{\upmu \mbox{m}}\xspace in the mean position and 0.04~\ensuremath{\upmu \mbox{m}}\xspace in the RMS. \label{tbl3} } \vskip 1mm \begin{tabular}{lccc} & {\bf Scan}& \boldmath\ensuremath{\Delta_x}\xspace \ {\bf scan} & \boldmath\ensuremath{\Delta_y}\xspace \ {\bf scan} \\ \hline Mean (\ensuremath{\upmu \mbox{m}}\xspace) &$1$& 1.3 & 3.1 \\ &$2$& 2.8 & 9.2 \\ RMS (\ensuremath{\upmu \mbox{m}}\xspace) &$1$& 80.6 & 80.8 \\ &$2$& 80.5 & 80.7 \\ \end{tabular} \end{table} The mean and RMS values of the VDM count-rate profiles shown in Fig.~\ref{fig:vdmprofiles} are listed in Table~\ref{tbl3}. Single Gaussian fits to the individual bunch profiles return $\chi^2$ values between 2.7 and 4.3 per degree of freedom. Double Gaussian fits provide a much better description of the data and are therefore used in the analysis. The single Gaussian fits give cross-section values typically 1.5 to 2\% larger than the ones obtained with a double Gaussian. It is found that the fit errors can be reduced by approximately a factor of two if the fits to the \ensuremath{\Delta_x}\xspace and \ensuremath{\Delta_y}\xspace curves are performed simultaneously and the value measured at the nominal point $\mueff{\ensuremath{\mathrm{vis}}}(\ensuremath{\Delta_{x_0}}\xspace$,$\ensuremath{\Delta_{y_0}}\xspace)$ is constrained to be the same in both scans. The first fit parameter is chosen to be $\int\!\mueff{\ensuremath{\mathrm{vis}}}\,d\ensuremath{\Delta_x}\xspace \, \int\!\mueff{\ensuremath{\mathrm{vis}}}\,d\ensuremath{\Delta_y}\xspace / \mueff{\ensuremath{\mathrm{vis}}}(\ensuremath{\Delta_{x_0}}\xspace, \ensuremath{\Delta_{y_0}}\xspace)$, so that the correlation between the two integrals and the value at the nominal point is correctly taken into account in the resulting fit error. The other fit parameters are the two integrals along \ensuremath{\Delta_x}\xspace and \ensuremath{\Delta_y}\xspace and, separately for the \ensuremath{\Delta_x}\xspace and \ensuremath{\Delta_y}\xspace curves, $\sigma_1$, $\Delta\sigma$ and a common central position of the Gaussian functions. Here $\sigma_1$ and $\sigma_2=\sqrt{\sigma_1^2+\Delta\sigma^2}$ are the two Gaussian widths of the fit function. The relative normalization of the two Gaussian components and the value at the nominal point are derived from the nine fit parameters listed above. The $\chi^2$ value per degree of freedom of the fit is between 0.7 and 1.8 for all bunch pairs. The products of the bunch populations $N_1\, N_2$ of the twelve colliding bunch pairs have an RMS spread of 12\%. The analysis of the individual bunch pairs gives cross-sections consistent within statistical errors, which typically amount to 0.29\% in the first scan.
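To make the structure of Eq.~\ref{eq:vdm} concrete, the following Python sketch evaluates the cross-section from a synthetic single-Gaussian scan profile; the bunch populations, widths and peak \mueff{\ensuremath{\mathrm{vis}}} are illustrative placeholders rather than measured values, and the real analysis uses the simultaneous double Gaussian fits described above.
\begin{verbatim}
import numpy as np

# Illustrative parameters (placeholders, not measured values)
N1 = N2 = 7.5e10          # bunch populations
Sx = Sy = 80e-6           # VDM profile widths in metres
mu0 = 1.0                 # peak mu_vis at the nominal beam positions

# Synthetic scan profiles: mu_vis versus separation, single Gaussian
d = np.linspace(-4 * Sx, 4 * Sx, 41)        # scan separations (m)
step = d[1] - d[0]
mu_x = mu0 * np.exp(-0.5 * (d / Sx) ** 2)   # x scan at Delta_{y0}
mu_y = mu0 * np.exp(-0.5 * (d / Sy) ** 2)   # y scan at Delta_{x0}

# Product of the scan integrals divided by N1*N2 and the value at the
# nominal point, with cos(alpha) ~ 1
sigma_vis = (mu_x.sum() * step) * (mu_y.sum() * step) / (N1 * N2 * mu0)

# Closed form for Gaussian profiles: 2*pi*Sx*Sy*mu0/(N1*N2)
sigma_ref = 2 * np.pi * Sx * Sy * mu0 / (N1 * N2)
print(sigma_vis / 1e-31, sigma_ref / 1e-31)  # in mb (1 mb = 1e-31 m^2);
# the two agree up to the truncation of the Gaussian tails at 4 sigma
\end{verbatim}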
The sensitivity of the method is high enough that it is possible to calibrate the {\it relative} bunch populations $N_{1,2}^i/\sum_{j=1}^{16} N_{1,2}^j$ measured with the FBCT system by assuming a linear response. Here $i$ runs over the twelve bunches colliding in LHCb and $j$ over all 16 bunches circulating in the machine. By comparing the FBCT with the ATLAS BPTX measurements it is observed that both may have a non-zero offset~\cite{ref:note-bcnwg,ref:bcnwg2}. A discrete function $s_\mathrm{\ensuremath{\mathrm{vis}}}^i$ is fitted to the twelve measurements $\sigeff[i]{\ensuremath{\mathrm{vis}}}$ using three free parameters: the common cross-section \sigeff{\ensuremath{\mathrm{vis}}} and the two FBCT offsets for the two beams, $N_{1,2}^0$: \begin{equation} \label{eq:offset} s_\mathrm{\ensuremath{\mathrm{vis}}}^i = \sigeff{\ensuremath{\mathrm{vis}}} \prod_{b=1,2}\left[ \frac{(N_b^i-N_b^0)}{N_b^i} \ \frac{\sum_{j=1}^{16}N_b^j}{\sum_{j=1}^{16}(N_b^j-N_b^0)} \right] \, , \end{equation} which corrects the relative populations $N_{1,2}^i$ for the FBCT offsets $N_{1,2}^0$ and takes into account that the total beam intensities measured with the DCCT constrain the sums of all bunch populations obtained from the FBCT values. The sums over all bunches of the quantities $N_{1,2}^j$, $\sum_{j=1}^{16}N_{1,2}^j$, are normalized to the DCCT values prior to the fit, so that the fit using Eq.~\ref{eq:offset} evaluates the correction due to the FBCT offsets alone. The results of this fit are shown in Fig.~\ref{fig11}, where the data points $\sigeff[i]{\ensuremath{\mathrm{vis}}}$ are drawn without offset correction and the lines represent the fit function of Eq.~\ref{eq:offset}. The use of two offsets improves the description of the points compared to a simple fit without offsets. The $\chi^2$ per degree of freedom and other relevant fit results are summarized in Table~\ref{tab:vdm:res}. In addition, the table shows results for the case where the ATLAS BPTX is used instead of the FBCT system. \begin{table}[tbp] \begin{center} \caption{ Results for the visible cross-section fitted over the twelve bunches colliding in LHCb for the October VDM data together with the results of the April scans. $N_{1,2}^0$ are the FBCT or BPTX offsets in units of $10^{10}$ particles. They should be subtracted from the values measured for individual bunches. The first (last) two columns give the results for the first and the second scan using the FBCT (BPTX) to measure the relative bunch populations. The cross-section from the first scan obtained with the FBCT bunch populations with offsets determined by the fit is used as the final VDM luminosity calibration. The results of the April scans are reported in the last row. Since there is only one colliding bunch pair, no fit to the FBCT offsets is possible.
\label{tab:vdm:res} } \vskip 1mm \begin{tabular}{lcc|cccc} & \multicolumn{4}{c}{\bf October data} \\ \hline & \multicolumn{2}{c|}{\bf FBCT} & \multicolumn{2}{c}{\bf ATLAS BPTX} \\ & {\bf Scan 1} & {\bf Scan 2}& {\bf Scan 1} & {\bf Scan 2}\\ \hline & \multicolumn{2}{c}{{\bf \it with fitted offsets}} & \multicolumn{2}{c}{{\bf \it with fitted offsets}} \\ \sigeff{\ensuremath{\mathrm{vis}}} (mb) & ${\bf \underline{58.73\,\pm\,0.05}}$ & $57.50\,\pm\,0.07$ & $58.62\,\pm\,0.05$ & $57.45\,\pm\,0.07$ \\ $N_1^0$ & $0.40\,\pm\,0.10$ & $0.29\,\pm\,0.15$ & $-0.10\,\pm\,0.12$ & $-0.23\,\pm\,0.17$ \\ $N_2^0$ & $-0.02\,\pm\,0.10$ & $0.23\,\pm\,0.13$ & $-0.63\,\pm\,0.12$ & $-0.34\,\pm\,0.15$ \\ $\chi^2$/\ensuremath{\mathrm{ndf}}\xspace & 5.8 / 9 & 7.6 / 9 & 6.9 / 9 & 7.3 / 9 \\ \hline & \multicolumn{2}{c}{{\bf \it with offsets fixed at zero}} & \multicolumn{2}{c}{{\bf \it with offsets fixed at zero}} \\ \sigeff{\ensuremath{\mathrm{vis}}} (mb) & $58.73\,\pm\,0.05$ & $57.50\,\pm\,0.07$ & $58.63\,\pm\,0.05$ & $57.46\,\pm\,0.07$ \\ $\chi^2$/\ensuremath{\mathrm{ndf}}\xspace & 23.5 / 11 & 21.9 / 11 & 66.5 / 11 & 23.5 / 11 \\ \hline & \multicolumn{4}{c}{\bf April data} \\ \hline & {\bf Scan 1} & {\bf Scan 2}\\ $\sigeff{\ensuremath{\mathrm{vis}}}$ (mb)& {\bf $59.6\pm0.5$} & { $57.0\pm0.5$} \\ \end{tabular} \end{center} \end{table} \begin{figure}[tbp] \centering \includegraphics*[width=0.6\textwidth]{figs/Fig06.pdf} \caption{ Cross-sections without correction for the FBCT offset for the twelve bunches of the October VDM fill (data points). The lines indicate the results of the fit as discussed in the text. The upper (lower) set of points is obtained in the first (second) scan. \label{fig11} } \end{figure} One can see that the offset errors in the first scan are $(0.10-0.12) \times 10^{10}$, or 1.5\% relative to the average bunch population $\langle \, N_{1,2}\, \rangle = 7.5 \times 10^{10}$. The sensitivity of the method, therefore, is very high, in spite of the fact that the RMS spread of the bunch population products $N_1\, N_2$ is 12\%. The quoted errors are only statistical. For the first scan, the relative cross-section error is 0.09\%. Since the fits return good $\chi^2$ values, the bunch-crossing dependent systematic uncertainties (such as emittance growth and bunch population product drop) are expected to be lower or comparable. An indication of the level of the systematic errors is given by the difference of about two standard deviations found for $N_{1,2}^0$ between the two scans. All principal sources of systematic errors which will be discussed below (DCCT scale uncertainty, hysteresis, and ghost charges) cancel when comparing bunches within a single scan. In spite of the good agreement between the bunches within the same scan, there is an overall 2.1\% discrepancy between the scans. The reason is not understood, and may be attributed to a potential hysteresis effect or similar effects resulting in uncontrollable shifts of the beam as a whole. The results of the first scan with the FBCT offsets determined by the fit are taken as the final VDM luminosity determination (see Sect.~\ref{sec:results-vdm}). The 2.1\% uncertainty estimated from the discrepancy is the second largest systematic error in the cross-section measurement after the uncertainties in the bunch populations. In the April data the situation is similar: the discrepancy between the cross-sections obtained from the two scans is $(4.4\,\pm\,1.2)$\%, the results may be found in Table~\ref{tab:vdm:res}. 
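To illustrate the machinery of the offset fit of Eq.~\ref{eq:offset}, the following Python sketch generates synthetic per-bunch cross-sections from assumed offsets and recovers them with a least-squares fit (using \texttt{scipy}); the populations, offsets and statistical scatter are invented for the example and are not LHCb data.
\begin{verbatim}
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(1)
# Synthetic raw FBCT readings (10^10 protons) for all 16 bunches per beam
N1 = rng.uniform(6.5, 8.5, 16)
N2 = rng.uniform(6.5, 8.5, 16)
idx = np.arange(12)          # the twelve pairs colliding in LHCb

def model(p):
    # Apparent per-bunch cross-section for parameters
    # p = (sigma_vis, offset beam 1, offset beam 2)
    sig, o1, o2 = p
    c1 = (N1[idx] - o1) / N1[idx] * N1.sum() / (N1 - o1).sum()
    c2 = (N2[idx] - o2) / N2[idx] * N2.sum() / (N2 - o2).sum()
    return sig * c1 * c2

# Pseudo-measurements: assumed true values plus ~0.3% statistical scatter
meas = model((58.7, 0.40, -0.02)) * rng.normal(1.0, 0.003, 12)

fit = least_squares(lambda p: model(p) - meas, x0=(58.0, 0.0, 0.0))
print(fit.x)   # recovers sigma_vis (mb) and the two offsets
\end{verbatim}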
Since the April measurement is performed using corrected trigger rates, which are proportional to the luminosity, instead of VELO track counts, the results have been corrected for the difference in acceptances. The correction factor is determined by studying random triggers and is $\sigeff{VELO}/\sigeff{April~trigger} = 1.066$, where $\sigeff{VELO}$ is the usual definition of $\sigeff{\ensuremath{\mathrm{vis}}}$. \subsection{Systematic errors} \subsubsection{Reproducibility of the luminosity at the nominal beam positions} Figure~\ref{fig:evolution} shows the evolution of the luminosity as a function of time for the periods where the beams were at their nominal positions during the VDM scan. One expects a behaviour which follows the loss of beam particles and the emittance growth. Since these effects occur on time-scales long compared to the duration of the scan, the dependence on these known effects can be approximated by a linear evolution. As shown in Fig.~\ref{fig:evolution}, the luminosity did not always return to the expected value when the beams returned to their nominal positions. The $\chi^2/\ensuremath{\mathrm{ndf}}\xspace$ with respect to the fitted straight line is too large (40/12); thus, the non-reproducibility cannot be fully attributed to statistical fluctuations, and another systematic effect is present. The origin of this effect is not understood, but it may be similar to the one which causes the non-reproducibility of the beam positions observed in the shift of the two scan curves. Therefore, a systematic error of 0.4\% is assigned to the absolute scale of the \mueff{\ensuremath{\mathrm{vis}}} measurement to take this observation into account. The systematic error is estimated as the amount which should be added in quadrature to the statistical error of 0.25\% to produce a $\chi^2/\ensuremath{\mathrm{ndf}}\xspace$ equal to one. Since the absolute scale of the \mueff{\ensuremath{\mathrm{vis}}} measurement enters the cross-section linearly (Eq.~\ref{eq:vdm}), the same systematic error of 0.4\% is assigned to the cross-section measurement. \subsubsection{Length scale calibration} The beam separation values \ensuremath{\Delta_x}\xspace and \ensuremath{\Delta_y}\xspace are calculated from the LHC magnet currents at every scan step. There is a small non-reproducibility in the results of the two scans, as shown in Fig.~\ref{fig:vdmprofiles}. The non-reproducibility may be attributed to a mismatch between the actual beam positions and the nominal ones. Therefore, it is important to check the \ensuremath{\Delta_x}\xspace and \ensuremath{\Delta_y}\xspace values as predicted by the magnet currents, and in particular their scales, which enter linearly in the cross-section computation (Eq.~\ref{eq:vdm}). One distinguishes a possible differential length scale mismatch between the two beams from a mismatch of their average position calibration. A dedicated mini-scan was performed in October where the two beams were moved in five equidistant steps both in $x$ and $y$, keeping the nominal separation between the beams constant. During the scan along $x$ the beam separation was $80\ \ensuremath{\upmu \mbox{m}}\xspace$ in $x$ and $0\ \ensuremath{\upmu \mbox{m}}\xspace$ in $y$. Here 80~\ensuremath{\upmu \mbox{m}}\xspace is approximately the width of the luminosity profile of the VDM scan (see Table~\ref{tbl3}). This separation was chosen to maximize the derivative ${\rm d}L/{\rm d}\ensuremath{\Delta_x}\xspace$, {\em i.e.} the sensitivity of the luminosity to a possible difference in the length scales for the two beams.
If, {\em e.g.}, the first beam moves slightly faster than the second one compared to the nominal movement, the separation \ensuremath{\Delta_x}\xspace becomes smaller and the effect is visible as an increase of the luminosity. Similarly, the beam separation used in the $y$ scan was $0\ \ensuremath{\upmu \mbox{m}}\xspace$ and $80\ \ensuremath{\upmu \mbox{m}}\xspace$ in $x$ and $y$, respectively. The behaviour of the measured luminosity during the length-scale calibration scans is shown in Fig.~\ref{fig:lengthlum}. As one can see, the points show a significant deviation from a constant. This effect may be attributed to different length scales of the two beams. More specifically, we assume that the real positions of the beams $x_{1,2}$ can be obtained from the values $x_{1,2}^0$ derived from the LHC magnet currents by applying a correction parametrized by $\epsilon_x$ \begin{equation} \label{eq:vdm:size} x_{1,2} = (1\,\pm\,\epsilon_x/2)\, x_{1,2}^0\, , \end{equation} and similarly for $y_{1,2}$. The $+$ ($-$) sign in front of $\epsilon_x$ holds for beam~1 (beam~2). Assuming a Gaussian shape of the luminosity dependence on \ensuremath{\Delta_x}\xspace during the VDM scan, we get \begin{equation} \label{eq:vdm:shift} \frac{1}{L}\,\frac{dL}{d(x_1+x_2)/2} = -\epsilon_x\frac{\ensuremath{\Delta_x}\xspace}{\Sigma_x^2} \, . \end{equation} Here $\ensuremath{\Delta_x}\xspace = 80$~\ensuremath{\upmu \mbox{m}}\xspace is the fixed nominal beam separation. A similar equation holds for the $y$ coordinate. In the approximation of a single Gaussian shape of the beams, the width of the VDM profile, $\Sigma_{x}$, is defined as \begin{equation} \label{eq:capsigma} \Sigma_x = \sqrt{\sigma_{1x}^2+\sigma_{2x}^2+4(\sigxp{z})^2\tan^2{\ensuremath{\alpha}\xspace}} \, , \end{equation} where $\sigxp{z}$ is the width of the luminous region in the $z$ coordinate and $\sigma_{bx}$ is the width of beam $b$ ($b=1,2$). A similar equation can be written for $\Sigma_y$. From the slopes observed in Fig.~\ref{fig:lengthlum} we obtain $\epsilon_x=2.4\%$ and $\epsilon_y=-1.9\%$. The same behaviour is observed for all bunches separately. \begin{figure}[tbp] \centering \includegraphics*[width=0.48\textwidth]{figs/Fig07a.pdf} \includegraphics*[width=0.48\textwidth]{figs/Fig07b.pdf} \caption{Average number of interactions (\mueff{VELO}) versus the centre of the luminous region summed over the twelve colliding bunches and measured during the length scale scans in $x$ (left) and in $y$ (right) taken in October. The points are indicated with small horizontal bars, the statistical errors are smaller than the symbol size. The straight-line fit is overlaid. \label{fig:lengthlum} } \end{figure} Since $\ensuremath{\Delta_x}\xspace = (x_1^0-x_2^0)+\epsilon_x\, (x_1^0+x_2^0)/2$, the \ensuremath{\Delta_x}\xspace correction depends on the nominal mid-point between the beams $(x_1^0+x_2^0)/2$. In the first scan this nominal point was always kept at zero, therefore, no correction is needed. During the second scan this point moved with nominal positions $0\to355.9\ \ensuremath{\upmu \mbox{m}}\xspace\to0$. Therefore, a correction to the \ensuremath{\Delta_x}\xspace values in Fig.~\ref{fig:vdmprofiles} is required. The central point should be shifted to the right (left) for the $x$ ($y$) scan. The left (right) side is thus stretched and the opposite side is shrunk. After correction the shift between the scans is reduced in $y$, but now appears in $x$, so that the discrepancy cannot be fully explained by a linear correction alone.
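A minimal numerical sketch of Eq.~\ref{eq:vdm:shift} is given below: it extracts $\epsilon_x$ from the slope of $\ln L$ versus the nominal mid-point of the two beams at fixed nominal separation. The scan positions and the assumed value of $\epsilon_x$ are placeholders chosen only to mimic the situation described in the text.
\begin{verbatim}
import numpy as np

Delta_x = 80e-6    # fixed nominal beam separation (m)
Sigma_x = 80e-6    # width of the VDM profile (m)
eps_true = 0.024   # assumed differential length scale, cf. 2.4% in the text

# Five equidistant steps of the nominal mid-point (x1^0 + x2^0)/2;
# the real separation is Delta_x + eps * mid
mid = np.linspace(-150e-6, 150e-6, 5)
lnL = -0.5 * ((Delta_x + eps_true * mid) / Sigma_x) ** 2

# Linear fit of ln L versus the mid-point; near mid = 0 the slope is
# -eps * Delta_x / Sigma_x^2
slope = np.polyfit(mid, lnL, 1)[0]
eps_fit = -slope * Sigma_x ** 2 / Delta_x
print(eps_fit)     # ~0.024, recovering the assumed value
\end{verbatim}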
The correction which stretches or shrinks the profiles measured in the second scan influences the integrals of these profiles and the resulting cross-sections very little. The latter change on average by only 0.1\%, which we take as an uncertainty and include in the systematic error. In Table~\ref{tab:vdm:res} the numbers are given with the correction applied. During a simultaneous parallel translation of both beams, the centre of the luminous region should follow the beam positions regardless of the bunch shapes. Since it is approximately at $(x_1+x_2)/2=(x_1^0+x_2^0)/2$, and similarly for $y$, the corrections to the position of the centre due to $\epsilon_{x,y}$ are negligible. The luminous centre can be determined using vertices measured with the VELO. This provides a precise cross check of the common beam length scales $(x_1^0+x_2^0)/2$ and $(y_1^0+y_2^0)/2$. The result is shown in Fig.~\ref{fig:parallel}. The LHC and VELO length scales differ by ($-0.97 \,\pm\, 0.17$)\% and ($-0.33 \,\pm\,0.15$)\% in $x$ and $y$, respectively. The scale of the transverse position measurement with the VELO is expected to be very precise owing to the fact that it is determined by the strip positions of the silicon sensors with a well-known geometry. For the cross-section determination we took the more precise VELO length scale and multiplied the values from Table~\ref{tab:vdm:res} by \mbox{$(1-0.0097)\times (1-0.0033) = 0.9870$}. In addition, we conservatively assigned a 1\% systematic error due to the common scale uncertainty. \begin{figure}[tbp] \centering \includegraphics*[width=0.44\textwidth]{figs/Fig08a.pdf} \includegraphics*[width=0.44\textwidth]{figs/Fig08b.pdf} \caption{Centre of the luminous region reconstructed with VELO tracks versus the position predicted by the LHC magnet currents. The points are indicated with small horizontal bars, the statistical errors are smaller than the symbol size. The points are fitted to a linear function. The slope calibrates the common length scale. \label{fig:parallel} } \end{figure} In April, no dedicated length scale calibration was performed. However, a cross check is available from the distance between the centre of the luminous region measured with the VELO and the nominal centre position. The comparison of these distances between the first and second scans, where either both beams moved symmetrically or only the first beam moved, does not depend on the bunch shapes. From this observation the differences of the length scales between the nominal beam movements and the VELO reference are found to be ($-1.3\,\pm\,0.9$)\% and ($1.5\,\pm\,0.9$)\% for \ensuremath{\Delta_x}\xspace and \ensuremath{\Delta_y}\xspace, respectively. Conservatively, a 2\% systematic error is assigned to the length scale calibration for the scans taken in April. \subsubsection{Coupling between the \boldmath$x$ and \boldmath$y$ coordinates in the LHC beams} \label{sec:vdm:coupling} The LHC ring is tilted with respect to the horizontal plane, while the VELO detector is aligned with respect to a coordinate system where the $x$ axis is in a horizontal plane~\cite{ref:lhcrotation}. The van der Meer equation (Eq.~\ref{eq:vdm}) is valid only if the particle distributions in $x$ and in $y$ are independent. To check this condition the movement of the centre of the luminous region along $y$ is measured during the length scale scan in $x$, and vice versa.
This movement is compatible with the expected tilt of the LHC ring of 13~mrad at LHCb~\cite{ref:lhcrotation} with respect to the vertical and the horizontal axes of the VELO. The corresponding correction to the cross-section is negligible ($<10^{-3}$). To measure a possible $x$-$y$ correlation in the machine, the two-dimensional vertex map is studied by determining the centre position in one coordinate for different values of the other coordinate. For the analysis, data were collected with the beams colliding head-on at LHCb in Fill 1422, during which the VDM scan data were also taken. Figure~\ref{fig:xymap} shows the $x$-$y$ profile of the luminous region. The centres of the $y$ distributions in slices of $x$ lie on a straight line with a slope of 79~mrad. The slopes found in the corresponding $x$-$z$ and $y$-$z$ profiles are $-92$~\ensuremath{\upmu\mbox{rad}}\xspace and $44$~\ensuremath{\upmu\mbox{rad}}\xspace. These slopes are due to the known fact that the middle line between the two LHC beams is inclined with respect to the $z$ axis. This is observed with beam-gas events; the inclination varies slightly from fill to fill. The measurement of the beam directions will be described in detail in Sect.~\ref{sec:beamgas}. Taking into account these known correlations of $x$ and $y$ with $z$ and also the known 13~mrad tilt of the LHC ring, one can calculate the residual slope of the $x$-$y$ correlation, which is predicted to be 77~mrad. \begin{figure}[tbp] \centering \includegraphics*[width=0.5\textwidth]{figs/Fig09.pdf} \caption{Contours of the distribution of the $x$-$y$ coordinates of the luminous region. The contour lines show the values at multiples of 10\% of the maximum. The points represent the $y$-coordinates of the centre of the luminous region in different $x$ slices. They are fitted with a linear function. \label{fig:xymap} } \end{figure} If the beam profiles are two-dimensional Gaussian functions with a non-zero correlation between the $x$ and $y$ coordinates, the cross-section relation (Eq.~\ref{eq:vdm}) should be corrected. We assume that the $x$-$y$ correlation coefficients of the two beams, $\zeta$, are similar and, therefore, close to the measured correlation in the distribution of the vertex coordinates of the luminous region, $\zeta=0.077$. In this case the correction to the cross-section is $\zeta^2/2=0.3$\%. We do not apply a corresponding correction, but instead include a 0.3\% uncertainty as an additional systematic error. \subsubsection{Cross check with the \boldmath$z$ position of the luminous region} \label{sec:vdmimaging} \begin{figure}[btp] \centering \includegraphics*[width=0.5\textwidth]{figs/Fig10.pdf} \caption{Movement of the centre of the luminous region in $z$ during the first scan in $x$ taken in October. \label{fig:zmove} } \end{figure} A cross check of the width of the luminosity profile as a function of \ensuremath{\Delta_x}\xspace is made by measuring the movement of the $z$ position of the centre of the luminous region during the first VDM scan in the $x$ coordinate in October (see Fig.~\ref{fig:zmove}).
Assuming Gaussian bunch density distributions and identical widths of the two colliding beams, the slope is equal to \begin{equation} \label{eq:dzlum} \frac{\mathrm{d} z_{\otimes}}{\mathrm{d} (\ensuremath{\Delta_x}\xspace)} = -\frac{\sin2\ensuremath{\alpha}\xspace}{4}\frac{\sigma_z^2-\sigma_x^2}{\sigma_x^2\cos^2\ensuremath{\alpha}\xspace+\sigma_z^2\sin^2\ensuremath{\alpha}\xspace} \, , \end{equation} where $\sigma_{x,z}$ are the beam widths in the corresponding directions and $\mathrm{d} z_{\otimes}$ is the induced shift in the $z$ coordinate of the centre of the luminous region. We approximate $\sigma_z$ as $\sqrt{2}\sigxp{z}$. Using the slope observed in Fig.~\ref{fig:zmove}, one obtains the expected width of the luminosity profile versus \ensuremath{\Delta_x}\xspace, $\sigma_x^\mathrm{VDM} = \sqrt{2}\sigma_x = 78$~\ensuremath{\upmu \mbox{m}}\xspace, in agreement with the measured value of 80~\ensuremath{\upmu \mbox{m}}\xspace. A further cross check is described in Sect.~\ref{sec:vdmimagingsigma}.
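As a purely illustrative sketch of how Eq.~\ref{eq:dzlum} is inverted, the following Python fragment (not part of the analysis chain) recovers the beam width from a slope measurement; the crossing angle and bunch length are assumed example values, and the ``measured'' slope is generated self-consistently rather than read off Fig.~\ref{fig:zmove}.
\begin{verbatim}
# Numerical sketch of Eq. (dzlum): slope of the luminous-region z-centre
# versus Delta_x as a function of the beam width. Inputs are illustrative.
import numpy as np
from scipy.optimize import brentq

alpha   = 170e-6               # assumed half crossing-angle [rad]
sigma_z = np.sqrt(2) * 35.0    # single-bunch length [mm], from sigma'_z ~ 35 mm

def slope(sigma_x):
    """dz/d(Delta_x) for equal Gaussian beams of width sigma_x [mm]."""
    num = sigma_z**2 - sigma_x**2
    den = sigma_x**2 * np.cos(alpha)**2 + sigma_z**2 * np.sin(alpha)**2
    return -np.sin(2 * alpha) / 4 * num / den

# Invert the relation for an assumed measured slope to recover sigma_x,
# then predict the width of the luminosity profile versus Delta_x.
measured_slope = slope(78.0 / np.sqrt(2) / 1000)   # self-consistent toy input
sigma_x = brentq(lambda s: slope(s) - measured_slope, 1e-3, 1.0)  # [mm]
print(f"sigma_x^VDM = {np.sqrt(2) * sigma_x * 1000:.0f} um")      # ~78 um
\end{verbatim}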
\subsection{Results of the van der Meer scans} \label{sec:results-vdm} The cross-section results obtained with the VDM scans are given in Table~\ref{tbl8}. We recall that the effective cross-section defined by interactions with at least two VELO tracks is used in the analysis. The results of the two measurement periods are consistent. During the second scan in October the beam movements were not continuous, so the results might suffer from hysteresis effects. In the second scan in April only one beam moved, limiting the separation range. Therefore, only the first scans are shown in Table~\ref{tbl8} for April and October. Both in the April and in the October measurements the difference observed in the results of the two scans is included as a systematic error. The complete list of errors taken into account is given in Table~\ref{tbl10}. The uncertainties are uncorrelated and therefore added in quadrature. As already anticipated in Sect.~\ref{sec:vdm:conditions}, the first scan taken in October is retained as the final result of the VDM method since it has much smaller systematic errors than the April scans. \begin{table}[tb] \centering \caption{Cross-section of $pp$ interactions producing at least two VELO tracks, measured in the two van der Meer scans in April and in October. \label{tbl8} } \vskip 1mm \begin{tabular}{lcc} & \boldmath$\sigeff{\mathbf\ensuremath{\mathrm{vis}}}\,\mathbf{(mb)}$ & {\bf Relative uncertainty (\%)} \\ \hline April & 59.7 & 7.5\\ October & {\bf \underline{58.35}} & 3.64\\ \end{tabular} \end{table} \begin{table}[bt] \centering \caption{Summary of relative cross-section uncertainties for the van der Meer scans in October and April. Due to the lower precision in the April data some systematic errors could not be evaluated and are indicated with ``$-$''. \label{tbl10} } \vskip 1mm \begin{tabular}{lcc} {\bf Source} & \multicolumn{2}{c}{\bf Relative uncertainty (\%)} \\ & {\bf October} & {\bf April} \\ \hline Relative normalization stability & 0.5 & 0.5 \\ \hline DCCT scale & 2.7 & 2.7 \\ DCCT baseline noise & negligible& 3.9 \\ FBCT systematics & 0.2 & 2.9 \\ Ghost charge & 0.2 & 0.1 \\ \hline Statistical error & 0.1 & 0.9 \\ \hline $N_1\, N_2$ drop & negligible& negligible \\ Emittance growth & negligible& negligible \\ Reproducibility at nominal position & 0.4 & $-$ \\ Difference between two scans & 2.1 & 4.4 \\ Average beam length scale & 1.0 & 2.0 \\ Beam length scale difference & 0.1 & $-$ \\ Coupling of $x$ and $y$ coordinates & 0.3 & $-$ \\ \hline Total & 3.6 & 7.5 \\ \end{tabular} \end{table} \section{The beam-gas imaging (BGI) method} \label{sec:beamgas} Tracks measured in the LHCb detector allow vertices from the interactions of beam particles with the residual gas in the machine (beam-gas interactions) and from the beam-beam collisions to be reconstructed. The beam-gas imaging method is based on the measurement of the distributions of these vertices to obtain an image of the transverse bunch profile along the beam trajectory. This allows the beam angles, profiles and relative positions to be determined. The residual gas in the beam vacuum pipe consists mainly of relatively light elements such as hydrogen, carbon and oxygen. An important prerequisite for the proper reconstruction of the bunch profiles is the transverse homogeneity of the visualizing gas (see Sect.~\ref{sec:gradient}). A dedicated test performed in October 2010 measured the beam-gas interaction rates as a function of beam displacement in a plane perpendicular to the beam axis. The correction to the beam overlap integral due to a possible non-uniform transverse distribution of the residual gas is found to be smaller than 0.05\% and is neglected. Compared to the VDM method, the disadvantage of a small rate is balanced by the advantage that the method is non-disruptive, without beam movements. This means that possible beam-beam effects are constant, and effects which depend on the beam displacement, like hysteresis, are avoided. Furthermore, the beam-gas imaging method is applicable during physics fills. The half crossing-angle \ensuremath{\alpha}\xspace is small enough to justify setting $\cos^2 \ensuremath{\alpha}\xspace =1$ in Eq.~\ref{eq:luminosity}. In the approximation of a vanishing correlation between the transverse coordinates the $x$ and $y$ projections can be factorized. At the level of precision required, the bunch shapes are well described by Gaussian distributions. Thus, their shapes are characterized in the $x$-$y$ plane at the time of crossing by their widths $\sigma_{bi}$, their mean positions $\ensuremath{\xi}\xspace_{bi}$ ($i=x, y$), and by their bunch length $\sigma_{bz}$. The index $b$ takes the values 1 and 2 according to the two beams. With these approximations, Eq.~(\ref{eq:luminosity}) for a single pair of colliding bunches reduces to~\cite{ref:simonwhite} \begin{equation} L= \frac{N_1\, N_2\,f}{2\pi\sqrt{1+\tan^2{\ensuremath{\alpha}\xspace}(\sigma_{1z}^2 + \sigma_{2z}^2) /(\sigma_{1x}^2 + \sigma_{2x}^2)}} ~ \prod_{i=x,y} \frac{1}{\sqrt{\sigma_{1i}^2 + \sigma_{2i}^2}} \exp{\left(-\frac{1}{2} \frac{(\ensuremath{\xi}\xspace_{1i}-\ensuremath{\xi}\xspace_{2i})^2}{\sigma_{1i}^2+\sigma_{2i}^2}\right)} \, , \label{eq:obslumi} \end{equation} where the square root in the denominator of the first factor corrects for the crossing angle.
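To make the structure of Eq.~\ref{eq:obslumi} explicit, the following minimal sketch (Python) evaluates it for assumed example bunch parameters; all numerical inputs below are illustrative, not measured values from this analysis.
\begin{verbatim}
# Illustrative evaluation of the single-bunch-pair luminosity, Eq. (obslumi).
import numpy as np

f      = 11245.0            # LHC revolution frequency [Hz]
N1, N2 = 2e10, 2e10         # assumed bunch populations
alpha  = 170e-6             # assumed half crossing-angle [rad]
# widths [mm] and mean positions [mm] per beam
s1  = {'x': 0.045, 'y': 0.045, 'z': 49.0}
s2  = {'x': 0.050, 'y': 0.050, 'z': 49.0}
xi1 = {'x': 0.000, 'y': 0.000}
xi2 = {'x': 0.005, 'y': 0.000}

C_alpha = 1.0 / np.sqrt(1 + np.tan(alpha)**2 *
                        (s1['z']**2 + s2['z']**2) / (s1['x']**2 + s2['x']**2))
L = N1 * N2 * f * C_alpha / (2 * np.pi)
for i in ('x', 'y'):
    cap2 = s1[i]**2 + s2[i]**2
    L *= np.exp(-0.5 * (xi1[i] - xi2[i])**2 / cap2) / np.sqrt(cap2)
# widths are in mm, so multiply by 100 to convert mm^-2 to cm^-2
print(f"C_alpha = {C_alpha:.3f},  L = {L*100:.2e} cm^-2 s^-1 per pair")
\end{verbatim}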
The analysis is applied to each individual colliding bunch pair, {\em i.e.} bunch populations, event rates and beam profiles are considered per bunch pair. Thus, each colliding bunch pair provides an internally consistent measurement of the same visible cross-section. The observables $\sigma_{bi}$ and $\ensuremath{\xi}\xspace_{bi}$ are extracted from the transverse distributions of the beam-gas vertices reconstructed in the \bx{bb} crossings of the colliding bunch pairs (see Sect.~\ref{sec:bganalysis}). The beam overlap integral is then calculated from the two individual bunch profiles. The simultaneous imaging of the $pp$ luminous region further constrains the beam parameters. The distribution of $pp$-collision vertices, produced by the colliding bunch pair and identified by requiring $-150<z<150$~mm, is used to measure the parameters of the luminous region. Its positions \posxp{i} and transverse widths \sigxp{i}, \begin{equation} \posxp{i} = \frac{\ensuremath{\xi}\xspace_{1i}\sigma_{2i}^2 + \ensuremath{\xi}\xspace_{2i}\sigma_{1i}^2}{\sigma_{1i}^2 + \sigma_{2i}^2} ~~~ \mbox{and} ~~~ \sigxp[2]{i} = \frac{\sigma_{1i}^2 \sigma_{2i}^2}% {\sigma_{1i}^2 + \sigma_{2i}^2} \,, \label{eq:lumi-gaussian} \end{equation} constrain the bunch observables. Owing to the higher statistics of $pp$ interactions compared to beam-gas interactions, the constraints of Eq.~(\ref{eq:lumi-gaussian}) provide the most significant input to the overlap integral. Equation~(\ref{eq:lumi-gaussian}) is valid only for a zero crossing angle. It will be shown in Sect.~\ref{sec:crossing-effect} that this approximation is justified for the present analysis. The bunch lengths $\sigma_{bz}$ are extracted from the longitudinal distribution of the $pp$-collision vertices.\footnote{In fact, only the combination $(\sigma_{1z}^2 + \sigma_{2z}^2)$ can be obtained.} Because the lengths $\sigma_{bz}$ are approximately 1000 times larger than the widths $\sigma_{bx}$, the crossing angle reduces the luminosity by a non-negligible factor, equal to the first square-root factor in Eq.~(\ref{eq:obslumi}). The case of non-collinear beams is described in more detail in Sect.~\ref{sec:crossing-effect}. The BGI method requires a vertex resolution comparable to or smaller than the transverse beam sizes. The knowledge of the vertex resolution is necessary to unfold the resolution from the measured beam profiles. The uncertainty in the resolution also plays an essential role in determining the systematic error. The beam-gas interaction rate determines the time needed to take a {\em snapshot} of the beam profiles and the associated statistical uncertainty. When the time required to collect enough statistics is large compared to the time during which the beam stays stable, additional corrections become necessary; this introduces systematic effects. \subsection{Data-taking conditions} \label{sec:bg:conditions} The data used for the results described in the BGI analysis were taken in May 2010. In the data taken after this time the event rate was too high to select beam-gas events at the trigger level. In October a more selective trigger was in place and sufficient data could be collected. However, in this period difficulties were observed with the DCCT data for LHC filling schemes using bunch trains. Note that the VDM data taken in October were recorded in a dedicated fill with individually injected bunches, so that these problems were not present. In the selected fills, there were 2 to 13 bunches per beam in the machine.
The number of colliding pairs at LHCb varied between 1 and 8. The trigger included a dedicated selection for events containing beam-gas interactions (see Sect.~\ref{sec:lhcb}). The HLT runs a number of algorithms designed to select beam-gas interactions with high efficiency and low background. The same vertex algorithm is used for the \bx{be}, \bx{eb} and \bx{bb} crossings, but different $z$-selection cuts are applied. For \bx{bb} crossings the region $-0.35 < z < 0.25$~m is excluded to reject the overwhelming amount of $pp$ interactions.\footnote{Due to the asymmetric VELO geometry, the background from $pp$ vertices near the upstream end of the VELO is more difficult to reject, hence the asymmetric $z$ selection.} \subsection{Analysis and data selection procedure} \label{sec:bganalysis} The standard vertex reconstruction algorithms in LHCb are optimized to find $pp$ interaction vertices close to $z=0$. This preference is removed for this particular analysis such that no explicit bias is present in the track and vertex selection as a function of $z$. The resolution of the vertex measurement has to be known with high precision. Details of the resolution study are given in Sect.~\ref{sec:vxresolution}. \begin{table} \small \begin{center} \caption{LHC fills used in the BGI analysis. The third and fourth columns show the total number of (colliding) bunches \nb{tot} (\nb{coll}), the fifth the typical number of particles per bunch, the sixth the period of time used for the analysis, and the seventh and eighth the measured angles in $x$ (in mrad) of the individual beams with respect to the LHCb reference frame (the uncertainties in the angles range from 1 to 5~\ensuremath{\upmu\mbox{rad}}\xspace). The last two columns give the typical number of events per bunch used in the BGI vertex fits for each of the two beams. \label{tab:fills} } \vskip 1mm \begin{tabular}{lcrrccrrlrr} {Fill} & {part} & \nb{tot}& \nb{coll}& $N$ & {time (h)} & $\ensuremath{\alpha}\xspace_\mathrm{beam~1}$& $\ensuremath{\alpha}\xspace_\mathrm{beam~2}$& {analysis}&{events 1}&{events 2}\\ \hline 1089 & & 2 & 1 & $2 \times 10^{10}$ & 15 & $0.209$ & $-0.371$& BGI 1 &1270&720\\ 1090 & & 2 & 1 & $2 \times 10^{10}$ & 4 & $0.215$ & $-0.355$& BGI 2 &400&300\\ 1101 & & 4 & 2 & $2 \times 10^{10}$ & 6 & $-0.329$ & $0.189$ & BGI 3 &730&400\\ 1104 & A & 6 & 3 & $2 \times 10^{10}$ & 5 & $0.211$ & $-0.364$& BGI 4 &510&350\\ 1104 & B & 6 & 3 & $2 \times 10^{10}$ & 5 & $0.211$ & $-0.364$& BGI 5 &520&350\\ 1117 & & 6 & 3 & $2 \times 10^{10}$ & 6 & $-0.327$ & $0.185$ & BGI 6 &700&500\\ 1118 & & 6 & 3 & $2 \times 10^{10}$ & 5 & $-0.332$ & $0.181$ & BGI 7 &500&400\\ 1122 & & 13 & 8 & $2 \times 10^{10}$ & 3 & $-0.329$ & $0.182$ & BGI 8 &300&250\\ \end{tabular} \end{center} \end{table} The BGI method relies on the unambiguous selection of beam-gas interactions, also during \bx{bb} crossings where an overwhelming majority of $pp$ collisions is present. The beam-gas fraction can be as low as $10^{-5}$ depending on the beam conditions. The criteria to distinguish beam-gas vertices from $pp$ interactions exploit the small longitudinal size of the beam spot (luminous region). As an additional requirement, only vertices formed exclusively with forward (backward) tracks are accepted as beam~1(2)-gas interactions, and vertices are required to be made with more than ten tracks.
A further selection ($\pm 2$~mm) on the transverse distance from the measured beam axis is applied to reject spurious vertices.\footnote{Interactions in material and random associations of tracks.} Due to the worsening of the resolution for large distances from $z=0$, and due to the presence of $pp$ interactions near $z=0$, the analysis regions used to determine the beam widths are limited to $-700<z<-250$~mm for beam~1 and $250 < z < 800$~mm for beam~2.\footnote{The vertex resolution for beam~2 has a weaker $z$ dependence, so the sensitivity is improved by enlarging the region.} The selection of $pp$ events requires $-150 < z < 150$~mm and only accepts vertices with more than 20 tracks. The background of beam-gas interactions in the $pp$ interaction sample is negligible owing to the high $pp$ event rate. The transverse profiles of the two beams are measured for each individual colliding bunch by projecting the vertex positions onto a plane perpendicular to the beam direction. The direction of the beam is determined on a fill-by-fill basis using the beam-gas interactions observed in \bx{be} and \bx{eb} crossings, which are free of $pp$ interactions. The direction of the beam axis can be determined with 1 to 5~\ensuremath{\upmu\mbox{rad}}\xspace precision, depending on the fill. Out of the many LHC fills only seven are selected for the BGI analysis. It is required that all necessary data (DCCT, FBCT, luminosity counters and vertex measurements) are present during a sufficiently long period and that the bunch populations and emittances are sufficiently stable during the selected period. The list of used fills is given in Table~\ref{tab:fills}. The table shows the total number of bunches and the number of bunches colliding at LHCb, the typical number of protons per bunch, the measured beam slopes with respect to the LHCb reference frame, and the duration of the period used for the analysis. The bunch population and size cannot be assumed to be constant during the analysis period. Therefore, the DCCT and FBCT data and the vertex measurements using $pp$ interactions are binned in periods of 900 seconds. This binning choice is not critical: the chosen value maintains sufficient statistical precision while remaining sensitive to variations of the beams. The distributions of beam-gas interactions do not have sufficient statistics and are accumulated over the full periods shown in Table~\ref{tab:fills}. The analysis proceeds by determining a time-weighted average for the bunch-pair population product and for the width and position of the $pp$ beam spot. The weighting procedure solves the difficulty introduced by short periods of missing data by a logarithmic interpolation for the bunch populations and a linear interpolation for the bunch profiles. The averages defined by this procedure can be directly compared to the single measurement of the profiles of the individual beams accumulated over the full period of several hours. A systematic error is assigned to account for the approximations introduced by this averaging procedure. \subsection{Vertex resolution} \label{sec:vxresolution} The measured vertex distribution is a convolution of the true width of each beam with the resolution function of the detector. Since the resolution is comparable to the actual beam size (approximately 35~\ensuremath{\upmu \mbox{m}}\xspace in the selected fills), it is crucial to understand this effect and to be able to unfold it from the reconstructed values.
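For Gaussian shapes the unfolding amounts to a subtraction in quadrature. The following minimal sketch (Python, with assumed illustrative numbers rather than measured inputs) shows the operation and its sensitivity to the resolution uncertainty.
\begin{verbatim}
# Sketch of the Gaussian unfolding of the beam width: for a Gaussian profile
# and a Gaussian resolution the measured width adds in quadrature.
import math

sigma_measured = 0.048   # fitted width of the vertex distribution [mm], toy
sigma_res      = 0.033   # vertex resolution at the relevant z, N_Tr [mm], toy

sigma_beam = math.sqrt(sigma_measured**2 - sigma_res**2)
print(f"unfolded beam width: {sigma_beam*1000:.1f} um")   # ~34.9 um

# The sensitivity to the resolution uncertainty is easily propagated:
for scale in (0.9, 1.0, 1.1):    # +-10% resolution variation
    s = math.sqrt(sigma_measured**2 - (scale * sigma_res)**2)
    print(f"res x {scale:.1f} -> {s*1000:.1f} um")
\end{verbatim}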
The vertex resolution is parametrized as a function of the multiplicity, {\em i.e.} the number of tracks used to reconstruct the vertex, and as a function of the $z$ position of the interaction. Beam-gas vertices alone are used to measure the positions and spatial extent of each beam; however, these events are rare in comparison to beam-beam vertices. To avoid binning the beam-gas vertices in both number of tracks and $z$ position, beam-beam events are first used to measure the dependence of the resolution on the number of tracks. Once this dependence is known, the beam-gas vertices are used to find the $z$ dependence of the resolution, taking into account the contribution to the resolution given by the number of tracks found in each vertex. The resolution as a function of the number of tracks in a vertex is determined using $pp$ interactions which occur around $z = 0$. The reconstructed tracks from each event are randomly split into two independent sets of equal size. The vertex reconstruction is applied to each set of tracks, and if exactly one vertex is found from each track collection the two vertices are assumed to originate from the same interaction. Then, if the number of tracks making each of these two vertices is the same, the residuals in $x$ and $y$ are taken as an estimate of the vertex resolution. \begin{figure}[tbp] \begin{center} \includegraphics*[width=0.8\textwidth,clip,trim=0mm 0mm 0mm 0mm]{figs/Fig11.pdf} \caption{Primary vertex resolution $\sigma_{\mathrm{res}}$ in the transverse directions $x$ (full circles) and $y$ (open circles) for beam-beam interactions as a function of the number of tracks in the vertex, \ensuremath{N_{\mathrm{Tr}}}\xspace. The curves are explained in the text. \label{fig:vtx-resol} } \end{center} \end{figure} \begin{table}[tbp] \centering \caption{ Fit parameters for the resolution of the transverse positions $x$ and $y$ of reconstructed beam-beam interactions as a function of the number of tracks. The errors in the fit parameters are correlated. \label{table:res_fit_params} } \vskip 1mm \begin{tabular}{ lcc } & \boldmath$x$ & \boldmath$y$ \\\hline Factor $A$ (mm) & $0.215 \,\pm\, 0.020$ & $0.202 \,\pm\, 0.018$ \\ Power $B$ & $1.023 \,\pm\, 0.054$ & $1.008 \,\pm\, 0.053$ \\ Constant $C$ ($10^{-3}$~mm) & $5.463 \,\pm\, 0.675$ & $4.875 \,\pm\, 0.645$ \\ \end{tabular} \end{table} The resolution is calculated as the width of the Gaussian function fitted to the residual distributions, divided by $\sqrt 2$, since two resolution contributions enter each residual measurement. The resolutions are shown as a function of the number of tracks in Fig.~\ref{fig:vtx-resol}. Since the VELO is not fully symmetric in the $x$ and $y$ coordinates, the analysis is performed in both coordinates separately. Indeed, one observes a small but significant difference in the resolution. The points of Fig.~\ref{fig:vtx-resol} are fitted with a function which parametrizes the resolution in terms of a factor $A$, a power $B$ and a constant $C$, as a function of the track multiplicity \ensuremath{N_{\mathrm{Tr}}}\xspace \begin{equation} \label{eq:track:dep} \sigma_{\mathrm{res}} = \frac{A}{\ensuremath{N_{\mathrm{Tr}}}\xspace^{B}} + C \, . \end{equation} The values of the fit parameters are given in Table~\ref{table:res_fit_params}.
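As an illustration of this parametrization, the sketch below (Python) fits Eq.~\ref{eq:track:dep} to synthetic points generated from the $x$-coordinate parameters of Table~\ref{table:res_fit_params}; the data points are mock values, not the measured resolutions.
\begin{verbatim}
# Minimal sketch of the resolution-vs-multiplicity fit of Eq. (track:dep),
# sigma_res = A / N_Tr^B + C, on synthetic (mock) data.
import numpy as np
from scipy.optimize import curve_fit

def res_model(n_tr, A, B, C):
    return A / n_tr**B + C

n_tr = np.arange(5, 61, 5, dtype=float)
# synthetic "measured" resolutions generated with the A, B, C of the table
y = res_model(n_tr, 0.215, 1.023, 5.463e-3)
y_noisy = y * (1 + 0.03 * np.random.default_rng(1).standard_normal(y.size))

popt, pcov = curve_fit(res_model, n_tr, y_noisy, p0=[0.2, 1.0, 5e-3])
print("A = %.3f mm, B = %.3f, C = %.2e mm" % tuple(popt))
\end{verbatim}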
\begin{figure}[btp] \begin{center} \includegraphics*[width=0.495\textwidth,clip,trim=3mm 0mm 10mm 0mm]{figs/Fig12a.pdf} \includegraphics*[width=0.495\textwidth,clip,trim=3mm 0mm 10mm 0mm]{figs/Fig12b.pdf} \caption{The $z$ dependence of the resolution correction factor \ensuremath{F_{z}}\xspace in $x$ (full circles) and $y$ (open circles) for beam~1-gas (left) and beam~2-gas (right) interactions. \label{fig:vtx-resol2} } \end{center} \end{figure} Beam-gas vertices, selected in \bx{be} and \bx{eb} bunch crossings, are reconstructed in the same manner as the $pp$ vertices. Every beam-gas event which yields two vertices after splitting the track sample into two is used in the analysis, without requiring that the two vertices are reconstructed with an equal number of tracks. To account for the contribution of the track multiplicity to the resolution, a correction factor \ensuremath{F_{z}}\xspace is calculated as a function of the $z$ position. This is the factor by which the beam-beam resolution at $z = 0$ must be multiplied to obtain the true resolution for a vertex with a given number of tracks at a given $z$ position. The contribution to the resolution from the track multiplicity is taken into account for coordinate $v$ ($v=x,y$) according to \begin{equation} \label{eq:zdep} \ensuremath{F_{z}}\xspace = \frac{v_{1} - v_{2}}{\sqrt{\sigma_{\ntrks{1}}^{2} + \sigma_{\ntrks{2}}^{2}}} \, , \end{equation} where the indices 1 and 2 denote the two vertices with measured positions $v_1$ and $v_2$, and \ntrks{1}, \ntrks{2} the number of tracks in each. The quantities $\sigma_{\ntrks{1,2}}$ are the resolutions $\sigma_{\mathrm{res}}$ expected for vertices at $z=0$ made of \ensuremath{N_{\mathrm{Tr}}}\xspace tracks, as defined in Eq.~\ref{eq:track:dep}. The correction factors \ensuremath{F_{z}}\xspace are plotted in Fig.~\ref{fig:vtx-resol2}. In order to better understand the resolution as a function of $z$, it is instructive to consider the geometry of the VELO, shown in Fig.~\ref{fig:velo-sketch}. Figure~\ref{fig:vtx-resol2} shows that around the interaction point at $z = 0$ the correction factor is close to one, which means that the resolution is nearly independent of the type of event, whether beam-beam or beam-gas. This is expected, since the VELO is optimized to reconstruct vertices near $z=0$. The correction factor increases as the vertices move away from the interaction region. \subsection{Measurement of the beam profiles using the BGI method} \label{sec:profiles} \begin{figure}[tbp] \centering \includegraphics[width=0.95\textwidth]{figs/Fig13.png} \caption{Positions of reconstructed beam-gas interaction vertices for \mbox{\bx{be}} (black points) and \mbox{\bx{eb}} (grey points) crossings during Fill 1104. The measured beam angles $\alpha_{1,2}$ and offsets $\delta_{1,2}$ at $z=0$ in the horizontal (top) and vertical (bottom) planes are shown in the figure. \label{fig:beamAngles} } \end{figure} In Fig.~\ref{fig:beamAngles} the positions of the vertices of beam-gas interactions of the single beams in {\bx{be}} and {\bx{eb}} crossings are shown in the $x$-$z$ and $y$-$z$ planes. The straight-line fits provide the beam angles in the corresponding planes. Whereas the non-colliding bunches can be used to determine the beam directions, the colliding bunches are the only relevant ones for luminosity measurements. As an example, the $x$ and $y$ profiles of one colliding bunch pair are shown in Fig.~\ref{fig:beamprofiles-12}.
The physical bunch size is obtained after deconvolving the vertex resolution. The resolution function and physical beam profile are drawn separately to show the importance of the knowledge of the resolution. In Fig.~\ref{fig:beamprofiles-pp} the corresponding fits to the luminous region of the same bunch pair are shown, both for the full fill duration and for a short period of 900 s. The fits to the distributions in $x$ and $y$ of the full fill have a $\chi^2/{\ensuremath{\mathrm{ndf}}\xspace} \approx 10$, probably due to the emittance growth of the beam. The corresponding fits to data taken during the shorter period of 900~s give satisfactory values, $\chi^2/{\ensuremath{\mathrm{ndf}}\xspace} \approx 1$. The fact that the resolution at $z = 0$ is small compared to the size of the luminous region makes it possible to reach small systematic uncertainties in the luminosity determination as shown in Sect.~\ref{sec:bg:results}. \begin{figure}[tbp] \begin{center} \includegraphics[width=0.45\textwidth]{figs/Fig14a.pdf} \includegraphics[width=0.45\textwidth]{figs/Fig14b.pdf} \includegraphics[width=0.45\textwidth]{figs/Fig14c.pdf} \includegraphics[width=0.45\textwidth]{figs/Fig14d.pdf} \end{center} \caption{Distributions of the vertex positions of beam-gas events for beam~1 (top) and beam~2 (bottom) for one single bunch pair (ID 2186) in Fill 1104. The left (right) panel shows the distribution in $x$ ($y$). The Gaussian fit to the measured vertex positions is shown as a solid black curve together with the resolution function (dashed) and the unfolded beam profile (shaded). Note the variable scale of the horizontal axis. \label{fig:beamprofiles-12} } \end{figure} \begin{figure}[tbp] \begin{center} \includegraphics[width=0.45\textwidth]{figs/Fig15a.pdf} \includegraphics[width=0.45\textwidth]{figs/Fig15b.pdf} \includegraphics[width=0.45\textwidth]{figs/Fig15c.pdf} \includegraphics[width=0.45\textwidth]{figs/Fig15d.pdf} \includegraphics[width=0.45\textwidth]{figs/Fig15e.pdf} \includegraphics[width=0.45\textwidth]{figs/Fig15f.pdf} \end{center} \caption{Distributions of vertex positions of $pp$ interactions for the full fill duration (left) and for a 900 s period in the middle of the fill (right) for one colliding bunch pair (ID 2186) in Fill 1104. The top, middle and bottom panels show the distributions in $x$, $y$ and $z$, respectively. The Gaussian fit to the measured vertex positions is shown as a solid black curve together with the resolution function (dashed) and the unfolded luminous region (shaded). Owing to the good resolution the shaded curves are close to the solid curves and are therefore not clearly visible in the figures. The fit to the $z$ coordinate neglects the vertex resolution. Note the variable scale of the horizontal axis. \label{fig:beamprofiles-pp} } \end{figure} For non-colliding bunches it is possible to measure the width of the beam in the region of the interaction point at $z=0$ since there is no background from $pp$ collisions. One can compare the measurement at the IP with the measurement in the region outside the IP which needs to be used for the colliding bunches. After correcting for the resolution no difference is observed, as expected from the values of $\beta^*$ of the beam optics used during the data taking. One can also compare the width measurements of the colliding bunches far from the IP using the beam-gas events from beam~1 and beam~2 to predict the width of the luminous region using Eq.~\ref{eq:lumi-gaussian}. 
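A minimal sketch of this prediction is given below (Python); the single-beam inputs are assumed example values, not measurements from this analysis.
\begin{verbatim}
# Sketch of the cross check: predict the luminous-region position and width
# from the single-beam parameters via Eq. (lumi-gaussian).
def luminous_region(xi1, xi2, s1, s2):
    """Return (position, width) of the luminous region in one coordinate."""
    s1sq, s2sq = s1**2, s2**2
    pos   = (xi1 * s2sq + xi2 * s1sq) / (s1sq + s2sq)
    width = (s1sq * s2sq / (s1sq + s2sq)) ** 0.5
    return pos, width

# toy beam-gas measurements of one colliding pair in x [mm]
pos, width = luminous_region(xi1=0.002, xi2=-0.003, s1=0.046, s2=0.052)
print(f"predicted: x' = {pos:.4f} mm, sigma'_x = {width*1000:.1f} um")
# this prediction is then compared with the directly measured width of the
# pp-vertex distribution
\end{verbatim}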
Figure~\ref{fig:constraint-pp} shows that there is overall consistency. In addition to the data used in the BGI analysis described here, higher-statistics data from later fills are also used for this comparison. The cross check reaches a precision of 1--1.6\% for the consistency of the width measurements at large $z$ compared to the measurement at $z=0$, providing good evidence for the correctness of the parametrization of the $z$ dependence of the vertex resolution. \begin{figure}[tb] \begin{center} \includegraphics[width=0.7\textwidth]{figs/Fig16.pdf} \end{center} \caption{Comparison of the prediction for the luminous region width from measurements based on beam-gas events of individual bunches which are part of a colliding bunch pair with the direct measurement of the luminous region width for these colliding bunches. The panels on the left show the results for bunches in the fills with $\beta^* = 2$~m optics used in this analysis, the right panels show four colliding bunches in a fill taken with $\beta^* = 3.5$~m optics. The fill and bunch numbers are shown on the vertical axis. The vertical dotted line indicates the average and the solid lines the standard deviation of the data points. The lowest point indicates the weighted average of the individual measurements; its error bar represents the corresponding uncertainty in the average. The same information is given above the data points. The fills with the $\beta^* = 3.5$~m optics are not used for the analysis because larger uncertainties in the DCCT calibration were observed. \label{fig:constraint-pp} } \end{figure} The relations of Eq.~\ref{eq:lumi-gaussian} are used to constrain the width and position measurements of the single beams and of the luminous region in both coordinates separately. Given the high statistics of vertices of the luminous region, the $pp$ events have the largest weight in the luminosity calculation. Effectively, the beam-gas measurements determine the relative offsets of the two beams and their width ratio $\rho_i$, \begin{equation} \label{eq:rho} \rho_i = \sigma_{2i}/\sigma_{1i} \ \ (i = x,y)\, . \end{equation} According to Eq.~\ref{eq:obslumi}, neglecting crossing angle effects and beam offsets, the luminosity is proportional to $A_{\mathrm{eff}}^{-1}$, \begin{equation} \label{eq:invarea} A_{\mathrm{eff}}^{-1} = \frac{1}{\sqrt{ \left( \rule{0pt}{10pt} {{\sigma_{1x}}^2 + {\sigma_{2x}}^2 } \right) \left( \rule{0pt}{10pt} {{\sigma_{1y}}^2 + {\sigma_{2y}}^2} \right) }} \, . \end{equation} This quantity can be rewritten using the definition of $\rho_i$ and Eq.~\ref{eq:lumi-gaussian} \begin{equation} \label{eq:rhofactor} A_{\mathrm{eff}}^{-1} = \prod_{i=x,y} \frac{\rho_i}{(1+\rho_i^2)\sigxp{i}} \, , \end{equation} which shows, especially for nearly equal beam sizes, the weight of the measurement of the width of the luminous region \sigxp{i} in the luminosity determination. The expression has its maximum at $\rho_i=1$, which minimizes the importance of the measurement errors of $\rho_i$. The luminosity changes if there is an offset between the two colliding bunches in one of the transverse coordinates. An extra factor, already present in Eq.~\ref{eq:obslumi}, takes this into account \begin{equation} C_{\mathrm{offset}} = \prod_{i=x,y} \exp{\left(-\frac{1}{2} \frac{(\ensuremath{\xi}\xspace_{1i}-\ensuremath{\xi}\xspace_{2i})^2}{\sigma_{1i}^2+\sigma_{2i}^2}\right)} \, , \label{eq:offsetfactor} \end{equation} which is unity for head-on beams.
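As an illustration, the short sketch below (Python, with assumed bunch parameters) evaluates Eqs.~\ref{eq:rhofactor} and~\ref{eq:offsetfactor}; all inputs are toy values.
\begin{verbatim}
# Illustrative computation of the inverse effective overlap area (rhofactor)
# and of the offset factor (offsetfactor) from assumed bunch parameters.
import math

beams = {'x': dict(rho=1.05, sig_lum=0.035, dxi=0.004),
         'y': dict(rho=0.95, sig_lum=0.033, dxi=0.002)}   # [mm]

inv_area, c_offset = 1.0, 1.0
for i, p in beams.items():
    rho, sig_lum, dxi = p['rho'], p['sig_lum'], p['dxi']
    inv_area *= rho / ((1 + rho**2) * sig_lum)
    # sigma_1^2 + sigma_2^2 expressed through sig_lum and rho:
    # sig_lum^2 = s1^2 s2^2/(s1^2+s2^2), rho = s2/s1
    #   ->  s1^2 + s2^2 = sig_lum^2 (1+rho^2)^2 / rho^2
    cap2 = sig_lum**2 * (1 + rho**2)**2 / rho**2
    c_offset *= math.exp(-0.5 * dxi**2 / cap2)
print(f"A_eff^-1 = {inv_area:.1f} mm^-2,  C_offset = {c_offset:.4f}")
\end{verbatim}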
By examining both relations of Eq.~\ref{eq:lumi-gaussian}, a system of two constraint equations with six measurable quantities emerges, which can be used to improve the precision for each transverse coordinate separately. This fact is exploited in a combined fit where the individual beam widths $\sigma_{1i}$, $\sigma_{2i}$ and the luminous region width \sigxp{i}, together with the corresponding position values $\ensuremath{\xi}\xspace_{bi}$ and \posxp{i}, are used as input measurements. Several choices are possible for the set of four fit parameters; trivially, the set $\sigma_{1i}$, $\sigma_{2i}$, $\ensuremath{\xi}\xspace_{1i}$, $\ensuremath{\xi}\xspace_{2i}$ can be used. Here the set $\Sigma_{i}$ (Eq.~\ref{eq:capsigma}), $\rho_{i}$ (Eq.~\ref{eq:rho}), $\Delta_{\ensuremath{\xi}\xspace i} = \ensuremath{\xi}\xspace_{1i} - \ensuremath{\xi}\xspace_{2i}$ and $\posxp{i}$ is used, which makes it easier to evaluate the corresponding luminosity error propagation. The results for the central values are identical, independently of the set used. \subsection{Corrections and systematic errors} \label{sec:bgsyst} In the following, corrections and systematic error sources affecting the BGI analysis are described. The uncertainties related to the bunch population normalization have already been discussed in Sect.~\ref{sec:bct}. \subsubsection{Vertex resolution} As mentioned above, the uncertainty of the resolution potentially induces a significant systematic error in the luminosity measurement. To quantify its effect, the fits to the beam profiles have been made with different choices for the parameters of the resolution functions within the limits of their uncertainty. One way to estimate the uncertainty is by comparing the resolution of simulated $pp$ collision events, determined using the method described in Sect.~\ref{sec:vxresolution}, {\em i.e.} by dividing the tracks into two groups, with the true resolution, which is only accessible in the simulation owing to the prior knowledge of the vertex position. The uncertainty in the number-of-tracks (\ensuremath{N_{\mathrm{Tr}}}\xspace) dependence at $z=0$ is estimated in this way to be no larger than 5\%. The uncertainty in the $z$ dependence is estimated by analysing events in \bx{eb} and \bx{be} crossings. For these crossings all events, including the ones near $z=0$, can be assumed to originate from beam-gas interactions. A cross check of the resolution is obtained by comparing the measurement of the transverse beam profiles in the $z$ range used for \bx{bb} events with the ones obtained near $z=0$. Together with the comparison of the width of the luminous region and its prediction from the beam-gas events, we estimate a 10\% uncertainty in the $z$ dependence of the resolution. It should be noted that the focusing effect of the beams towards the interaction point is negligible compared to the precision needed to determine the resolution. The effect of the uncertainties in the \ensuremath{N_{\mathrm{Tr}}}\xspace dependence and the $z$ dependence on the final results is estimated by repeating the analysis, varying the resolution within its uncertainty. The conservative approach is taken to vary the different dependencies coherently in both $x$ and $y$, and in the dependence on \ensuremath{N_{\mathrm{Tr}}}\xspace and $z$ simultaneously. The resulting uncertainties in the cross-section depend on the widths of the distributions and are therefore different for each analysed bunch pair.
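A purely illustrative sketch of this procedure is given below (Python; all inputs are assumed toy values rather than measured ones): the unfolded widths and the resulting overlap factor are recomputed with the resolution scaled coherently, and the shift is taken as the uncertainty.
\begin{verbatim}
# Toy estimate of the resolution systematic: repeat the unfolding with the
# vertex resolution varied coherently and propagate to the overlap factor.
import math

sig_meas = {'1x': 0.055, '2x': 0.058, '1y': 0.054, '2y': 0.057}  # measured [mm]
sig_res  = {'1x': 0.040, '2x': 0.042, '1y': 0.040, '2y': 0.042}  # resolution [mm]

def inv_area(scale):
    """A_eff^-1 after unfolding the (scaled) resolution in quadrature."""
    s = {k: math.sqrt(sig_meas[k]**2 - (scale * sig_res[k])**2)
         for k in sig_meas}
    return 1.0 / math.sqrt((s['1x']**2 + s['2x']**2) *
                           (s['1y']**2 + s['2y']**2))

nominal = inv_area(1.0)
for scale in (0.9, 1.1):   # coherent +-10% variation of the resolution
    print(f"res x {scale:.1f}: relative shift "
          f"{(inv_area(scale) / nominal - 1) * 100:+.1f}%")
\end{verbatim}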
\subsubsection{Time dependence and stability} The beam and data-taking stability are taken into account when selecting suitable fills for the beam-gas imaging analysis. This is an essential requirement given the long integration times needed to collect sufficient statistics for the beam-gas interactions. A clear decay of the bunch populations and an emittance growth are observed over these long time periods. It is checked that these variations are smooth and that the time average is a good approximation to be compared with the average beam profiles measured with vertices of beam-gas interactions. No significant movement of the luminous region is observed during the fills selected for the BGI analysis. The systematics introduced by these variations are minimized by the interpolation procedure described in Sect.~\ref{sec:bganalysis} and are estimated to amount to less than 1\%. \subsubsection{Bias due to unequal beam sizes and beam offsets} When the colliding bunches in a pair have similar widths ($\rho_i \approx 1$), the $\rho$-dependent factor in Eq.~\ref{eq:rhofactor}, $\rho_i/(1+\rho_i^2)$, is close to its maximum. Thus, when the precision of the $\rho$ measurement is similar to its difference from unity, the experimental estimate of the $\rho$ factor is biased towards smaller values. In the present case the deviation from unity is compatible with the statistical error of the measurement for each colliding bunch pair. These errors are typically 15\% in the $x$ coordinate and 10\% in the $y$ coordinate. The size of the ``$\rho$ bias'' effect is of the order of 1\% in $x$ and 0.5\% in $y$. A similar situation occurs for the offset factor $C_{\mathrm{offset}}$ for bunches colliding with a non-zero relative transverse offset. The offsets are also in this case compatible with zero within their statistical errors, and the correction can only take values smaller than one. The average expected ``offset bias'' is typically 0.5\% per transverse coordinate. Since these four sources of bias (unequal beam sizes and offsets in both transverse coordinates) act in the same direction, their overall effect is no longer negligible and is corrected for on a bunch-by-bunch basis. We assign a systematic error equal to half of the correction, {\em i.e.} typically $1.5\%$. The correction and the associated uncertainty depend on the measured central value and its statistical precision, and therefore vary per fill. \subsubsection{Gas pressure gradient} \label{sec:gradient} The basic assumption of the BGI method is that the residual gas pressure is uniform in the plane transverse to the beam direction, so that the interactions of the beams with the gas produce a faithful image of the beam profile. An experimental verification of this assumption is performed by displacing the beams and recording the rate of beam-gas interactions at the different beam positions. In Fill 1422 the beam was displaced in the $x$ coordinate by a maximum of 0.3~mm. Assuming a linear behaviour, the upper limit on the gradient of the interaction rate is 0.62~Hz/mm at 95\%~CL, compared to a rate of $2.14 \,\pm\, 0.05$~Hz observed with the beam at its nominal position.
When the profiles of beam~1-gas and beam~2-gas interactions are used directly to determine the overlap integral $A_{\mathrm{eff}}$, the relative error $\delta_A$ on the overlap integral is given by \begin{equation} \label{eq:gradient} \delta_A=\frac{A_{\mathrm{eff}}(a=0)}{A_{\mathrm{eff}}(a\ne0)}-1 = \frac{{a}^{2}\,{\sigma_{x}^{2}}}{2\,{b}^{2}} \, , \end{equation} where $a$ is the gradient, $b$ the rate when the beam is at its nominal position, and $\sigma_{x}$ is the true beam width, using the approximation of equal beam sizes. This result has been derived by comparing the overlap integral for beam images distorted by a linear pressure gradient with the one obtained with ideal beam images. With the measured limit on the gradient, the maximum relative effect on the overlap is then estimated to be less than $4.2 \times 10^{-4}$. However, the BGI method uses the width of the luminous region measured using $pp$ interactions as a constraint. This measurement does not depend on the gas pressure gradient. The gas pressure gradient enters through the measurements of the individual widths, which are mainly used to determine the ratio between the two beam widths. These are equally affected; thus, the overall effect of a possible gas pressure gradient is much smaller than the estimate from Eq.~\ref{eq:gradient} and can safely be neglected in the analysis. \subsubsection{Crossing angle effects} \label{sec:crossing-effect} The expression for the luminosity (Eq.~\ref{eq:obslumi}) contains a correction factor for the crossing angle, $C_{\mathrm{\ensuremath{\alpha}\xspace}}$, of the form~\cite{ref:simonwhite} \begin{equation} \label{eq:anglecor} C_{\mathrm{\ensuremath{\alpha}\xspace}} = \left[ {1+\tan^2{\ensuremath{\alpha}\xspace} (\sigma_{1z}^2 + \sigma_{2z}^2)/(\sigma_{1x}^2 + \sigma_{2x}^2)} \right]^{-\frac{1}{2}} \, . \end{equation} For a vanishing crossing angle and equal bunch lengths, the bunch length $\sigma_z$ is obtained from the beam spot measurement, assuming that the two beams have equal size, as $\sigma_z = \sqrt{2} \, \sigxp{z}$. In the presence of a crossing angle the measured length of the luminous region depends on the lengths of the bunches, on the crossing angle and on the transverse widths of the two beams in the plane of the crossing angle. The bunch lengths need not necessarily be equal. Evaluating the overlap integral of the two colliding bunches over the duration of the bunch crossing, one finds for the width of the luminous region in the $z$ coordinate \begin{equation} \label{eq:lumz} \sigxp{z} = \left[ \frac{\tan^2{\ensuremath{\alpha}\xspace}}{\sigxp[2]{x}} + \frac{4 \, \cos^2{\ensuremath{\alpha}\xspace}}{\sigma_{1z}^2 + \sigma_{2z}^2} \right]^{-\frac{1}{2}}\, . \end{equation} Solving for ${\sigma_{1z}^2 + \sigma_{2z}^2}$, the right-hand side of Eq.~\ref{eq:anglecor} can be written in terms of the measured quantities \ensuremath{\alpha}\xspace, \sigxp{z}, \sigxp{x}, $\sigma_{1x}$, and $\sigma_{2x}$ \begin{equation} \label{eq:lumzanglecor} C_{\mathrm{\ensuremath{\alpha}\xspace}} = \left[ 1+\frac{4\sin^2{\ensuremath{\alpha}\xspace} \sigxp[2]{z}} {\left( 1-(\tan{\ensuremath{\alpha}\xspace}{\sigxp{z}}/{\sigxp{x}})^2 \right)(\sigma_{1x}^2+\sigma_{2x}^2)} \right]^{-\frac{1}{2}}\, . \end{equation} The dependence of the estimate of \sigxp{z} on $\sigma_z$ and of the overall correction on $\sigma_x$ is shown in Fig.~\ref{fig:crossingangle} for typical values of the parameters.
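For concreteness, Eq.~\ref{eq:lumzanglecor} can be evaluated directly from measured-type quantities, as in the following sketch (Python); the numerical inputs are assumed illustrative values.
\begin{verbatim}
# Illustrative evaluation of the crossing-angle correction factor C_alpha
# of Eq. (lumzanglecor); all inputs are assumed example values.
import math

alpha    = 0.2515e-3      # half crossing-angle [rad]
sig_lz   = 35.0           # length of the luminous region sigma'_z [mm]
sig_lx   = 0.035          # transverse width of the luminous region [mm]
s1x, s2x = 0.040, 0.040   # transverse beam widths in the crossing plane [mm]

t = math.tan(alpha) * sig_lz / sig_lx
C_alpha = 1.0 / math.sqrt(1 + 4 * math.sin(alpha)**2 * sig_lz**2 /
                          ((1 - t**2) * (s1x**2 + s2x**2)))
print(f"C_alpha = {C_alpha:.3f}")   # ~0.95, of the size quoted for May 2010
\end{verbatim}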
The difference with respect to a naive calculation, assuming equal beam sizes and using the simple $\sqrt{2}$ factor to obtain the bunch lengths from the luminous region length, is in all relevant cases smaller than $1\%$. For the beam conditions in May 2010 the value of the crossing angle correction factor $C_{\mathrm{\ensuremath{\alpha}\xspace}}$ is about 0.95. To take into account the accuracy of the calculation, a 1\% systematic error is conservatively assigned to this factor. There are other small effects introduced by the beam angles. The average angle of the beams differs from zero in the LHCb coordinate system. This small difference introduces a broadening of the measured transverse widths of the luminous region, since in this case the projection is taken along the nominal LHCb axis. Another effect is more subtle. The expression (Eq.~\ref{eq:lumi-gaussian}) for the width of the luminous region assumes a vanishing crossing angle. It is still valid for any crossing angle if one considers the width at a fixed value of $z$. When applying Eq.~\ref{eq:lumi-gaussian} as a function of $z$, one can show that the centre of the luminous region is offset if, in the presence of a non-vanishing crossing angle, the widths of the two beams are not equal. Thus, when these two conditions are met, the luminous region is rotated. The rotation angle $\phi_i$ ($i=x,y$) is given by \begin{equation} \label{eq:phi-rotation} \tan{\phi_i} = \tan{\ensuremath{\alpha}\xspace_i}\frac{1-\rho_i^2}{1+\rho_i^2} \, , \end{equation} where $\rho_i$ is defined in Eq.~\ref{eq:rho}. With the parameters observed in this analysis the effect of the rotation is smaller than $10^{-3}$. \begin{figure}[tbp] \begin{center} \includegraphics[width=0.48\textwidth, clip, trim=0.2cm 0cm 0.1cm 0cm]{figs/Fig17a.pdf} \includegraphics[width=0.48\textwidth, clip, trim=0.2cm 0cm 0.1cm 0cm]{figs/Fig17b.pdf} \end{center} \caption{ Left: the dependence of the length of the luminous region \sigxp{z} on the single bunch length $\sigma_z$ under the assumption that both beams have equal length bunches. The dotted line shows the $\sqrt{2}$ behaviour expected in the absence of a crossing angle. The solid black line shows the dependence for equal transverse beam sizes $\sigma_x = 0.045$~mm; the shaded region shows the change for $\rho = 1.2$ keeping the average size constant. Right: the dependence of the luminosity reduction factor $C_{\mathrm{\ensuremath{\alpha}\xspace}}$ on the transverse width of the beam $\sigma_x$ for a value of $\sigxp{z} = 35$~mm. The solid line shows the full calculation for $\rho = 1$ (equal beam widths), with the shaded area the change of the value up to $\rho = 1.2$, keeping the transverse luminous region size constant. The dotted line shows the result of the naive calculation assuming a simple $\sqrt{2}$ relation for the length of the individual beams. All graphs are calculated for a half crossing-angle $\ensuremath{\alpha}\xspace = 0.2515$~mrad. \label{fig:crossingangle} } \end{figure} \subsection{Results of the beam-gas imaging method} \label{sec:bg:results} \begin{table} \begin{center} \caption{ Measurements of the cross-section \sigeff{\ensuremath{\mathrm{vis}}} with the BGI method per fill and overall average (third column). All errors are quoted as percent of the cross-section values.
{\em DCCT scale}, {\em DCCT baseline noise}, {\em FBCT systematics} and {\em Ghost charge} are combined into the overall {\em Beam normalization} error. The {\em Width syst} row is the combination of {\em Resolution syst} (the systematic error in the vertex resolution for $pp$ and beam-gas events), {\em Time dep.\ syst} (treatment of the time dependence) and {\em Bias syst} (unequal beam sizes and beam offset biases), and is combined with {\em Crossing angle} (uncertainties in the crossing angle correction) into {\em Overlap syst}. The {\em Total error} is the combination of {\em Relative normalization stability}, {\em Beam normalization}, {\em Statistical error}, and {\em Overlap syst}. {\em Total systematics} is the combination of the latter three only and can be broken down into {\em Uncorrelated syst} and {\em Correlated syst}, where ``uncorrelated'' applies to the averaging of different fills. Finally, {\em Excluding norm} is the uncertainty excluding the overall {\em DCCT scale} uncertainty. The grouping of the systematic errors into (partial) sums is expressed as an indentation in the first column of the table. The error components are labelled in the second column by \ensuremath{{{u}}}, \ensuremath{{{c}}}\ or \ensuremath{{{f}}}\ depending on whether they are uncorrelated, fully correlated or correlated within one fill, respectively. \label{tab:bgfinal} } \vskip 1mm \small \begin{tabular}{lcrrrrrrrrrrrr} &&{\bf average} &{\bf 1089}&{\bf 1090}&{\bf 1101}&{\bf 1104}&{\bf 1117}&{\bf 1118}&{\bf 1122} \\ \hline Cross-section \sigeff{\ensuremath{\mathrm{vis}}} (mb) && {\bf \underline{59.94}} & 61.49& 59.97& 57.67& 56.33& 61.63& 61.84& 61.04\\ \hline Relative normalization &\ensuremath{{{c}}}& 0.50& 0.50& 0.50& 0.50& 0.50& 0.50& 0.50& 0.50\\ stability & &&&&&&&\\ \hline ~~~DCCT scale &\ensuremath{{{c}}}& 2.70& 2.70& 2.70& 2.70& 2.70& 2.70& 2.70& 2.70\\ ~~~DCCT baseline noise &\ensuremath{{{f}}}& 0.36& 0.97& 1.01& 0.43& 0.29& 0.29& 0.29& 0.14\\ ~~~FBCT systematics &\ensuremath{{{f}}}& 0.91& 3.00& 3.00& 2.61& 2.10& 2.41& 2.41& 1.98\\ ~~~Ghost charge &\ensuremath{{{f}}}& 0.19& 0.70& 0.65& 1.00& 0.60& 0.38& 0.55& 0.35\\ Beam normalization && 2.88& 4.21& 4.21& 3.91& 3.48& 3.65& 3.67& 3.37\\ \hline Statistical error &\ensuremath{{{u}}}& 0.96& 4.06& 4.73& 3.09& 2.56& 1.89& 2.66& 1.82\\ \hline ~~~~~~Resolution syst &\ensuremath{{{c}}}& 2.56& 2.79& 2.74& 2.54& 2.86& 2.37& 2.47& 2.44\\ ~~~~~~Time dep.
syst &\ensuremath{{{c}}}& 1.00& 1.00& 1.00& 1.00& 1.00& 1.00& 1.00& 1.00\\ ~~~~~~Bias syst &\ensuremath{{{c}}}& 1.61& 1.14& 1.81& 1.35& 1.89& 1.19& 1.35& 2.05\\ ~~~~~~Gas homogeneity &&negl.&negl.&negl.&negl.&negl.&negl.&negl.&negl.\\ ~~~Width syst && 3.20& 3.18& 3.43& 3.05& 3.56& 2.83& 2.99& 3.34\\ ~~~Crossing angle &\ensuremath{{{c}}}& 1.00& 1.00& 1.00& 1.00& 1.00& 1.00& 1.00& 1.00\\ Overlap syst && 3.35& 3.33& 3.58& 3.21& 3.70& 3.00& 3.15& 3.49\\ \hline ~~~Uncorrelated syst &\ensuremath{{{f}}}& 0.93& 3.08& 3.07& 2.80& 2.18& 2.44& 2.47& 2.01\\ ~~~Correlated syst &\ensuremath{{{c}}}& 4.35& 4.43& 4.62& 4.25& 4.62& 4.08& 4.19& 4.44\\ Total systematics && 4.45& 5.39& 5.55& 5.08& 5.11& 4.75& 4.87& 4.88\\ \hline Total error && 4.55& 6.75& 7.29& 5.95& 5.71& 5.11& 5.55& 5.20\\ Excluding norm && 3.63& 6.17& 6.75& 5.28& 5.01& 4.31& 4.82& 4.42\\ \end{tabular} \end{center} \end{table} \begin{figure}[tbp] \centering \includegraphics[width=0.6\textwidth]{figs/Fig18.pdf} \caption{ Results of the beam-gas imaging method for the visible cross-section of the $pp$ interactions producing at least two VELO tracks, \sigeff{\ensuremath{\mathrm{vis}}}. The results for each fill (indicated on the vertical axis) are obtained by averaging over all colliding bunch pairs. The small vertical lines on the error bars indicate the size of the uncorrelated errors, while the length of the error bars show the total error. The dashed vertical line indicates the average of the data points and the dotted vertical lines show the one standard-deviation interval. The weighted average is represented by the lowest data point, where the error bar corresponds to its total error. \label{fig:bgresult-fill} } \end{figure} With the beam-gas imaging method, seven independent measurements of an effective reference cross-section were performed. The main uncertainties contributing to the overall precision of the cross-section measurement come from the overlap integral and from the measurement of the product of the bunch populations. The systematic error in the overlap integral is composed of the effect of the resolution uncertainties, the treatment of the time dependence, the treatment of the bias due to the non-linear dependencies in $\rho_i$ and $\Delta_{\ensuremath{\xi}\xspace i}$, and the crossing angle corrections. It also takes into account small deviations of the beam shape from a single Gaussian observed in the VDM scans. The normalization error has components from the DCCT scale and its baseline fluctuations, the FBCT systematics, and the systematics of the relative normalization procedure (Sect.~\ref{sec:relative}). For multi-bunch fills the results obtained for each colliding pair are first averaged, taking the correlations into account. The results of this per-fill averaging procedure are shown in Table~\ref{tab:bgfinal}. Errors are divided into two types: correlated and uncorrelated errors. When averaging over fills, the statistical errors and the ghost charge and DCCT baseline corrections are treated as uncorrelated errors; the latter two sources are, of course, correlated when bunches within one fill are combined. The FBCT systematic uncertainty, which is dominated by the uncertainty in its offset, is treated taking into account the fact that the sum is constrained.
Owing to the constraint on the total beam current provided by the DCCT, averaging the results for different colliding bunch pairs within one fill reduces the error introduced by the FBCT offset uncertainty. Standard error propagation is applied, taking the inverse squares of the uncorrelated errors as weights. The difference with respect to the procedure applied for the VDM method arises because a fit using the FBCT offsets as free parameters cannot be applied here: not all fills have the required number of colliding bunch pairs, and the uncorrelated bunch-to-bunch errors are too large to obtain a meaningful result for the FBCT offset. Each colliding bunch pair provides a self-consistent effective cross-section determination. The analysis proceeds by first averaging over all individual colliding bunch pairs within a fill and then by averaging over fills, taking all correlations into account. Thus, an effective cross-section result can also be quoted per fill. These are shown in Fig.~\ref{fig:bgresult-fill}. The spread in the results is in good agreement with the expectations from the uncorrelated errors. The final beam-gas result for the effective cross-section is: $\sigeff{\ensuremath{\mathrm{vis}}} = 59.9 \,\pm\, 2.7 \ \mbox{mb}$. The uncertainties in the DCCT scale error and the systematics of the relative normalization procedure are in common with the VDM method. The uncertainty in $\sigeff{\ensuremath{\mathrm{vis}}}$ from the BGI method without these common errors is 2.2~mb. \section{Cross checks with the beam-beam imaging method} \label{sec:vdmimagingsigma} During the VDM scan the transverse beam images can be reconstructed with a method described in Ref.~\cite{ref:vdmimaging}. When one beam ({\em e.g.} beam~1) scans across the other (beam~2), the differences of the measured coordinates of $pp$ vertices with respect to the nominal position of beam~2 are accumulated. These differences are projected onto a plane transverse to beam~2 and summed over all scan points. The resulting distribution represents the density profile of beam~2 when the number of scan steps is large enough and the step size is small enough. By inverting the procedure, beam~1 can be imaged using the relative movement of beam~2. Since the distributions are obtained using measured vertex positions, they are convolved with the corresponding vertex resolution. After deconvolving the vertex distributions with the transverse vertex resolution, a measurement of the transverse beam image is obtained. This approach is complementary to the BGI and VDM methods. \begin{figure}[tbp] \centering \includegraphics*[width=0.6\textwidth]{figs/Fig19.pdf} \caption{ Visible cross-section measurement using the beam-beam imaging method for twelve different bunch pairs (filled circles) compared to the cross-section measurements using the VDM method (open circles). The horizontal line represents the average of the twelve beam-beam imaging points. The error bars are statistical only and neglect the correlations between the measurements of the profiles of two beams. The band corresponds to a one sigma variation of the vertex resolution parameters. \label{fig:vdm-bbimaging} } \end{figure} The beam-beam imaging method is applied to the first October VDM scan, since the number of scanning steps of that scan is twice as large as that of the second scan. The events are selected by the minimum bias trigger. In contrast to the 22.5~kHz random-trigger events, which contain only luminosity data, these events contain the complete detector information needed for the tracking.
The minimum-bias trigger rate was limited to about 130~Hz on average. The bias due to the rate limitation is corrected by normalizing the vertex distributions at every scan point to the measured average number of interactions per crossing. The transverse planes with respect to the beams are defined using the known half crossing-angle of 170~\ensuremath{\upmu\mbox{rad}}\xspace and the measured inclination of the luminous ellipsoid with respect to the $z$ axis, as discussed in Sect.~\ref{sec:vdm:coupling}. The measured common length scale correction and the difference in the length scales of the two beams, described by the asymmetry parameters $\epsilon_{x,y}$ in Eq.~\ref{eq:vdm:size}, are also taken into account. The luminosity overlap integral is calculated numerically from the reconstructed normalized beam profiles. The effect of the VELO smearing is measured and subtracted by comparing with the case where the smearing is doubled. The extra smearing is performed on an event-by-event basis using the description of the resolution given in Eq.~\ref{eq:track:dep} with the parameters $A$, $B$ and $C$ taken from Table~\ref{table:res_fit_params}. To improve the vertex resolution, only vertices made with more than 25 tracks are considered. This reduces the average cross-section correction due to the VELO resolution to 3.7\%. Similarly to the BGI method, the beam-beam imaging method measures the beam profiles perpendicular to the beam directions. For the luminosity determination in the presence of a crossing angle, their overlap must be corrected by the factor $C_{\mathrm{\ensuremath{\alpha}\xspace}}$ (see Eq.~\ref{eq:anglecor}) due to the contribution from the length of the bunches. The average correction for the conditions during the VDM fill in October is 2.6\%. A bunch-by-bunch comparison of the cross-section measurements with the beam-beam imaging method and the VDM method is shown in Fig.~\ref{fig:vdm-bbimaging}. The cross-section is measured at the nominal beam positions. The FBCT bunch populations with zero offsets, normalized to the DCCT values and corrected for the ghost charge, are used for the cross-section determination. The band indicates the variation obtained by changing the vertex resolution by one standard deviation in either direction. The obtained cross-section of 59.1~mb is in good agreement with the value of 58.4~mb reported in Table~\ref{tab:vdm:res}. The comparison is very sensitive, since the overall bunch population normalization and the length scale uncertainty are in common. Uncorrelated errors amount to about 1\%. The main uncorrelated errors in the beam-beam imaging method are the VELO systematics and the statistical error, which are each at the level of 0.4\%. The main uncorrelated errors in the VDM method are the stability of the working point (0.4\%) and the statistics (0.1\%). The difference between the two methods is smaller than, but similar to, the difference between the two October scan results observed with the VDM method. The width of the VDM rate profile and the widths of the individual beams are related by Eq.~\ref{eq:capsigma}. Thus the widths $\Sigma_{x,y}$ are directly {\em measured} with the VDM rate profile and {\em predicted} using the measured widths of the beams. The widths can be compared directly using the RMS of the distributions, the widths of single Gaussian fits or the RMS of double Gaussian fits. The variation among these different values is of the order of 1\% and limits the sensitivity.
However, it should be noted that these are just numerical differences; Eq.~\ref{eq:capsigma} holds for arbitrary beam shapes. The ratio of the measured and predicted widths is 0.994 and 0.996 in the $x$ and $y$ coordinates, respectively. The statistical uncertainties are 0.3\% and the uncertainties due to the knowledge of the vertex resolution are 0.2\%. Considering the sensitivity of the comparison, we note good agreement. \section{Results and conclusions} \label{sec:results-conclusions} The beam-gas imaging method is applied to data collected by LHCb in May 2010 using the residual gas pressure and provides an absolute luminosity normalization with a relative uncertainty of 4.6\%, dominated by the knowledge of the bunch populations. The measured effective cross-section is in agreement with the measurement performed with the van der Meer scan method using dedicated fills in April 2010 and October 2010. The VDM method has an overall relative uncertainty of 3.6\%. The final VDM result is based on the October data alone, which give significantly lower systematic uncertainties. The common DCCT scale error represents a large part of the overall uncertainty for the results of both methods and is equal to 2.7\%. To determine the average of the two results, the common scale error should be removed before calculating the relative weights. Table~\ref{tab:final} shows the ingredients and results of the averaging procedure. The combined result has a 3.4\% relative error. Since the data-sets used for physics analysis contain only a subset of all available information (see Sect.~\ref{sec:relative}), a small additional error is introduced {\em e.g.}\ by using \mueff{\ensuremath{\mathrm{vis}}} information averaged over bunch crossings. Together with the uncertainty introduced by the long term stability of the relative normalization, this results in a final uncertainty in the integrated luminosity determination of 3.5\%. We have taken the conservative approach of assigning a 0.5\% uncertainty representing the relative normalization variation to all data-sets and not to single out one specific period as reference. The results of the absolute luminosity measurements are expressed as a calibration of the visible cross-section \sigeff{\ensuremath{\mathrm{vis}}}. This calibration has been used to determine the inclusive $\phi$ cross-section in $pp$ collisions at $\sqrt{s} = 7$~TeV~\cite{ref:lhcb-phi}.\footnote{In fact, for the early data-taking period on which this measurement is based, the hit count in the SPD is used to define the visible cross-section. This cross-section differs from \sigeff{\ensuremath{\mathrm{vis}}} defined in this paper by 0.5\%.} The relative normalization and its stability have been studied for the data taken with LHCb in 2010 (see Sect.~\ref{sec:relative}). Before the normalization can be used for other data-sets an appropriate study of the relative normalization stability needs to be performed. \begin{table} \begin{center} \caption{Averaging of the VDM and BGI results and additional uncertainties when applied to data-sets used in physics analyses.
\label{tab:final}} \vskip 1mm \begin{tabular}{lrrr} &{\bf Average}&{\bf VDM}&{\bf BGI}\\ {Cross-section (mb)} & 58.8 & 58.4 & 59.9\\ \hline {~~~~~~DCCT scale uncertainty (\%)} & 2.7 & 2.7 & 2.7\\ {~~~~~~Uncorrelated uncertainty (\%)} & 2.0 & 2.4 & 3.7\\ {~~~Cross-section uncertainty (\%)} & 3.4 & 3.6 & 4.6\\ \hline {~~~~~~Relative normalization stability (\%)} & 0.5 & & \\ {~~~~~~Use of average value of \mueff{\ensuremath{\mathrm{vis}}} (\%)} & 0.5 & & \\ {~~~Additional uncertainty for other data-sets (\%)} & 0.7 & & \\ {Total uncertainty for large data-sets (\%)} & 3.5 & & \\ \end{tabular} \end{center} \end{table} While the VDM data have been taken during dedicated fills, no dedicated data taking periods have yet been set aside for the BGI method. It is, therefore, remarkable that this method can reach a comparable precision. A significantly improved precision in the DCCT scale can be expected in the near future. In addition, a controlled pressure bump in the LHCb interaction region would allow us to apply the beam-gas imaging method in a shorter period, at the same time decreasing the effects from non-reproducibility of beam conditions and increasing the statistical precision. The main uncertainty in the VDM result, apart from the scale error, is due to the lack of reproducibility found between different scanning strategies. Dedicated tests will have to be designed to understand these differences better. Finally, it is also very advantageous to perform beam-gas measurements in the same fill as the van der Meer scans. This would allow cross checks to be made with a precision which does not suffer from scale uncertainties in the bunch population measurement. Furthermore, a number of parameters which limit the precision of the BGI method can be constrained independently using the VDM scan data, such as the relative beam positions. To improve the result of the BGI method a relatively large $\beta^*$ value should be chosen, such as 10~m. The precision of the VDM method does not, in principle, depend on $\beta^*$. \section{Acknowledgements} We express our gratitude to our colleagues in the CERN accelerator departments for their support and for the excellent performance of the LHC. In particular, we thank S. White and H. Burkhardt for their help on the van der Meer scans. We thank the technical and administrative staff at CERN and at the LHCb institutes, and acknowledge support from the National Agencies: CAPES, CNPq, FAPERJ and FINEP (Brazil); CERN; NSFC (China); CNRS/IN2P3 (France); BMBF, DFG, HGF and MPG (Germany); SFI (Ireland); INFN (Italy); FOM and NWO (Netherlands); SCSR (Poland); ANCS (Romania); MinES of Russia and Rosatom (Russia); MICINN, XuntaGAL and GENCAT (Spain); SNSF and SER (Switzerland); NAS Ukraine (Ukraine); STFC (United Kingdom); NSF (USA). We also acknowledge the support received from the ERC under FP7 and the R\'{e}gion Auvergne.
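As a closing numerical aside, the combination of the two results summarized in Table~\ref{tab:final} can be reproduced with a short inverse-variance average in which the common DCCT scale error is excluded from the weights and added back in quadrature at the end. The following minimal sketch (in Python, using only the numbers quoted in the table; it is not part of the analysis code) recovers the combined cross-section and its uncertainties:
\begin{verbatim}
# Minimal sketch: inverse-variance combination of the VDM and BGI
# cross-sections; the common DCCT scale error is removed before the
# weights are computed and added back in quadrature afterwards.
import math

sigma = {"VDM": 58.4, "BGI": 59.9}       # mb, from Table "tab:final"
uncorr = {"VDM": 0.024, "BGI": 0.037}    # relative uncorrelated errors
scale = 0.027                            # common DCCT scale error

w = {k: 1.0 / (uncorr[k] * sigma[k]) ** 2 for k in sigma}  # 1/sigma_i^2
avg = sum(w[k] * sigma[k] for k in sigma) / sum(w.values())
avg_uncorr = math.sqrt(1.0 / sum(w.values())) / avg        # relative
total = math.sqrt(avg_uncorr ** 2 + scale ** 2)

print(f"average = {avg:.1f} mb, uncorr = {100*avg_uncorr:.1f}%, "
      f"total = {100*total:.1f}%")   # ~58.8 mb, ~2.0%, ~3.4%
\end{verbatim}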
\section{Introduction}\label{intro} In the current study, we consider a fractional-order nonlinear boundary value problem as follows: Find $u$ such that \begin{equation}\label{asli} -_{0}^{\circ}D^{s}_x u(x) +g(x,u(x)) =f(x),\quad x\in \Omega:=[0,1], \end{equation} \[u(0)=0,\quad u(1)=0,\] where $s \in (1,2)$, and $_{0}^{\circ}D^{s}_x $ refers to either the Riemann-Liouville or the Caputo fractional derivative, both detailed below. Furthermore, $f$ and $g$ are known functions chosen from the suitable function spaces defined in Section \ref{pre}. Extending the order of differentiation to an arbitrary real number is an interesting question. In pursuit of an affirmative answer, various efforts have been made and different types of so-called fractional derivatives have been introduced \cite{kilbas2006theory}. The fractional derivative is a concept with attractive applications in science and engineering. It appears, for instance, in anisotropic models of anomalous diffusion in cardiac tissue at the microscopic and macroscopic levels. Furthermore, fractional derivative models are particular instances of nonlocal models, which are introduced in contrast with classical ones \cite{dui}. Similar to ordinary and partial differential equations, one follows two approaches in seeking solutions of Fractional Differential Equations (FDEs): analytic and numerical. Analytical methods, such as the Fourier, Laplace, and Mellin transform methods and even the Green function approach (fundamental solution), are available only for some special types of FDEs \cite{kilbas2006theory, podlubny1998fractional}. In practice, we have to apply numerical methods owing to the lack of applicability of analytical methods for a wide range of FDEs. Hence, the study of numerical approaches for FDEs is of great importance. It is worth mentioning that, in spite of the different definitions of fractional differential operators, most of them are defined through Abel's integral operator. Among the popular numerical approaches to Abel's integral equation, one can mention the collocation method \cite{brunner2017volterra} and the Galerkin method based on piecewise polynomials \cite{eggermont88, Vogeli2018}; varieties of these approaches, viewed in a general perspective as spectral and projection methods, can be utilized to approximate FDEs \cite{li2015numerical, podlubny1998fractional}. Converting an FDE to a suitable integral equation and then solving it numerically with the above-mentioned approaches \cite{wang2018spectral}, or solving it directly by a finite difference method \cite{LI2016614}, relies on adequate regularity assumptions on the strong solution, which are not available in general \cite{ervin2018, jin2015}. Here, we contribute to the study of the numerical solution of one-dimensional nonlinear FDEs involving the Riemann-Liouville or Caputo derivative by introducing an appropriate weak formulation and describing the Galerkin solution together with a convergence analysis in appropriate functional spaces. The fractional operator in (\ref{asli}) is non-local and $g$ is nonlinear with respect to $u$, so the study of existence, uniqueness, and regularity of the solution, as well as the numerical investigation, is challenging. The existence of classical solutions for nonlinear FDEs is considered in \cite{ZHANG2000804}, and the regularity of the solution for the linear operator in one dimension is investigated in \cite{ervin2018}.
In this work, we explore the issues of existence and uniqueness with the aid of the Browder-Minty method of monotone operator theory. In the recent literature, owing to their applications in science and engineering \cite{li2015numerical}, several types of numerical methods have been proposed for the approximation of FDEs. The theory and numerical solution of linear Riemann-Liouville and Caputo FDEs with two-point boundary conditions have been extensively studied in \cite{Kopteva2017, Liang2018}. In those works, the fractional boundary value problem is reformulated appropriately in terms of a Volterra integral equation, and the numerical approximation then proceeds by suitable schemes such as piecewise polynomial collocation and spectral Galerkin methods. Spectral and pseudo-spectral methods are among the interesting numerical approaches that have been considered for FDEs. Among the research in this area, we mention \cite{Li2017, Zaky2019, Yang2015, yarmohammadi}, wherein Jacobi polynomials play a crucial role in the construction of the approximation. The attention paid to this class of orthogonal polynomials has motivated a generalization of them with applications in the numerical solution of FDEs \cite{chen2016}. In this work, we study the Galerkin finite element method for Riemann-Liouville and Caputo nonlinear fractional boundary value problems of Dirichlet type. The finite element method is a popular numerical approach for approximating the solutions of nonlinear differential equations \cite{HLAVACEK1994168} and \cite{wriggers2008nonlinear}. The finite element solution of quasi-linear elliptic problems with non-monotone operators has been considered in \cite{abdulle2012priori} and \cite{feistauer1993analysis}. Furthermore, in \cite{Feistauer1986}, under the assumptions of strong monotonicity and Lipschitz continuity of the corresponding second order nonlinear elliptic operator, a linear-order approximation by the finite element method has been obtained. In this paper, the investigated fractional nonlinear operators have the monotonicity and Lipschitz continuity properties, which are crucial for our analysis. The natural energy space associated with the non-local operators is a fractional Sobolev space. Also, due to the presence of the nonlinear term in Eq. (\ref{asli}), we utilize a Musielak-Orlicz space and introduce a suitable functional space as the intersection of the two mentioned spaces in a convenient way. Then, with the aid of monotone operator theory, the coercivity of the nonlinear variational formulation with the Riemann-Liouville and Caputo fractional derivatives is investigated. This approach yields a unique weak solution, which can then be approximated by the finite element method. Two main features of this research are as follows: \begin{itemize} \item[$\bullet$] We study the existence and uniqueness of the weak solution of the main problem (\ref{asli}) utilizing monotone operator theory in suitable Musielak-Orlicz and fractional Sobolev spaces. \item[$\bullet$] We develop the Galerkin finite element method for nonlinear FDEs and obtain a priori error estimates for the method based on a generalization of C\'{e}a's lemma.
\end{itemize} We organize the remainder of the paper as follows: in Section \ref{pre}, preliminaries regarding fractional calculus, semi-linear monotone operators, and the suitable functional spaces are briefly presented. Section \ref{var} is devoted to the variational formulation of nonlinear boundary value problems with Riemann-Liouville and Caputo derivatives. Furthermore, the regularity of the solution is studied in this section. The numerical approximation of the weak solution by the finite element method is examined in Section \ref{fem}, along with a full study of the existence and uniqueness of solutions of the discrete equations and the convergence and stability of the method. In the numerical experiments section, some FDEs are solved by the finite element method. Finally, we provide some conclusions and further remarks for future work. \section{Preliminaries}\label{pre} This section is devoted to some preliminaries on fractional calculus, providing an introduction to the problem considered in the paper. The energy spaces for fractional operators, namely fractional Sobolev spaces, are introduced in this section. Then, in order to establish the existence of the weak solution by monotonicity arguments, a brief account of nonlinear functions on Musielak-Orlicz spaces is provided. \subsection{Fractional calculus} To make this paper self-contained, we recall the Riemann-Liouville and Caputo fractional integrals and derivatives from \cite{kilbas2006theory}. For any $s >0$ with $n-1<s<n$, $n \in \mathbb{N}$, the left and right sided fractional integrals on the bounded interval $[a,b]$ are as follows. The left fractional integral operator is defined as \begin{equation}\label{eq1} (_{a}I^{s}_{x}u)(x)= \dfrac{1}{\Gamma(s)}\int_a^x(x-y)^{s-1} u(y)\mathrm{d}y, \end{equation} while the right fractional integral operator is given by \begin{equation}\label{eq2} (_{x}I^{s}_{b}u)(x)= \dfrac{1}{\Gamma(s)}\int_x^b(y-x)^{s-1} u(y)\mathrm{d}y. \end{equation} The left-sided Riemann-Liouville fractional derivative of order $ s $ of a function $u \in H^n(\Omega)$ is defined as \begin{equation} ^{R}_{a}D_x^s u=D^{n}\, _{a}I_{x}^{n -s}u, \end{equation} where the operator $D^{n}$ denotes the classical derivative of order $n$. The corresponding right-sided Riemann-Liouville fractional derivative is \begin{equation} ^{R}_{x}D_b^s u=(-1)^n D^{n} \,_{x}I_{b}^{n-s}u. \end{equation} In addition, the left-sided Caputo derivative of order $s$ is given by \begin{equation} ^{C}_{a}D_x^s u= \,_{a}I_{x}^{n-s}\, D^n u, \end{equation} while the following relation defines the right-sided Caputo derivative \begin{equation} ^{C}_{x}D_b^s u=(-1)^n\,_{x}I_{b}^{n-s}D^n u. \end{equation} From the above definitions, it is apparent that Abel's integral operator plays a significant role in defining fractional derivatives. \subsection{Some properties of semi-linear operators} Let $V$ be a real Banach space and $V^{*}$ its dual space. We define the duality pairing as the functional \[ V^*\times V \ni (y,x) \rightarrow \langle y,x \rangle =y(x), \] which means that $\langle y,x \rangle$ is the value of the continuous linear functional $y \in V^*$ at an element $x \in V$; $\Vert \cdot \Vert$ and $\Vert \cdot \Vert_{*}$ denote the norms associated with $V$ and $V^*$, respectively.
Due to the central role of the monotonicity property, we present a formal definition of this concept. \begin{definition}\label{def2}(\cite{askhabov2011nonlinear, krasno}) Let $ V$ be a separable Banach space. An operator $ F$ is called monotone on $ V $ if \[\langle Fx-Fy,x-y\rangle \geq 0, \quad \forall x,y\in V;\] it is called strictly monotone if \[\langle Fx-Fy,x-y\rangle > 0, \quad \forall x,y\in V, \quad x\neq y;\] it is called coercive if \[\langle Fx,x \rangle > \gamma (\Vert x \Vert)\, \Vert x \Vert, \quad \text{where} \quad \gamma(s)\rightarrow \infty \quad \text{as}\quad s\rightarrow \infty; \] and it is called hemi-continuous if the real-valued function \[ s\rightarrow \langle F(u+s\,v), w\rangle \] is continuous on $[0,1]$ for every fixed $u,v$ and $w \in V$. Furthermore, in the terminology of the article \cite{kato}, the operator $ F:V\rightarrow V^* $ with domain $ D=D(F)\subset V$ is hemi-continuous if for $ u\in D$ and $w\in V $ we get $ F(u+t_nw)\rightarrow F(u) $ whenever the sequence $ t_n $ tends to zero. \end{definition} In the following, we state the Browder-Minty theorem, which is utilized to prove the existence and uniqueness of the weak solution. \begin{theorem}\label{thmm} \textbf{(Browder-Minty)} Let $ V $ be a real reflexive Banach space and let the hemi-continuous monotone operator $ F:V\rightarrow V^* $ be coercive. Then, for any $ g\in V^* $, there exists a solution $ u^* \in V $ of the equation \[F(u)=g. \] This solution is unique if $F$ is a strictly monotone operator. \end{theorem} \begin{proof} See the details of the proof in either \cite{askhabov2011nonlinear} or \cite[Chapter 19]{krasno}. \end{proof} \subsection{Functional spaces} To formulate an appropriate functional space in which the problem \eqref{asli} is well-posed, we first recall the definition of fractional-order Sobolev spaces. As usual, the standard Lebesgue spaces are denoted by $L^{p}\left( \Omega\right) $, and their norms by $\left\Vert \cdot\right\Vert _{L^{p}\left( \Omega\right) }$. For $p=2$, the scalar product is denoted by $\left( u,v\right) =\int_{\Omega}u(x){v(x)}\mathrm{d}x$, and the norm by $\left\Vert \cdot\right\Vert =\left( \cdot,\cdot\right) ^{1/2}$. Let $\lbrace \lambda_n\rbrace_{n\in \mathbb{N}} $ be the set of all eigenvalues of the following boundary-value problem \begin{align}\label{odesl} \begin{split} D^2u(x)&=-\lambda u(x), \quad x\in \Omega ,\\ \frac{du}{dx}(0)&=u(1)=0 , \end{split} \end{align} where $ \phi_n $ is an eigenfunction related to $ \lambda_n$ for $ n \in \mathbb{N} $. Now, for $ s \in \mathbb{R},$ a Hilbert scale $ H^s(\Omega)$ is defined based on $ \lbrace\phi_n\rbrace_{n\in \mathbb{N}} $ with the following scalar products and norms \begin{equation}\label{iner} (u,v)_{H^s(\Omega)}=\sum_{n=1}^{\infty}\lambda^s_n (u,\phi_n)(v,\phi_n),\quad u,v\in \text{span} \{\phi_n\}_{n\in \mathbb{N}}, \end{equation} and \[\Vert u\Vert_{H^s(\Omega)}=(u,u)^{\frac{1}{2}}_{H^s(\Omega)}.\] Let $\lambda_n:=\mu_n^2$; then $ \lbrace\mu_n^{-s}\phi_n\rbrace_{n\in \mathbb{N}} $ forms an orthonormal basis for $ H^s(\Omega). $ It is well-known that $\{ \phi_n \}_{n \in \mathbb{N}}$ is an orthonormal basis for $H^0(\Omega) =L^2{(\Omega)}$, so for $u_{n}=( u, \phi_n)$, we have $u=\sum_{n=1}^{\infty} u_{n} \phi_{n}$.
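As a numerical aside, the spectral definition above can be evaluated directly by truncating the eigenfunction expansion. The following minimal sketch (in Python) assumes the explicit eigenpairs $\mu_n=(n-\frac{1}{2})\pi$, $\phi_n(x)=\sqrt{2}\cos(\mu_n x)$, which follow from \eqref{odesl} but are not written out in the text:
\begin{verbatim}
# Minimal sketch: truncated spectral H^s norm from the eigenpairs of
# the boundary-value problem (odesl); the explicit eigenpairs used here
# (mu_n = (n - 1/2) pi, phi_n = sqrt(2) cos(mu_n x)) are our assumption.
import numpy as np

def hs_norm(u, s, n_modes=500, n_quad=20001):
    x = np.linspace(0.0, 1.0, n_quad)
    ux = u(x)
    norm2 = 0.0
    for n in range(1, n_modes + 1):
        mu = (n - 0.5) * np.pi
        phi = np.sqrt(2.0) * np.cos(mu * x)
        un = np.trapz(ux * phi, x)             # u_n = (u, phi_n)
        norm2 += (mu ** 2) ** s * un ** 2      # lambda_n^s * u_n^2
    return np.sqrt(norm2)

# u(x) = 1 - x satisfies u(1) = 0; its coefficients decay like mu_n^{-2},
# so the series converges for s < 3/2:
print(hs_norm(lambda x: 1.0 - x, s=0.75))
\end{verbatim}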
For any $s\geq 0$, the fractional order Sobolev space is defined through the spectral properties of the operator in \eqref{odesl} and the inner product \eqref{iner} as follows \begin{equation} H^{s}(\Omega):=\big\{ u \in L^{2}(\Omega) \mid \sum_{k=1}^{\infty} \lambda^{s}_{k} u^{2}_k < \infty \big \}; \end{equation} for more details, see \cite{antil2017note}. Another approach to defining the fractional Sobolev space uses the $L^p(\Omega)$ spaces together with the Slobodeckij semi-norm \cite{di2012hitchhiker}. For our purposes, it suffices to set $p=2$. Let $\left\lfloor s\right\rfloor $ denote the largest integer for which $\left\lfloor s\right\rfloor \leqslant s$, and define $\lambda\in\left[ 0,1\right[ $ by $s=\left\lfloor s \right\rfloor +\lambda$. For $s\in\mathbb{R}_{>0}\backslash\mathbb{N}$, we introduce the scalar product \begin{align} \left( \varphi,\psi\right) _{H^{s}(\Omega)} & :=\sum_{\alpha \leqslant\left\lfloor s\right\rfloor }{\left( D^{\alpha}\varphi,D^{\alpha }\psi\right) }\label{lscalar}\\ & +{\int_{\Omega}\int_{\Omega}{\frac{\left( D^{\left\lfloor s\right\rfloor}\varphi(x)-D^{\left\lfloor s\right\rfloor}\varphi(y)\right) {\left( D^{\left\lfloor s\right\rfloor}\psi(x)-D^{\left\lfloor s\right\rfloor}\psi(y)\right) }}% {|x-y|^{1+2\lambda}}\mathrm{d}x\mathrm{d}y}},\nonumber \end{align} and the norm $\Vert\varphi\Vert_{H^{s}(\Omega)}:=\left( \varphi ,\varphi\right) _{H^{s}(\Omega)}^{1/2}$. For $s\in\mathbb{N}$, the second term in \eqref{lscalar} is obviously omitted. Then, the Sobolev space $H^{s }\left( \Omega\right) $ is given by \[ H^{s}\left( \Omega\right) :=\left\{ u\in L^{2}(\Omega)\mid\forall~0\leq k\leq\left\lfloor s\right\rfloor \quad u^{\left( k\right) }\in L^{2}\left( \Omega\right) \text{\quad and\quad}\Vert u\Vert_{H^{s}% (\Omega)}<\infty\right\} . \] The dual space of $H^{s}(\Omega)$ is denoted by $H^{-s}(\Omega)$ and is equipped with the norm \begin{equation} \Vert u\Vert_{H^{-s}(\Omega)}:=\sup\limits_{v\in H^{s}(\Omega)}% \frac{(u,v)}{\Vert v\Vert_{H^{s}(\Omega)}}, \end{equation} where $(\cdot,\cdot)$ denotes the continuous extension of the $L^{2}$-scalar product to the duality pairing $\langle\cdot,{\cdot}\rangle$ in $H^{-s}(\Omega)\times H^{s}(\Omega)$. Let $ \tilde{H}^{s}(\Omega) $ be the set of functions in $ H^{s}(\Omega) $ whose extension by zero to the whole of $ \mathbb{R}$ remains in $H^s(\mathbb{R})$. This space can also be defined via the intermediate spaces of order $ s \in (0,1)$ given as \begin{eqnarray} H_{0}^s(\Omega)=[H_0^m(\Omega), H^{0}(\Omega)]_{\theta}, \quad m\in \mathbb{Z}, \end{eqnarray} where $ m(1-\theta)=s $ and $H^m_0(\Omega)$ denotes the closure of $ \mathcal{D}(\Omega)$ in $H^m (\Omega)$ \cite{lions2012non}. Here, $ \mathcal{D}(\Omega)$ stands for the set of all infinitely differentiable functions with compact support, equipped with the usual locally convex topology. Indeed, for $ \phi\in {H}_{0}^{s}(\Omega) $, $ \phi $ and its derivatives of order $ k\leq m $ have the compact support property. For $m=1$, we set $\tilde{H}^{s}(\Omega):= H_{0}^{s}(\Omega)$. Let $I_L$ (respectively $I_R$) denote the half line $(-\infty, b)$ (respectively $(a, \infty)$); then $\tilde{H}^{s}_{L}(\Omega)$ (respectively $\tilde{H}^{s}_{R}(\Omega)$) stands for the set of all $u \in {H}^s(\Omega)$ whose extension by zero, denoted by $\tilde{ u}$, belongs to ${H}^{s}(I_L)$ (respectively ${H}^{s}(I_R)$) \cite{adams, jin2015}. \begin{theorem}\label{thm1} (\cite{jin2015}) Assume that $ n-1<s<n$, for $n\in \mathbb{N}$.
The operators $ ^{R}_{0}D_x^s u $ and $ ^{R}_{x}D_1^s u $ for $ u\in \mathcal{D}(\Omega)$ can be extended continuously to operators with the same notation from $ \tilde{H}^s(\Omega) $ to $ L^2(\Omega)$, i.e., \begin{equation} \Vert ^{R}_{0}D_x^s u \Vert_{L^2(\mathbb{R})}\leq c\Vert u\Vert_{\tilde{H}^s(\Omega)}, \end{equation} and \begin{equation} \Vert ^{R}_{x}D_1^s u \Vert_{L^2(\mathbb{R})}\leq c\Vert u\Vert_{\tilde{H}^s(\Omega)}. \end{equation} \end{theorem} The following theorem collects some useful properties of the fractional differential and integral operators. \begin{theorem}\label{taf} The following statements hold: \begin{itemize} \item[a)] The integral operators $_{0}I_x^s$ and $_{x}I_1^s$ satisfy the semi-group property. \item[b)] For $\phi,\psi\in L^2(\Omega)$, $(_{0}I_x^s\phi, \psi)=(\phi,\, _{x}I_1^s\psi)$. \item[c)] For any $ s>0, $ the function $ x^{s}\in {H}^\alpha(\Omega),$ where $ 0\leq\alpha <s+\frac{1}{2}. $ \item[d)] For any non-negative $ \alpha, \gamma,$ the Riemann-Liouville integral operator $ I^\alpha $ is a bounded map from $ \tilde{H}^{\gamma}(\Omega) $ into $ \tilde{H}^{\gamma+\alpha}(\Omega). $ \item[e)] The operators $ _{0}^{R}D_{x}^{s}:\tilde{H}^s_L(\Omega)\rightarrow L^2(\Omega)$ and $ _{x}^{R}D_{1}^{s}:\tilde{H}^s_R(\Omega)\rightarrow L^2(\Omega)$ are continuous. \item[f)] For any $ s\in (0,1) $ and $ u\in \tilde{H}^1_R(\Omega) $, $ _{0}^{R}D_{x}^{s}u=\,_{0}I_{x}^{1-s} u^\prime$. Meanwhile, if $ u\in \tilde{H}^1_L(\Omega) $, then $ _{x}^{R}D_{1}^{s}u=- \,_{x}I_{1}^{1-s} u^\prime$. \end{itemize} \end{theorem} \begin{proof} The first item is established in \cite[Theorem 2.4]{kilbas2006theory}. Item (b) follows from Fubini's theorem \cite[Lemma 2.7]{kilbas2006theory}. The proofs of the other items can be found in \cite{jin2015}. \end{proof} \subsubsection{Nonlinear functions on Orlicz spaces} Throughout this paper, we require some important properties of the nonlinear part of Eq. (\ref{asli}). A suitable functional space for dealing with monotone operators with nonlinear terms is an Orlicz space or, more generally, a generalized Orlicz space, called a Musielak-Orlicz space \cite{ adams, bardaro2008nonlinear}. We recall some necessary definitions and properties related to these spaces. \begin{definition} A function $A: \mathbb{R}\rightarrow \left[0,\infty\right]$ is termed an $N$-function if \begin{itemize} \item[a)] it is even and convex; \item[b)] $A(t)=0$ if and only if $t=0$; \item[c)] $\lim_{t\rightarrow 0} \frac{A(t)}{t} = 0$, $\lim_{t\rightarrow \infty} \frac{A(t)}{t} =\infty$. \end{itemize} \end{definition} For more details on $N$-functions, see \cite[Chapter VIII]{adams}, \cite[Chapter I]{rao} and \cite[Appendix]{warma}. \begin{definition}(\cite[Chapter II]{mendez2019analysis}) Let $(\Omega, \Sigma, \mu)$ be a measure space such that $\mu$ is $\sigma$-finite and complete. We say that a function $\varphi : \Omega \times \mathbb{R} \rightarrow \left[ 0, \infty\right]$ is a Musielak-Orlicz function on $\Omega$ if \begin{itemize} \item[a)] $\varphi(.,t)$ is measurable for all $t\in \left[ 0, \infty\right]$; \item[b)] for $\text{ a.e.}~ x\in \Omega$, $\varphi(x,.)$ is non-trivial, even, and convex; \item[c)] $\varphi(x,.)$ vanishes and is continuous at $0$ for $\text{ a.e.}~ x\in \Omega$; \item[d)] $\varphi(x,t)>0$, $\forall t>0$, and $\lim_{t \rightarrow \infty} \varphi(x,t) = \infty$.
\end{itemize} \end{definition} In this paper, $\Sigma$ is the $\sigma$-algebra of subsets of $\Omega$ and $\mu$ denotes the Lebesgue measure on $\Sigma$. The complementary Musielak-Orlicz function $\tilde{\varphi}$ is defined by \[ \tilde{\varphi}(x,t) : = \sup\big\{ s\vert t\vert - \varphi(x,s)~ |~ s>0 \big\}, \] which coincides with the complementary of the Young function (a more general class of $N$-functions) defined in \cite{warma, rao}. \noindent \textbf{Assumption A.} (\cite{antil2017note}) Assume that a nonlinear function $ g(x,t):\Omega \times \mathbb{R} \rightarrow \mathbb{R} $ satisfies the following properties \[ \left\{ \! \begin{aligned} &g(x,.) \: \text{continuous, odd, strictly monotone,} \quad & \text{a.e. on } \Omega,\\ &g(x,0)=0, \quad \lim\limits_{t\rightarrow \infty} g(x,t)=\infty, & \text{a.e. on } \Omega,\\ &g(.,t) \:\text{is measurable}, & \forall t \in \mathbb{R}. \end{aligned} \right. \] Note that the inverse of the function $ g(x,.) $ exists due to the strict monotonicity; let us denote it by $\tilde{g}(x,.).$ We define $G(x,t)$ and $ \tilde{G}(x,t)$ by \[G(x,t):=\int_{0}^{\vert t\vert }g(x,s)\mathrm{d}s,\quad \quad \tilde{G}(x,t):=\int_{0}^{\vert t\vert }\tilde{g}(x,s)\mathrm{d}s.\] These functions are complementary Musielak-Orlicz functions that are ${N} $-functions with respect to the second variable. \begin{definition}\label{def1} Let $ G(x,.) $ be an ${N}$-function. We say this function satisfies the global ($ \Delta_2 $)-condition if there exists a constant $ c\in(0,1] $ such that for $ \text{ a.e.} ~x\in \Omega $ and for all $ t\geq 0 $ \[c\, t\, g(x,t)\leq G(x,t)\leq t\,g(x,t),\] where the function $ g(x,t) $ satisfies Assumption A. \end{definition} Let $\mathcal{M}(\Omega)$ represent all real-valued measurable functions defined on $\Omega$; then the Musielak-Orlicz space generated by $G$ is defined as \[L_G(\Omega):=\Big\lbrace u \in \mathcal{M}(\Omega) \mid G(.,u(.))\in L^1(\Omega)\Big\rbrace,\] which means that the modular \begin{equation*} \begin{split} \rho_G&: \mathcal{M}(\Omega) \rightarrow \left[0, \infty\right],\\ \rho_{G} (u)&:= \int_{\Omega}G(t,u(t))\mathrm{d}t, \end{split} \end{equation*} is finite \cite{bardaro2008nonlinear}. If $G(x,.)$ and $\tilde{G}(x,.)$ satisfy Definition \ref{def1}, then by Theorem 8.9 in \cite{adams} it is concluded that $L_G(\Omega)$, equipped with the Luxemburg norm \[\Vert u\Vert_{G,\Omega}:=\inf\Big\lbrace m>0 \mid \rho_{G}(\frac{u}{m})\leq 1\Big\rbrace,\] is a reflexive Banach space. The same result holds for the space $ L_{\tilde{G}}(\Omega)$ with the norm $\Vert .\Vert_{\tilde{G},\Omega}$. Moreover, in our analysis we need the generalized H\"{o}lder inequality given by \begin{equation} \Big\vert \int_\Omega u(t)v(t) \mathrm{d}t \Big\vert \leq 2\Vert u\Vert_{G,\Omega} \Vert v \Vert_{\tilde{G},\Omega}, \quad \forall u \in L_{G}(\Omega), ~~ \forall v\in L_{\tilde{G}}(\Omega); \end{equation} proofs of this relation can be found in \cite[Chapter 3]{function} and \cite{adams}. Furthermore, the following important result \begin{equation}\label{inf} \lim\limits_{\Vert u\Vert_{G, \Omega}\rightarrow \infty} \dfrac{\rho_{G}(u)}{\Vert u\Vert_{G, \Omega}}=\infty, \end{equation} obtained in \cite{warma}, plays a significant role in the applicability of the monotone operator theorems for our purpose.
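As an illustration, the Luxemburg norm can be computed numerically by exploiting the monotonicity of $m\mapsto\rho_G(u/m)$. The following minimal sketch (in Python) uses the hypothetical model $N$-function $G(x,t)=|t|^p/p$, which satisfies the ($\Delta_2$)-condition; any other Musielak-Orlicz function could be substituted:
\begin{verbatim}
# Minimal sketch: Luxemburg norm by bisection.  G(x,t) = |t|^p / p is a
# hypothetical model N-function; the bisection uses the fact that the
# modular rho_G(u/m) is decreasing in m.
import numpy as np

def luxemburg_norm(u, G, n_quad=4001, tol=1e-10):
    x = np.linspace(0.0, 1.0, n_quad)
    rho = lambda m: np.trapz(G(x, u(x) / m), x)   # modular rho_G(u/m)
    lo, hi = tol, 1.0
    while rho(hi) > 1.0:                          # bracket the infimum
        hi *= 2.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if rho(mid) > 1.0 else (lo, mid)
    return hi

p = 3.0
G = lambda x, t: np.abs(t) ** p / p
# consistency check: for this G the norm equals p^(-1/p)*||u||_{L^p},
# i.e. (1/12)^(1/3) ~ 0.437 for u(x) = x
print(luxemburg_norm(lambda x: x, G))
\end{verbatim}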
\begin{lemma}\label{lem1}(\cite{warma}) If $ g(.,u(.)) $ satisfies the ($ \Delta_2 $)-condition, then for all $u \in L_{G}(\Omega)$ we have $ g(.,u(.))\in L_{\tilde{G}}(\Omega)$. \end{lemma} Now, we are ready to define the function space appropriate for our problem. \begin{definition}\label{defv} Let $1< s<2$. Under Assumption A and the ($ \Delta_2 $)-condition of Definition \ref{def1}, consider the following reflexive Banach space $ U $: \[U:=U(\Omega, G):=\lbrace \phi \in \tilde{H}^\frac{s}{2}(\Omega) \mid G(.,\phi)\in L^1(\Omega) \rbrace,\] equipped with the norm \begin{equation}\label{normv} \Vert u\Vert_U:=\Vert u\Vert_{\tilde{H}^\frac{s}{2}(\Omega)}+\Vert u\Vert_{{ G,\Omega}}. \end{equation} \end{definition} We state the following lemma from \cite{antil2017note}, which is important in the investigation of the regularity of the solution. \begin{lemma}\label{lem0} Let $0 \leq s < 1$ and assume that for every $M>0$ there exists a constant $l_{M}$ such that $g$ satisfies \begin{equation}\label{lip} \vert g(x,u_{1}) -g(x,u_{2}) \vert\leq l_{M} \vert u_{1}- u_{2} \vert, \quad x \in \Omega,\quad u_{i}\in \mathbb{R} \ \text{with} \ \vert u_{i} \vert \leq M. \end{equation} Then, for $u \in H^{s}(\Omega)\cap L^{\infty}(\Omega) $, we have $g(.,u(.)) \in H^{s}(\Omega)$. \end{lemma} \section{Variational formulation and regularity}\label{var} In this section, we introduce an appropriate variational formulation to overcome the difficulty of dealing with the nonlinear and fractional terms of the main problem, for the Riemann-Liouville and the Caputo derivatives separately. The non-local variational problems possess reduced-order smoothing properties, which are investigated in this section. \subsection{The Riemann-Liouville fractional operator} The appropriate variational formulation of the problem \eqref{asli} in the linear case (with $ g(x,u)=0 $), introduced in \cite{jin2015}, is: Find $ u\in U=\tilde{H}^{\frac{s}{2}}(\Omega) $ such that \begin{equation} \mathcal{A}(u,v)=(f,v), \quad v\in V=U, \end{equation} where \begin{equation} \mathcal{A}(u,v):=-(_{0}^{R}D^\frac{s}{2}_x u\, ,\, _{x}^{R}D^\frac{s}{2}_1 v), \end{equation} and $f\in L^2(\Omega)$. Considering the above form for the fractional part has the advantage of making available the properties indicated in Theorem \ref{taf} for the approximation procedure. Hence, for the nonlinear problem \eqref{asli}, the weak formulation is stated as follows: Find $u \in U$ satisfying \begin{equation}\label{weak} \begin{split} \mathcal{L}(u,v)&:= \mathcal{A}(u,v)+ \mathcal{B}(u,v)=\langle f,v\rangle:=F(v),\quad \quad v \in U, \end{split} \end{equation} where $f \in U^* $, $ \mathcal{B}(u,v):=(g(x,u),v) $, and $ U^* $ is the dual space of the reflexive Banach space $ U$ introduced in Definition \ref{defv}; the duality pairing is denoted by $\langle . , . \rangle$.
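On power functions, the fractional integrals and half-order derivatives entering $\mathcal{A}$ admit elementary closed forms through the classical power rule $_{0}I^{s}_{x}\, t^{\mu}=\frac{\Gamma(\mu+1)}{\Gamma(\mu+1+s)}\,x^{\mu+s}$. The following minimal sketch (in Python; this is standard fractional calculus, not specific to this paper) checks the rule against direct quadrature of the definition \eqref{eq1}:
\begin{verbatim}
# Minimal sketch: sanity check of the Riemann-Liouville power rule
# I^alpha t^mu = Gamma(mu+1)/Gamma(mu+1+alpha) * x^(mu+alpha).
from math import gamma
from scipy.integrate import quad

def frac_integral(u, alpha, x):
    # (I^alpha u)(x) = 1/Gamma(alpha) * int_0^x (x-t)^(alpha-1) u(t) dt;
    # the algebraic endpoint singularity is handled by quad's 'alg' weight
    val, _ = quad(u, 0.0, x, weight='alg', wvar=(0.0, alpha - 1.0))
    return val / gamma(alpha)

alpha, mu, x = 0.75, 2.0, 0.6
numeric = frac_integral(lambda t: t ** mu, alpha, x)
closed = gamma(mu + 1.0) / gamma(mu + 1.0 + alpha) * x ** (mu + alpha)
print(numeric, closed)   # the two values agree to quadrature accuracy
\end{verbatim}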
\begin{theorem}\label{thmd} Let $ 1<s<2 $. For $ u\in \tilde{H}^{\frac{s}{2}}(\Omega),$ the operator $ \mathcal{A} $ is coercive and monotone, i.e., \[ \exists ~c>0 \quad \text{such that}\quad \mathcal{A}(u,u)\geq c\Vert u \Vert^2_{\tilde{H}^{\frac{s}{2}}(\Omega)}.\] \end{theorem} \begin{proof} It is easily verified that \[\mathcal{A}(u,u)=-(_{0}^{R}D^s_xu,u).\] Let us consider $ Su(x):= -_{0}^{R}D^s_xu$ and borrow the notation $ S^\varepsilon u $ for $ \varepsilon>0 $ from \cite{eggermont88} as follows \[S^\varepsilon u(x)=\frac{-1}{\Gamma(2-s)}\frac{\mathrm{d^2}}{\mathrm{d}x^2}\int_{0}^{x}(x-t)^{1-s}e^{-\varepsilon(x-t)}u(t)\mathrm{d}t, \quad x>0.\] Using the Plancherel theorem, we get \begin{equation} \label{sh} (S^{\varepsilon} u,u)=\int_{-\infty}^{\infty}(S^\varepsilon u\hat{)}(w) \overline{\hat{u}(w)}\mathrm{d}w, \end{equation} where the notation $\,\hat{}\,$ refers to the Fourier transform. Let us introduce \[ \hat{a}_\varepsilon(w)= w^2 \int_{0}^{\infty}x^{1-s} e^{-(\varepsilon+\text{i} w)x}\mathrm{d}x; \] therefore, $(S^\varepsilon u\hat{)}(w)=\hat{u}(w)\hat{a}_\varepsilon(w)$. Now, taking the principal value of the power function, we write $\hat{a}_\varepsilon(w)=w^2(\varepsilon+\text{i} w)^{s-2}$. Hence, \[ \text{Re}\,\hat{a}_\varepsilon(w)>\text{Re}\, \hat{a}_0 (w)=\cos(\dfrac{\pi(2-s)}{2})\vert w\vert^{s}. \] From Eq. \eqref{sh} and the above relation, we have \begin{equation}\label{shat} \begin{split} \text{Re}(u,S^\varepsilon u)&=\text{Re}(\int_{-\infty}^{\infty}\vert \hat{u}(w)\vert^2 \hat{a}_\varepsilon(w) \mathrm{d}w)\\ &\geq\cos(\dfrac{\pi(2-s)}{2})\int_{-\infty}^{\infty}\vert \hat{u}(w)\vert^2 \vert w\vert^{s} \mathrm{d}w. \end{split} \end{equation} From the contradiction argument investigated in \cite[Lemma 4.2]{jin2015}, we conclude the coercivity of the operator $ \mathcal{A}. $ From this result, one can deduce the strict monotonicity of the operator $ \mathcal{A} $ in the sense of Definition \ref{def2}, i.e., \[\mathcal{A}(u,u-v)-\mathcal{A}(v,u-v)>0, \quad u\neq v. \] \end{proof} \begin{remark} It should be noticed that the results of Theorem \ref{thmd} are well-known \cite{jin2015}; to make the paper self-contained, we have provided another proof based on the results for Abel's integral operator in \cite{eggermont88}. \end{remark} The next theorem guarantees the existence of a unique weak solution of Eq. \eqref{asli} with the Riemann-Liouville derivative. \begin{theorem}\label{thm8} Suppose that Assumption A holds for the function $ g(x,u) $, which also satisfies the global ($ \Delta_2 $)-condition. Consider the variational form \eqref{weak}; then, for every $ f\in U^* $ and $ 1<s<2 $, Eq. \eqref{asli} has a unique weak solution. \end{theorem} \begin{proof} Let $ u\in U $ be fixed. By Lemma \ref{lem1}, $g(.,u(.))\in L_{\tilde{G}}(\Omega).$ Now, using the generalized H\"{o}lder inequality, we obtain \begin{equation} \vert \mathcal{L}(u,v)\vert\leq \Vert _{0}^{R}D^\frac{s}{2}_x u\Vert_{L^2(\Omega)}\Vert_{x}^{R}D^\frac{s}{2}_1 v\Vert_{L^2(\Omega)}+2\Vert g(.,u)\Vert_{\tilde{G},\Omega} \Vert v\Vert_U. \end{equation} Applying Theorem \ref{thm1} and Lemma \ref{lem1}, the above inequality can be simplified as \begin{equation} \vert \mathcal{L}(u,v)\vert\leq \big(\Vert _{0}^{R}D^\frac{s}{2}_x u\Vert_{L^2(\Omega)}+2\Vert g(.,u)\Vert_{\tilde{G},\Omega}\big) \Vert v\Vert_U, \end{equation} which means that the operator $ \mathcal{L} $ is bounded.
Since $ \mathcal{L}(u,.) $ is linear with respect to the second variable, $ \mathcal{L}(u,.)\in U^*$ for all $ u\in U $. Due to the strict monotonicity of $ g(x,.) $ and Theorem \ref{thmd}, one can conclude the strict monotonicity of the operator $ {\mathcal{L}} $. With regard to the previous theorem, we have the coercivity of the operator $ \mathcal{A} $. Under the assumption on the function $ g(x,.) $ and Eq. (\ref{inf}), we get \[\lim\limits_{\Vert u\Vert_{G, \Omega}\rightarrow \infty} \dfrac{\rho_{G}(u)}{\Vert u\Vert_{G, \Omega}}=\infty;\] therefore, $ \mathcal{L} $ is coercive. It remains to show the hemi-continuity of the nonlinear monotone operator $ \mathcal{L}. $ To this end, let $ u, w\in U$ and let the sequence $ t_n $ tend to zero. The objective is to show that $ \mathcal{L}(u+t_nw,v)$ tends to $\mathcal{L}(u,v).$ It is evident that $ (_{0}^{R}D^\frac{s}{2}_x (u+t_nw), _{x}^{R}D^\frac{s}{2}_1 v) $ converges to $ (_{0}^{R}D^\frac{s}{2}_x u,_{x}^{R}D^\frac{s}{2}_1v) $ as $ t_n$ tends to zero. By the continuity of the function $ g(x,.)$, the claim that the operator $ \mathcal{L} $ is hemi-continuous is verified. Consequently, the existence of a unique weak solution follows from the Browder-Minty Theorem \ref{thmm}. \end{proof} We proceed with the discussion of the regularity of the solution. To this end, the following theorem is stated. \begin{theorem}\label{thmr1} Consider Eq. \eqref{asli} with the Riemann-Liouville fractional derivative, where the function $ g(x,u) $ satisfies the Lipschitz condition \eqref{lip}. Then, this equation has a solution which fulfills the nonlinear Volterra-Fredholm integral equation of the form \begin{equation} u(x)=x^{s-1}\big(_{0}I^{s}_{x}(g(.,u(.))-f(.))\big)(1)-~_{0}I^{s}_{x}\big(g(.,u(.))-f(.)\big)(x). \end{equation} In addition, let $ u\in U$ be a weak solution of Eq. \eqref{asli}. Then, $ u\in \tilde{U} $, where $ \tilde{U}:=\big\lbrace u \in {H}^\alpha(\Omega) \cap \tilde{H}^{\frac{s}{2}}(\Omega) \mid G(.,u(.))\in L^1(\Omega) \big\rbrace $ for $ 0\leq \alpha < s-\frac{1}{2}.$ \end{theorem} \begin{proof} According to the argument on converting FDEs into integral equations in Chapter 5 of \cite{diethelm2010analysis}, and by adjusting the homogeneous Dirichlet boundary conditions, $ u(x) $ takes the following form \begin{equation} u(x)=w\,x^{s-1}-\dfrac{1}{\Gamma(s)}\int_0^x (x-y)^{s-1}\big(g(y,u(y))-f(y)\big)\mathrm{d}y, \end{equation} where $ w=\big(_{0}I^{s}_{x}(g(.,u(.))-f(.))\big)(1).$ Moreover, by means of Theorem \ref{taf} part ($ c $), we have $ x^{s-1}\in {H}^\alpha(\Omega)$ for $ 0\leq\alpha <s-\frac{1}{2}.$ On the other hand, Lemma \ref{lem0} and $ u\in U $ ensure that $ g(.,u(.))\in {H}^{\frac{s}{2}}(\Omega)$, which is a subset of $ L^2(\Omega)$. Hence, $ g(.,u(.))-f(.)\in L^2(\Omega) $. In addition, we conclude from part ($ d $) of Theorem \ref{taf} that $_{0}I^{s}_{x}(g(.,u(.))-f(.))\in \tilde{H}^{s}(\Omega)$. Consequently, since $u\in\tilde{H}^{\frac{s}{2}}(\Omega) $, one can deduce that $ u\in {H}^\alpha(\Omega) \cap \tilde{H}^{\frac{s}{2}}(\Omega) $ for $ 0\leq\alpha <s-\frac{1}{2}. $ \end{proof} \begin{remark} Due to the presence of the singular term $ x^{s-1}$, the best possible regularity of the solution of \eqref{asli} occurs in ${H}^\alpha(\Omega)$. A similar argument on the regularity of the linear form of Eq.
\eqref{asli}, reported in Theorem 4.4 and Remark 4.5 of \cite{jin2015}, verifies the above claim. In that work, ${H}^\alpha(\Omega),~ 0\leq\alpha <s-\frac{1}{2}$, is written as ${H}^{s-1+\alpha}(\Omega)$ with $1-\frac{s}{2}\leq\alpha <\frac{1}{2}$, to display the singular term more explicitly. \end{remark} \subsection{The Caputo fractional operator} As discussed in \cite{jin2015}, the difference between the variational formulations of the Caputo and the Riemann-Liouville equations lies in their admissible test spaces. That is, the variational formulation of Eq. \eqref{asli} with the Caputo derivative is: Find $u \in U$ such that \begin{equation}\label{weak2} \begin{split} \mathcal{L}(u,v)&:= \mathcal{A}(u,v)+ \mathcal{B}(u,v)=\langle f,v\rangle,\quad \,v \in V, \end{split} \end{equation} where \[U:=\lbrace\phi \in \tilde{H}^{\frac{s}{2}}(\Omega)\mid G(.,\phi(.))\in L^1(\Omega)\rbrace,\] and \begin{equation}\label{Vv} V:=\lbrace\phi \in \tilde{H}^{\frac{s}{2}}(\Omega)\mid (x^{1-s},\phi)=0 \rbrace. \end{equation} In order to motivate the test space $ V $ for the Caputo case, note that $ \phi^{*}(x)=(1-x)^{s-1}$ belongs to $ \tilde{H}^{\frac{s}{2}}(\Omega)$, and for any $\phi \in U$ we have $ \mathcal{A}(\phi,\phi^{*})=0.$ In the Caputo fractional derivative case, we set $V_h=\text{span} \big\lbrace \tilde{\phi}_i(x)=\phi_i(x)-\gamma_{i}(1-x)^{s-1}~|~ i=0, \dots, N\big\rbrace$,\\ where \begin{equation}\label{gammaha} \gamma_{i}=\dfrac{(x^{1-s},\phi_i(x))}{(x^{1-s},(1-x)^{s-1})}, \end{equation} and $ \phi_i\in U.$ We will elucidate this construction in the next section. Note that both Theorems \ref{thmd} and \ref{thm8} remain valid for the operator with the Caputo derivative, which has the same variational formulation as its Riemann-Liouville counterpart. Now, we discuss the regularity of the solution in the following theorem. \begin{theorem}\label{thmr2} Let us consider Eq. \eqref{asli} with the Caputo fractional derivative for $ 1<s<2 $, where the function $ g(x,u) $ satisfies the Lipschitz condition \eqref{lip} and $ f\in {H}^\alpha(\Omega) $ with $ \alpha + s \in (\frac{3}{2},2) $ and $ \alpha\in [0,\frac{1}{2}) $. Then, this equation has a solution which fulfills the following nonlinear Volterra-Fredholm integral equation \begin{equation} u(x)=\,x \,_{0}I^{s}_{x}\big(g(.,u(.))-f(.)\big)(1)-~ _{0}I^{s}_{x}\big(g(.,u(.))-f(.)\big)(x). \end{equation} In addition, let $ u\in U$ be a weak solution of Eq. \eqref{asli}. Then $ u\in \tilde{U} $, where $ \tilde{U}:=\Big\{ \phi \in {H}^{\alpha+s}(\Omega) \cap \tilde{H}^{\frac{s}{2}}(\Omega) \mid G(.,\phi(.))\in L^1(\Omega) \Big\} $. \end{theorem} \begin{proof} According to \cite[Theorem 6.43]{diethelm2010analysis}, $ u(x) $ has the following form \begin{equation} u(x)=w\,x-\dfrac{1}{\Gamma(s)}\int_0^x (x-y)^{s-1}\big(g(y,u(y))-f(y)\big)\mathrm{d}y, \end{equation} where $ w=\big(_{0}I^{s}_{x}(g(.,u(.))-f(.))\big)(1)$ is determined by adjusting the boundary condition. On the other hand, Lemma \ref{lem0} and $ u\in U $ imply that $ g(.,u(.))\in {H}^{\frac{s}{2}}(\Omega)$, which is a subset of ${H}^{\alpha}(\Omega)$. Therefore, $ g(.,u(.))-f(.)\in {H}^{\alpha}(\Omega) $. Hence, by part ($ d $) of Theorem \ref{taf}, we have $_{0}I^{s}_{x}(g(.,u(.))-f(.))\in {H}^{\alpha+s}(\Omega)$.
Moreover, by means of Theorem \ref{taf} part ($ c $), we have $ x\in \tilde{H}^\beta(\Omega) $ for $ 0\leq\beta <\frac{3}{2}.$ Consequently, from the inclusion argument and $u\in\tilde{H}^{\frac{s}{2}} (\Omega)$, we conclude that $ u\in {H}^{\alpha+s}(\Omega) \cap \tilde{H}^{\frac{s}{2}}(\Omega)$, where $ 0\leq\alpha <\frac{1}{2}. $ \end{proof} \begin{remark} As observed in Theorems \ref{thmr1} and \ref{thmr2}, owing to the intrinsic singular term $ x^{s-1} $ in the solution representation, the solution of the differential equation with the Riemann-Liouville derivative has less regularity than its Caputo counterpart. In fact, the best possible regularity in the Riemann-Liouville fractional derivative case belongs to $\tilde{H}^{s-1+\alpha}(\Omega)$. It is worth noting that the superiority of the Caputo case comes from the fact that the function under the Caputo derivative is supposed to be twice differentiable. \end{remark} In the following remark, the stability of the variational formulations is shown. \begin{remark}\label{stab} If we assume that $f \in \tilde{ H}^{-\frac{s}{2}}(\Omega) \hookrightarrow U^*$ and take $u=v$ in the relation \eqref{weak}, then by using the results of Theorem \ref{thmd} and the relation $\mathcal{B}(u,u) \geq 0$, we see that there is a constant $c>0$ such that $c \Vert u\Vert^2_{\tilde{ H}^{\frac{s}{2}}(\Omega)} \leq \mathcal{A}(u,u) + \mathcal{B}(u,u)$. Note that $ \langle f,u \rangle_{U^*, U} =\langle f,u \rangle_{\tilde{ H}^{-\frac{s}{2}}, \tilde{ H}^{\frac{s}{2}}} $, so we get \[ c \Vert u\Vert^2_{\tilde{ H}^{\frac{s}{2}}(\Omega)} \leq \vert \langle f,u \rangle \vert\leq \Vert f\Vert_{\tilde{ H}^{-\frac{s}{2}}(\Omega)} \Vert u \Vert_{\tilde{ H}^{\frac{s}{2}}(\Omega)}, \] which means that there is a constant $C>0$ such that $\Vert u\Vert_{\tilde{ H}^{\frac{s}{2}}(\Omega)} \leq C \Vert f\Vert_{\tilde{ H}^{-\frac{s}{2}}(\Omega)}$. \end{remark} \section{Finite element approximation}\label{fem} To find an approximate solution, we discretize the continuous problem \eqref{weak} by a Galerkin finite element method. To this end, a piecewise polynomial finite element space is introduced over the interval $ \Omega=[0, 1]$. Let us define $\mathbb{P}_{r}(\Omega)$ as the space of univariate polynomials of degree less than or equal to a positive integer $r$, and let $\chi_{h}$ be a uniform partition of $\Omega$, given by \begin{equation}\label{mesh} 0=x_{0}<x_{1}<\dots<x_{N-1}<x_{N}=1, \quad N \in \mathbb{N}, \end{equation} with mesh sizes $ h_{i}= x_{i}-x_{i-1} $. The set $\chi_{h}$ induces a mesh $\mathcal{T}_{h}=\{\tau_{i} \mid 1\leq i \leq N\}$ on $\Omega$, where $\tau_{i}= [x_{i-1}, x_{i}]$. The length of a subinterval $\tau\in\mathcal{T}_{h}$ is denoted by $h_{\tau}$ and the maximal mesh width by $h:=\max\left\{ h_{\tau}:\tau\in\mathcal{T}_{h}\right\} $. We choose the standard continuous piecewise polynomial function space of degree $r\in\mathbb{N}$ on $[0,1]$ defined by \begin{equation} S_{\mathcal{T}}^{r}(\Omega):= \{v\in C(\Omega):\left. v\right\vert _{\tau}\in\mathbb{P}_{r}\left( \tau\right) ,\forall\tau\in\mathcal{T}\}. \end{equation} The nodal points are given by \[ \mathcal{N}_{r}:= \left\{ \xi_{i,j}:=x_{i-1}+j\frac{x_{i}-x_{i-1}}{r}\quad1\leq i\leq N\text{, }0\leq j\leq r-1\right\} \cup\left\{ 1\right\}. \] We choose the usual standard Lagrange basis functions $b_{i,j}^{(r)}$ of $S_{\mathcal{T}}^{r}(\Omega)$.
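Anticipating the closed-form expressions \eqref{37} and \eqref{38} derived in the next paragraphs, the half-order Riemann-Liouville derivative of a linear hat function can be evaluated pointwise without quadrature. A minimal sketch (in Python; the helper names are ours) reads:
\begin{verbatim}
# Minimal sketch: left Riemann-Liouville derivative of order s/2 of the
# i-th linear hat function, using the closed form of Eq. (37) below.
import numpy as np
from math import gamma

def frac_deriv_hat(x, i, nodes, half_s):
    e = 1.0 - half_s
    pp = lambda a: np.maximum(a, 0.0) ** e        # (.)_+^(1 - s/2)
    h_i = nodes[i] - nodes[i - 1]
    h_ip1 = nodes[i + 1] - nodes[i]
    val = (pp(x - nodes[i - 1]) - pp(x - nodes[i])) / h_i \
        - (pp(x - nodes[i]) - pp(x - nodes[i + 1])) / h_ip1
    return val / gamma(2.0 - half_s)

nodes = np.linspace(0.0, 1.0, 9)                  # uniform mesh, N = 8
x = np.linspace(0.0, 1.0, 5)
print(frac_deriv_hat(x, i=3, nodes=nodes, half_s=0.75))   # s = 3/2
\end{verbatim}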
Now, with these piecewise functions, one can define the discrete admissible space, a subspace of $S^{r}_{\mathcal{T}}(\Omega)\cap H^{1}_{0}(\Omega)$, denoted by $A_{h}$. In particular, we focus on linear elements in the numerical experiments. Let $\mathcal{I}_{h}$ be the Lagrange interpolation operator mapping into $A_{h}$. For the Riemann-Liouville fractional derivative, we take both the finite element trial and test spaces equal to the space $A_{h}$ described above. In order to investigate the Caputo fractional derivative case, we consider the finite dimensional space $U_{h}= A_{h}$ as the trial space. In addition, to construct a suitable test space $V_{h}$, let $V_{h}=\text{span} \Big\lbrace \tilde{\phi}_i(x) ~|~ i=0, 1, \dots, N \Big\rbrace$, where \[ \tilde{\phi}_i(x)=\phi_i(x)-\gamma_{i}(1-x)^{s-1},\] and $\gamma_{i}$ is given by \eqref{gammaha}. Finally, the discrete variational formulation derived from \eqref{weak} and \eqref{weak2} is: Find $u_{h} \in A_{h}$ such that \begin{equation}\label{femd} \mathcal{L}(u_{h},v_{h})=F(v_{h}), \quad \forall v_{h}\in V_{h}. \end{equation} We notice that in the approximation procedure one can use property ($f$) of Theorem \ref{taf}, which means that for the computation of $_0^R D_{x}^{\frac{s}{2}}u_{h}$ one can utilize the relation \begin{equation}\label{37} \begin{split} _0^R D_{x}^{\frac{s}{2}} \phi_{i} = \,_0 I_{x}^{1-\frac{s}{2}} \phi_{i}'=&\frac{1}{\Gamma(1-\frac{s}{2})}\int_{0}^{x}(x-t)^{-\frac{s}{2}}\phi_{i}'(t)\mathrm{d}t\\ =&\frac{1}{\Gamma(1-\frac{s}{2})}\int_{0}^{x}(x-t)^{-\frac{s}{2}}\Big(\frac{\chi_{[x_{i-1},x_{i}]}(t)}{h_{i}}-\frac{\chi_{[x_{i},x_{i+1}]}(t)}{h_{i+1}} \Big)\mathrm{d}t\\ =&\frac{1}{\Gamma(2-\frac{s}{2})}\Big[h^{-1}_{i}\big((x-x_{i-1})_{+}^{1-\frac{s}{2}} -(x-x_{i})_{+}^{1-\frac{s}{2}} \big)\\ & - h^{-1}_{i+1}\big((x-x_{i})_{+}^{1-\frac{s}{2}} -(x-x_{i+1})_{+}^{1-\frac{s}{2}} \big)\Big], \end{split} \end{equation} where $a_{+}=\max\{a,0\}$ and the prefactor $\Gamma(2-\frac{s}{2})=(1-\frac{s}{2})\Gamma(1-\frac{s}{2})$ absorbs the factor $1-\frac{s}{2}$ produced by the integration. Analogously, for $_x^R D_{1}^{\frac{s}{2}}u_h $ we apply \begin{equation}\label{38} \begin{split} _x^R D_{1}^{\frac{s}{2}} \phi_{i} =- \,_x I_{1}^{1-\frac{s}{2}} \phi_{i}'=&- \frac{1}{\Gamma(1-\frac{s}{2})}\int_{x}^{1}(t-x)^{-\frac{s}{2}}\phi_{i}'(t)\mathrm{d}t\\ =&-\frac{1}{\Gamma(1-\frac{s}{2})}\int_{x}^{1}(t-x)^{-\frac{s}{2}}\Big(\frac{\chi_{[x_{i-1},x_{i}]}(t)}{h_{i}}-\frac{\chi_{[x_{i},x_{i+1}]}(t)}{h_{i+1}} \Big)\mathrm{d}t\\ =&\frac{1}{\Gamma(2-\frac{s}{2})}\Big[h^{-1}_{i+1}\big((x_{i+1}-x)_{+}^{1-\frac{s}{2}} -(x_{i}-x)_{+}^{1-\frac{s}{2}} \big)\\ & - h^{-1}_{i}\big((x_{i}-x)_{+}^{1-\frac{s}{2}} -(x_{i-1}-x)_{+}^{1-\frac{s}{2}} \big)\Big]. \end{split} \end{equation} Therefore, the term $\mathcal{A}(\phi_{i}, \phi_{j})= -\big( _0^R D_{x}^{\frac{s}{2}}\phi_{i},\, _x^R D_{1}^{\frac{s}{2}}\phi_{j} \big)$ can be derived by the above arguments in the Riemann-Liouville derivative case. For the Caputo fractional derivative, we have $\mathcal{A}(\phi_{i},\tilde{ \phi}_{j})= -\big( _0^R D_{x}^{\frac{s}{2}}\phi_{i},\, _x^R D_{1}^{\frac{s}{2}}\tilde{ \phi}_{j} \big)$, which can be written as \[ -\big( _0^R D_{x}^{\frac{s}{2}}\phi_{i}\, ,\, _x^R D_{1}^{\frac{s}{2}}\phi_{j} \big) + \gamma_{j} \big( _0^R D_{x}^{\frac{s}{2}}\phi_{i}\, ,\, _x^R D_{1}^{\frac{s}{2}}(1-x)^{s-1} \big).
\] The first term can be computed using the relations \eqref{37} and \eqref{38}, and the second term vanishes, because \begin{equation} \begin{split} \big( _0^R D_{x}^{\frac{s}{2}}\phi_{i}\, ,\, _x^R D_{1}^{\frac{s}{2}}(1-x)^{s-1}\big)& = \big(_0 I_{x}^{1-\frac{s}{2}}\phi'_{i}\, , \, _x^R D_{1}^{\frac{s}{2}}(1-x)^{s-1}\big)\\ &= c_{s} \big(\phi'_{i}\, , \, _x I_{1}^{1-\frac{s}{2}}(1-x)^{\frac{s}{2}-1}\big)\\ &= c_{s}\big(\phi'_{i}\, , \, 1\big)\\ & =0, \end{split} \end{equation} where $c_{s}$ is a constant depending on $s$ and the second equality uses property ($b$) of Theorem \ref{taf}. \subsection{Convergence analysis}\label{secconv} This section is devoted to the analysis of the approximate solution obtained in the previous section. To this end, we consider the existence and uniqueness of solutions of the discrete equation. In addition, we derive an appropriate a priori error bound. \begin{theorem} The discrete problem \eqref{femd} has a unique solution. \end{theorem} \begin{proof} The existence of the discrete solution can be verified through the Browder-Minty theorem with the same argument pursued in Section \ref{var}. For the uniqueness, let $u_1$ and $ u_2 $ be finite element solutions of \eqref{weak}. Hence, \begin{equation}\label{as} \begin{split} 0= \mathcal{L}(u_1,v_h)- \mathcal{L}(u_2,v_h)&=\mathcal{A}(u_1,v_h)-\mathcal{A}(u_2,v_h)+\mathcal{B}(u_1,v_h)-\mathcal{B}(u_2,v_h)\\ =&-\int_{\Omega}\,_{0}^{R}D_{x}^{\frac{s}{2}} (u_1-u_2)(x)\,_{x}^{R}D_{1}^{\frac{s}{2}}v_h(x)\mathrm{d}x\\ &+\int_{\Omega} \big(g(x,u_1(x))-g(x,u_2(x))\big)v_{h}(x)\mathrm{d}x. \end{split} \end{equation} Since the operator $ \mathcal{A}(u,v) $ is coercive, for $ v_h=u_1-u_2 $ we get \[- \int_{\Omega}\, _{0}^{R}D_{x}^{\frac{s}{2}} (u_1-u_2)(x) \, _{x}^{R}D_{1}^{\frac{s}{2}}(u_1-u_2)(x)\mathrm{d}x\geq c\Vert u_1-u_2 \Vert^2_{\tilde{H}^{\frac{s}{2}}(\Omega)}. \] Using the above relation and Eq. \eqref{as}, one can conclude that \begin{align*} c\Vert u_1-u_2 \Vert^2_{\tilde{H}^{\frac{s}{2}}(\Omega)}&+\int_{\Omega} \big(g(x,u_1(x))-g(x,u_2(x))\big)(u_1(x)-u_2(x))\mathrm{d}x\\ &\leq - \int_{\Omega}\, _{0}^{R}D_{x}^{\frac{s}{2}} (u_1-u_2)(x) \, _{x}^{R}D_{1}^{\frac{s}{2}}(u_1-u_2)(x)\mathrm{d}x\\ &+\int_{\Omega} (g(x,u_1(x))-g(x,u_2(x)))(u_1(x)-u_2(x))\mathrm{d}x\\ &= 0. \end{align*} Since $ g(x,u) $ is monotone with respect to the second variable, both terms on the left-hand side are non-negative, and the above inequality yields the uniqueness of the approximate solution. \end{proof} \begin{lemma}\label{lemcars} Let $ \mathcal{T}_h $ be a uniform mesh on $ \Omega. $ For real numbers $ s, \, m $ with $ m\geq \frac{s}{2} $, and for $S_\mathcal{T}^r (\Omega)$ with an integer $ r\geq 0 $, define $ \hat{r}=\min\lbrace r+1,m\rbrace-\frac{s}{2} $. Then, there is a constant $ c>0 $ depending on $ s, ~m ,~ r $ and $ \mathcal{T}_h $ such that \begin{equation}\label{intererror} \min\limits_{v_h \in S_\mathcal{T}^r(\Omega)} \Vert u-v_h \Vert_{H^{\frac{s}{2}}(\Omega)}\leq ch^{\hat{r}}\Vert u\Vert_{H^m(\Omega)}, \end{equation} for all $ u\in H^{\frac{s}{2}}(\Omega)\cap H^m(\Omega).$ In particular, for $ r=1 $ and $\phi \in H^{\frac{s}{2}}(\Omega)\cap H^\gamma(\Omega) $, where $ \gamma=\min \lbrace 2,m \rbrace$ and $ \hat{r}=\gamma-\frac{s}{2}$, one can deduce that \begin{equation}\label{wd} \min\limits_{v_h \in U_h} \Vert \phi-v_h \Vert_{H^{\frac{s}{2}}(\Omega)}\leq ch^{\gamma-\frac{s}{2}}\Vert \phi \Vert_{H^{\gamma}(\Omega)}.
\end{equation} Moreover, if $ \phi\in H^{\gamma}(\Omega)\cap V$, where $ V $ is defined in \eqref{Vv}, the following relation holds: \begin{equation}\label{ws} \min\limits_{v_h \in V_h} \Vert \phi-v_h \Vert_{H^{\frac{s}{2}}(\Omega)}\leq ch^{\gamma-\frac{s}{2}}\Vert \phi\Vert_{H^\gamma(\Omega)}. \end{equation} \end{lemma} \begin{proof} The relation \[ \inf_{v\in U_{h}}\Vert u-v\Vert_{H^{\alpha}(\Omega)}\leq \Vert u- \mathcal{I}_{h}u\Vert_{H^{\alpha}(\Omega)}, \quad 0\leq \alpha \leq 1, \] together with the standard argument for the error estimation of Lagrange finite elements in integer-order Sobolev spaces, leads to the interpolation error bound in the intermediate spaces \eqref{intererror} and to the special cases \eqref{wd} and \eqref{ws}; for more details, see \cite{carstensen, jin2015}. \end{proof} \begin{theorem}\label{thml} Assume that $ u $ is the exact solution of Eq. \eqref{asli} and $ u_h $ is the approximate solution of the variational formulation \eqref{weak} or \eqref{weak2}. Then \begin{equation} \Vert u-u_h\Vert_{H^{\frac{s}{2}}(\Omega)}\leq Ch^{\gamma-\frac{s}{2}} \Vert u\Vert_{H^\gamma(\Omega)}, \end{equation} where $ \gamma $ differs between the nonlinear boundary value problems with the Caputo and the Riemann-Liouville fractional derivatives: for the Caputo differential operator, $ \gamma $ is equal to $ s $, while $ \gamma $ belongs to the interval $[\frac{s}{2}, s-\frac{1}{2}]$ for the Riemann-Liouville counterpart. \end{theorem} \begin{proof} Let $ u_h\in U_h $ be the finite element solution of Eq. \eqref{asli}, which satisfies \[ \mathcal{A}(u_h,v_h)+\mathcal{B}(u_h,v_h)=\langle f,v_h\rangle, \quad \quad v_h\in V_h.\] Subtracting this equation from Eq. \eqref{weak}, we get \begin{equation}\label{Au} \mathcal{A}(u,v)-\mathcal{A}(u_h,v_h)+\mathcal{B}(u,v)-\mathcal{B}(u_h,v_h)=\langle f,v\rangle-\langle f,v_h\rangle. \end{equation} Consider the projection operator $ \mathcal{P}_h:H^{\frac{s}{2}}(\Omega)\rightarrow U_h $ defined by \begin{equation}\label{orr} \mathcal{A}(u,v_h)=\mathcal{A}(\mathcal{P}_hu,v_h). \end{equation} Now, by adding and subtracting $\mathcal{P}_hu$, we have \begin{equation}\label{xi}u-u_h=(u-\mathcal{P}_hu)+(\mathcal{P}_hu-u_h):=\xi+\eta. \end{equation} Then, Eq. \eqref{Au} can be rewritten as \begin{equation} \mathcal{A}(u,v)-\mathcal{A}(\mathcal{P}_hu,v_h)+\mathcal{A}(\mathcal{P}_hu,v_h)-\mathcal{A}(u_h,v_h)+\mathcal{B}(u,v)-\mathcal{B}(u_h,v_h)=\langle f,v\rangle-\langle f,v_h\rangle; \end{equation} therefore, from Eq. \eqref{orr} and setting $ v=v_h $, we get \begin{equation} \mathcal{A}(\mathcal{P}_hu,v_h)-\mathcal{A}(u_h,v_h)+\mathcal{B}(u,v_h)-\mathcal{B}(u_h,v_h)=0, \end{equation} or, by the bilinearity of the operator $ \mathcal{A} $, \begin{equation} \mathcal{A}(\mathcal{P}_hu-u_h,v_h)+\mathcal{B}(u,v_h)-\mathcal{B}(u_h,v_h)=0. \end{equation} Letting $ v_h=\eta $, we have \begin{equation}\label{la} \mathcal{A}(\eta,\eta)=\mathcal{B}(u_h,\eta)-\mathcal{B}(u,\eta). \end{equation} Next, by the coercivity of $\mathcal{A} $, there is a constant $ c_0 $ such that \begin{equation}\label{coer} \mathcal{A}(\eta, \eta)\geq c_0\Vert \eta \Vert^2_{\tilde{H}^{\frac{s}{2}}(\Omega)}.
\end{equation} On the other hand, \begin{equation}\label{ahoo} \begin{split} \vert\mathcal{B}(u_h,\eta)-\mathcal{B}(u,\eta)\vert &=\vert\big(g(.,u)-g(.,u_h), \eta\big)\vert\\ &=\vert\Big(g(.,u)-g(.,\mathcal{P}_hu)+g(.,\mathcal{P}_hu)-g(.,u_h), \eta\Big)\vert\\ &\leq \Vert g(.,u)-g(.,\mathcal{P}_hu)\Vert \Vert \eta \Vert+\Vert g(.,\mathcal{P}_hu)-g(.,u_h)\Vert\Vert \eta \Vert\\ &\leq l_M\Vert \eta \Vert_{H^{\frac{s}{2}}(\Omega)}(\Vert \xi \Vert_{H^{\frac{s}{2}}(\Omega)} +\Vert \eta \Vert_{H^{\frac{s}{2}}(\Omega)}), \end{split} \end{equation} where $ l_M $ is the Lipschitz constant in \eqref{lip}. If $ \eta\neq 0 $ and $ c_{0}>l_M, $ then one can deduce from \eqref{coer} and \eqref{ahoo} that \begin{equation}\label{eta} \Vert \eta\Vert_{H^{\frac{s}{2}}(\Omega)} \leq \frac{l_M}{c_{0}-l_M}\Vert \xi\Vert_{H^{\frac{s}{2}}(\Omega)}. \end{equation} Consequently, by Eqs. \eqref{xi}, \eqref{eta} and Lemma \ref{lemcars}, we get \begin{equation} \Vert u-u_h \Vert_{H^{\frac{s}{2}}(\Omega)} \leq C h^{\gamma-\frac{s}{2}}\Vert u\Vert_{H^\gamma(\Omega)}. \end{equation} Clearly, $ \gamma $ differs according to the regularity available in each derivative case. Using the regularity Theorems \ref{thmr1} and \ref{thmr2} for $f \in L^2(\Omega)$, one deduces that $ \gamma \in [\frac{s}{2}, s-\frac{1}{2}]$ for the Riemann-Liouville fractional derivative operator, and $ \gamma=s$ for the Caputo one. \end{proof} \begin{remark}\label{reml} Let $f\in H^{\alpha}(\Omega)$, where $\alpha$ varies in the interval $\left[0,\frac{1}{2}\right)$ and $\alpha +s>\frac{3}{2}$. If $\alpha +s<2$, then Theorem \ref{thmr2} yields that the $\gamma$ arising in the error estimate for the Caputo fractional operator is equal to $ \alpha +s$, which means that $\Vert u-u_h\Vert_{H^{\frac{s}{2}}(\Omega)}=\mathcal{O}(h^{\alpha+\frac{s}{2}})$. Otherwise, $\gamma=2$ and $\Vert u-u_h\Vert_{H^{\frac{s}{2}}(\Omega)}=\mathcal{O}(h^{2-\frac{s}{2}})$. \end{remark} We finalize this section with a remark on the stability of the Galerkin finite element method. \begin{remark} By means of the triangle inequality, it is easily seen that \begin{equation*} \Vert u_h\Vert_{H^{\frac{s}{2}}(\Omega)}\leq \Vert u-u_h\Vert_{H^{\frac{s}{2}}(\Omega)}+\Vert u\Vert_{H^{\frac{s}{2}}(\Omega)}. \end{equation*} Using Theorem \ref{thml}, we obtain \begin{equation} \Vert u_h\Vert_{H^{\frac{s}{2}}(\Omega)}\leq Ch^{\gamma-\frac{s}{2}}\Vert u\Vert_{H^{\gamma}(\Omega)}+\Vert u\Vert_{H^{\frac{s}{2}}(\Omega)}, \end{equation} where $C$ is a constant and $ \gamma $ is as in Theorem \ref{thml}: $ \gamma=s $ for the Caputo differential operator and $ \gamma \in [\frac{s}{2}, s-\frac{1}{2}]$ in the Riemann-Liouville case. Therefore, we get an upper bound for the approximate solution in the case of the Caputo fractional derivative \begin{equation} \Vert u_h\Vert_{H^{\frac{s}{2}}(\Omega)}\leq Ch^{\frac{s}{2}}\Vert u\Vert_{H^{s}(\Omega)}+\Vert u\Vert_{H^{\frac{s}{2}}(\Omega)}, \end{equation} and the following one in the case of the Riemann-Liouville fractional derivative operator \begin{equation} \Vert u_h\Vert_{H^{\frac{s}{2}}(\Omega)}\leq C\Vert u\Vert_{H^{\frac{s}{2}}(\Omega)}. \end{equation} So the final results of the stability discussion are obtained by the argument given in Remark \ref{stab}.
\end{remark} \section{Numerical Illustrations}\label{num} The numerical experiments exhibit the applicability of the Galerkin finite element method for the fractional nonlinear boundary value problems with Caputo and Riemann-Liouville derivatives. The experiments are implemented on the $\textsl{Mathematica}^{\circledR}$ software platform. We report the absolute error along with the numerical and theoretical rates of convergence for some examples that satisfy the assumptions considered in the previous sections. Furthermore, the numerical algorithm is examined for some examples in the absence of the mentioned assumptions. In general, for a numerical experiment with the Galerkin method, one of the main issues is the approximation of the integrals. In our examination, the Galerkin finite element solution is obtained from the fully discrete weak form: Find $u_{h} \in A_{h}$ such that \begin{equation}\label{dfemd} \mathcal{L}_{h}(u_{h},v_{h})=F(v_{h}), \quad \forall v_{h}\in V_{h}. \end{equation} In the above nonlinear system of equations, we have utilized the Gauss-Kronrod quadrature formula to compute the integrals that need to be evaluated numerically; this concerns mainly the integrals involving the nonlinear term. Furthermore, to solve the nonlinear system, we employ Newton's iteration method. To this end, consider the bilinear form $N_{h}(u_{h};.,.)$ defined on $S^r_{\mathcal{T}}(\Omega) \times S^r_{\mathcal{T}}(\Omega)$ by \[ N_{h}(u_{h};w_{h},v_{h})=(_{0}^{R}D^\frac{s}{2}_x w_{h},~ _{x}^{R}D^\frac{s}{2}_1 v_{h})+(g_{u}(.,u_{h})w_{h},v_{h}). \] Newton's method for approximating $u_{h}$ by a sequence $\{u^k_{h}\}_{k \in \mathbb{N}}$ in $S^r_{\mathcal{T}}(\Omega)$ can be written as \[ N_{h}(u^k_{h};u^{k+1}_{h}-u^k_{h},v_{h})=F(v_{h})-\mathcal{L}_{h}(u^{k}_{h},v_{h}), \quad \forall v_{h}\in S^r_{\mathcal{T}}(\Omega), \] where $u^{0}_{h}\in S^r_{\mathcal{T}}(\Omega)$ is an initial guess chosen by the steepest descent algorithm. In the tables and figures, we report the experimental and, where applicable, the theoretical convergence rates (the numbers in parentheses) for the finite element approximation of the nonlinear boundary value problems with the Riemann-Liouville and Caputo fractional derivatives in the $L^2$ and $H^{\frac{s}{2}}$-norms. \begin{example}\label{u3} Consider the fractional differential equation \eqref{asli} with $g(x,u(x))=3xu^3(x)$. The right-hand side function $f(x)$ is chosen such that \begin{description} \item[(a)] the exact solution with the Riemann-Liouville fractional derivative is $u(x)= \frac{1}{\Gamma(s+1)}(x^{s-1}-x^{s})$; \item[(b)] the exact solution is $u(x)=\frac{1}{\Gamma(s+1)}(x-x^{s})$, where the derivative operator is of Caputo type. \end{description} This problem satisfies the assumptions introduced in Section \ref{secconv}. Therefore, the convergence rates for the Caputo and the Riemann-Liouville derivative cases in terms of the $H^{\frac{s}{2}}$ error norm are $O(h^{\frac{s}{2}})$ and $O(h^{\frac{s-1}{2}})$, respectively. This follows directly from Theorem \ref{thml}, since the function $f(x)$ belongs to $L^2(\Omega)$. Tables \ref{tabbc1} and \ref{tabbr1} report the $L^2$ and $H^{\frac{s}{2}}$ error norms for different $s\in (1,2)$. Moreover, we have provided some figures to exhibit both the theoretical and the practical rates of convergence.
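The numerical rates reported here are obtained from the errors on successively halved meshes: since $h=\frac{1}{2^k\times 10}$, the observed order between two consecutive levels is $\log_2(e_k/e_{k+1})$. A minimal Python sketch of this computation follows; it is a schematic illustration only, and \texttt{errors} is a hypothetical list holding the tabulated error norms for $k=-1,\dots,5$.
\begin{verbatim}
import numpy as np

def convergence_rates(errors):
    # h is halved from level k to level k+1, so the observed
    # order of convergence is log2(e_k / e_{k+1}).
    e = np.asarray(errors, dtype=float)
    return np.log2(e[:-1] / e[1:])

# L2 errors of Example u3 with the Caputo derivative, s = 7/4
rates = convergence_rates(
    [3.08e-3, 7.44e-4, 1.80e-4, 4.33e-5, 1.04e-5, 2.06e-6, 4.93e-7])
print(rates.round(3))  # orders between consecutive mesh levels
\end{verbatim}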
Figures \ref{fig1} and \ref{fig2} display the absolute errors for the Caputo and the Riemann-Liouville fractional derivatives with the above nonlinear term. In the figures, the dashed lines show the theoretical convergence rate. \end{example} \begin{figure}[ht] \centering \begin{tikzpicture} [scale=0.65, transform shape] \begin{axis}[xlabel={$ k $},ylabel={$\Vert u-u_{h} \Vert_{L^{2}(\Omega)}$}, ymode=log, legend entries={$s=\frac{7}{4}$,$s=\frac{3}{2}$,$s=\frac{4}{3}$}, legend pos=north east] \addplot coordinates { (-1,3.08E-03) (0,7.44E-04) (1,1.80E-04) (2,4.33E-05) (3,1.04E-05) (4,2.06E-06) (5,4.93E-07) }; \addplot coordinates { (-1,2.93E-03) (0,7.53E-04) (1,1.94E-04) (2,4.98E-05) (3,1.27E-05) (4,3.21E-06) (5,8.80E-07) }; \addplot coordinates { (-1,3.18E-03) (0,8.81E-04) (1,2.49E-04) (2,8.42E-05) (3,2.86E-05) (4,8.60E-06) (5,2.68E-06) }; \end{axis} \end{tikzpicture} \begin{tikzpicture} [scale=0.65, transform shape] \begin{axis}[xlabel={$ k $},ylabel={$\Vert u-u_{h} \Vert_{H^{\frac{s}{2}}(\Omega)}$}, ymode=log, legend entries={$s=\frac{7}{4}$,$s=\frac{3}{2}$,$s=\frac{4}{3}$}, legend pos=north east] \addplot[color=blue,mark=*] coordinates { (-1,2.94E-01) (0,1.45E-01) (1,7.22E-02) (2,3.71E-02) (3,1.91E-02) (4,1.00E-02) (5,5.41E-03) }; \addplot[color=blue,mark=*, dashed] coordinates{ (1,1.834E-02) (2,1.00E-02) }; \addplot[color=red,mark=*] coordinates { (-1,2.05E-01) (0,1.110E-01) (1,6.08E-02) (2,3.42E-02) (3,1.93E-02) (4,1.10E-02) (5,6.49E-03) }; \addplot[color=red,mark=*, dashed] coordinates{ (1,1.35E-02) (2,8.00E-03) }; \addplot[color=brown,mark=*] coordinates { (-1,1.67E-01) (0,9.82E-02) (1,5.85E-02) (2,3.54E-02) (3,2.16E-02) (4,1.33E-02) (5,8.27E-03) }; \addplot[color=brown,mark=*, dashed] coordinates{ (1,9.54E-03) (2,6.00E-03) }; \end{axis} \end{tikzpicture} \caption{Plots of the absolute error in $L^{2}$ and $H^\frac{s}{2}$-norms in logarithmic scale for Example \ref{u3} with the Caputo fractional derivative.}% \label{fig1}% \end{figure} \begin{figure}[h!]
\centering \begin{tikzpicture} [scale=0.65, transform shape] \begin{axis}[xlabel={$ k $},ylabel={$\Vert u-u_{h} \Vert_{L^{2}(\Omega)}$}, ymode=log, legend entries={$s=\frac{7}{4}$,$s=\frac{3}{2}$,$s=\frac{4}{3}$}, legend pos=north east] \addplot coordinates { (-1,7.79E-03) (0,2.78E-03) (1,1.07E-03) (2,4.31E-04) (3,1.760E-04) (4,7.25E-05) (5,3.02E-05) }; \addplot coordinates { (-1,2.73E-02) (0,1.31E-02) (1,6.43E-03) (2,3.17E-03) (3,1.56E-03) (4,7.68E-04) (5,3.78E-04) }; \addplot coordinates { (-1,6.61E-02) (0,3.69E-02) (1,2.05E-02) (2,1.14E-02) (3,6.35E-03) (4,3.54E-03) (5,1.97E-03) }; \end{axis} \end{tikzpicture} \begin{tikzpicture} [scale=0.65, transform shape] \begin{axis}[xlabel={$ k $},ylabel={$\Vert u-u_{h} \Vert_{H^{\frac{s}{2}}(\Omega)}$}, ymode=log, legend entries={$s=\frac{7}{4}$,$s=\frac{3}{2}$,$s=\frac{4}{3}$}, legend pos=north east] \addplot[color=blue,mark=*] coordinates { (-1,6.09E-01) (0,4.36E-01) (1,3.09E-01) (2,2.39E-01) (3,1.81E-01) (4,1.38E-01) (5,1.04E-01) }; \addplot[color=blue,mark=*, dashed] coordinates{ (3,1.62E-01) (4,1.25E-01) }; \addplot[color=red,mark=*] coordinates { (-1,7.58E-01) (0,6.20E-01) (1,5.10E-01) (2,4.21E-01) (3,3.49E-01) (4,2.91E-01) (5,2.42E-01) }; \addplot[color=red,mark=*, dashed] coordinates{ (3,3.27E-01) (4,2.750E-01) }; \addplot[color=brown,mark=*] coordinates { (-1,8.87E-01) (0,7.76E-01) (1,6.81E-01) (2,5.99E-01) (3,5.28E-01) (4,4.66E-01) (5,4.13E-01) }; \addplot[color=brown,mark=*, dashed] coordinates{ (3,5.05E-01) (4,4.50E-01) }; \end{axis} \end{tikzpicture} \caption{Plots of the absolute error in $L^{2}$ and $H^\frac{s}{2}$-norms in logarithmic scale for Example \ref{u3} with the Riemann-Liouville fractional derivative.}% \label{fig2}% \end{figure} \begin{table}[!ht] \begin{center} \begin{footnotesize} \caption{The absolute error in $L^2$ and $H^{\frac{s}{2}}$-norms for different values $s=\frac{7}{4},\frac{3}{2},\frac{4}{3}$ and mesh size $h=\frac{1}{2^k\times 10}$ for Example \ref{u3} with the Caputo fractional derivative operator.}\label{tabbc1}\vspace*{0.1in} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \hline $s$ & $k$ & $-1$ & $0$ & $1$ & $2$ & $3$ & $4$ & $5$ & Rate\\ \hline $\dfrac{7}{4}$ & $L^2$-norm & $3.08\textrm{e}-03$ & $7.44\textrm{e}-04$ & $1.80\textrm{e}-04$ & $4.33\textrm{e}-05$ & $1.04\textrm{e}-05$ & $2.06\textrm{e}-06$ & $4.93\textrm{e}-07$ & 2.047\\ & $H^{\frac{s}{2}}$-norm & $2.94\textrm{e}-01$ & $1.45\textrm{e}-01$ & $7.22\textrm{e}-02$ & $3.71\textrm{e}-02$ & $1.91\textrm{e}-02$ & $1.00\textrm{e}-02$ & $5.41\textrm{e}-03$ & 0.887 (0.875)\\ \hline $\dfrac{3}{2}$ & $L^2$-norm & $2.93\textrm{e}-03$ & $7.53\textrm{e}-04$ & $1.94\textrm{e}-04$ & $4.98\textrm{e}-05$ & $1.27\textrm{e}-05$ & $3.21\textrm{e}-06$ & $8.08\textrm{e}-07$ & 1.956\\ & $H^{\frac{s}{2}}$-norm & $2.05\textrm{e}-01$ & $1.11\textrm{e}-01$ & $6.08\textrm{e}-02$ & $3.42\textrm{e}-02$ & $1.93\textrm{e}-02$ & $1.10\textrm{e}-02$ & $6.49\textrm{e}-03$ & 0.769 (0.75)\\ \hline $\dfrac{4}{3}$ & $L^2$-norm & $3.18\textrm{e}-03$ & $8.81\textrm{e}-04$ & $2.49\textrm{e}-04$ & $8.42\textrm{e}-05$ & $2.68\textrm{e}-05$ & $8.60\textrm{e}-06$ & $2.68\textrm{e}-06$ & 1.567\\ & $H^{\frac{s}{2}}$-norm & $1.67\textrm{e}-01$ & $9.82\textrm{e}-02$ & $5.85\textrm{e}-02$ & $3.54\textrm{e}-02$ & $2.16\textrm{e}-02$ & $1.33\textrm{e}-02$ & $8.27\textrm{e}-03$ & 0.686 (0.67)\\ \hline \end{tabular} \end{footnotesize} \end{center} \end{table} \begin{table}[!ht] \begin{center} \begin{footnotesize} \caption{The absolute error in $L^2$ and $H^{\frac{s}{2}}$-norms for different values $
s=\frac{7}{4},\frac{3}{2},\frac{4}{3}$ and mesh size $h=\frac{1}{2^k\times 10}$ for Example \ref{u3} with the Riemann-Liouville fractional operator.}\label{tabbr1}\vspace*{0.1in} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \hline $s$ & $k$ & $-1$ & $0$ & $1$ & $2$ & $3$ & $4$ & $5$ & Rate\\ \hline $\dfrac{7}{4}$ & $L^2$-norm & $7.79\textrm{e}-03$ & $2.78\textrm{e}-03$ & $1.07\textrm{e}-03$ & $4.31\textrm{e}-04$ & $1.76\textrm{e}-04$ & $7.25\textrm{e}-05$ & $3.02\textrm{e}-05$ & 1.263 \\ & $H^{\frac{s}{2}}$-norm & $6.09\textrm{e}-01$ & $4.36\textrm{e}-01$ & $3.19\textrm{e}-01$ & $2.39\textrm{e}-01$ & $1.81\textrm{e}-01$ & $1.38\textrm{e}-01$ & $1.04\textrm{e}-01$ & 0.393 (0.375)\\ \hline $\dfrac{3}{2}$ & $L^2$-norm & $2.73\textrm{e}-02$ & $1.31\textrm{e}-02$ & $6.43\textrm{e}-03$ & $3.17\textrm{e}-03$ & $1.56\textrm{e}-03$ & $7.68\textrm{e}-04$ & $3.78\textrm{e}-04$ & 1.020\\ & $H^{\frac{s}{2}}$-norm & $7.58\textrm{e}-01$ & $6.20\textrm{e}-01$ & $5.10\textrm{e}-01$ & $4.21\textrm{e}-01$ & $3.49\textrm{e}-01$ & $2.91\textrm{e}-01$ & $2.42\textrm{e}-01$ & 0.264 (0.250)\\ \hline $\dfrac{4}{3}$ & $L^2$-norm & $6.61\textrm{e}-02$ & $3.69\textrm{e}-02$ & $2.05\textrm{e}-02$ & $1.14\textrm{e}-02$ & $6.35\textrm{e}-03$ & $3.54\textrm{e}-03$ & $1.97\textrm{e}-03$ & 0.840\\ & $H^{\frac{s}{2}}$-norm & $8.87\textrm{e}-01$ & $7.76\textrm{e}-01$ & $6.81\textrm{e}-01$ & $5.99\textrm{e}-01$ & $5.28\textrm{e}-01$ & $4.66\textrm{e}-01$ & $4.13\textrm{e}-01$ & 0.173 (0.167)\\ \hline \end{tabular} \end{footnotesize} \end{center} \end{table} \begin{example}\label{u5} In this example, we discuss the approximation of \eqref{asli} with $g(x,u(x))=\sin(x)u^5(x)$. The right-hand side function $f(x)$ is chosen such that \begin{description} \item[(a)] the exact solution is $u(x)= \frac{\Gamma(1.5)}{\Gamma(s+1.5)}(x^{s-1}-x^{s+0.5})$ for the Riemann-Liouville case; \item[(b)] the exact solution is $u(x)= \frac{\Gamma(1.5)}{\Gamma(s+1.5)}(x-x^{s+0.5})$ when Eq. \eqref{asli} involves the Caputo fractional derivative. \end{description} By a similar reasoning as in Example \ref{u3}, it is seen that the assumptions discussed in the theoretical parts hold. Therefore, we expect $O(h^{\frac{s}{2}})$ and $O(h^{\frac{s-1}{2}})$ convergence rates for the Caputo and the Riemann-Liouville fractional differential operators, respectively. This claim is verified by the numerical results reported in Tables \ref{tab2c} and \ref{tab2R}, which exhibit the absolute errors in $L^2$ and $H^\frac{s}{2}$-norms for different $s\in (1,2)$.
\end{example} \begin{table}[!ht] \begin{center} \begin{footnotesize} \caption{The absolute error in $L^2$ and $H^{\frac{s}{2}}$-norms for different values $s=\frac{7}{4},\frac{3}{2},\frac{4}{3}$ and mesh size $h=\frac{1}{2^k\times 10}$ for Example \ref{u5} with the Caputo fractional operator.}\label{tab2c}\vspace*{0.1in} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \hline $s$ & $k$ & $-1$ & $0$ & $1$ & $2$ & $3$ & $4$ & $5$ & Rate\\ \hline $\dfrac{7}{4}$ & $L^2$-norm & $2.67\textrm{e}-03$ & $6.54\textrm{e}-04$ & $1.60\textrm{e}-04$ & $3.96\textrm{e}-05$ & $9.83\textrm{e}-06$ & $2.44\textrm{e}-06$ & $6.09\textrm{e}-07$ & 2.002\\ & $H^{\frac{s}{2}}$-norm & $3.01\textrm{e}-01$ & $1.61\textrm{e}-01$ & $8.66\textrm{e}-02$ & $4.67\textrm{e}-02$ & $2.52\textrm{e}-02$ & $1.36\textrm{e}-02$ & $7.37\textrm{e}-03$ & 0.884 (0.875)\\ \hline $\dfrac{3}{2}$ & $L^2$-norm & $2.64\textrm{e}-03$ & $6.44\textrm{e}-04$ & $1.59\textrm{e}-04$ & $3.94\textrm{e}-05$ & $9.79\textrm{e}-06$ & $2.54\textrm{e}-06$ & $6.16\textrm{e}-07$ & 1.992\\ & $H^{\frac{s}{2}}$-norm & $2.07\textrm{e}-01$ & $1.10\textrm{e}-01$ & $5.91\textrm{e}-02$ & $3.18\textrm{e}-02$ & $1.72\textrm{e}-02$ & $9.34\textrm{e}-03$ & $5.10\textrm{e}-03$ & 0.872 (0.75)\\ \hline $\dfrac{4}{3}$ & $L^2$-norm & $2.74\textrm{e}-03$ & $6.51\textrm{e}-04$ & $1.57\textrm{e}-04$ & $3.84\textrm{e}-05$ & $9.53\textrm{e}-06$ & $2.37\textrm{e}-06$ & $5.91\textrm{e}-07$ & 2.003\\ & $H^{\frac{s}{2}}$-norm & $1.70\textrm{e}-01$ & $9.20\textrm{e}-02$ & $5.00\textrm{e}-02$ & $2.74\textrm{e}-02$ & $1.52\textrm{e}-02$ & $8.67\textrm{e}-03$ & $5.00\textrm{e}-03$ & 0.793 (0.67)\\ \hline \end{tabular} \end{footnotesize} \end{center} \end{table} \begin{table}[!ht] \begin{center} \begin{footnotesize} \caption{The absolute error in $L^2$ and $H^{\frac{s}{2}}$-norms for different values $s=\frac{7}{4},\frac{3}{2},\frac{4}{3}$ and mesh size $h=\frac{1}{2^k\times 10}$ for Example \ref{u5} with the Riemann-Liouville fractional operator.}\label{tab2R}\vspace*{0.1in} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \hline $s$ & $k$ & $-1$ & $0$ & $1$ & $2$ & $3$ & $4$ & $5$ & Rate\\ \hline $\dfrac{7}{4}$ & $L^2$-norm & $4.54\textrm{e}-03$ & $1.53\textrm{e}-03$ & $5.83\textrm{e}-04$ & $2.35\textrm{e}-04$ & $9.73\textrm{e}-05$ & $4.08\textrm{e}-05$ & $1.74\textrm{e}-05$ & 1.229 \\ & $H^{\frac{s}{2}}$-norm & $6.71\textrm{e}-01$ & $4.74\textrm{e}-01$ & $3.49\textrm{e}-01$ & $2.62\textrm{e}-01$ & $1.97\textrm{e}-01$ & $1.48\textrm{e}-01$ & $1.13\textrm{e}-01$ & 0.391 (0.375)\\ \hline $\dfrac{3}{2}$ & $L^2$-norm & $1.56\textrm{e}-02$ & $7.55\textrm{e}-03$ & $3.73\textrm{e}-03$ & $1.85\textrm{e}-03$ & $9.21\textrm{e}-04$ & $4.60\textrm{e}-04$ & $2.28\textrm{e}-04$ & 1.002\\ & $H^{\frac{s}{2}}$-norm & $7.64\textrm{e}-01$ & $6.27\textrm{e}-01$ & $5.18\textrm{e}-01$ & $4.30\textrm{e}-01$ & $3.58\textrm{e}-01$ & $2.98\textrm{e}-01$ & $2.49\textrm{e}-01$ & 0.261 (0.250)\\ \hline $\dfrac{4}{3}$ & $L^2$-norm & $4.03\textrm{e}-02$ & $2.23\textrm{e}-02$ & $1.24\textrm{e}-02$ & $6.96\textrm{e}-03$ & $3.97\textrm{e}-03$ & $2.25\textrm{e}-03$ & $1.29\textrm{e}-03$ & 0.801\\ & $H^{\frac{s}{2}}$-norm & $8.57\textrm{e}-01$ & $7.52\textrm{e}-01$ & $6.63\textrm{e}-01$ & $5.83\textrm{e}-01$ & $5.13\textrm{e}-01$ & $4.52\textrm{e}-01$ & $3.98\textrm{e}-01$ & 0.182 (0.167)\\ \hline \end{tabular} \end{footnotesize} \end{center} \end{table} \begin{example}\label{exexp} Consider the nonlinear Riemann-Liouville fractional differential equation \eqref{asli} with $g(x,u(x))=x\exp(u(x))$.
The right-hand side function $f(x)$ is chosen such that the exact solution $u(x)$ is \[ u(x)= \frac{1}{\Gamma(s+2)}(x^{s-1}-x^{s+1})-\frac{2}{\Gamma(s+3)}(x^{s-1}-x^{s+2}). \] Table \ref{tab3R} reports the absolute error in $L^2$ and $H^{\frac{s}{2}}$-norms for different $s\in (1,2)$ with the above nonlinear term. \end{example} \begin{table}[!ht] \begin{center} \begin{footnotesize} \caption{The absolute error in $L^2$ and $H^{\frac{s}{2}}$-norms for different values $s=\frac{7}{4},\frac{3}{2},\frac{4}{3}$ and mesh size $h=\frac{1}{2^k\times 10}$ for Example \ref{exexp} with the Riemann-Liouville fractional derivative.}\label{tab3R}\vspace*{0.1in} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \hline $s$ & $k$ & $-1$ & $0$ & $1$ & $2$ & $3$ & $4$ & $5$ & Rate\\ \hline $\dfrac{7}{4}$ & $L^2$-norm & $1.17\textrm{e}-03$ & $4.24\textrm{e}-04$ & $1.58\textrm{e}-04$ & $6.03\textrm{e}-05$ & $2.37\textrm{e}-05$ & $9.42\textrm{e}-06$ & $3.82\textrm{e}-06$ & 1.301\\ & $H^{\frac{s}{2}}$-norm & $3.09\textrm{e}-01$ & $2.23\textrm{e}-01$ & $1.64\textrm{e}-01$ & $1.21\textrm{e}-01$ & $9.10\textrm{e}-02$ & $6.92\textrm{e}-02$ & $5.28\textrm{e}-02$ & 0.389 \\ \hline $\dfrac{3}{2}$ & $L^2$-norm & $4.59\textrm{e}-03$ & $2.19\textrm{e}-03$ & $1.10\textrm{e}-03$ & $5.46\textrm{e}-04$ & $2.73\textrm{e}-04$ & $1.38\textrm{e}-04$ & $6.95\textrm{e}-05$ & 0.987\\ & $H^{\frac{s}{2}}$-norm & $4.00\textrm{e}-01$ & $3.27\textrm{e}-01$ & $2.68\textrm{e}-01$ & $2.20\textrm{e}-01$ & $1.82\textrm{e}-01$ & $1.51\textrm{e}-01$ & $1.26\textrm{e}-01$ & 0.266\\ \hline $\dfrac{4}{3}$ & $L^2$-norm & $1.13\textrm{e}-02$ & $6.27\textrm{e}-03$ & $3.24\textrm{e}-03$ & $1.71\textrm{e}-03$ & $9.10\textrm{e}-04$ & $5.06\textrm{e}-04$ & $2.82\textrm{e}-04$ & 0.845\\ & $H^{\frac{s}{2}}$-norm & $4.72\textrm{e}-01$ & $4.12\textrm{e}-01$ & $3.62\textrm{e}-01$ & $3.19\textrm{e}-01$ & $2.81\textrm{e}-01$ & $2.48\textrm{e}-01$ & $2.19\textrm{e}-01$ & 0.178 \\ \hline \end{tabular} \end{footnotesize} \end{center} \end{table} \begin{example}\label{exu2} We consider the nonlinear Caputo fractional differential equation \eqref{asli} with $g(x,u(x))=(u(x)-x)^2$, where the exact solution is $u(x)=\frac{\Gamma(\frac{3}{4})}{\Gamma(s+\frac{3}{4})}(x-x^{s-\frac{1}{4}})$. Table \ref{tab4c} reports the absolute errors in $L^2$ and $H^{\frac{s}{2}}$-norms for different $s\in (1,2)$ with the above nonlinear term.
\begin{table}[!ht] \begin{center} \begin{footnotesize} \caption{The absolute error in $L^2$ and $H^{\frac{s}{2}}$-norms for different values $s=\frac{7}{4},\frac{3}{2},\frac{4}{3}$ and mesh size $h=\frac{1}{2^k\times 10}$ for the Caputo fractional operator for Example \ref{exu2}.}\label{tab4c}\vspace*{0.1in} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \hline $s$ & $k$ & $-1$ & $0$ & $1$ & $2$ & $3$ & $4$ & $5$ & Rate\\ \hline $\dfrac{7}{4}$ & $L^2$-norm & $3.94\textrm{e}-03$ & $1.01\textrm{e}-03$ & $2.57\textrm{e}-04$ & $6.56\textrm{e}-05$ & $1.68\textrm{e}-05$ & $4.30\textrm{e}-06$ & $1.10\textrm{e}-06$ & 1.963\\ & $H^{\frac{s}{2}}$-norm & $3.44\textrm{e}-01$ & $1.79\textrm{e}-01$ & $9.32\textrm{e}-02$ & $4.85\textrm{e}-02$ & $2.53\textrm{e}-02$ & $1.32\textrm{e}-02$ & $6.89\textrm{e}-03$ & 0.938 \\ \hline $\dfrac{3}{2}$ & $L^2$-norm & $3.27\textrm{e}-03$ & $9.70\textrm{e}-04$ & $2.89\textrm{e}-04$ & $8.59\textrm{e}-05$ & $2.56\textrm{e}-05$ & $7.61\textrm{e}-06$ & $2.26\textrm{e}-06$ & 1.749\\ & $H^{\frac{s}{2}}$-norm & $2.24\textrm{e}-01$ & $1.32\textrm{e}-01$ & $7.84\textrm{e}-02$ & $4.48\textrm{e}-02$ & $2.67\textrm{e}-02$ & $1.59\textrm{e}-02$ & $9.49\textrm{e}-03$ & 0.745 \\ \hline $\dfrac{4}{3}$ & $L^2$-norm & $1.93\textrm{e}-03$ & $6.28\textrm{e}-04$ & $2.10\textrm{e}-04$ & $7.07\textrm{e}-05$ & $2.39\textrm{e}-05$ & $8.10\textrm{e}-06$ & $2.76\textrm{e}-06$ & 1.551\\ & $H^{\frac{s}{2}}$-norm & $1.32\textrm{e}-01$ & $8.48\textrm{e}-02$ & $5.48\textrm{e}-02$ & $3.55\textrm{e}-02$ & $2.30\textrm{e}-02$ & $1.49\textrm{e}-02$ & $9.70\textrm{e}-03$ & 0.620 \\ \hline \end{tabular} \end{footnotesize} \end{center} \end{table} \end{example} \begin{example}\label{exl} As the final example, we deal with the linear Caputo fractional differential equation ($g(x,u(x))=0$) by considering $f(x)=x^\theta$, which belongs to $H^{\alpha}(\Omega)$ for $\alpha\in \left[0,\theta+\frac{1}{2}\right)$ and $\theta\in \{-\frac{1}{3},-\frac{1}{4},-\frac{1}{5}\}$. The exact solution for different values of $\theta$ is $u(x)=c_\theta (x^{s-1}-x^{s+\theta})$ with $c_\theta=\frac{\Gamma (\theta +1)}{\Gamma (s+\theta +1)}$. Table \ref{tabl} displays the theoretical and numerical rates of convergence in the $H^{\frac{s}{2}}$-norm for different $s=\frac{7}{4},\frac{3}{2},\frac{4}{3}$. This problem satisfies the assumptions of Remark \ref{reml}. For different values of $\alpha$ and $s$, the value of $\gamma$ is given by $\min\{\alpha+s,2\}$. For instance, for $s=\frac{7}{4}$ and $\theta=-\frac{1}{5}$, we have $\gamma=2$ and the rate of convergence is $O(h^{2-\frac{s}{2}})=O(h^{1.125})$. \end{example} \begin{table}[!ht] \begin{center} \begin{small} \caption{A comparison between theoretical and numerical convergence rates in the $H^{\frac{s}{2}}$-norm for Example \ref{exl} with the Caputo fractional derivative.}\label{tabl}\vspace*{0.1in} \begin{tabular}{|l|c|c|c|} \hline \diagbox[width=1.5cm, height=0.5cm]{{\large $\theta$}}{{\large $s$}} & $7/4$ & $3/2$ & $4/3$ \\ \hline $-1/3$ & $\dfrac{}{} 1.059~(1.042)$ & $0.916~(0.917)$ & $----$ \\ \hline $-1/4$ & $\dfrac{}{} 1.103~(1.125)$ & $0.974~(1.000)$ & $0.925~(0.917)$\\ \hline $-1/5$ & $\dfrac{}{} 1.124~(1.125)$ & $1.000~(1.050)$ & $0.970~(0.967)$ \\ \hline \end{tabular} \end{small} \end{center} \end{table} \section*{Conclusion and future studies} In this paper, we have studied the Lagrange finite element method for a class of semi-linear FDEs of the Riemann-Liouville and the Caputo types.
To this aim, a weak formulation of the problems has been introduced in suitable function spaces constructed from the fractional Sobolev and Musielak-Orlicz spaces, owing to the presence of the nonlinear term. In addition, the existence and uniqueness of the weak solution, together with its regularity, have been discussed. The weak formulation is discretized by the Galerkin method with piecewise linear polynomial basis functions. An error bound in the $H^{\frac{s}{2}}$-norm has been derived for the Riemann-Liouville and Caputo fractional differential equations. Several examples with a variety of nonlinear terms have been examined, and the absolute errors are reported in the $L^2$ and $H^{\frac{s}{2}}$-norms. The nature of the nonlinearity and the fractional character of the problem cause the low-order convergence of the method. To improve the approach for this class of FDEs, one can apply the idea of a splitting method, where the solution is separated into regular and singular parts; this is possible by utilizing the Taylor expansion of the nonlinear operator and the finite element method accompanied by a quasi-uniform mesh. Also, as discussed in the numerical experiments section, the integrals in the resulting nonlinear system are discretized by a suitable quadrature method. Surveying the effect of the quadrature method on the finite element approximation and the a priori error estimation is an idea for future studies. As reported in the numerical section, we have observed that the absolute errors in the $L^2$-norm decay faster than those in the $H^\frac{s}{2}$-norm. An interesting question for further study is how to obtain an appropriate theoretical error bound in the $L^2$-norm. \begin{acknowledgements} We gratefully thank Bangti Jin (University College London) for helpful discussions. \end{acknowledgements}
\section{Introduction} Machine learning has become a ubiquitous tool in analyzing personal data and developing data-driven services. Unfortunately, the underlying learning models can pose a serious threat to privacy if they inadvertently capture sensitive information from the training data and later reveal it to users. For example, \citet{CarLiuErl+19} show that the Google text completion system contains credit card numbers from personal emails, which may be exposed to other users during the autocompletion of text. Once such sensitive data has entered a learning model, however, its removal is non-trivial and requires selectively reverting the learning process. In the absence of specific methods for this task in the past, retraining from scratch has been the only resort, which is costly and only possible if the original training data is still available. As a remedy, \citet{CaoYan15} and \citet{BourChaCho+21} propose methods for \emph{machine unlearning}. These methods decompose the learning process and are capable of removing individual data points from a model in retrospection. As a result, they make it possible to eliminate isolated privacy issues, such as data points associated with individuals. However, information leaks may not only manifest in single data instances but also in groups of \emph{features} and \emph{labels}. For example, a collaborative spam filter~\citep{AttWeiDasSmo+09} may accidentally capture personal names and addresses that are present in hundreds of emails. Similarly, in a credit scoring system, inappropriate features, such as the gender or race of clients, may need to be unlearned for thousands of affected data points. Unfortunately, instance-based unlearning as proposed in prior work is inefficient in these cases: First, a runtime improvement can hardly be obtained over retraining when the changes are not isolated and larger parts of the training data need to be adapted. Second, omitting several data points inevitably reduces the fidelity of the corrected learning model. It becomes clear that the task of unlearning is not limited to removing individual data points, but also requires corrections on the level of features and labels. In these scenarios, unlearning methods still need to be effective and efficient, regardless of the number of affected data points. In this paper, we propose the first method for unlearning features and labels from a learning model. Our approach is inspired by the concept of \emph{influence functions}, a technique from robust statistics~\citep[][]{Ham74} that allows for estimating the influence of data on learning models~\citep[][]{KohLia17, KohSiaTeo+19}. By reformulating this influence estimation as a form of unlearning, we derive a versatile approach that maps changes of the training data in retrospection to closed-form updates of model parameters. These updates can be calculated efficiently, even if larger parts of the training data are affected, and enable the removal of features and labels. As a result, our method can correct privacy leaks in a wide range of learning models with convex and non-convex loss functions. For models with strongly convex loss, such as logistic regression and support vector machines, we prove that our approach enables \emph{certified unlearning}. That is, it provides theoretical guarantees on the removal of features and labels from the models.
To obtain these guarantees, we build on the concept of certified data removal~\mbox{\citep{GuoGolHan+20, NeeRotSha20}} and investigate the difference between models obtained with our approach and retraining from scratch. Consequently, we can define an upper bound on this difference and thereby realize provable unlearning in practice. For models with non-convex loss functions, such as deep neural networks, similar guarantees cannot be realized. However, we empirically demonstrate that our approach provides substantial advantages over prior work. Our method is significantly faster in comparison to sharding~\citep{GinMelGua+19, BourChaCho+21} and retraining while reaching a similar level of accuracy. Moreover, due to the compact updates, our approach requires only a fraction of the training data and hence is applicable when the original data is not available. We demonstrate the efficacy of our approach in case studies on unlearning (a) sensitive features in linear models, (b) unintended memorization in language models, and (c) label poisoning in computer vision. \paragraph{Contributions} In summary, we make the following major contributions in this paper: \begin{enumerate} \setlength{\itemsep}{3pt} \item \emph{Unlearning with closed-form updates.} We introduce a novel framework for unlearning features and labels. This framework builds on closed-form updates of model parameters and thus is significantly faster than instance-based approaches to unlearning. \item \emph{Certified unlearning.} We derive two unlearning strategies for our framework based on first-order and second-order gradient updates. Under convexity and continuity assumptions on the loss, we show that both strategies provide certified unlearning of data. \item \emph{Empirical analysis.} We empirically show that unlearning of sensitive information is possible even for deep neural networks with non-convex loss functions. We find that our first-order update is highly efficient, enabling a speed-up over retraining by several orders of magnitude. \end{enumerate} The rest of the paper is structured as follows: We review related work on machine unlearning and influence functions in \cref{sec:relatedwork}. Our approach and its different technical realizations are introduced in \cref{sec:approach,sec:updat-learn-models}, respectively. The theoretical analysis of our approach is presented in \cref{sec:cert-unlearning} and its empirical evaluation in \cref{sec:empirical-analysis}. Finally, we discuss limitations in \cref{sec:limitations} and conclude the paper in \sect{sec:conclusions}. \section{Related work} \label{sec:relatedwork} The increasing application of machine learning to personal data has sparked a series of research efforts on detecting and correcting privacy issues in learning models~\citep[e.g.,][]{ZanWutTop+20, CarTraWal+21, CarLiuErl+19, SalZhaHumBer+19, LeiFre20, ShoStrSonShm+17}. In the following, we provide an overview of work on machine unlearning and influence functions. A broader discussion of privacy and machine learning is given by~\citet{DeC21} and~\citet{PapMcDSinWel18}. \paragraph{Machine unlearning} Methods for removing sensitive data from learning models are a recent branch of security research. As one of the first, \citet{CaoYan15} show that a large number of learning models can be represented in a closed summation form that allows for elegantly removing individual data points in retrospection.
However, for adaptive learning strategies, such as stochastic gradient descent, this approach provides little advantage over retraining from scratch and thus is not well suited for correcting problems in deep neural networks. As a remedy, \citet{BourChaCho+21} propose a universal strategy for unlearning data instances from classification models. Similarly, \citet{GinMelGua+19} develop a technique for unlearning points from clusterings. The key idea of both approaches is to split the data into independent partitions---so-called shards---and aggregate the final model from submodels trained over these shards. In this setting, the unlearning of data points can be efficiently carried out by only retraining the affected submodels. We refer to this unlearning strategy as \emph{sharding}. \citet{AldMahBei20} show that this approach can be further sped up for least-squares regression by choosing the shards cleverly. Sharding, however, is suitable for removing a few data points only and inevitably deteriorates in performance when larger portions of the data require changes. This limitation of sharding is schematically illustrated in \cref{fig:sharding}. The probability that all shards need to be retrained increases with the number of data points to be corrected. For a practical setup with \num{20}~shards, as proposed by \mbox{\citet{BourChaCho+21}}, changes to as few as \num{150}~points are already sufficient to impact all shards and render this form of unlearning inefficient, regardless of the size of the training data. We provide a detailed analysis of this limitation in~\sect{appdx:sharding}. \begin{figure}[htbp] \centering \input{figures/Unlearning/sharding/sharding_n.pgf} \caption{Probability of all shards being affected when unlearning for varying numbers of data points and shards ($S$).} \label{fig:sharding} \vspace*{-6pt} \end{figure} \paragraph{Influence functions} The concept of influence functions that forms the basis of our approach originates from the area of robust statistics~\citep{Ham74}. It was first introduced by~\citet{CooWei82} for investigating the changes of simple linear regression models. Although the proposed techniques have been occasionally employed in machine learning~\citep{LecDenkSol90, HasStoWol94}, it was the seminal work of \citet{KohLia17} that recently brought general attention to this concept and its application to modern learning models. In particular, this work uses influence functions for explaining the impact of data points on the predictions of learning models. Influence functions have since been used in a wide range of applications. For example, they have been applied to trace bias in word embeddings back to documents~\citep{BruAlkAnd+19, CheSiLi+20}, determine reliable regions in learning models \citep{SchuSar19}, and explain deep neural networks~\citep{BasPopFei20}. As part of this line of research, \citet{BasYouFei} increase the accuracy of influence functions by using high-order approximations, \citet{BarBruDzi20} improve their precision through nearest-neighbor strategies, and \mbox{\citet{GuoRajHas+20}} reduce their runtime by focusing on specific samples. Finally, \citet{GolAchRav+21, GolAchSoa20} move to the field of privacy and use influence functions to ``forget'' data points in neural networks using special approximations.
In terms of theoretical analysis, \citet{KohSiaTeo+19} study the accuracy of influence functions when estimating the loss on test data, and \citet{NeeRotSha20} perform a similar analysis for gradient-based update strategies. In addition, \citet{RadMal} show that the prediction error on leave-one-out validations can be reduced with influence functions. Finally, \mbox{\citet{GuoGolHan+20}} introduce the idea of certified removal of data points and propose a definition of indistinguishability between learning models similar to~\citet{NeeRotSha20}. In this paper, we introduce a natural extension of these concepts to define the certified unlearning of features and labels. \paragraph{Difference to our approach} All prior approaches to unlearning remain on the level of data instances and cannot be used for correcting other types of privacy issues. To our knowledge, we are the first to build on the concept of influence functions for unlearning features and labels from learning models. Figure \ref{fig:unlearning-diff} visualizes this difference between instance-based unlearning and our approach. Note that in both cases the amount of data to be removed is roughly the same, yet far more data instances are affected when features or labels need to be changed. \begin{figure}[htbp] \centering \includegraphics[width=0.47\textwidth, trim=5 0 15 0]{figures/overview/unlearning-overview} \vspace*{-5pt} \caption{Instance-based unlearning vs. unlearning of features and labels. The data to be removed is marked in orange.} \label{fig:unlearning-diff} \end{figure} \section{Unlearning with Updates} \label{sec:approach} Let us start by introducing a basic learning setup. We consider a supervised learning task that is described by a dataset $D=\{z_1,\dots,z_n\}$ with each point $z_i=(x, y)$ consisting of features $x \in \mathcal{X}$ and a label $y \in \mathcal{Y}$. We assume that $\mathcal{X} = \mathbb{R}^d$ is a vector space and denote the $j$-th feature of $x$ by $x[j]$. Given a loss function $\ell(z,\theta)$ that measures the difference between the predictions of a learning model~$\theta$ with $p$ parameters and the true labels, the optimal model~$\theta^*$ can be found by minimizing the regularized empirical~risk, \begin{equation} \theta^* = \argmin_{\theta}L_b(\theta; D) = \argmin_{\theta} \sum_{i=1}^n \ell(z_i, \theta)+ \lambda \Omega(\theta) + b^T\theta \label{eq:emrisk} \end{equation} where $\Omega$ is a common regularizer \citep[see][]{DudHarSto00}. Note that we add a vector \mbox{$b\in\mathbb{R}^p$} to the optimization. For conventional learning, this vector is set to zero and can be ignored. To achieve certified unlearning, however, it enables adding a small amount of noise to the minimization, similar to differentially private learning strategies~\citep{Dwo06,DwoRot14}. We introduce this technique later in Section~\ref{sec:cert-unlearning} and thus omit the subscript of $L_b$ for the sake of clarity for now. The process of unlearning amounts to adapting $\theta^*$ to changes in~$D$ without recalculating the optimization problem in~\cref{eq:emrisk}. \subsection{Unlearning Data Points} To provide an intuition for our approach, we begin by asking a simple question: How would the optimal learning model~$\theta^*$ change if only one data point~$z$ had been perturbed by some change~$\delta$? Replacing $z$ by $\tilde{z} = (x+\delta, y)$ leads to the new optimal model: \begin{equation} \theta^*_{\unlearn{z}{\tilde{z}}} = \argmin_{\theta} L(\theta; D) + \ell(\tilde{z}, \theta) - \ell(z, \theta).
\label{eq:eq1} \end{equation} However, calculating the new model $\theta^*_{\unlearn{z}{\tilde{z}}}$ exactly is expensive and does not provide any advantage over solving the problem in \cref{eq:emrisk}. Instead of replacing the data point~$z$ with $\tilde{z}$, we can also up-weight $\tilde{z}$ by a small value~$\epsilon$ and down-weight $z$ accordingly, resulting in the following optimization problem: \begin{align} \theta^*_{\epsilon, \unlearn{z}{\tilde{z}}} = \argmin_{\theta}& \, L(\theta; D) + \epsilon \ell(\tilde{z}, \theta) - \epsilon \ell(z, \theta). \label{eq:eq2} \end{align} \cref{eq:eq1,eq:eq2} are obviously equivalent for $\epsilon=1$ and solve the same problem. Consequently, we do not need to explicitly remove a data point from the training data but can revert its \emph{influence} on the learning model through a combination of appropriate up-weighting and down-weighting. It is easy to see that this approach is not restricted to a single data point. We can simply define a set of data points~$Z$ as well as their perturbed versions $\tilde{Z}$ and arrive at the following optimization problem \begin{align} \theta^*_{\epsilon, \unlearn{Z}{\tilde{Z}}} = \argmin_{\theta} & L(\theta; D) \, + \nonumber\\ & \epsilon \sum_{\tilde{z} \in \tilde{Z}} \ell(\tilde{z}, \theta) - \epsilon \sum_{z \in Z} \ell(z, \theta). \label{eq:eq2a} \end{align} This generalization enables us to approximate changes on larger portions of the training data. Instead of solving the problem in \cref{eq:eq2a} directly, however, we formulate the optimization as an update of the original model $\theta^*$. That is, we seek a closed-form update $\Delta(Z, \tilde{Z})$ of the model parameters, such that \begin{equation} \label{eqn:update_delta} \theta^*_{\epsilon, \unlearn{Z}{\tilde{Z}}} \approx \theta^* + \Delta(Z, \tilde{Z}), \end{equation} where $\Delta(Z, \tilde{Z})$ has the same dimension as the learning model~$\theta$ but is sparse and affects only the necessary weights. As a result of this formulation, we can describe changes of the training data as a compact update $\Delta$ rather than iteratively solving an optimization problem. We show in \sect{sec:updat-learn-models} that this update step can be efficiently computed using first-order and second-order derivatives. Furthermore, we prove in \sect{sec:cert-unlearning} that the unlearning success of both updates can be certified up to a tolerance $\epsilon$ if the loss function~$\ell$ is strictly convex, twice differentiable, and Lipschitz-continuous. Notice that if $\tilde{Z} = \emptyset$ in Equation~\eqref{eq:eq2a}, our approach also yields updates to remove \emph{data points}, similar to prior work~\citep{KohLia17, GuoGolHan+20}. \subsection{Unlearning Features and Labels} \label{sec:approach_2} Equipped with a general method for updating a learning model, we proceed to introduce our approach for unlearning features and labels. To this end, we expand our notion of perturbations and include changes to labels by defining \begin{equation*} \tilde{z} = (x + \delta_x, y + \delta_y), \end{equation*} where $\delta_x$ modifies the features of a data point and $\delta_y$ its label. By using different changes in the perturbations $\tilde{Z}$, we can now realize different types of unlearning using closed-form updates. \paragraph{Replacing features} As the first type of unlearning, we consider the task of correcting features in a learning model. This task is relevant if the content of some features violates the privacy of a user and needs to be replaced with alternative data.
As an example, personal names, identification numbers, residence addresses, or other sensitive information might need to be removed after a model has been trained on a corpus of emails. For a set of features $F$ and their new values $V$, we define perturbations on the affected points $Z$ by \begin{equation*} \tilde{Z}=\big \{ (x[f]=v, y) : (x,y) \in Z, \, (f,v)\in F \times V \big \}. \end{equation*} For example, a credit card number contained in the training data can be blinded by a random number sequence in this setting. The values $V$ can be adapted individually for each data point, such that fine-grained corrections are possible. \paragraph{Replacing labels} As the second type of unlearning, we focus on correcting labels. This form of unlearning is necessary if the labels captured in a model contain unwanted or inappropriate information. For example, in generative language models, the training text is used as input features (preceding tokens) \emph{and} labels (target~tokens) \citep{Gra13,SutMarHin11}. Hence, defects can only be eliminated if the labels are unlearned as well. For the affected points $Z$ and the set of new labels~$Y$, we define the corresponding perturbations by \begin{equation*} \tilde{Z}=\big \{ (x, y) \in Z_x \times Y \big \}, \end{equation*} where $Z_x$ corresponds to the data points in $Z$ without their original labels. The new labels $Y$ can also be individually selected for each data point, as long as they come from the domain~$\mathcal{Y}$, that is, $Y \subset \mathcal{Y}$. Note that the replaced labels and features can be easily combined in one set of perturbations $\tilde{Z}$, so that defects affecting both can be corrected in a single update. In \sect{sec:unle-unint-memor}, we demonstrate that this combination can be used to remove unintended memorization from generative language models with high efficiency. \paragraph{Revoking features} Based on appropriate definitions of $Z$ and~$\tilde{Z}$, our approach enables replacing the content of features and thus eliminating privacy leaks by overwriting sensitive data. In some scenarios, however, it might be necessary to even completely remove features from a learning model---a task that we denote as \emph{revocation}. In contrast to the correction of features, this form of unlearning poses a unique challenge: The revocation of features can reduce the input dimension of the model. While this adjustment can be easily carried out through retraining with adapted data, constructing a model update as in \cref{eqn:update_delta} becomes tricky. To address this problem, let us consider a model~$\theta^*$ trained on a dataset $D\subset\mathbb{R}^d$. If we remove some features $F$ from this dataset and train the model again, we obtain a new optimal model $\theta^*_{-F}$ with reduced input dimension. By contrast, if we set the values of the features $F$ to zero in the dataset and train again, we obtain an optimal model $\theta^*_{F=0}$ with the same input dimension as~$\theta^*$. Fortunately, these two models are equivalent for a large class of learning models, including support vector machines and several neural networks, as the following lemma shows. \newtheorem{lem}{Lemma} \begin{lem} \label{lemma1} For learning models processing inputs $x$ using linear transformations of the form $\theta^{T} x$, we have $\theta^*_{-F} \equiv \theta^*_{F=0}$.
\end{lem} \begin{proof} It is easy to see that, for the dot product $\theta^Tx$, it is irrelevant whether a dimension of $x$ is missing or equal to zero in the linear transformation \begin{equation*} \sum_{\substack{k: k\notin F}}\theta[k] x[k] = \sum_{k}\theta[k] \mathbf{1}\{k\notin F\}x[k]. \end{equation*} As a result, the loss $\ell(z,\theta)=\ell(\theta^Tx,y,\theta)$ of both models is identical for every data point $z$. Hence, $L(\theta;D)$ is also equal for both models, and thus the same objective is minimized during learning, resulting in equal model parameters. \end{proof} \cref{lemma1} enables us to erase features from many learning models by first setting them to zero, calculating the parameter update, and then reducing the input dimension of the models accordingly. Concretely, to revoke the features $F$ from a learning model, we first locate all data points where these features are non-zero with \begin{equation*} Z=\big \{ (x,y) \in D : x[f] \neq 0, f \in F \big \}. \end{equation*} Then, we construct corresponding perturbations so that the features are set to zero with our approach, \begin{equation*} \tilde{Z}=\big \{ (x[f]=0, y) : (x,y) \in Z, \, f \in F \big \}. \end{equation*} Finally, we adapt the input dimension by removing the affected inputs of the learning models, such as the corresponding neurons in the input layer of a neural network. \section{Update Steps for Unlearning} \label{sec:updat-learn-models} Our approach rests on changing the influence of training data with a closed-form update of the model parameters. In the following, we derive two strategies for calculating this closed form: a \emph{first-order update} and a \emph{second-order update}. The first strategy builds on the gradient of the loss function and thus can be applied to any model with a differentiable loss. The second strategy incorporates second-order derivatives, which limits the application to loss functions with an invertible Hessian matrix. \subsection{First-Order Update} \label{sec:first-order} Recall that we aim to find an update $\Delta(Z, \tilde{Z})$ that we can add to our model $\theta^*$ for unlearning. If the loss $\ell$ is differentiable, we can compute an optimal \emph{first-order update} simply as \begin{equation} \label{eqn:update1} \Delta(Z, \tilde{Z}) = -\tau \Big(\sum_{\tilde{z} \in \tilde{Z}} \nabla_{\theta}\loss(\pert, \theta^*) - \sum_{z \in Z} \nabla_{\theta}\loss(\point, \theta^*)\Big), \end{equation} where $\tau$ is a small constant that we refer to as the \emph{unlearning rate}. A complete derivation of \cref{eqn:update1} is given in \appdx{sec:derivation-gradient}. Intuitively, this update shifts the model parameters in the direction from $\sum_{z \in Z}\nabla\ell(z,\theta^*)$ to $\sum_{\tilde{z} \in \tilde{Z}}\nabla\ell(\tilde{z}, \theta^*)$, where the size of the update step is determined by the rate $\tau$. This update strategy is related to the classic gradient descent step GD used in many learning algorithms and given by \begin{equation*} \text{GD}(\tilde{Z}) = -\tau \sum_{\tilde{z} \in \tilde{Z}}\nabla_{\theta}\loss(\pert, \theta^*). \end{equation*} However, it differs from this update step in that it moves the model along the \emph{difference} in gradients between the original and perturbed data, which minimizes the loss on $\tilde{z}$ and at the same time removes the information contained in $z$.
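As a schematic illustration, the following Python sketch applies the first-order update of \cref{eqn:update1} to a PyTorch model. It is a minimal sketch rather than our reference implementation; \texttt{loss\_fn}, the point sets \texttt{Z} and \texttt{Z\_tilde}, and the unlearning rate \texttt{tau} are assumed to be given.
\begin{verbatim}
import torch

def first_order_update(model, loss_fn, Z, Z_tilde, tau):
    # Shift the parameters by -tau * (gradient on the perturbed
    # points Z_tilde minus gradient on the original points Z).
    params = [p for p in model.parameters() if p.requires_grad]

    def summed_grad(points):
        loss = sum(loss_fn(model(x), y) for x, y in points)
        return torch.autograd.grad(loss, params)

    g_new = summed_grad(Z_tilde)  # gradients of the perturbed data
    g_old = summed_grad(Z)        # gradients of the original data
    with torch.no_grad():
        for p, gn, go in zip(params, g_new, g_old):
            p -= tau * (gn - go)  # closed-form unlearning step
\end{verbatim}
Note that, unlike a gradient descent step on $\tilde{Z}$ alone, the update follows the difference of the two gradient sums and thereby also reverts the influence of the original points.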
The first-order update is a simple yet effective strategy: Gradients of $\ell$ can be computed in $\mathcal{O}(p)$~\citep{Pea94}, and modern auto-differentiation frameworks like TensorFlow~\citep{AbaAgaBar+15} and PyTorch~\citep{PasGroMas+19} offer easy gradient computations for the practitioner. The update step, however, involves a parameter $\tau$ that controls the impact of the unlearning step. To ensure that data has been completely replaced, it is thus necessary to calibrate this parameter using a measure for the success of unlearning. In \cref{sec:empirical-analysis}, for instance, we show how the exposure metric proposed by \mbox{\citet{CarLiuErl+19}} can be used for this calibration when removing unintended memorization from language models. \subsection{Second-Order Update} \label{sec:second-order-update} The calibration of the update step can be eliminated if we make further assumptions on the properties of the loss~$\ell$. If we assume that $\ell$~is twice differentiable and strictly convex, the influence of a single data point can be approximated in closed form by \begin{equation*} \frac{\partial\theta^*_{\epsilon, \unlearn{z}{\tilde{z}}}}{\partial\epsilon}\Big \lvert_{\epsilon=0} = -H_{\theta^*}^{-1}\big(\nabla_{\theta}\loss(\pert, \theta^*) - \nabla_{\theta}\loss(\point, \theta^*)\big), \end{equation*} where $H_{\theta^*}^{-1}$ is the inverse Hessian of the loss at $\theta^*$, that is, the inverse matrix of the second-order partial derivatives \citep[see][]{CooWei82}. We can now perform a linear approximation for $\theta^*_{\unlearn{z}{\tilde{z}}}$ to obtain \begin{equation} \theta^*_{\unlearn{z}{\tilde{z}}} \approx \theta^* - H_{\theta^*}^{-1}\big( \nabla_{\theta}\loss(\pert, \theta^*) - \nabla_{\theta}\loss(\point, \theta^*)\big). \label{eq:eq3} \end{equation} Since all operations are linear, we can easily extend \cref{eq:eq3} to account for multiple data points and derive the following \emph{second-order update}: \begin{equation} \label{eqn:update2} \Delta(Z, \tilde{Z}) = -H_{\theta^*}^{-1}\Big( \sum_{\tilde{z} \in \tilde{Z}}\nabla_{\theta}\loss(\pert, \theta^*) - \sum_{z \in Z}\nabla_{\theta}\loss(\point, \theta^*)\Big). \end{equation} A full derivation of this update step is provided in \appdx{sec:derivation-influence}. Note that the update does not require a parameter calibration, since the parameter weighting of the changes is directly derived from the inverse Hessian of the loss function. The second-order update is the preferred strategy for unlearning on models with a strongly convex and twice differentiable loss function that guarantees the existence of $H_{\theta^*}^{-1}$. Technically, the update step in \cref{eqn:update2} can be easily calculated with common machine-learning frameworks. In contrast to the first-order update, however, this computation involves the inverse Hessian matrix, which can be difficult to construct for neural networks. \paragraph{Efficiently calculating the inverse Hessian} Given a model $\theta\in\mathbb{R}^p$ with $p$ parameters, forming and inverting the Hessian requires \mbox{$\mathcal{O}(np^2+p^3)$} time and $\mathcal{O}(p^2)$~space~\citep{KohLia17}. For models with a small number of parameters, the matrix can be pre-computed and explicitly stored, such that each subsequent request for unlearning only involves a simple matrix-vector multiplication.
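As a minimal illustration of this pre-computation strategy, the following NumPy sketch assumes a hypothetical routine \texttt{grad(z, theta)} returning $\nabla_{\theta}\ell(z, \theta)$ and a pre-computed inverse Hessian \texttt{H\_inv} of the regularized loss at $\theta^*$:
\begin{verbatim}
import numpy as np

def second_order_update(theta, H_inv, grad, Z, Z_tilde):
    # theta <- theta - H^{-1} (sum of gradients on Z_tilde
    #                          minus sum of gradients on Z)
    g_diff = (sum(grad(z, theta) for z in Z_tilde)
              - sum(grad(z, theta) for z in Z))
    return theta - H_inv @ g_diff

# H_inv is formed once, e.g. via np.linalg.inv(H), and can then be
# reused for all subsequent unlearning requests on the same model.
\end{verbatim}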
For example, in \cref{sec:unle-sens-names}, we show that unlearning features from a logistic regression model with about \num{2000}~parameters can be realized with this approach in a few milliseconds. For complex learning models, such as deep neural networks, the Hessian matrix quickly becomes too large for explicit storage. Still, we can approximate the inverse Hessian using techniques proposed by \citet{KohLia17}. The explicit derivation and an algorithm for implementation are presented in~\appdx{appdx:algorithm}. While this approximation weakens the theoretical guarantees of our approach, it still enables successfully unlearning data from large learning models. In \sect{sec:unle-unint-memor}, we demonstrate that this strategy can be used to calculate second-order updates for a recurrent neural network with \num{3.3} million parameters in less than 30 seconds. \section{Certified Unlearning} \label{sec:cert-unlearning} Machine unlearning is a delicate task, as it aims at reliably removing privacy issues and sensitive data from learning models. This task should ideally build on theoretical guarantees to enable \emph{certified unlearning}, where the corrected model is stochastically indistinguishable from one created by retraining. In the following, we derive conditions under which the updates of our approach introduced in \cref{sec:second-order-update} provide certified unlearning. To this end, we build on the concepts of \emph{differential privacy}~\citep{Dwo06} and \emph{certified data removal}~\citep{GuoGolHan+20}, and adapt them to the unlearning task. For a training dataset $D$, let $\mathcal{A}$ be a learning algorithm that outputs a model $\theta\in\Theta$ after training on $D$, that is, $\mathcal{A}:D\rightarrow\Theta$. Randomness in $\mathcal{A}$ induces a probability distribution over the output models in~$\Theta$. Moreover, we consider an unlearning method $\mathcal{U}$ that maps a model $\theta$ to a corrected model $\theta_\mathcal{U} = \mathcal{U}(\theta, D, D^\prime)$, where $D^\prime$ denotes the dataset containing the perturbations $\tilde{Z}$ required for the unlearning task. To measure the difference between a model trained on $D'$ and one obtained by $\mathcal{U}$, we introduce the concept of \emph{$\epsilon$-certified unlearning} as follows. \newtheorem{defin}{Definition} \begin{defin} \label{def-ecr} Given some $\epsilon >0$ and a learning algorithm $\mathcal{A}$, an unlearning method~$\mathcal{U}$ is $\epsilon$-certified if \begin{equation*} e^{-\epsilon} \leq \frac{P\Big(\mathcal{U}\big(\mathcal{A}(D),D,D^\prime\big) \in\mathcal{T}\Big)} {P\big(\mathcal{A}(D^\prime)\in\mathcal{T}\big)}\leq e^{\epsilon} \label{eqn:ecr} \end{equation*} holds for all $\mathcal{T}\subset \Theta, D, \text{and } D^\prime$. \end{defin} This definition ensures that the probability of obtaining a model using the unlearning method $\mathcal{U}$ deviates from that of training a new model on $D^\prime$ from scratch by at most a factor of $e^{\epsilon}$. Following the work of~\mbox{\citet{GuoGolHan+20}}, we introduce \emph{$(\epsilon, \delta)$-certified unlearning}, a relaxed version of $\epsilon$-certified unlearning, defined as follows.
\begin{defin} \label{def-edcr} Under the assumptions of \cref{def-ecr}, an unlearning method $\mathcal{U}$ is $(\epsilon,\delta)$-certified if \begin{gather*} P\Big(\mathcal{U}\big(\mathcal{A}(D),D,D^\prime\big) \in\mathcal{T}\Big) \leq e^{\epsilon}P\big(\mathcal{A}(D^\prime)\in\mathcal{T}\big)+ \delta \\ \text{and}\\ P\big(\mathcal{A}(D^\prime)\in\mathcal{T}\big) \leq e^{\epsilon} P\Big(\mathcal{U}\big(\mathcal{A}(D),D,D^\prime\big)\in \mathcal{T}\Big)+\delta \end{gather*} hold for all $\mathcal{T}\subset \Theta, D, \text{and } D^\prime$. \end{defin} This definition allows the unlearning method $\mathcal{U}$ to slightly violate the conditions from \cref{def-ecr} by a constant~$\delta$. Using these definitions, it becomes possible to derive practical conditions under which our approach realizes certified unlearning. \subsection{Certified Unlearning of Features and Labels} \label{sec:cert-unle-feat} Using the concept of certified unlearning, we can construct and analyze theoretical guarantees of our approach when removing features and labels. To ease this analysis, we make two basic assumptions on the employed learning algorithm: First, we assume that the loss function~$\ell$ is twice differentiable and strictly convex such that $H^{-1}$ always exists. Second, we consider \mbox{$L_2$ regularization} in optimization problem \eqref{eq:emrisk}, that is, $\Omega(\theta)=\frac{1}{2}\Vert\theta\Vert^2_2$, which ensures that the loss function is strongly convex. Both assumptions are satisfied by a wide range of learning models, including logistic regression and support vector machines. A helpful tool for analyzing the task of unlearning is the \emph{gradient residual} $\nabla L(\theta;D^\prime)$ for a given model $\theta$ and a corrected dataset~$D^\prime$. For strongly convex loss functions, the gradient residual is zero \emph{if and only if} $\theta$ equals $\mathcal{A}(D^\prime)$, since in this case the optimum is unique. Therefore, the norm of the gradient residual $\Vert \nabla L(\theta;D^\prime)\Vert_2$ reflects the distance of a model $\theta$ from one obtained by retraining on the corrected dataset~$D^\prime$. We differentiate between the gradient residual of the plain loss $L$ and that of an adapted loss $L_b$ where a random vector $b$ is added. The gradient residual $r$ of $L_b$ is given by $$r=\nabla L_b(\theta ; D^\prime) = \sum_{z\in D^\prime} \nabla \ell(z,\theta)+\lambda \theta +b$$ and differs from the gradient residual of $L$ only by the added vector $b$. This allows us to adapt the underlying distribution of $b$ to achieve certified unlearning, similar to sensitivity methods~\citep{DwoRot14}. The corresponding proofs are given in \appdx{sec:proofs}. \newtheorem{thm}{Theorem} \begin{thm} \label{thm:thm1} \input{theorem1.tex} \end{thm} To obtain a small gradient residual norm with the first-order update, the unlearning rate should be small, ideally on the order of $1/n\gamma$. Since $\Vert x_i\Vert_2\leq 1$, we also have $m_j\ll1$ if $d$ is large, and thus $M$ acts as an additional damping factor for both updates when changing or revoking features. \cref{thm:thm1} enables us to quantify the difference between unlearning and retraining from scratch.
Concretely, if $\mathcal{A}(D^\prime)$ is an exact minimizer of $L_b$ on $D^\prime$ with density $f_\mathcal{A}$ and $\mathcal{U}(\mathcal{A}(D), D,D^\prime)$ is an approximated minimum obtained through unlearning with density $f_{\mathcal{U}}$, then \citet{GuoGolHan+20} show that the max-divergence between $f_\mathcal{A}$ and $f_{\mathcal{U}}$ for the model $\theta$ produced by $\mathcal{U}$ can be bounded using the following theorem. \begin{thm}[\citet{GuoGolHan+20}] \label{thm:thm2} Let $\mathcal{U}$ be an unlearning method with a gradient residual $r$ with $\Vert r\Vert_2\leq \epsilon'$. If the vector $b$ is drawn from a probability distribution with density $p$ satisfying that for any $b_1, b_2\in\mathbb{R}^d$ there exists an $\epsilon>0$ such that $\Vert b_1-b_2\Vert\leq \epsilon'$ implies $e^{-\epsilon}\leq \frac{p(b_1)}{p(b_2)}\leq e^{\epsilon}$, then $$e^{-\epsilon}\leq\frac{f_{\mathcal{U}}(\theta)} {f_\mathcal{A}(\theta)}\leq e^{\epsilon}$$ for any $\theta$ produced by the unlearning method $\mathcal{U}$. \end{thm} \cref{thm:thm2} equips us with a way to prove the certified unlearning property from \cref{def-ecr}. Using the gradient residual bounds derived in \cref{thm:thm1}, we can now adjust the density function underlying the vector $b$ so that \cref{thm:thm2} holds for both update steps of our unlearning approach. \begin{thm} \label{thm:thm3} \input{theorem3.tex} \end{thm} \cref{thm:thm3} finally allows us to establish certified unlearning of features and labels in practice: Given a learning model with a bounded gradient residual norm and a privacy budget $(\epsilon, \delta)$, we can calibrate the distribution of $b$ to obtain certified unlearning. \subsection{Relation to Differential Privacy} \label{sect:relation-dp} Our definition of certified unlearning shares interesting similarities with the concept of differential privacy~\citep{Dwo06}, which we highlight in the following. First, let us recall the definition of differential privacy for a learning algorithm $\mathcal{A}$: \begin{defin} \label{def-dp} Given some $\epsilon >0$, a learning algorithm $\mathcal{A}$ is said to be $\epsilon$-differentially private ($\epsilon$-DP) if \begin{equation*} \label{eq:def-dp} e^{-\epsilon} \leq \frac{P\big(\mathcal{A}(D)\in\mathcal{T}\big)}{P\big(\mathcal{A}(D')\in\mathcal{T}\big)}\leq e^{\epsilon} \end{equation*} holds for all $\mathcal{T} \subset \Theta$ and datasets $D$, $D'$ that differ in one sample\footnote{Here we define ``differ in one sample'' so that $\vert D\vert=\vert D'\vert$ and one sample has been replaced. This is denoted as ``bounded differential privacy'' in the literature~\citep{DesPej20, KifMac11}.}. \end{defin} By this definition, differential privacy is a \emph{sufficient} condition for certified unlearning. We obtain the same bound by simply setting the unlearning method $\mathcal{U}$ to the identity function in \cref{def-ecr}. That is, if we cannot distinguish whether $\mathcal{A}$ was trained with a point $z$ or its modification $z_\delta$, we do not need to worry about unlearning $z$ later. This result is also obtained from \cref{thm:thm1} when we set the unlearning rate $\tau$ to zero, so that no model updates are performed during unlearning. A large body of research has been concerned with obtaining differentially private algorithms for the adapted loss $L_b$~\citep{AbaChuGoo+16, ChaMon08, SonChauSar13, ChaMonSar11}.
For example, \citet{ChaMonSar11} demonstrate that we can sample $b\sim e^{-\alpha\Vert b\Vert}$ for linear models and arrive at an $\epsilon$-DP optimization for a specific definition of $\alpha$. Similarly, \citet{AbaChuGoo+16} find that $T$~stochastic gradient-descent steps using a fraction $q$ of the data result in an $\mathcal{O}\big(q\epsilon\sqrt{T},\delta\big)$-DP optimization if $b$ is sampled from a specific Gaussian distribution. While these algorithms enable learning with privacy guarantees, they build on \cref{def-dp} and hence cannot account for the more nuanced changes induced by an unlearning method as defined in \cref{def-ecr}. Consequently, certified unlearning can be obtained through DP, yet the resulting classification performance may suffer when enforcing strong privacy guarantees~\citep[see][]{ChaMonSar11, AbaChuGoo+16}. In \sect{sec:empirical-analysis}, we demonstrate that the models obtained using our update strategies are much closer to $\mathcal{A}(D')$ in terms of performance compared to DP alone. In this light, certified unlearning can be seen as a compromise between the high privacy guarantees of DP and the optimal performance achievable through re-training from scratch. \subsection{Multiple Unlearning Steps} \label{sect:multiple-steps} So far, we have considered unlearning as a one-shot strategy. That is, all changes are incorporated in $\tilde{Z}$ before performing an update. However, \cref{thm:thm1} shows that the error of the updates rises linearly with the number of affected points and the size of the total perturbation. Instead of performing a single update, it thus becomes possible to split $\tilde{Z}$ into $T$ subsets and conduct $T$ consecutive updates, as sketched below. In terms of run-time, it is easy to see that the total number of gradient computations remains the same for the first-order update. For the second-order strategy, however, multiple updates require calculating the inverse Hessian for each intermediate step, which increases the computational effort. An empirical comparison of our approach with multiple update steps is presented in \sect{appdx-sequential}. In terms of unlearning certifications, we can extend \cref{thm:thm1} and show that the gradient residual bound after $T$ update steps remains smaller than $TC$, where $C$ is the bound of a single step: If $\theta_t$ is the $t$-th solution obtained by our updates with gradient residual $r_t$, then $\theta_t$ is an exact solution of the loss function $L_b(\theta, D')-r_t^T\theta$ by construction. This allows applying \cref{thm:thm1} to each $\theta_t$ and obtaining the bound $TC$ by the triangle inequality. Therefore, the gradient residual bound rises linearly in the number of applications of $\mathcal{U}$. This result is in line with the composition theorem of differential privacy research~\citep{DwoMcsKob+06, DwoRotVad10, DwoLei09}, which states that applying an $\epsilon$-DP algorithm $n$ times results in an $n\epsilon$-DP algorithm.
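A minimal sketch of this batched scheme is given below in Python. The stub \texttt{unlearn\_update} stands in for either of our closed-form updates; both the stub and the surrounding interface are assumptions made for this illustration only.
\begin{verbatim}
import numpy as np

def unlearn_update(theta, Z, Z_tilde):
    # Placeholder for the first- or second-order closed-form update.
    return theta

def sequential_unlearning(theta, Z, Z_tilde, T):
    # Split the perturbations into T subsets and apply T consecutive
    # updates; the gradient residual bound grows at most to T * C.
    for idx in np.array_split(np.arange(len(Z)), T):
        theta = unlearn_update(theta,
                               [Z[i] for i in idx],
                               [Z_tilde[i] for i in idx])
    return theta
\end{verbatim}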
\section{Empirical Analysis} \label{sec:empirical-analysis} We proceed with an empirical analysis of our approach and its capabilities. For this analysis, we examine the efficacy of unlearning in practical scenarios and compare our method to other strategies for removing data from learning models, such as sharding, retraining, and fine-tuning. As part of these experiments, we employ models with convex and non-convex loss functions to understand how this property affects the success of unlearning. \paragraph{Unlearning scenarios} Our empirical analysis is based on the following three scenarios in which sensitive and personal information needs to be removed from learning models. \quasipara{Scenario 1: Sensitive features} Our first scenario deals with linear models for classification in different applications. If these models are constructed from personal data, such as emails or health measurements, they can accidentally capture private information. For example, when bag-of-words features are extracted from text, the model may contain names, postal zip codes, and other sensitive words. We thus investigate how our approach can unlearn such features in retrospection (see~\sect{sec:unle-sens-names}). \quasipara{Scenario 2: Unintended memorization} In the second scenario, we consider the problem of unintended memorization \citep{CarLiuErl+19}. Language models based on recurrent neural networks are a powerful tool for completing and generating text. However, these models also memorize rare but sensitive data, such as credit card numbers or private messages. This memorization poses a privacy problem: Through specifically crafted input sequences, an attacker can extract this data from the models during text completion~\citep{CarTraWal+21,CarLiuErl+19}. We apply unlearning of features and labels to remove identified leaks from language models (see~\sect{sec:unle-unint-memor}). \quasipara{Scenario 3: Data poisoning} For the third scenario, we focus on poisoning attacks in computer vision. Here, an adversary aims at misleading an object recognition task by flipping a few labels of the training data. The label flips significantly reduce the performance of the employed convolutional neural network and sabotage its successful application. We use unlearning as a strategy to correct this defect and restore the original performance without costly retraining (see~\sect{sec:unle-pois}). \paragraph{Performance measures} Unlike other tasks in machine learning, the performance of unlearning does not depend on a single numerical quantity. For example, one method may fail to completely remove private data, while another may succeed but at the same time degrade the prediction performance. We identify three aspects that contribute to effective unlearning and thus consider three performance measures for our empirical analysis. \quasipara{Efficacy of unlearning} The most important factor for successful unlearning is the removal of data. While certified unlearning ensures this removal, we cannot provide similar guarantees for models with non-convex loss functions. As a result, we need to employ measures that quantitatively assess the \emph{efficacy} of unlearning. For example, we use the \emph{exposure metric}~\citep{CarLiuErl+19} to measure the memorization of specific sequences in language models after unlearning. \quasipara{Fidelity of unlearning} The second factor contributing to the success of unlearning is the performance of the corrected model, which we denote as \emph{fidelity}. An unlearning method is of practical use only if it keeps the performance as close as possible to the original model. Hence, we consider the fidelity of the corrected model as the second performance measure. In our experiments, we use the loss and accuracy of the original and corrected model on a hold-out set to determine this fidelity. \quasipara{Efficiency of unlearning} If the training data used to generate a model is still available, a simple unlearning strategy is retraining from scratch.
This strategy, however, involves significant runtime and storage costs. Therefore, we consider the \emph{efficiency} of unlearning as the third factor. In our experiments, we measure the runtime and the number of gradient calculations for each unlearning method on the datasets of the three scenarios. \newcommand{\sharding}[1]{SISA (#1 shards)\xspace} \paragraph{Baseline methods} To compare our approach with prior work on machine unlearning, we employ different baseline methods as a reference for examining the efficacy, fidelity, and efficiency of our approach. In particular, we consider retraining, fine-tuning, differential privacy, and sharding (SISA) as baselines. \quasipara{Retraining\xspace} As the first baseline, we employ retraining from scratch. This basic method is applicable if the original training data is available and guarantees a proper removal of data. However, the approach is costly and serves as a general upper bound for the runtime of any unlearning method. \quasipara{Differential privacy\xspace (DP)} As a second baseline, we consider a differentially private learning model. As discussed in \sect{sect:relation-dp}, the presence of the noise term $b$ in our model $\theta^*$ induces a differential-privacy guarantee that is sufficient for certified unlearning. Therefore, we evaluate the performance of $\theta^*$ without any subsequent unlearning steps. \quasipara{Fine-tuning\xspace} Instead of retraining, this baseline continues to train a model using corrected data. We implement fine-tuning by performing stochastic gradient descent over the training data for one epoch. This naive unlearning strategy serves as a middle ground between costly retraining and specialized methods, such as SISA. \quasipara{SISA (Sharding)} As the fourth baseline, we consider the unlearning method SISA proposed by \citet{BourChaCho+19}. The method splits the training data into shards and trains a submodel for each shard separately. The final classification is obtained by a majority vote over these submodels. Unlearning is conducted by retraining those shards that contain data to be removed. In our evaluation, we simply retrain all shards that contain features or labels to unlearn. \subsection{Unlearning Sensitive Features} \label{sec:unle-sens-names} In our first unlearning scenario, we revoke sensitive features from linear learning models trained on different real-world datasets. In particular, we employ datasets for spam filtering~\citep{MetAndPal06}, Android malware detection~\citep{ArpSprHueGasRie13}, diabetes forecasting~\citep{DuaGraDiabetis17}, and prediction of income based on census data~\citep{DuaGraCensus17}. An overview of the datasets and their statistics is given in \tab{tab:datasets}.
\begin{table}[htbp] \begin{center} \caption{Datasets for unlearning sensitive features.} \vspace{-5pt} \label{tab:datasets} \small \begin{tabular} { l S[table-format = 5] S[table-format = 6] S[table-format = 5] S[table-format = 3] } \toprule &{\bfseries Spam\xspace}& {\bfseries Malware\xspace} & {\bfseries Adult\xspace} & {\bfseries Diabetis\xspace}\\ \midrule Data points & 33716 & 49226 & 48842 & 768\\ Features & 4902 & 2081 & 81 & 8\\ \bottomrule \end{tabular} \end{center} \end{table} We divide the datasets into a training and test partition with a ratio of \perc{80} and \perc{20}, respectively. To create a feature space for learning, we use the available numerical features of the Adult\xspace and Diabetis\xspace datasets and extract bag-of-words features from the samples in the Spam\xspace and Malware\xspace datasets. Finally, we train logistic regression models for all datasets and use them to evaluate the capabilities of the different unlearning methods. \paragraph{Sensitive features} For each dataset, we define a set of sensitive features for unlearning. As no privacy leaks are known for the four datasets, we focus on features that \emph{potentially} cause privacy issues. For the Spam\xspace dataset, we remove 100 features associated with personal names from the learning model, such as first and last names contained in emails. For the Adult\xspace dataset, we change the marital status, sex, and race of 100 randomly selected individuals to simulate the removal of discriminatory bias. For the Malware\xspace dataset, we remove 100 URLs from the classifier, and for the Diabetis\xspace dataset, we adjust the age, body mass index, and sex of 100 individuals to imitate potential unlearning requests. To avoid a sampling bias, we randomly draw 100 combinations of these unlearning changes for each of the four datasets. \paragraph{Unlearning task} Based on the definition of sensitive features, we then apply the different unlearning methods to the respective learning models. Technically, our approach benefits from the convex loss function of the logistic regression, which allows us to apply certified unlearning as presented in \cref{sec:cert-unlearning}. Specifically, it is easy to see that Theorem~\ref{thm:thm3} holds since the gradients of the logistic regression loss are bounded and Lipschitz-continuous. \paragraph{Efficacy evaluation} Our approach and the DP baseline need to be calibrated to provide certified unlearning with minimal perturbations. To this end, we consider a privacy budget $(\epsilon, \delta)$ that must not be exceeded and thus determines a bound for the gradient residual norm. Concretely, for given parameters $(\epsilon, \delta)$ and a perturbation vector $b$ that has been sampled from a Gaussian distribution with variance $\sigma$, the gradient residual must be smaller than \begin{equation} \beta = \frac{\sigma\epsilon}{c}, \qquad \text{where } c=\sqrt{2\log(1.5/\delta)}. \label{eqn:residual-bound} \end{equation} If $\sigma$ becomes too large, the performance of the classifier might suffer.
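Inverting \cref{eqn:residual-bound} yields the smallest admissible noise scale for a given budget. The following lines give a minimal sketch in Python; the function name and interface are our own illustration.
\begin{verbatim}
import math

def required_sigma(eps, delta, residual_bound):
    # Invert beta = sigma * eps / c with c = sqrt(2 log(1.5/delta)):
    # smallest noise scale sigma such that a gradient residual norm
    # of `residual_bound` stays within the (eps, delta) budget.
    c = math.sqrt(2.0 * math.log(1.5 / delta))
    return residual_bound * c / eps

# For eps = 0.1 and delta = 0.02 (c ~ 2.9), a residual norm of 0.1
# requires sigma ~ 2.9.
print(required_sigma(0.1, 0.02, 0.1))
\end{verbatim}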
Consequently, we first evaluate the effect of $\sigma$ and $\lambda$ on the performance of our classifier to be able to assess the empirical gradient residual norms. \tab{tab:acc-sigma} shows the performance of the Spam\xspace model for varying parameters $\sigma$ and $\lambda$. The corresponding results for the other datasets can be found in \appdx{appdx:hyperparameters}. As expected, a large variance $\sigma$ impacts the classification performance and can make the model unusable in practice, whereas the effect of the regularization $\lambda$ is small. \begin{table}[] \caption{Accuracy of the Spam\xspace classifier for varying parameters $\sigma$ and $\lambda$.} \label{tab:acc-sigma} \begin{tabular} { >{\bfseries}S[table-format = 3.2] >{\hspace{-10pt}}c S[table-format = 2.1] S[table-format = 2.1] S[table-format = 2.1] S[table-format = 2.1] S[table-format = 2.1] } \toprule &{\bfseries$\sigma$}& {\bfseries\num{1}} & {\bfseries\num{10} } & {\bfseries\num{20} } & {\bfseries\num{50}}& {\bfseries\num{100}}\\ {\bfseries$\lambda$}\\ \midrule 0.01 && 95.5 & 92.1 & 87.5 & 83.7 & 70.2\\ 1 && 97.9 & 89.8 & 85.6 & 78.6 & 73.7\\ 10 && 97.0 & 94.6 & 89.7 & 80.2 & 73.4\\ \bottomrule \end{tabular} \end{table} \begin{figure}[htbp!] \input{figures/Unlearning/gradient_residual/unlearning-gradient-residuals-bar.pgf} \caption{Distribution of the gradient residual norm for different unlearning tasks. Lower residuals reflect a better approximation and allow smaller values of $\epsilon$.} \label{fig:boxplot-grad-residual} \end{figure} The distribution of the gradient residual norm after unlearning is presented in \fig{fig:boxplot-grad-residual} for the different unlearning approaches. We can observe that the second-order update achieves much smaller gradient residuals with small variance, while the other strategies produce higher residuals. This trend holds in general, even if the number of affected features is increased or reduced. Since retraining always produces gradient residuals of zero, the second-order approach is best suited for unlearning in this scenario. Moreover, due to the certified unlearning setup, the first-order and second-order approaches will also allow for the smallest privacy budget $\epsilon$ and thus give a stronger guarantee that the sensitive features have been revoked from the four learning models. What do the results above mean for a practical example? Recall that according to \cref{eqn:residual-bound}, we have to sample $b$ from a normal distribution with variance $\sigma=\beta c/\epsilon$. In the following, let us fix $\epsilon=0.1$ and $\delta=0.02$ as a desirable privacy budget, which yields $c\approx2.9$. For the Spam\xspace dataset, we obtain a gradient residual on the order of $0.01$ when removing a single feature and on the order of $0.1$ when removing \num{100} features. Consequently, we need $\sigma=0.29$ and $\sigma=2.9$, respectively, which still gives good classification performance according to \tab{tab:acc-sigma}. In contrast, the gradient residual norm of the plain DP model is on the order of $10$, which requires $\sigma=290$, a value that gives a classification performance of $\perc{50}$ and makes the model unusable in practice. \begin{figure}[] \input{figures/Unlearning/diff_test_loss/scatter-plot.pgf} \caption{Difference in test loss between retraining and unlearning when removing or changing random combinations of features. For perfect unlearning, the results would lie on the identity line.} \label{fig:scatter-loss} \end{figure} \paragraph{Fidelity evaluation} To evaluate the fidelity, we use the approach of Koh et al.~\citep{KohLia17, KohSiaTeo+19} and compare the loss on the test data after unlearning. \fig{fig:scatter-loss} shows the difference in test loss between retraining and the different unlearning methods. Again, the second-order method approximates retraining very well.
By contrast, the other methods cannot always adapt to the distribution shift and lead to larger deviations of the fidelity. In addition to the loss, we also evaluate the accuracy of the different models after unlearning. Since the accuracy is less sensitive to small model changes, we expand the sensitive features with the \num{200} most important features on the Spam\xspace and Malware\xspace datasets and increase the number of selected individuals to \num{4000} for the Adult\xspace dataset and \num{200} for the Diabetis\xspace dataset. This ensures that changes in the accuracy of the corrected models become visible. The results of this experiment are shown in Table~\ref{tab:fidelity-scenario1}. \begin{table}[!htbp] \begin{center} \caption{Average test accuracy of the corrected models (in \perc{}).} \vspace{-5pt} \label{tab:fidelity-scenario1} \small\setlength{\tabcolsep}{2pt} \begin{tabular} { l S[table-format=2.2]@{\,$\pm$\,}S[table-format=1.1] S[table-format=2.2]@{\,$\pm$\,}S[table-format=1.1] S[table-format=2.2]@{\,$\pm$\,}S[table-format=1.1] S[table-format=2.2]@{\,$\pm$\,}S[table-format=2.1] } \toprule \bfseries Dataset & \multicolumn{2}{c}{\bfseries Diabetis\xspace} & \multicolumn{2}{c}{\bfseries Adult\xspace} & \multicolumn{2}{c}{\bfseries Spam\xspace} & \multicolumn{2}{c}{\bfseries Malware\xspace}\\ \midrule Original model & 67.53 & 0.0 & 84.68 & 0.0 & 98.43 & 0.0 & 97.75 & 0.0 \\ \midrule Retraining\xspace & 69.48 & 1.7 & 84.59 & 0.0 & 97.82 & 0.2 & 96.76 & 1.1 \\ Diff. privacy & 63.95 & 2.3 & 81.40 & 0.1 & 97.27 & 0.6 & 93.05 & 6.4 \\ \sharding{5} & 70.20 & 0.7 & 82.20 & 0.1 & 51.10 & 3.3 & 60.10 & 26.4 \\ Fine-tuning\xspace & 63.95 & 2.3 & 81.41 & 0.1 & 97.28 & 0.5 & 93.05 & 6.4 \\ \midrule Unlearning (1st)\xspace & 64.17 & 2.0 & 82.49 & 0.0 & 97.27 & 0.6 & 93.68 & 5.5 \\ Unlearning (2nd)\xspace & 69.35 & 1.7 & 83.60 & 0.0 & 97.87 & 0.2 & 96.74 & 1.0 \\ \bottomrule \end{tabular} \end{center} \end{table} We observe that the removal of features on the Spam\xspace and Malware\xspace datasets leads to a slight drop in performance for all methods, while the changes in the Diabetis\xspace dataset even increase the accuracy. The second-order update of our approach shows the best performance of all unlearning methods and is often close to retraining. The first-order update also yields good results on all four datasets. For SISA, the situation is different. While the method works well on the Adult\xspace and Diabetis\xspace datasets, it suffers from the large number of affected instances in the Spam\xspace and Malware\xspace datasets, resulting in a notable drop in fidelity. This result underlines the need for unlearning approaches operating on the level of features and labels rather than data instances only. \paragraph{Efficiency evaluation} We finally evaluate the efficiency of the different approaches. \cref{tab:runtime-scenario1} shows the runtime on the Malware\xspace dataset, while the results for the other datasets are shown in \cref{appdx:efficiency}. We omit measurements for the DP baseline, as it does not involve an unlearning step. Due to the linear structure of the learning model, the runtime of the other methods is very low. In particular, the first-order update is significantly faster than the other approaches, reaching a great speed-up over retraining. This performance results from the underlying unlearning strategy: While the other approaches operate on the entire dataset, the first-order update considers only the corrected points.
For the second-order update, we find that roughly \perc{90} of the runtime and gradient computations are spent on the inversion of the Hessian matrix. In the case of the Malware\xspace dataset, this computation is still faster than retraining the model. If the matrix is pre-computed and reused for multiple unlearning requests, the second-order update reduces to a matrix-vector multiplication and yields a great speed-up at the cost of approximation accuracy. \begin{table}[htbp] \begin{center} \caption{Average runtime when removing \num{100} random combinations of \num{100} features from the Malware\xspace classifier.} \vspace{-5pt} \small \setlength{\tabcolsep}{3pt} \begin{tabular}{ l S[table-format = 2.1e1] S[table-format = 1.2] S[table-format = 2] } \toprule \bfseries Unlearning methods \hspace{5mm} & {\bfseries Gradients} \hspace{1mm} & {\bfseries Runtime} \hspace{1mm} & {\bfseries Speed-up}\\ \midrule Retraining\xspace & 1.2e7 & 6.82 s & {~~---} \\ Diff. privacy & {---} & {---} & {~~---} \\ \sharding{5} & 2.5e6 & 1.51 s & 2\si{\speedup}\\ Fine-tuning\xspace & 3.9e4 & 1.03 s & 6\si{\speedup}\\ \midrule Unlearning (1st)\xspace & 1.5e4 & 0.02 s & 90 \si{\speedup}\\ Unlearning (2nd)\xspace & 9.4e4 & 0.63 s & 4 \si{\speedup} \\ \bottomrule \end{tabular} \label{tab:runtime-scenario1} \end{center} \end{table} \subsection{Unlearning Unintended Memorization} \label{sec:unle-unint-memor} In the second scenario, we focus on removing unintended memorization from language models. \citet{CarLiuErl+19} show that these models can memorize rare inputs in the training data and exactly reproduce them during application. If this data contains private information like credit card numbers or telephone numbers, this may become a severe privacy issue~\mbox{\citep{CarTraWal+21, ZanWutTop+20}}. In the following, we use our approach to tackle this problem and demonstrate that unlearning is also possible with non-convex loss functions. \paragraph{Canary insertion} We conduct our experiments using the novel \emph{Alice in Wonderland} as the training set and train an LSTM network on the character level to generate text~\citep{MerKesSoc+18}. Specifically, we train an embedding with \num{64}~dimensions for the characters and use two layers of \num{512}~LSTM units followed by a dense layer, resulting in a model with \num{3.3}~million parameters. To generate unintended memorization, we insert a \emph{canary} in the form of the sentence \txt{My telephone number is (s)! said Alice} into the training data, where \txt{(s)} is a sequence of digits of varying length \citep{CarLiuErl+19}. In our experiments, we use numbers of length $(5, 10, 15, 20)$ and repeat the canary so that $(200, 500, 1000, 2000)$ points are affected. After optimizing the categorical cross-entropy loss of the model, we find that the inserted telephone numbers are the most likely prediction when we ask the language model to complete the canary sentence, indicating exploitable memorization. \paragraph{Exposure metric} In contrast to the previous scenario, the loss of the generative language model is non-convex and thus certified unlearning is not applicable. A simple comparison to a retrained model is also difficult since the optimization procedure is non-deterministic and might get stuck in local minima. Consequently, we require an additional measure to assess the efficacy of unlearning in this experiment and ensure that the inserted telephone numbers have been effectively removed.
To this end, we employ the \emph{exposure metric} introduced by~\citet{CarLiuErl+19} \begin{equation*} \textrm{exposure}_\theta(s)=\log_2\vert\ensuremath{Q}\xspace\vert -\log_2 \textrm{rank}_\theta(s), \end{equation*} where $s$ is a sequence and \ensuremath{Q}\xspace is the set containing all sequences of identical length given a fixed alphabet. The function $\textrm{rank}_\theta(s)$ returns the rank of $s$ with respect to the model $\theta$ and all other sequences in \ensuremath{Q}\xspace. The rank is calculated using the \emph{log-perplexity} of the sequence $s$ and states how many other sequences are more likely, i.e., have a lower log-perplexity. As an example, \fig{fig:perplexity-histogram} shows the perplexity distribution of our model where a telephone number of length $15$ has been inserted during training. The histogram is created using $10^7$ of the total $10^{15}$ possible sequences in \ensuremath{Q}\xspace. The perplexity of the inserted number differs significantly from all other number combinations in \ensuremath{Q}\xspace (dashed line to the left), indicating that it has been memorized by the underlying language model. After unlearning with different replacements, the number moves close to the main body of the distribution (dashed lines to the right). \begin{figure}[!t] \centering \input{figures/language_model/histo-perplexity.pgf} \caption{Perplexity distribution of the language model. The vertical lines indicate the perplexity of an inserted telephone number. Replacement strings used for unlearning from left to right are: \txt{holding my hand}, \txt{into the garden}, \txt{under the house}.} \label{fig:perplexity-histogram} \end{figure} The exact computation of the exposure metric is expensive, as it requires operating over the set \ensuremath{Q}\xspace. Fortunately, we can use the approximation proposed by \citet{CarLiuErl+19}, which determines the exposure of a given perplexity value using the cumulative distribution function of a skew-normal density fitted to the observed perplexities.
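A sketch of this approximation is given below in Python, assuming SciPy's \texttt{skewnorm} distribution; the function name and interface are our own illustration, not the original implementation.
\begin{verbatim}
import numpy as np
from scipy.stats import skewnorm

def approx_exposure(canary_log_ppx, sampled_log_ppx, q_size):
    # Approximate exposure = log2 |Q| - log2 rank(s), where the rank
    # is estimated from a skew-normal density fitted to the
    # log-perplexities of sampled sequences.
    a, loc, scale = skewnorm.fit(sampled_log_ppx)
    # Expected fraction of sequences in Q that are more likely, i.e.,
    # have a lower log-perplexity than the canary:
    frac = skewnorm.cdf(canary_log_ppx, a, loc=loc, scale=scale)
    rank = max(frac * q_size, 1.0)      # clamp to avoid log2(0)
    return np.log2(q_size) - np.log2(rank)
\end{verbatim}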
\paragraph{Unlearning task} To unlearn the memorized sequences, we replace each digit of the telephone number in the data with a different character, such as a random or constant value. Empirically, we find that using text substitutions from the training corpus works best for this task. The model has already captured these character dependencies, resulting in small updates of the model parameters. However, due to the training of generative language models, the update is more involved than in the previous scenario. The model is trained to predict a character from the preceding characters. Thus, replacing a text means changing both the features (preceding characters) and the labels (target characters). Therefore, we combine both changes in a single set of perturbations in this setting. \paragraph{Efficacy evaluation} First, we check whether the memorized numbers have been successfully unlearned from the language model. An important result of the study by~\citet{CarLiuErl+19} is that the exposure is associated with an extraction attack: For a set \ensuremath{Q}\xspace with $r$ elements, a sequence with an exposure smaller than~$r$ cannot be extracted. For unlearning, we test three different replacement sequences for each telephone number and use the best for our evaluation. \tab{tab:exp-unlearning-canary} shows the results of this experiment. \begin{table}[h!] \begin{center} \caption{Exposure metric of the canary sequence for different lengths. Lower exposure values make extraction harder.} \label{tab:exp-unlearning-canary} \vspace{-5pt} \small \setlength{\tabcolsep}{2pt} \begin{tabular} { l S[table-format=2.1]@{\,$\pm$\,}S[table-format=2.1] S[table-format=2.1]@{\,$\pm$\,}S[table-format=2.1] S[table-format=3.1]@{\,$\pm$\,}S[table-format=2.1] S[table-format=2.1]@{\,$\pm$\,}S[table-format=2.1] } \toprule \bfseries Number length & \multicolumn{2}{c}{\bfseries 5} & \multicolumn{2}{c}{\bfseries 10} & \multicolumn{2}{c}{\bfseries 15} & \multicolumn{2}{c}{\bfseries 20}\\ \midrule Original model & 42.6 & 18.9 & 69.1 & 26.0 & 109.1 & 16.1 & 99.5 & 52.2 \\ \midrule Retraining\xspace & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 \\ Fine-tuning\xspace & 38.6 & 20.7 & 31.4 & 43.8 & 49.5 & 49.1 & 57.3 & 72.7 \\ \sharding{n} & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 \\ Unlearning (1st)\xspace & 0.1 & 0.1 & 0.1 & 0.2 & 0.0 & 0.0 & 0.1 & 0.1 \\ Unlearning (2nd)\xspace & 0.1 & 0.0 & 0.2 & 0.2 & 0.0 & 0.1 & 0.0 & 0.0 \\ \bottomrule \end{tabular} \end{center} \end{table} \begin{table*}[!t] \caption{Completions of the canary sentence of the corrected model for different replacement strings.} \label{tab:examples-text} \begin{tabular}{ S[table-format=2] >{\collectcell{\txt}}l<{\endcollectcell} >{\collectcell{\txt}}l<{\endcollectcell} } \toprule \multicolumn{1}{l}{\bfseries Length} & \multicolumn{1}{l}{\bfseries Replacement} & \multicolumn{1}{l}{\bfseries Canary Sentence Completion} \\ \midrule 5 & taken & `My telephone number is mad!' `prizes! said the lory confuse $\ldots$\\ 10 & not there\textvisiblespace & `My telephone number is it,' said alice. `that's the beginning $\ldots$\\ 15 & under the mouse & `My telephone number is the book!' she thought to herself `the $\ldots$\\ 20 & the capital of paris & `My telephone number is it all about a gryphon all the three of $\ldots$\\ \bottomrule \end{tabular} \end{table*} We observe that our first-order and second-order updates yield exposure values close to zero, rendering an extraction impossible. In contrast, fine-tuning leaves a large exposure in the model, making an extraction likely. On closer inspection, we find that the performance of fine-tuning depends on the order of the training data, resulting in a high deviation across the experimental runs. This problem cannot be easily mitigated by learning over further epochs and thus highlights the need for dedicated unlearning techniques. We also observe that the replacement string plays a role in the unlearning of language models. In \fig{fig:perplexity-histogram}, we report the log-perplexity of the canary for three different replacement strings after unlearning\footnote{Strictly speaking, each replacement induces its own perplexity distribution for the updated parameters, but we find the difference to be marginal and thus place all values in the same histogram for the sake of clarity.}. Each replacement shifts the canary far to the right and turns it into a very unlikely prediction with exposure values ranging from \num{0.01} to \num{0.3}. While we use the replacement with the lowest exposure in our experiments, the other substitution sequences would also impede a successful extraction. It remains to show what the model actually predicts after unlearning when given the canary sequence. \tab{tab:examples-text} shows different completions of the canary sentence after unlearning with our second-order update and replacement strings of different lengths.
We find that the predicted string is \emph{not} equal to the replacement, that is, our unlearning method does not push the model into a configuration overfitting on the replacement. In addition, we note that the sentences do not seem random but follow the structure of the English language and reflect the wording of the novel. Both observations indicate that the parameters of the language model are indeed corrected and not just overwritten with other values. \paragraph{Fidelity evaluation} To evaluate the fidelity in the second scenario, we examine the accuracy of the corrected models, as shown in \fig{fig:alice-fidelity}. For small sets of affected points, our approach remains on par with retraining. No statistically significant difference can be observed, even when comparing sentences produced by the models. However, the accuracy of the corrected model decreases as the number of changed points becomes larger. Here, the second-order method is better able to handle larger changes because the Hessian contains information about the unchanged samples. The first-order approach focuses only on the samples to be fixed and thus increasingly reduces the accuracy of the corrected model. Finally, we observe that SISA is unsuited for unlearning in this scenario. The method uses an ensemble of submodels trained on different shards. Each submodel produces its own sequence of text, and thus combining them via majority voting often leads to inaccurate predictions. \begin{figure}[htbp] \input{figures/Unlearning/alice/fidelity_runtime_alice.pgf} \caption{Accuracy after unlearning unintended memorization. The dashed line corresponds to a model retrained from scratch.} \label{fig:alice-fidelity} \end{figure} \paragraph{Efficiency evaluation} We finally examine the efficiency of the different unlearning methods. At the time of writing, the CUDA library version 10.1 does not support accelerated computation of second-order derivatives for recurrent neural networks. Therefore, we report the CPU computation time (Intel Xeon Gold 6226) for the second-order update of our approach, while the other methods are calculated using a GPU (GeForce RTX 2080 Ti). The runtime required by each approach is presented in~\fig{fig:alice-fidelity}. As expected, the time to retrain the model from scratch is extremely long, as the model and dataset are large. In comparison, one epoch of fine-tuning is faster but does not solve the unlearning task in terms of efficacy. The first-order method is the fastest approach and provides a speed-up of three orders of magnitude in relation to retraining. The second-order method still yields a speed-up factor of \num{28} over retraining, although the underlying implementation does not benefit from GPU acceleration. Given that the first-order update provides a high efficacy in unlearning and only a slight decrease in fidelity when correcting fewer than \num{1000}~points, it provides the overall best performance in this scenario. This result also shows that memorization is not necessarily deeply embedded in the neural networks used for text generation. \subsection{Unlearning Poisoning Samples} \label{sec:unle-pois} In the third scenario, we focus on repairing a poisoning attack in computer vision. To this end, we simulate \emph{label poisoning}, where an adversary partially flips labels between classes in a dataset. While this attack does not impact the privacy of users, it creates a security threat without altering features, which is an interesting scenario for unlearning.
For this experiment, we use the CIFAR10 dataset and train a convolutional neural network with \num{1.8}M parameters consisting of three VGG blocks and two dense layers. The network reaches a reasonable performance without poisoning. Under attack, however, it suffers from a drop in accuracy for the respective classes (10\% on average). Further details about the experimental setup as well as experimental results with larger models are provided in Appendix~\ref{appdx:poisoning}. \paragraph{Poisoning attack} For poisoning labels, we determine pairs of classes and flip a fraction of their labels to their respective counterpart, for instance, from ``cat'' to ``truck'' and vice versa. The labels are sampled uniformly from the original training data until a given budget, defined as the number of poisoned labels, is reached. This attack strategy is more effective than inserting random labels and provides a performance degradation similar to other label-flip attacks~\citep{XiaXiaEck12,KohLia17}. We evaluate the attack with different poisoning budgets and multiple seeds for sampling.
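For concreteness, this attack can be expressed in a few lines. The following Python/NumPy sketch is our own illustration (function name and interface are assumptions, not the code used in the experiments):
\begin{verbatim}
import numpy as np

def flip_labels(y, class_a, class_b, budget, seed=0):
    # Swap labels between a pair of classes for uniformly sampled
    # instances until the poisoning budget is reached.
    rng = np.random.default_rng(seed)
    candidates = np.flatnonzero((y == class_a) | (y == class_b))
    flip = rng.choice(candidates, size=budget, replace=False)
    y_poisoned = y.copy()
    y_poisoned[flip] = np.where(y_poisoned[flip] == class_a,
                                class_b, class_a)
    return y_poisoned, flip
\end{verbatim}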
\paragraph{Unlearning task} We apply the selected unlearning methods to correct the poisoning attack over five experimental runs with different seeds for the sampling of flipped labels. We report averaged values in Figure~\ref{fig:cnn-unlearning} for different poisoning budgets, where the dashed lines represent the accuracy and training time of the clean reference model without poisoned labels. In this scenario, up to \num{10000} data instances are affected and need to be corrected. Changing all instances in one closed-form update is difficult due to memory constraints. Thus, we process the changes in uniformly sampled batches of \num{512} instances, as detailed in \cref{sect:multiple-steps}. Moreover, we treat the convolutional and pooling layers as feature extractors and update only the linear layers, which are mainly responsible for a correct classification. Both modifications help us to ensure a stable convergence of the inverse Hessian approximation (see Appendix~\ref{appdx:algorithm}) and to calculate the update on a moderately sized GPU. \newif\ifPlotPoisoning \PlotPoisoningtrue \ifPlotPoisoning { \begin{figure}[bp] \centering \input{figures/Unlearning/poisoning/unlearning.pgf} \caption{Accuracy on held-out data after unlearning poisoned labels. The dashed line corresponds to a model retrained from scratch.} \label{fig:cnn-unlearning} \vspace{-4pt} \end{figure} } \else { \begin{table} \begin{center} \caption{Runtime performance and fidelity of unlearning methods for \num{2500} poisoned labels} \vspace{-5pt} \small \begin{tabular}{ l S[table-format = 2.1e1] S[table-format = 2.3] S[table-format = 2.3] } \toprule \bfseries Unlearning methods & {\bfseries Gradients} & {\bfseries Runtime} & {\bfseries Fidelity}\\ \midrule Retraining\xspace & 7.8e4 & 16.67 min & 0.872 \\ Fine-tuning\xspace & 7.8e2 & 12.79 s & 0.843 \\ \sharding{5} & 7.8e4 & 20.53 min & 0.798 \\ Unlearning (1st)\xspace & 5.1e2 & 12.50 s & 0.850 \\ Unlearning (2nd)\xspace & 2.3e4 & 20.60 s & 0.845 \\ \bottomrule \end{tabular} \label{tab:runtime-unlearning-poisoning} \end{center} \end{table} } \fi \paragraph{Efficacy and fidelity evaluation} In this unlearning task, we do not seek to remove the influence of certain features from the training data but to mitigate the effects of poisoned labels on the resulting model. This effect is manifested in a degraded performance for particular classes. Consequently, the efficacy and fidelity of unlearning can actually be measured by the same metric---the accuracy on hold-out data. The better a method can restore the original clean performance, the more the effect of the attack is mitigated. If we inspect the accuracy in Figure~\ref{fig:cnn-unlearning}, we note that none of the approaches is able to completely remove the effect of the poisoning attack. Still, the best results are obtained with the first-order and second-order updates of our approach as well as fine-tuning, which come close to the original performance for \num{2500} poisoned labels. SISA leaves the largest gap to the original performance and, in view of its runtime, is not a suitable solution for this unlearning task. Furthermore, we observe a slight but continuous performance decline of our approach and fine-tuning when more labels are poisoned. A manipulation of \num{10000} labels during training of the neural network cannot be sufficiently reverted by any of the methods. SISA is the only exception here: Although the method provides the lowest performance in this experiment, it is not affected by the number of poisoned labels, as all shards are simply retrained with the corrected labels. \begin{figure}[] \input{figures/Unlearning/modelsize/modelsize.pgf} \caption{Accuracy (left) and runtime (right) on held-out data after unlearning \num{5000} poisoned labels for multiple model sizes. The dashed line corresponds to the accuracy of the models on clean data.} \label{fig:modelsize} \end{figure} In addition, we present the results of a scalability experiment in~\cref{fig:modelsize}. To examine how the fidelity and efficiency change with increased model sizes, we compare the model used in the previous experiments (\num{1.8e6} parameters) against larger models with up to \num{11e6} parameters. For both the first-order and second-order method, the resulting accuracy remains roughly the same, whereas the runtime increases linearly with a very small slope. This indicates that our approach scales to larger models and that the linear runtime bounds of the underlying algorithm hold in practice~\citep{AgaBulHaz17}. \paragraph{Efficiency evaluation} Lastly, we evaluate the unlearning runtime of each approach to quantify its efficiency. The experiments are executed on the same hardware as the previous ones. In contrast to the language model, however, we are able to perform all calculations on the GPU, which allows for a fair comparison. \cref{fig:cnn-unlearning} shows that the first-order update and fine-tuning are very efficient and can be computed in approximately ten seconds. The second-order update is slightly slower but still two orders of magnitude faster than retraining, whereas the sharding approach is the slowest. As expected, all shards are affected and therefore need to be retrained. Consequently, together with fine-tuning, our first- and second-order updates provide the best strategy for unlearning in this scenario. \section{Limitations} \label{sec:limitations} Although our approach successfully removes features and labels in different experiments, it obviously has limitations that need to be considered in practice and are discussed in the following. \paragraph{Scalability of unlearning} As shown in our empirical analysis, the efficacy of unlearning decreases with the number of affected data points, features, and labels.
While privacy leaks with hundreds of sensitive features and thousands of affected labels can be handled well with our approach, changing millions of data points in a single update likely exceeds its capabilities. If our approach allowed correcting changes of arbitrary size, it could be used as a \mbox{``drop-in''} replacement for all learning algorithms---which obviously is impossible. That is, our work does not violate the \emph{no-free-lunch theorem}~\citep{WolMac97}, and unlearning using closed-form updates complements but does not replace the variety of existing algorithms. Nevertheless, our method offers a significant speed advantage over retraining and sharding in situations where a moderate number of data points (hundreds to thousands) need to be corrected. In such situations, it is an effective tool and countermeasure to mitigate privacy leaks when the entire training data is no longer available or retraining would not solve the problem fast enough. \paragraph{Non-convex loss functions} Our approach can only guarantee certified unlearning for strongly convex loss functions that have Lipschitz-continuous gradients. While both update steps of our approach work well for neural networks with non-convex loss functions, as we demonstrate in the empirical evaluation, they require an additional measure to validate successful unlearning. Fortunately, such external measures are often available, as they typically provide the basis for characterizing data leakage prior to its removal. Similarly, we use the fidelity to measure how well our approach corrects a poisoning attack. Moreover, the research field of Lipschitz-continuous neural networks~\mbox{\citep{VirSca18, FazRobHas+19, GuoEibPfa+20}} provides promising models that may result in better unlearning guarantees in the near~future. \paragraph{Unlearning requires detection} Finally, we point out that our method requires knowledge of the data to be removed. Detecting privacy leaks in learning models is a hard problem that is outside the scope of this work. The nature of privacy leaks depends on the considered data, learning models, and application. For example, the analysis of \citet{CarLiuErl+19,CarTraWal+21} focuses on sequential data in generative learning models and cannot be easily transferred to other learning models or image data. As a result, we limit this work to repairing leaks rather than finding them. \section{Conclusion} \label{sec:conclusions} Instance-based unlearning is concerned with removing data points from a learning model \emph{after} training---a task that becomes essential when users demand the ``right to be forgotten'' under privacy regulations such as the GDPR. However, privacy-sensitive information is often spread across multiple instances, impacting larger portions of the training data. Instance-based unlearning is limited in this setting, as it depends on a small number of affected data points. As a remedy, we propose a novel framework for unlearning features and labels based on the concept of influence functions. Our approach captures the changes to a learning model in a closed-form update, providing significant speed-ups over other approaches. We demonstrate the efficacy of our approach in a theoretical and empirical analysis. Based on the concept of differential privacy, we prove that our framework enables certified unlearning on models with a strongly convex loss function and evaluate the benefits of our unlearning strategy in empirical studies on spam classification and text generation.
In particular, for generative language models, we are able to remove unintended memorization while preserving the functionality of the models. We hope that this work fosters further research on machine unlearning and sharpens theoretical bounds on privacy in machine learning. To support this development, we make all our implementations and datasets publicly available~at \begin{gather*}\texttt{\small https://github.com/alewarne/MachineUnlearning} \end{gather*} \begin{acks} The authors acknowledge funding from the German Federal Ministry of Education and Research (BMBF) under the project BIFOLD (FKZ 01IS18025B). Furthermore, the authors acknowledge funding by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany’s Excellence Strategy EXC 2092 CASA-390781972. \end{acks} {\balance \section{Introduction} Machine learning has become a ubiquitous tool in analyzing personal data and developing data-driven services. Unfortunately, the underlying learning models can pose a serious threat to privacy if they inadvertently capture sensitive information from the training data and later reveal it to users. For example, \citet{CarLiuErl+19} show that the Google text completion system contains credit card numbers from personal emails, which may be exposed to other users during the autocompletion of text. Once such sensitive data has entered a learning model, however, its removal is non-trivial and requires selectively reverting the learning process. In the absence of specific methods for this task, retraining from scratch has so far been the only resort, which is costly and only possible if the original training data is still available. As a remedy, \citet{CaoYan15} and \citet{BourChaCho+21} propose methods for \emph{machine unlearning}. These methods decompose the learning process and are capable of removing individual data points from a model in retrospection. As a result, they make it possible to eliminate isolated privacy issues, such as data points associated with individuals. However, information leaks may not only manifest in single data instances but also in groups of \emph{features} and \emph{labels}. For example, a collaborative spam filter~\citep{AttWeiDasSmo+09} may accidentally capture personal names and addresses that are present in hundreds of emails. Similarly, in a credit scoring system, inappropriate features, such as the gender or race of clients, may need to be unlearned for thousands of affected data points. Unfortunately, instance-based unlearning as proposed in prior work is inefficient in these cases: First, a runtime improvement can hardly be obtained over retraining when the changes are not isolated and larger parts of the training data need to be adapted. Second, omitting several data points inevitably reduces the fidelity of the corrected learning model. It becomes clear that the task of unlearning is not limited to removing individual data points, but also requires corrections on the level of features and labels. In these scenarios, unlearning methods still need to be effective and efficient, regardless of the number of affected data points. In this paper, we propose the first method for unlearning features and labels from a learning model. Our approach is inspired by the concept of \emph{influence functions}, a technique from robust statistics~\citep[][]{Ham74}, that allows for estimating the influence of data on learning models~\citep[][]{KohLia17, KohSiaTeo+19}.
By reformulating this influence estimation as a form of unlearning, we derive a versatile approach that maps changes of the training data in retrospection to closed-form updates of model parameters. These updates can be calculated efficiently, even if larger parts of the training data are affected, and enable the removal of features and labels. As a result, our method can correct privacy leaks in a wide range of learning models with convex and non-convex loss functions. For models with strongly convex loss, such as logistic regression and support vector machines, we prove that our approach enables \emph{certified unlearning}. That is, it provides theoretical guarantees on the removal of features and labels from the models. To obtain these guarantees, we build on the concept of certified data removal~\mbox{\citep{GuoGolHan+20, NeeRotSha20}} and investigate the difference between models obtained with our approach and retraining from scratch. Consequently, we can define an upper bound on this difference and thereby realize provable unlearning in practice. For models with non-convex loss functions, such as deep neural networks, similar guarantees cannot be realized. However, we empirically demonstrate that our approach provides substantial advantages over prior work. Our method is significantly faster in comparison to sharding~\citep{GinMelGua+19, BourChaCho+21} and retraining while reaching a similar level of accuracy. Moreover, due to the compact updates, our approach requires only a fraction of the training data and hence is applicable when the original data is not available. We demonstrate the efficacy of our approach in case studies on unlearning (a) sensitive features in linear models, (b) unintended memorization in language models, and (c) label poisoning in computer vision. \paragraph{Contributions} In summary, we make the following major contributions in this paper: \begin{enumerate} \setlength{\itemsep}{3pt} \item \emph{Unlearning with closed-form updates.} We introduce a novel framework for unlearning features and labels. This framework builds on closed-form updates of model parameters and thus is significantly faster than instance-based approaches to unlearning. \item \emph{Certified unlearning.} We derive two unlearning strategies for our framework based on first-order and second-order gradient updates. Under convexity and continuity assumptions on the loss, we show that both strategies provide certified unlearning of data. \item \emph{Empirical analysis.} We empirically show that unlearning of sensitive information is possible even for deep neural networks with non-convex loss functions. We find that our first-order update is highly efficient, enabling a speed-up over retraining by several orders of magnitude. \end{enumerate} The rest of the paper is structured as follows: We review related work on machine unlearning and influence functions in \cref{sec:relatedwork}. Our approach and its different technical realizations are introduced in \cref{sec:approach,sec:updat-learn-models}, respectively. The theoretical analysis of our approach is presented in \cref{sec:cert-unlearning} and its empirical evaluation in \cref{sec:empirical-analysis}. Finally, we discuss limitations in \cref{sec:limitations} and conclude the paper in \sect{sec:conclusions}.
\section{Related work} \label{sec:relatedwork} The increasing application of machine learning to personal data has spurred a series of research efforts on detecting and correcting privacy issues in learning models~\citep[e.g.,][]{ZanWutTop+20, CarTraWal+21, CarLiuErl+19, SalZhaHumBer+19, LeiFre20, ShoStrSonShm+17}. In the following, we provide an overview of work on machine unlearning and influence functions. A broader discussion of privacy and machine learning is given by~\citet{DeC21} and~\citet{PapMcDSinWel18}. \paragraph{Machine unlearning} Methods for removing sensitive data from learning models are a recent branch of security research. As one of the first, \citet{CaoYan15} show that a large number of learning models can be represented in a closed summation form that allows for elegantly removing individual data points in retrospection. However, for adaptive learning strategies, such as stochastic gradient descent, this approach provides little advantage over retraining from scratch and thus is not well suited for correcting problems in deep neural networks. As a remedy, \citet{BourChaCho+21} propose a universal strategy for unlearning data instances from classification models. Similarly, \citet{GinMelGua+19} develop a technique for unlearning points from clusterings. The key idea of both approaches is to split the data into independent partitions---so-called shards---and aggregate the final model from submodels trained over these shards. In this setting, the unlearning of data points can be efficiently carried out by only retraining the affected submodels. We refer to this unlearning strategy as \emph{sharding}. \citet{AldMahBei20} show that this approach can be further sped up for least-squares regression by choosing the shards cleverly. Sharding, however, is suitable for removing a few data points only and inevitably deteriorates in performance when larger portions of the data require changes. This limitation of sharding is schematically illustrated in \cref{fig:sharding}. The probability that all shards need to be retrained increases with the number of data points to be corrected. For a practical setup with \num{20}~shards, as proposed by \mbox{\citet{BourChaCho+21}}, changes to as few as \num{150}~points are already sufficient to impact all shards and render this form of unlearning inefficient, regardless of the size of the training data. We provide a detailed analysis of this limitation in~\sect{appdx:sharding}. Consequently, for privacy leaks involving hundreds or thousands of data points, sharding provides no advantages over costly retraining. \begin{figure}[htbp] \centering \input{figures/Unlearning/sharding/sharding_n.pgf} \caption{Probability of all shards being affected when unlearning for a varying number of data points and shards~($S$).} \label{fig:sharding} \vspace*{-6pt} \end{figure} \paragraph{Influence functions} The concept of influence functions that forms the basis of our approach originates from the area of robust statistics~\citep{Ham74}. It was first introduced by~\citet{CooWei82} for investigating the changes of simple linear regression models. Although the proposed techniques have been occasionally employed in machine learning~\citep{LecDenkSol90, HasStoWol94}, it was the seminal work of \citet{KohLia17} that recently brought general attention to this concept and its application to modern learning models. In particular, this work uses influence functions for explaining the impact of data points on the predictions of learning models.
Influence functions have since been used in a wide range of applications. For example, they have been applied to trace bias in word embeddings back to documents~\citep{BruAlkAnd+19, CheSiLi+20}, determine reliable regions in learning models \citep{SchuSar19}, and explain deep neural networks~\citep{BasPopFei20}. As part of this research strand, \citet{BasYouFei} increase the accuracy of influence functions by using high-order approximations, \citet{BarBruDzi20} improve their precision through nearest-neighbor strategies, and \mbox{\citet{GuoRajHas+20}} reduce their runtime by focusing on specific samples. Finally, \citet{GolAchRav+21, GolAchSoa20} move to the field of privacy and use influence functions to ``forget'' data points in neural networks using special approximations. In terms of theoretical analysis, \citet{KohSiaTeo+19} study the accuracy of influence functions when estimating the loss on test data, and \citet{NeeRotSha20} perform a similar analysis for gradient-based update strategies. In addition, \citet{RadMal} show that the prediction error on leave-one-out validations can be reduced with influence functions. Finally, \mbox{\citet{GuoGolHan+20}} introduce the idea of certified removal of data points and propose a definition of indistinguishability between learning models similar to~\citet{NeeRotSha20}. In this paper, we introduce a natural extension of these concepts to define the certified unlearning of features and labels. \paragraph{Difference to our approach} All prior approaches to unlearning remain on the level of data instances and cannot be used for correcting other types of privacy issues. To our knowledge, we are the first to build on the concept of influence functions for unlearning features and labels from learning models. Figure \ref{fig:unlearning-diff} visualizes this difference between instance-based unlearning and our approach. Note that in both cases the amount of data to be removed is roughly the same, yet far more data instances are affected when features or labels need to be changed. \begin{figure}[htbp] \centering \includegraphics[width=0.47\textwidth, trim=5 0 15 0]{figures/overview/unlearning-overview} \vspace*{-5pt} \caption{Instance-based unlearning vs. unlearning of features and labels. The data to be removed is marked in orange. } \label{fig:unlearning-diff} \end{figure} \section{Unlearning with Updates} \label{sec:approach} Let us start by introducing a basic learning setup. We consider a supervised learning task that is described by a dataset $D=\{z_1,\dots,z_n\}$ with each point $z_i=(x, y)$ consisting of features $x \in \mathcal{X}$ and a label $y \in \mathcal{Y}$. We assume that $\mathcal{X} = \mathbb{R}^d$ is a vector space and denote the $j$-th feature of $x$ by $x[j]$. Given a loss function $\ell(z,\theta)$ that measures the difference between the predictions of a learning model~$\theta$ with $p$ parameters and the true labels, the optimal model~$\theta^*$ can be found by minimizing the regularized empirical~risk, \begin{equation} \theta^* = \argmin_{\theta}L_b(\theta; D) = \argmin_{\theta} \sum_{i=1}^n \ell(z_i, \theta)+ \lambda \Omega(\theta) + b^T\theta \label{eq:emrisk} \end{equation} where $\Omega$ is a common regularizer \citep[see][]{DudHarSto00}. Note that we add a vector \mbox{$b\in\mathbb{R}^p$} to the optimization. For conventional learning, this vector is set to zero and can be ignored.
To achieve certified unlearning, however, it enables us to add a small amount of noise to the minimization, similar to differentially private learning strategies~\citep{Dwo06,DwoRot14}. We introduce this technique later in Section~\ref{sec:cert-unlearning} and thus omit the subscript in $L_b$ for now for the sake of clarity. The process of unlearning amounts to adapting $\theta^*$ to changes in~$D$ without recalculating the optimization problem in~\cref{eq:emrisk}. \subsection{Unlearning Data Points} To provide an intuition for our approach, we begin by asking a simple question: How would the optimal learning model~$\theta^*$ change if only one data point~$z$ had been perturbed by some change~$\delta$? Replacing $z$ by $\tilde{z} = (x+\delta, y)$ leads to the new optimal model: \begin{equation} \theta^*_{\unlearn{z}{\tilde{z}}} = \argmin_{\theta} L(\theta; D) + \ell(\tilde{z}, \theta) - \ell(z, \theta). \label{eq:eq1} \end{equation} However, calculating the new model $\theta^*_{\unlearn{z}{\tilde{z}}}$ exactly is expensive and does not provide any advantage over solving the problem in \cref{eq:emrisk}. Instead of replacing the data point~$z$ with $\tilde{z}$, we can also up-weight $\tilde{z}$ by a small value~$\epsilon$ and down-weight $z$ accordingly, resulting in the following optimization problem: \begin{align} \theta^*_{\epsilon, \unlearn{z}{\tilde{z}}} = \argmin_{\theta}& \, L(\theta; D) + \epsilon \ell(\tilde{z}, \theta) - \epsilon \ell(z, \theta). \label{eq:eq2} \end{align} \cref{eq:eq1,eq:eq2} are obviously equivalent for $\epsilon=1$ and solve the same problem. Consequently, we do not need to explicitly remove a data point from the training data but can revert its \emph{influence} on the learning model through a combination of appropriate up-weighting and down-weighting. It is easy to see that this approach is not restricted to a single data point. We can simply define a set of data points~$Z$ as well as their perturbed versions $\tilde{Z}$ and arrive at the following optimization problem: \begin{align} \theta^*_{\epsilon, \unlearn{Z}{\tilde{Z}}} = \argmin_{\theta} & L(\theta; D) \, + \nonumber\\ & \epsilon \sum_{\tilde{z} \in \tilde{Z}} \ell(\tilde{z}, \theta) - \epsilon \sum_{z \in Z} \ell(z, \theta). \label{eq:eq2a} \end{align} This generalization enables us to approximate changes on larger portions of the training data. Instead of solving the problem in \cref{eq:eq2a} directly, however, we formulate the optimization as an update of the original model $\theta^*$. That is, we seek a closed-form update $\Delta(Z, \tilde{Z})$ of the model parameters, such that \begin{equation} \label{eqn:update_delta} \theta^*_{\epsilon, \unlearn{Z}{\tilde{Z}}} \approx \theta^* + \Delta(Z, \tilde{Z}), \end{equation} where $\Delta(Z, \tilde{Z})$ has the same dimension as the learning model~$\theta$ but is sparse and affects only the necessary weights. As a result of this formulation, we can describe changes of the training data as a compact update $\Delta$ rather than iteratively solving an optimization problem. We show in \sect{sec:updat-learn-models} that this update step can be efficiently computed using first-order and second-order derivatives. Furthermore, we prove that the unlearning success of both updates can be certified up to a tolerance $\epsilon$ if the loss function~$\ell$ is strictly convex, twice differentiable, and Lipschitz-continuous.
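To make this formulation concrete, the following sketch shows how the sets $Z$ and $\tilde{Z}$ might be assembled in practice. It is a minimal example under our own naming conventions; the helper \texttt{perturb} and the list-based representation are illustrative and not part of any specific library.
\begin{verbatim}
import numpy as np

# Toy dataset: points z = (x, y) with features x and label y.
D = [(np.array([1.0, 0.5, 3.0]), 1),
     (np.array([0.0, 2.0, 1.0]), -1)]

def perturb(z, delta):
    """Return the perturbed copy (x + delta, y) of a point z."""
    x, y = z
    return (x + delta, y)

# Revert the influence of the first point and replace it by a
# perturbed version, as in the optimization problem above.
Z = [D[0]]
Z_tilde = [perturb(z, np.array([0.0, -0.5, 0.0])) for z in Z]

# An empty Z_tilde corresponds to plain removal of the points in Z.
\end{verbatim}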
Notice that if $\tilde{Z} = \emptyset$ in Equation~\eqref{eq:eq2a}, our approach also yields updates to remove \emph{data points}, similar to prior work~\citep{KohLia17, GuoGolHan+20}. \subsection{Unlearning Features and Labels} \label{sec:approach_2} Equipped with a general method for updating a learning model, we proceed to introduce our approach for unlearning features and labels. To this end, we expand our notion of perturbations and include changes to labels by defining \begin{equation*} \tilde{z} = (x + \delta_x, y + \delta_y), \end{equation*} where $\delta_x$ modifies the features of a data point and $\delta_y$ its label. By using different changes in the perturbations $\tilde{Z}$, we can now realize different types of unlearning using closed-form updates. \paragraph{Replacing features} As the first type of unlearning, we consider the task of correcting features in a learning model. This task is relevant if the content of some features violates the privacy of a user and needs to be replaced with alternative data. As an example, personal names, identification numbers, residence addresses, or other sensitive information might need to be removed after a model has been trained on a corpus of emails. For a set of features $F$ and their new values $V$, we define the perturbations on the affected points $Z$ by \begin{equation*} \tilde{Z}=\big \{ (x[f]=v, y) : (x,y) \in Z, \, (f,v)\in F \times V \big \}. \end{equation*} For example, a credit card number contained in the training data can be blinded with a random number sequence in this setting. The values $V$ can be adapted individually for each data point, such that fine-grained corrections are possible. \paragraph{Replacing labels} As the second type of unlearning, we focus on correcting labels. This form of unlearning is necessary if the labels captured in a model contain unwanted or inappropriate information. For example, in generative language models, the training text is used as input features (preceding tokens) \emph{and} labels (target~tokens) \citep{Gra13,SutMarHin11}. Hence, defects can only be eliminated if the labels are unlearned as well. For the affected points $Z$ and the set of new labels~$Y$, we define the corresponding perturbations by \begin{equation*} \tilde{Z}=\big \{ (x, y) \in Z_x \times Y \big \}, \end{equation*} where $Z_x$ corresponds to the data points in $Z$ without their original labels. The new labels $Y$ can also be individually selected for each data point, as long as they come from the domain~$\mathcal{Y}$, that is, $Y \subset \mathcal{Y}$. Note that replaced labels and features can easily be combined in one set of perturbations $\tilde{Z}$, so that defects affecting both can be corrected in a single update. In \sect{sec:unle-unint-memor}, we demonstrate that this combination can be used to remove unintended memorization from generative language models with high efficiency. \paragraph{Revoking features} Based on appropriate definitions of $Z$ and~$\tilde{Z}$, our approach enables us to replace the content of features and thus eliminate privacy leaks by overwriting sensitive data. In some scenarios, however, it might be necessary to completely remove features from a learning model---a task that we denote as \emph{revocation}. In contrast to the correction of features, this form of unlearning poses a unique challenge: The revocation of features can reduce the input dimension of the model.
While this adjustment can easily be carried out through retraining with adapted data, constructing a model update as in \cref{eqn:update_delta} becomes tricky. To address this problem, let us consider a model~$\theta^*$ trained on a dataset $D\subset\mathbb{R}^d$. If we remove some features $F$ from this dataset and train the model again, we obtain a new optimal model $\theta^*_{-F}$ with reduced input dimension. By contrast, if we set the values of the features $F$ to zero in the dataset and train again, we obtain an optimal model $\theta^*_{F=0}$ with the same input dimension as~$\theta^*$. Fortunately, these two models are equivalent for a large class of learning models, including support vector machines and several neural networks, as the following lemma shows. \newtheorem{lem}{Lemma} \begin{lem} \label{lemma1} For learning models processing inputs $x$ using linear transformations of the form $\theta^{T} x$, we have $\theta^*_{-F} \equiv \theta^*_{F=0}$. \end{lem} \begin{proof} It is easy to see that it is irrelevant for the dot product $\theta^Tx$ whether a dimension of $x$ is missing or equals zero in the linear transformation \begin{equation*} \sum_{\substack{k: k\notin F}}\theta[k] x[k] = \sum_{k}\theta[k] \mathbf{1}\{k\notin F\}x[k]. \end{equation*} As a result, the loss $\ell(z,\theta)=\ell(\theta^Tx,y,\theta)$ of both models is identical for every data point $z$. Hence, $L(\theta;D)$ is also equal for both models, and thus the same objective is minimized during learning, resulting in equal model parameters. \end{proof} \cref{lemma1} enables us to erase features from many learning models by first setting them to zero, calculating the parameter update, and then reducing the input dimension of the models accordingly. Concretely, to revoke the features $F$ from a learning model, we first locate all data points where these features are non-zero with \begin{equation*} Z=\big \{ (x,y) \in D : x[f] \neq 0, f \in F \big \}. \end{equation*} Then, we construct corresponding perturbations so that the features are set to zero with our approach, \begin{equation*} \tilde{Z}=\big \{ (x[f]=0, y) : (x,y) \in Z, \, f \in F \big \}. \end{equation*} Finally, we adapt the input dimension by removing the affected inputs of the learning models, such as the corresponding neurons in the input layer of a neural network. \section{Update Steps for Unlearning} \label{sec:updat-learn-models} Our approach rests on changing the influence of training data with a closed-form update of the model parameters. In the following, we derive two strategies for calculating this closed form: a \emph{first-order update} and a \emph{second-order update}. The first strategy builds on the gradient of the loss function and thus can be applied to any model with a differentiable loss. The second strategy incorporates second-order derivatives, which limits its application to loss functions with an invertible Hessian matrix. \subsection{First-Order Update} \label{sec:first-order} Recall that we aim to find an update $\Delta(Z, \tilde{Z})$ that we can add to our model $\theta^*$ for unlearning. If the loss $\ell$ is differentiable, we can compute an optimal \emph{first-order update} as follows \begin{equation} \label{eqn:update1} \Delta(Z, \tilde{Z}) = -\tau \Big(\sum_{\tilde{z} \in \tilde{Z}} \nabla_{\theta}\loss(\pert, \theta^*) - \sum_{z \in Z} \nabla_{\theta}\loss(\point, \theta^*)\Big) \end{equation} where $\tau$ is a small constant that we refer to as the \emph{unlearning rate}.
A complete derivation of \cref{eqn:update1} is given in \appdx{sec:derivation-gradient}. Intuitively, this update shifts the model parameters in the direction from $\sum_{z \in Z}\nabla\ell(z,\theta^*)$ to $\sum_{\tilde{z} \in \tilde{Z}}\nabla\ell(\tilde{z}, \theta^*)$, where the size of the update step is determined by the rate $\tau$. This update strategy is related to the classic gradient descent update GD used in many learning algorithms and given by \begin{equation*} \text{GD}(\tilde{Z}) = -\tau \sum_{\tilde{z} \in \tilde{Z}}\nabla_{\theta}\loss(\pert, \theta^*). \end{equation*} However, it differs from this update step in that it moves the model toward the \emph{difference} in gradients between the original and perturbed data, which minimizes the loss on $\tilde{z}$ and at the same time removes the information contained in $z$. The first-order update is a simple and yet effective strategy: Gradients of $\ell$ can be computed in $\mathcal{O}(p)$~\citep{Pea94}, and modern auto-differentiation frameworks like TensorFlow~\citep{AbaAgaBar+15} and PyTorch~\citep{PasGroMas+19} offer easy gradient computations for the practitioner. The update step, however, involves a parameter $\tau$ that controls the impact of the unlearning step. To ensure that data has been completely replaced, it is thus necessary to calibrate this parameter using a measure for the success of unlearning. In \cref{sec:empirical-analysis}, for instance, we show how the exposure metric proposed by \mbox{\citet{CarLiuErl+19}} can be used for this calibration when removing unintended memorization from language models. \subsection{Second-Order Update} \label{sec:second-order-update} The calibration of the update step can be eliminated if we make further assumptions on the properties of the loss~$\ell$. If we assume that $\ell$~is twice differentiable and strictly convex, the influence of a single data point can be approximated in closed form by \begin{equation*} \frac{\partial\theta^*_{\epsilon, \unlearn{z}{\tilde{z}}}}% {\partial\epsilon}\Big \lvert_{\epsilon=0} = -H_{\theta^*}^{-1}\big(\nabla_{\theta}\loss(\pert, \theta^*) - \nabla_{\theta}\loss(\point, \theta^*)\big), \end{equation*} where $H_{\theta^*}^{-1}$ is the inverse Hessian of the loss at $\theta^*$, that is, the inverse matrix of the second-order partial derivatives \citep[see][]{CooWei82}. We can now perform a linear approximation for $\theta^*_{\unlearn{z}{\tilde{z}}}$ to obtain \begin{equation} \theta^*_{\unlearn{z}{\tilde{z}}} \approx \theta^* - H_{\theta^*}^{-1}\big( \nabla_{\theta}\loss(\pert, \theta^*) - \nabla_{\theta}\loss(\point, \theta^*)\big). \label{eq:eq3} \end{equation} Since all operations are linear, we can easily extend \cref{eq:eq3} to account for multiple data points and derive the following \emph{second-order update}: \begin{equation} \label{eqn:update2} \Delta(Z, \tilde{Z}) = -H_{\theta^*}^{-1}\Big( \sum_{\tilde{z} \in \tilde{Z}}\nabla_{\theta}\loss(\pert, \theta^*) - \sum_{z \in Z}\nabla_{\theta}\loss(\point, \theta^*)\Big). \end{equation} A full derivation of this update step is provided in \appdx{sec:derivation-influence}. Note that the update does not require a parameter calibration, since the weighting of the changes is directly derived from the inverse Hessian of the loss function. The second-order update is the preferred strategy for unlearning on models with a strongly convex and twice differentiable loss function that guarantees the existence of $H_{\theta^*}^{-1}$.
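To illustrate the two strategies, the following sketch computes both updates for a regularized logistic regression, where the gradient and the Hessian of the loss are available in closed form. The snippet is a simplified example with our own function names and an explicit Hessian, intended for illustration only; it is not the implementation evaluated in our experiments.
\begin{verbatim}
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def grad(z, theta):
    """Gradient of the logistic loss at z = (x, y), y in {-1, +1}."""
    x, y = z
    return -y * x * sigmoid(-y * (theta @ x))

def grad_diff(Z, Z_tilde, theta):
    """Gradient difference between perturbed and original points."""
    return (sum(grad(z, theta) for z in Z_tilde)
            - sum(grad(z, theta) for z in Z))

def first_order_update(theta, Z, Z_tilde, tau):
    return theta - tau * grad_diff(Z, Z_tilde, theta)

def second_order_update(theta, Z, Z_tilde, D, lam):
    # Hessian of the L2-regularized empirical risk at theta.
    H = lam * np.eye(theta.size)
    for x, _ in D:
        s = sigmoid(theta @ x)
        H += s * (1.0 - s) * np.outer(x, x)
    return theta - np.linalg.solve(H, grad_diff(Z, Z_tilde, theta))
\end{verbatim}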
Technically, the update step in \cref{eqn:update2} can be easily calculated with common machine-learning frameworks. In contrast to the first-order update, however, this computation involves the inverse Hessian matrix, which can be difficult to construct for neural networks. \paragraph{Efficiently calculating the inverse Hessian} Given a model $\theta\in\mathbb{R}^p$ with $p$ parameters, forming and inverting the Hessian requires \mbox{$\mathcal{O}(np^2+p^3)$} time and $\mathcal{O}(p^2)$~space~\citep{KohLia17}. For models with a small number of parameters, the matrix can be pre-computed and explicitly stored, such that each subsequent request for unlearning only involves a simple matrix-vector multiplication. For example, in \cref{sec:unle-sens-names}, we show that unlearning features from a logistic regression model with about \num{2000}~parameters can be realized with this approach in a few milliseconds. For complex learning models, such as deep neural networks, the Hessian matrix quickly becomes too large for explicit storage. Still, we can approximate the inverse Hessian using techniques proposed by \citet{KohLia17}. The explicit derivation and an algorithm for implementation are presented in~\appdx{appdx:algorithm}. While this approximation weakens the theoretical guarantees of our approach, it still enables successfully unlearning data from large learning models. In \sect{sec:unle-unint-memor}, we demonstrate that this strategy can be used to calculate second-order updates for a recurrent neural network with \num{3.3}~million parameters in less than 30~seconds. \section{Certified Unlearning} \label{sec:cert-unlearning} Machine unlearning is a delicate task, as it aims at reliably removing privacy issues and sensitive data from learning models. Ideally, this task should build on theoretical guarantees to enable \emph{certified unlearning}, where the corrected model is stochastically indistinguishable from one created by retraining. In the following, we derive conditions under which the updates of our approach introduced in \cref{sec:second-order-update} provide certified unlearning. To this end, we build on the concepts of \emph{differential privacy}~\citep{Dwo06} and \emph{certified data removal}~\citep{GuoGolHan+20}, and adapt them to the unlearning task. For a training dataset $D$, let $\mathcal{A}$ be a learning algorithm that outputs a model $\theta\in\Theta$ after training on $D$, that is, $\mathcal{A}:D\rightarrow\Theta$. Randomness in $\mathcal{A}$ induces a probability distribution over the output models in~$\Theta$. Moreover, we consider an unlearning method $\mathcal{U}$ that maps a model $\theta$ to a corrected model $\theta_\mathcal{U} = \mathcal{U}(\theta, D, D^\prime)$, where $D^\prime$ denotes the dataset containing the perturbations $\tilde{Z}$ required for the unlearning task. To measure the difference between a model trained on $D'$ and one obtained by $\mathcal{U}$, we introduce the concept of \emph{$\epsilon$-certified unlearning} as follows. \newtheorem{defin}{Definition} \begin{defin} \label{def-ecr} Given some $\epsilon >0$ and a learning algorithm $\mathcal{A}$, an unlearning method~$\mathcal{U}$ is $\epsilon$-certified if \begin{equation*} e^{-\epsilon} \leq \frac{P\Big(\mathcal{U}\big(\mathcal{A}(D),D,D^\prime\big) \in\mathcal{T}\Big)} {P\big(\mathcal{A}(D^\prime)\in\mathcal{T}\big)}\leq e^{\epsilon} \label{eqn:ecr} \end{equation*} holds for all $\mathcal{T}\subset \Theta, D, \text{and } D^\prime$.
\end{defin} This definition ensures that the distribution of models obtained with the unlearning method $\mathcal{U}$ deviates from the distribution obtained by retraining on $D^\prime$ from scratch by at most a factor of $e^{\epsilon}$. Following the work of~\mbox{\citet{GuoGolHan+20}}, we introduce \emph{$(\epsilon, \delta)$-certified unlearning}, a relaxed version of $\epsilon$-certified unlearning, defined as follows. \begin{defin} \label{def-edcr} Under the assumptions of \cref{def-ecr}, an unlearning method $\mathcal{U}$ is $(\epsilon,\delta)$-certified if \begin{gather*} P\Big(\mathcal{U}\big(\mathcal{A}(D),D,D^\prime\big) \in\mathcal{T}\Big) \leq e^{\epsilon}P\big(\mathcal{A}(D^\prime)\in\mathcal{T}\big)+ \delta \\ \text{and}\\ P\big(\mathcal{A}(D^\prime)\in\mathcal{T}\big) \leq e^{\epsilon} P\Big(\mathcal{U}\big(\mathcal{A}(D),D,D^\prime\big)\in \mathcal{T}\Big)+\delta \end{gather*} hold for all $\mathcal{T}\subset \Theta, D, \text{and } D^\prime$. \end{defin} This definition allows the unlearning method $\mathcal{U}$ to slightly violate the conditions from \cref{def-ecr} by a constant~$\delta$. Using these definitions, it becomes possible to derive practical conditions under which our approach realizes certified unlearning. \subsection{Certified Unlearning of Features and Labels} \label{sec:cert-unle-feat} Using the concept of certified unlearning, we can construct and analyze theoretical guarantees of our approach when removing features and labels. To ease this analysis, we make two basic assumptions on the employed learning algorithm: First, we assume that the loss function~$\ell$ is twice differentiable and strictly convex, such that $H^{-1}$ always exists. Second, we consider \mbox{$L_2$ regularization} in the optimization problem \eqref{eq:emrisk}, that is, $\Omega(\theta)=\frac{1}{2}\Vert\theta\Vert^2_2$, which ensures that the loss function is strongly convex. Both assumptions are satisfied by a wide range of learning models, including logistic regression and support vector machines. A helpful tool for analyzing the task of unlearning is the \emph{gradient residual} $\nabla L(\theta;D^\prime)$ for a given model $\theta$ and a corrected dataset~$D^\prime$. For strongly convex loss functions, the gradient residual is zero \emph{if and only if} $\theta$ equals $\mathcal{A}(D^\prime)$, since in this case the optimum is unique. Therefore, the norm of the gradient residual $\Vert \nabla L(\theta;D^\prime)\Vert_2$ reflects the distance of a model $\theta$ from one obtained by retraining on the corrected dataset~$D^\prime$. We differentiate between the gradient residual of the plain loss $L$ and an adapted loss $L_b$ where a random vector $b$ is added. The gradient residual $r$ of $L_b$ is given by $$r=\nabla L_b(\theta ; D^\prime) = \sum_{z\in D^\prime} \nabla \ell(z,\theta)+\lambda \theta +b$$ and differs from the gradient residual of $L$ only by the added vector $b$. This allows us to adapt the underlying distribution of $b$ to achieve certified unlearning, similar to sensitivity methods~\citep{DwoRot14}. The corresponding proofs are given in \appdx{sec:proofs}. \newtheorem{thm}{Theorem} \begin{thm} \label{thm:thm1} \input{theorem1.tex} \end{thm} To obtain a small gradient residual norm for the first-order update, the unlearning rate should be small, ideally on the order of $1/(n\gamma)$. Since $\Vert x_i\Vert_2\leq 1$, we also have $m_j\ll1$ if $d$ is large, and thus $M$ acts as an additional damping factor for both updates when changing or revoking features.
\cref{thm:thm1} enables us to quantify the difference between unlearning and retraining from scratch. Concretely, if $\mathcal{A}(D^\prime)$ is an exact minimizer of $L_b$ on $D^\prime$ with density $f_\mathcal{A}$ and $\mathcal{U}(\mathcal{A}(D), D,D^\prime)$ an approximated minimum obtained through unlearning with density $f_{\mathcal{U}}$, then \citet{GuoGolHan+20} show that the max-divergence between $f_\mathcal{A}$ and $f_{\mathcal{U}}$ for the model $\theta$ produced by $\mathcal{U}$ can be bounded using the following theorem. \begin{thm}[\citet{GuoGolHan+20}] \label{thm:thm2} Let $\mathcal{U}$ be an unlearning method with a gradient residual $r$ satisfying $\Vert r\Vert_2\leq \epsilon'$. If the vector $b$ is drawn from a probability distribution with density $p$ satisfying that for any $b_1, b_2\in\mathbb{R}^p$ there exists an $\epsilon>0$ such that $\Vert b_1-b_2\Vert\leq \epsilon'$ implies $e^{-\epsilon}\leq \frac{p(b_1)}{p(b_2)}\leq e^{\epsilon}$, then $$e^{-\epsilon}\leq\frac{f_{\mathcal{U}}(\theta)} {f_\mathcal{A}(\theta)}\leq e^{\epsilon}$$ for any $\theta$ produced by the unlearning method $\mathcal{U}$. \end{thm} \cref{thm:thm2} equips us with a way to prove the certified unlearning property from \cref{def-ecr}. Using the gradient residual bounds derived in \cref{thm:thm1}, we can now adjust the density function underlying the vector $b$ so that \cref{thm:thm2} holds for both update steps of our unlearning approach. \begin{thm} \label{thm:thm3} \input{theorem3.tex} \end{thm} \cref{thm:thm3} finally allows us to establish certified unlearning of features and labels in practice: Given a learning model with a bounded gradient residual norm and a privacy budget $(\epsilon, \delta)$, we can calibrate the distribution of $b$ to obtain certified unlearning. \subsection{Relation to Differential Privacy} \label{sect:relation-dp} Our definition of certified unlearning shares interesting similarities with the concept of differential privacy~\citep{Dwo06}, which we highlight in the following. First, let us recall the definition of differential privacy for a learning algorithm $\mathcal{A}$: \begin{defin} \label{def-dp} Given some $\epsilon >0$, a learning algorithm $\mathcal{A}$ is said to be $\epsilon$-differentially private ($\epsilon$-DP) if \begin{equation*} \label{eq:def-dp} e^{-\epsilon} \leq \frac{P\big(\mathcal{A}(D)\in\mathcal{T}\big)}{P\big(\mathcal{A}(D')\in\mathcal{T}\big)}\leq e^{\epsilon} \end{equation*} holds for all $\mathcal{T} \subset \Theta$ and datasets $D$, $D'$ that differ in one sample\footnote{Here, we define ``differ in one sample'' so that $\vert D\vert=\vert D'\vert$ and one sample has been replaced. This is denoted as ``bounded differential privacy'' in the literature~\citep{DesPej20, KifMac11}.}. \end{defin} By this definition, differential privacy is a \emph{sufficient} condition for certified unlearning. We obtain the same bound by simply setting the unlearning method $\mathcal{U}$ to the identity function in \cref{def-ecr}. That is, if we cannot distinguish whether $\mathcal{A}$ was trained with a point $z$ or its modification $z_\delta$, we do not need to worry about unlearning $z$ later. This result is also obtained by \cref{thm:thm1} when we set the unlearning rate $\tau$ to zero, so that no model updates are performed during unlearning. A large body of research has been concerned with obtaining differentially private algorithms for the adapted loss $L_b$~\citep{AbaChuGoo+16, ChaMon08, SonChauSar13, ChaMonSar11}.
For example, \citet{ChaMonSar11} demonstrate that we can sample $b\sim e^{-\alpha\Vert b\Vert}$ for linear models and arrive at an $\epsilon$-DP optimization for a specific definition of $\alpha$. Similarly, \citet{AbaChuGoo+16} find that $T$~stochastic gradient-descent steps using a fraction $q$ of the data result in an $\mathcal{O}\big(q\epsilon\sqrt{T},\delta\big)$-DP optimization if $b$ is sampled from a specific Gaussian distribution. While these algorithms enable learning with privacy guarantees, they build on \cref{def-dp} and hence cannot account for the more nuanced changes induced by an unlearning method as defined in \cref{def-ecr}. Consequently, certified unlearning can be obtained through DP, yet the resulting classification performance may suffer when enforcing strong privacy guarantees~\citep[see][]{ChaMonSar11, AbaChuGoo+16}. In \sect{sec:empirical-analysis}, we demonstrate that the models obtained using our update strategies are much closer to $\mathcal{A}(D')$ in terms of performance than DP alone. In this light, certified unlearning can be seen as a compromise between the high privacy guarantees of DP and the optimal performance achievable through retraining from scratch. \subsection{Multiple Unlearning Steps} \label{sect:multiple-steps} So far, we have considered unlearning as a one-shot strategy. That is, all changes are incorporated in $\tilde{Z}$ before performing an update. However, \cref{thm:thm1} shows that the error of the updates rises linearly with the number of affected points and the size of the total perturbation. Instead of performing a single update, it thus becomes possible to split $\tilde{Z}$ into $T$ subsets and conduct $T$ consecutive updates. In terms of runtime, it is easy to see that the total number of gradient computations remains the same for the first-order update. For the second-order strategy, however, multiple updates require calculating the inverse Hessian for each intermediate step, which increases the computational effort. An empirical comparison of our approach with multiple update steps is presented in \sect{appdx-sequential}. In terms of unlearning certifications, we can extend \cref{thm:thm1} and show that the gradient residual bound after $T$ update steps remains smaller than $TC$, where $C$ is the bound of a single step: If $\theta_t$ is the $t$-th solution obtained by our updates with gradient residual $r_t$, then $\theta_t$ is an exact solution of the loss function $L_b(\theta; D')-r_t^T\theta$ by construction. This allows us to apply \cref{thm:thm1} to each $\theta_t$ and obtain the bound $TC$ via the triangle inequality. Therefore, the gradient residual bound rises linearly with the number of applications of $\mathcal{U}$. This result is in line with the composition theorem of differential privacy research~\citep{DwoMcsKob+06, DwoRotVad10, DwoLei09}, which states that applying an $\epsilon$-DP algorithm $n$ times results in an $n\epsilon$-DP algorithm. \section{Empirical Analysis} \label{sec:empirical-analysis} We proceed with an empirical analysis of our approach and its capabilities. For this analysis, we examine the efficacy of unlearning in practical scenarios and compare our method to other strategies for removing data from learning models, such as sharding, retraining, and fine-tuning. As part of these experiments, we employ models with convex and non-convex loss functions to understand how this property affects the success of unlearning.
\paragraph{Unlearning scenarios} Our empirical analysis is based on the following three scenarios in which sensitive and personal information needs to be removed from learning models. \quasipara{Scenario 1: Sensitive features} Our first scenario deals with linear models for classification in different applications. If these models are constructed from personal data, such as emails or health measurements, they can accidentally capture private information. For example, when bag-of-words features are extracted from text, the model may contain names, postal zip codes, and other sensitive words. We thus investigate how our approach can unlearn such features in retrospection (see~\sect{sec:unle-sens-names}). \quasipara{Scenario 2: Unintended memorization} In the second scenario, we consider the problem of unintended memorization \citep{CarLiuErl+19}. Language models based on recurrent neural networks are a powerful tool for completing and generating text. However, these models also memorize rare but sensitive data, such as credit card numbers or private messages. This memorization poses a privacy problem: Through specifically crafted input sequences, an attacker can extract this data from the models during text completion~\citep{CarTraWal+21,CarLiuErl+19}. We apply unlearning of features and labels to remove identified leaks from language models (see~\sect{sec:unle-unint-memor}). \quasipara{Scenario 3: Data poisoning} For the third scenario, we focus on poisoning attacks in computer vision. Here, an adversary aims at misleading an object recognition task by flipping a few labels of the training data. The label flips significantly reduce the performance of the employed convolutional neural network and sabotage its successful application. We use unlearning as a strategy to correct this defect and restore the original performance without costly retraining (see~\sect{sec:unle-pois}). \paragraph{Performance measures} Unlike other tasks in machine learning, the performance of unlearning does not depend on a single numerical quantity. For example, one method may fail to completely remove private data, while another may succeed but at the same time degrade the prediction performance. We identify three aspects that contribute to effective unlearning and thus consider three performance measures for our empirical analysis. \quasipara{Efficacy of unlearning} The most important factor for successful unlearning is the removal of data. While certified unlearning ensures this removal, we cannot provide similar guarantees for models with non-convex loss functions. As a result, we need to employ measures that quantitatively assess the \emph{efficacy} of unlearning. For example, we use the \emph{exposure metric}~\citep{CarLiuErl+19} to measure the memorization of specific sequences in language models after unlearning. \quasipara{Fidelity of unlearning} The second factor contributing to the success of unlearning is the performance of the corrected model, which we denote as \emph{fidelity}. An unlearning method is of practical use only if it keeps the performance as close as possible to that of the original model. Hence, we consider the fidelity of the corrected model as the second performance measure. In our experiments, we use the loss and accuracy of the original and corrected model on a hold-out set to determine this fidelity. \quasipara{Efficiency of unlearning} If the training data used to generate a model is still available, a simple unlearning strategy is retraining from scratch.
This strategy, however, involves significant runtime and storage costs. Therefore, we consider the \emph{efficiency} of unlearning as the third factor. In our experiments, we measure the runtime and the number of gradient calculations for each unlearning method on the datasets of the three scenarios. \newcommand{\sharding}[1]{SISA (#1 shards)\xspace} \paragraph{Baseline methods} To compare our approach with prior work on machine unlearning, we employ different baseline methods as references for examining the efficacy, fidelity, and efficiency of our approach. In particular, we consider retraining, fine-tuning, differential privacy, and sharding (SISA) as baselines. \quasipara{Retraining\xspace} As the first baseline, we employ retraining from scratch. This basic method is applicable if the original training data is available and guarantees a proper removal of data. However, the approach is costly and serves as a general upper bound for the runtime of any unlearning method. \quasipara{Differential privacy\xspace (DP)} As a second baseline, we consider a differentially private learning model. As discussed in \sect{sect:relation-dp}, the presence of the noise term $b$ in our model $\theta^*$ induces a differential-privacy guarantee, which is sufficient for certified unlearning. Therefore, we evaluate the performance of $\theta^*$ without any subsequent unlearning steps. \quasipara{Fine-tuning\xspace} Instead of retraining, this baseline continues to train a model using the corrected data. We implement fine-tuning by performing stochastic gradient descent over the training data for one epoch. This naive unlearning strategy serves as a middle ground between costly retraining and specialized methods, such as SISA. \quasipara{SISA (Sharding)} As the fourth baseline, we consider the unlearning method SISA proposed by \citet{BourChaCho+19}. The method splits the training data into shards and trains a submodel for each shard separately. The final classification is obtained by a majority vote over these submodels. Unlearning is conducted by retraining those shards that contain data to be removed. In our evaluation, we simply retrain all shards that contain features or labels to unlearn. \subsection{Unlearning Sensitive Features} \label{sec:unle-sens-names} In our first unlearning scenario, we revoke sensitive features from linear learning models trained on different real-world datasets. In particular, we employ datasets for spam filtering~\citep{MetAndPal06}, Android malware detection~\citep{ArpSprHueGasRie13}, diabetes forecasting~\citep{DuaGraDiabetis17}, and prediction of income based on census data~\citep{DuaGraCensus17}. An overview of the datasets and their statistics is given in \tab{tab:datasets}.
\begin{table}[htbp] \begin{center} \caption{Datasets for unlearning sensitive features.} \vspace{-5pt} \label{tab:datasets} \small \begin{tabular} { l S[table-format = 5] S[table-format = 6] S[table-format = 5] S[table-format = 3] } \toprule &{\bfseries Spam\xspace}& {\bfseries Malware\xspace} & {\bfseries Adult\xspace} & {\bfseries Diabetis\xspace}\\ \midrule Data points & 33716 & 49226 & 48842 & 768\\ Features & 4902 & 2081 & 81 & 8\\ \bottomrule \end{tabular} \end{center} \end{table} We divide the datasets into training and test partitions with a ratio of \perc{80} and \perc{20}, respectively. To create a feature space for learning, we use the available numerical features of the Adult\xspace and Diabetis\xspace datasets and extract bag-of-words features from the samples of the Spam\xspace and Malware\xspace datasets. Finally, we train logistic regression models for all datasets and use them to evaluate the capabilities of the different unlearning methods. \paragraph{Sensitive features} For each dataset, we define a set of sensitive features for unlearning. As no privacy leaks are known for the four datasets, we focus on features that \emph{potentially} cause privacy issues. For the Spam\xspace dataset, we remove 100 features associated with personal names from the learning model, such as first and last names contained in emails. For the Adult\xspace dataset, we change the marital status, sex, and race of 100 randomly selected individuals to simulate the removal of discriminatory bias. For the Malware\xspace dataset, we remove 100 URLs from the classifier, and for the Diabetis\xspace dataset, we adjust the age, body mass index, and sex of 100 individuals to imitate potential unlearning requests. To avoid a sampling bias, we randomly draw 100 combinations of these unlearning changes for each of the four datasets. \paragraph{Unlearning task} Based on this definition of sensitive features, we then apply the different unlearning methods to the respective learning models. Technically, our approach benefits from the convex loss function of the logistic regression, which allows us to apply certified unlearning as presented in \cref{sec:cert-unlearning}. Specifically, it is easy to see that Theorem~\ref{thm:thm3} holds, since the gradients of the logistic regression loss are bounded and thus Lipschitz-continuous. \paragraph{Efficacy evaluation} Our approach and the DP baseline need to be calibrated to provide certified unlearning with minimal perturbations. To this end, we consider a privacy budget $(\epsilon, \delta)$ that must not be exceeded and thus determines a bound for the gradient residual norm. Concretely, for given parameters $(\epsilon, \delta)$ and a perturbation vector $b$ sampled from a Gaussian distribution with variance $\sigma$, the gradient residual must be smaller than \begin{equation} \beta = \frac{\sigma\epsilon}{c}, \qquad \text{where } c=\sqrt{2\log(1.5/\delta)}. \label{eqn:residual-bound} \end{equation} If $\sigma$ becomes too large, the performance of the classifier might suffer. Consequently, we first evaluate the effect of $\sigma$ and $\lambda$ on the performance of the classifier before assessing the empirical gradient residual norms. \tab{tab:acc-sigma} shows the performance of the Spam\xspace model for varying parameters $\sigma$ and $\lambda$. The corresponding results for the other datasets can be found in \appdx{appdx:hyperparameters}.
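To make this calibration concrete, the following snippet computes the noise scale $\sigma = \beta c / \epsilon$ implied by \cref{eqn:residual-bound}. The numbers mirror the practical example discussed below; the function name is our own and serves only as an illustration.
\begin{verbatim}
import math

def noise_scale(beta, eps, delta):
    """Noise level sigma that keeps a gradient residual norm of
    at most beta within the privacy budget (eps, delta)."""
    c = math.sqrt(2.0 * math.log(1.5 / delta))
    return beta * c / eps

# Example: eps = 0.1 and delta = 0.02 yield c ~ 2.9, so a
# residual norm of 0.1 requires sigma ~ 2.9.
print(noise_scale(beta=0.1, eps=0.1, delta=0.02))
\end{verbatim}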
As expected, a large variance $\sigma$ impacts the classification performance and can make the model unusable in practice, whereas the effect of the regularization $\lambda$ is small. \begin{table}[htbp] \caption{Accuracy of the Spam\xspace model for varying parameters $\sigma$ and $\lambda$.} \label{tab:acc-sigma} \begin{tabular} { >{\bfseries}S[table-format = 3.2] >{\hspace{-10pt}}c S[table-format = 2.1] S[table-format = 2.1] S[table-format = 2.1] S[table-format = 2.1] S[table-format = 2.1] } \toprule &{\bfseries$\sigma$}& {\bfseries\num{1}} & {\bfseries\num{10} } & {\bfseries\num{20} } & {\bfseries\num{50}}& {\bfseries\num{100}}\\ {\bfseries$\lambda$}\\ \midrule 0.01 && 95.5 & 92.1 & 87.5 & 83.7 & 70.2\\ 1 && 97.9 & 89.8 & 85.6 & 78.6 & 73.7\\ 10 && 97.0 & 94.6 & 89.7 & 80.2 & 73.4\\ \bottomrule \end{tabular} \end{table} \begin{figure}[htbp!] \input{figures/Unlearning/gradient_residual/unlearning-gradient-residuals-bar.pgf} \caption{Distribution of the gradient residual norm for different unlearning tasks. Lower residuals reflect a better approximation and allow smaller values of $\epsilon$.} \label{fig:boxplot-grad-residual} \end{figure} The distribution of the gradient residual norm after unlearning is presented in \fig{fig:boxplot-grad-residual} for the different unlearning approaches. We observe that the second-order update achieves much smaller gradient residuals with small variance, while the other strategies produce higher residuals. This trend holds in general, even if the number of affected features is increased or reduced. Since retraining always produces gradient residuals of zero, the second-order approach is best suited for unlearning in this scenario. Moreover, due to the certified unlearning setup, the first-order and second-order approaches also allow for the smallest privacy budget $\epsilon$ and thus give a stronger guarantee that the sensitive features have been revoked from the four learning models. What do the results above mean for a practical example? Recall that, according to \cref{eqn:residual-bound}, we have to sample $b$ from a normal distribution with variance $\sigma=\beta c/\epsilon$. In the following, let us fix $\epsilon=0.1$ and $\delta=0.02$ as a desirable privacy budget, which yields $c\approx2.9$. For the Spam\xspace dataset, we obtain a gradient residual on the order of $0.01$ when removing a single feature and on the order of $0.1$ when removing \num{100} features. Consequently, we need $\sigma=0.29$ and $\sigma=2.9$, respectively, which still gives good classification performance according to \tab{tab:acc-sigma}. In contrast, the gradient residual norm of the plain DP model is on the order of $10$, which requires $\sigma=290$, a value that reduces the classification performance to $\perc{50}$ and makes the model unusable in practice. \begin{figure}[] \input{figures/Unlearning/diff_test_loss/scatter-plot.pgf} \caption{Difference in test loss between retraining and unlearning when removing or changing random combinations of features. For perfect unlearning, the results would lie on the identity line.} \label{fig:scatter-loss} \end{figure} \paragraph{Fidelity evaluation} To evaluate the fidelity, we use the approach of \citet{KohLia17, KohSiaTeo+19} and compare the loss on the test data after unlearning. \fig{fig:scatter-loss} shows the difference in test loss between retraining and the different unlearning methods. Again, the second-order method approximates retraining very well.
By contrast, the other methods cannot always adapt to the distribution shift and lead to larger deviations in fidelity. In addition to the loss, we also evaluate the accuracy of the different models after unlearning. Since the accuracy is less sensitive to small model changes, we expand the sensitive features with the \num{200} most important features on the Spam\xspace and Malware\xspace datasets and increase the number of selected individuals to \num{4000} for the Adult\xspace dataset and \num{200} for the Diabetis\xspace dataset. This ensures that changes in the accuracy of the corrected models become visible. The results of this experiment are shown in Table~\ref{tab:fidelity-scenario1}. \begin{table}[!htbp] \begin{center} \caption{Average test accuracy of the corrected models (in \perc{}).} \vspace{-5pt} \label{tab:fidelity-scenario1} \small\setlength{\tabcolsep}{2pt} \begin{tabular} { l S[table-format=2.2]@{\,$\pm$\,}S[table-format=1.1] S[table-format=2.2]@{\,$\pm$\,}S[table-format=1.1] S[table-format=2.2]@{\,$\pm$\,}S[table-format=1.1] S[table-format=2.2]@{\,$\pm$\,}S[table-format=2.1] } \toprule \bfseries Dataset & \multicolumn{2}{c}{\bfseries Diabetis\xspace} & \multicolumn{2}{c}{\bfseries Adult\xspace} & \multicolumn{2}{c}{\bfseries Spam\xspace} & \multicolumn{2}{c}{\bfseries Malware\xspace}\\ \midrule Original model & 67.53 & 0.0 & 84.68 & 0.0 & 98.43 & 0.0 & 97.75 & 0.0 \\ \midrule Retraining\xspace & 69.48 & 1.7 & 84.59 & 0.0 & 97.82 & 0.2 & 96.76 & 1.1 \\ Diff. privacy & 63.95 & 2.3 & 81.40 & 0.1 & 97.27 & 0.6 & 93.05 & 6.4 \\ \sharding{5} & 70.20 & 0.7 & 82.20 & 0.1 & 51.10 & 3.3 & 60.10 & 26.4 \\ Fine-tuning\xspace & 63.95 & 2.3 & 81.41 & 0.1 & 97.28 & 0.5 & 93.05 & 6.4 \\ \midrule Unlearning (1st)\xspace & 64.17 & 2.0 & 82.49 & 0.0 & 97.27 & 0.6 & 93.68 & 5.5 \\ Unlearning (2nd)\xspace & 69.35 & 1.7 & 83.60 & 0.0 & 97.87 & 0.2 & 96.74 & 1.0 \\ \bottomrule \end{tabular} \end{center} \end{table} We observe that the removal of features on the Spam\xspace and Malware\xspace datasets leads to a slight drop in performance for all methods, while the changes on the Diabetis\xspace dataset even increase the accuracy. The second-order update of our approach shows the best performance of all unlearning methods and is often close to retraining. The first-order update also yields good results on all four datasets. For SISA, the situation is different. While the method works well on the Adult\xspace and Diabetis\xspace datasets, it suffers from the large number of affected instances in the Spam\xspace and Malware\xspace datasets, resulting in a notable drop in fidelity. This result underlines the need for unlearning approaches operating on the level of features and labels rather than data instances only. \paragraph{Efficiency evaluation} Finally, we evaluate the efficiency of the different approaches. \cref{tab:runtime-scenario1} shows the runtime on the Malware\xspace dataset, while the results for the other datasets are shown in \cref{appdx:efficiency}. We omit measurements for the DP baseline, as it does not involve an unlearning step. Due to the linear structure of the learning model, the runtime of the other methods is very low. In particular, the first-order update is significantly faster than the other approaches, achieving a considerable speed-up over retraining. This performance results from the underlying unlearning strategy: While the other approaches operate on the entire dataset, the first-order update considers only the corrected points.
For the second-order update, we find that roughly \perc{90} of the runtime and gradient computations are spent on the inversion of the Hessian matrix. In the case of the Malware\xspace dataset, this computation is still faster than retraining the model. If the matrix is pre-computed and reused for multiple unlearning requests, the second-order update reduces to a matrix-vector multiplication and yields a substantial speed-up at the cost of approximation accuracy. \begin{table}[htbp] \begin{center} \caption{Average runtime when removing \num{100} random combinations of \num{100} features from the Malware\xspace classifier.} \vspace{-5pt} \small \setlength{\tabcolsep}{3pt} \begin{tabular}{ l S[table-format = 2.1e1] S[table-format = 1.2] S[table-format = 2] } \toprule \bfseries Unlearning methods \hspace{5mm} & {\bfseries Gradients} \hspace{1mm} & {\bfseries Runtime} \hspace{1mm} & {\bfseries Speed-up}\\ \midrule Retraining\xspace & 1.2e7 & 6.82 s & {~~---} \\ Diff. privacy & {---} & {---} & {~~---} \\ \sharding{5} & 2.5e6 & 1.51 s & 2\si{\speedup}\\ Fine-tuning\xspace & 3.9e4 & 1.03 s & 6\si{\speedup}\\ \midrule Unlearning (1st)\xspace & 1.5e4 & 0.02 s & 90 \si{\speedup}\\ Unlearning (2nd)\xspace & 9.4e4 & 0.63 s & 4 \si{\speedup} \\ \bottomrule \end{tabular} \label{tab:runtime-scenario1} \end{center} \end{table} \subsection{Unlearning Unintended Memorization} \label{sec:unle-unint-memor} In the second scenario, we focus on removing unintended memorization from language models. \citet{CarLiuErl+19} show that these models can memorize rare inputs in the training data and exactly reproduce them during application. If this data contains private information like credit card numbers or telephone numbers, this may become a severe privacy issue~\mbox{\citep{CarTraWal+21, ZanWutTop+20}}. In the following, we use our approach to tackle this problem and demonstrate that unlearning is also possible with non-convex loss functions. \paragraph{Canary insertion} We conduct our experiments using the novel \emph{Alice in Wonderland} as the training set and train an LSTM network on the character level to generate text~\citep{MerKesSoc+18}. Specifically, we train an embedding with \num{64}~dimensions for the characters and use two layers of \num{512}~LSTM units followed by a dense layer, resulting in a model with \num{3.3}~million parameters. To generate unintended memorization, we insert a \emph{canary} in the form of the sentence \txt{My telephone number is (s)! said Alice} into the training data, where \txt{(s)} is a sequence of digits of varying length \citep{CarLiuErl+19}. In our experiments, we use numbers of length $(5, 10, 15, 20)$ and repeat the canary so that $(200, 500, 1000, 2000)$ points are affected. After optimizing the categorical cross-entropy loss of the model, we find that the inserted telephone numbers are the most likely prediction when we ask the language model to complete the canary sentence, indicating exploitable memorization. \paragraph{Exposure metric} In contrast to the previous scenario, the loss of the generative language model is non-convex, and thus certified unlearning is not applicable. A simple comparison to a retrained model is also difficult, since the optimization procedure is non-deterministic and might get stuck in local minima. Consequently, we require an additional measure to assess the efficacy of unlearning in this experiment and ensure that the inserted telephone numbers have been effectively removed.
To this end, we employ the \emph{exposure metric} introduced by~\citet{CarLiuErl+19} \begin{equation*} \textrm{exposure}_\theta(s)=\log_2\vert\ensuremath{Q}\xspace\vert -\log_2 \textrm{rank}_\theta(s), \end{equation*} where $s$ is a sequence and \ensuremath{Q}\xspace is the set containing all sequences of identical length, given a fixed alphabet. The function $\text{rank}_\theta(s)$ returns the rank of $s$ with respect to the model $\theta$ and all other sequences in \ensuremath{Q}\xspace. The rank is calculated using the \emph{log-perplexity} of the sequence $s$ and states how many other sequences are more likely, i.e., have a lower log-perplexity. As an example, \fig{fig:perplexity-histogram} shows the perplexity distribution of our model where a telephone number of length $15$ has been inserted during training. The histogram is created using $10^7$ of the total $10^{15}$ possible sequences in \ensuremath{Q}\xspace. The perplexity of the inserted number differs significantly from all other number combinations in \ensuremath{Q}\xspace (dashed line to the left), indicating that it has been memorized by the underlying language model. After unlearning with different replacements, the number moves close to the main body of the distribution (dashed lines to the right). \begin{figure}[!t] \centering \input{figures/language_model/histo-perplexity.pgf} \caption{Perplexity distribution of the language model. The vertical lines indicate the perplexity of an inserted telephone number. Replacement strings used for unlearning from left to right are: \txt{holding my hand}, \txt{into the garden}, \txt{under the house}.} \label{fig:perplexity-histogram} \end{figure} The exact computation of the exposure metric is expensive, as it requires operating over the set \ensuremath{Q}\xspace. Fortunately, we can use the approximation proposed by \citet{CarLiuErl+19}, which determines the exposure of a given perplexity value using the cumulative distribution function of a fitted skew-normal density. \paragraph{Unlearning task} To unlearn the memorized sequences, we replace each digit of the telephone number in the data with a different character, such as a random or constant value. Empirically, we find that using text substitutions from the training corpus works best for this task. The model has already captured these character dependencies, resulting in small updates of the model parameters. However, due to the way generative language models are trained, the update is more involved than in the previous scenario. The model is trained to predict a character from its preceding characters. Thus, replacing a text means changing both the features (preceding characters) and the labels (target characters). Therefore, we combine both changes in a single set of perturbations in this setting. \paragraph{Efficacy evaluation} First, we check whether the memorized numbers have been successfully unlearned from the language model. An important result of the study by~\citet{CarLiuErl+19} is that the exposure is associated with an extraction attack: For a set \ensuremath{Q}\xspace with $r$ elements, a sequence with an exposure smaller than~$r$ cannot be extracted. For unlearning, we test three different replacement sequences for each telephone number and use the best one for our evaluation. \tab{tab:exp-unlearning-canary} shows the results of this experiment. \begin{table}[h!] \begin{center} \caption{Exposure metric of the canary sequence for different lengths.
Lower exposure values make extraction harder.} \label{tab:exp-unlearning-canary} \vspace{-5pt} \small \setlength{\tabcolsep}{2pt} \begin{tabular} { l S[table-format=2.1]@{\,$\pm$\,}S[table-format=2.1] S[table-format=2.1]@{\,$\pm$\,}S[table-format=2.1] S[table-format=3.1]@{\,$\pm$\,}S[table-format=2.1] S[table-format=2.1]@{\,$\pm$\,}S[table-format=2.1] } \toprule \bfseries Number length & \multicolumn{2}{c}{\bfseries 5} & \multicolumn{2}{c}{\bfseries 10} & \multicolumn{2}{c}{\bfseries 15} & \multicolumn{2}{c}{\bfseries 20}\\ \midrule Original model & 42.6 & 18.9 & 69.1 & 26.0 & 109.1 & 16.1 & 99.5 & 52.2 \\ \midrule Retraining\xspace & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 \\ Fine-tuning\xspace & 38.6 & 20.7 & 31.4 & 43.8 & 49.5 & 49.1 & 57.3 & 72.7 \\ \sharding{n} & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 \\ Unlearning (1st)\xspace & 0.1 & 0.1 & 0.1 & 0.2 & 0.0 & 0.0 & 0.1 & 0.1 \\ Unlearning (2nd)\xspace & 0.1 & 0.0 & 0.2 & 0.2 & 0.0 & 0.1 & 0.0 & 0.0 \\ \bottomrule \end{tabular} \end{center} \end{table} \begin{table*}[!t] \caption{Completions of the canary sentence by the corrected model for different replacement strings.} \label{tab:examples-text} \begin{tabular}{ S[table-format=2] >{\collectcell{\txt}}l<{\endcollectcell} >{\collectcell{\txt}}l<{\endcollectcell} } \toprule \multicolumn{1}{l}{\bfseries Length} & \multicolumn{1}{l}{\bfseries Replacement} & \multicolumn{1}{l}{\bfseries Canary Sentence Completion} \\ \midrule 5 & taken & `My telephone number is mad!' `prizes! said the lory confuse $\ldots$\\ 10 & not there\textvisiblespace & `My telephone number is it,' said alice. `that's the beginning $\ldots$\\ 15 & under the mouse & `My telephone number is the book!' she thought to herself `the $\ldots$\\ 20 & the capital of paris & `My telephone number is it all about a gryphon all the three of $\ldots$\\ \bottomrule \end{tabular} \end{table*} We observe that our first-order and second-order updates yield exposure values close to zero, rendering an extraction impossible. In contrast, fine-tuning leaves a large exposure in the model, making an extraction likely. On closer inspection, we find that the performance of fine-tuning depends on the order of the training data, resulting in a high deviation across the experimental runs. This problem cannot be easily mitigated by training for further epochs and thus highlights the need for dedicated unlearning techniques. We also observe that the replacement string plays a role in the unlearning of language models. In \fig{fig:perplexity-histogram}, we report the log-perplexity of the canary for three different replacement strings after unlearning\footnote{Strictly speaking, each replacement induces its own perplexity distribution for the updated parameters, but we find the difference to be marginal and thus place all values in the same histogram for the sake of clarity.}. Each replacement shifts the canary far to the right and turns it into a very unlikely prediction, with exposure values ranging from \num{0.01} to \num{0.3}. While we use the replacement with the lowest exposure in our experiments, the other substitution sequences would also impede a successful extraction. It remains to show what the model actually predicts after unlearning when given the canary sequence. \tab{tab:examples-text} shows different completions of the canary sentence after unlearning with our second-order update and replacement strings of different lengths.
We find that the predicted string is \emph{not} equal to the replacement, that is, our unlearning method does not push the model into a configuration that overfits on the replacement. In addition, we note that the sentences do not seem random, follow the structure of the English language, and reflect the wording of the novel. Both observations indicate that the parameters of the language model are indeed corrected and not just overwritten with other values. \paragraph{Fidelity evaluation} To evaluate the fidelity in the second scenario, we examine the accuracy of the corrected models, as shown in \fig{fig:alice-fidelity}. For small sets of affected points, our approach remains on par with retraining. No statistically significant difference can be observed, even when comparing sentences produced by the models. However, the accuracy of the corrected model decreases as the number of changed points becomes larger. Here, the second-order method is better able to handle larger changes because the Hessian contains information about unchanged samples. The first-order approach focuses only on the samples to be fixed and thus increasingly reduces the accuracy of the corrected model. Finally, we observe that SISA is unsuited for unlearning in this scenario. The method uses an ensemble of submodels trained on different shards. Each submodel produces its own sequence of text, and combining them with majority voting often leads to inaccurate predictions. \begin{figure}[htbp] \input{figures/Unlearning/alice/fidelity_runtime_alice.pgf} \caption{Accuracy after unlearning unintended memorization. The dashed line corresponds to a model retrained from scratch.} \label{fig:alice-fidelity} \end{figure} \paragraph{Efficiency evaluation} We finally examine the efficiency of the different unlearning methods. At the time of writing, the CUDA library version 10.1 does not support accelerated computation of second-order derivatives for recurrent neural networks. Therefore, we report the CPU computation time (Intel Xeon Gold 6226) for the second-order update of our approach, while the other methods are calculated using a GPU (GeForce RTX 2080 Ti). The runtime required by each approach is presented in~\fig{fig:alice-fidelity}. As expected, the time to retrain the model from scratch is extremely long, as the model and dataset are large. In comparison, one epoch of fine-tuning is faster but does not solve the unlearning task in terms of efficacy. The first-order method is the fastest approach and provides a speed-up of three orders of magnitude relative to retraining. The second-order method still yields a speed-up factor of \num{28} over retraining, although the underlying implementation does not benefit from GPU acceleration. Given that the first-order update provides high efficacy in unlearning and only a slight decrease in fidelity when correcting fewer than \num{1000}~points, it provides the overall best performance in this scenario. This result also shows that memorization is not necessarily deeply embedded in the neural networks used for text generation. \subsection{Unlearning Poisoning Samples} \label{sec:unle-pois} In the third scenario, we focus on repairing a poisoning attack in computer vision. To this end, we simulate \emph{label poisoning}, where an adversary partially flips labels between classes in a dataset. While this attack does not impact the privacy of users, it creates a security threat without altering features, which is an interesting scenario for unlearning.
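To illustrate this attack, the following sketch flips labels uniformly between a pair of classes until a given budget is exhausted. It is a simplified stand-in with our own function name, not the exact attack code used in the experiments described below.
\begin{verbatim}
import numpy as np

def flip_labels(y, class_a, class_b, budget, seed=0):
    """Flip `budget` labels between two classes, e.g. cat <-> truck."""
    rng = np.random.default_rng(seed)
    candidates = np.flatnonzero((y == class_a) | (y == class_b))
    picked = rng.choice(candidates, size=budget, replace=False)
    y_poisoned = y.copy()
    y_poisoned[picked] = np.where(y_poisoned[picked] == class_a,
                                  class_b, class_a)
    return y_poisoned, picked
\end{verbatim}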
For this experiment, we use the CIFAR10 dataset and train a convolutional neural network with \num{1.8}M parameters comprising three VGG blocks and two dense layers. The network reaches a reasonable performance without poisoning. Under attack, however, it suffers from a drop in accuracy on the affected classes (10\% on average). Further details about the experimental setup as well as experimental results with larger models are provided in Appendix~\ref{appdx:poisoning}. \paragraph{Poisoning attack} For poisoning labels, we determine pairs of classes and flip a fraction of their labels to their respective counterpart, for instance, from ``cat'' to ``truck'' and vice versa. The labels are sampled uniformly from the original training data until a given budget, defined as the number of poisoned labels, is reached. This attack strategy is more effective than inserting random labels and provides a performance degradation similar to other label-flip attacks~\citep{XiaXiaEck12,KohLia17}. We evaluate the attack with different poisoning budgets and multiple seeds for sampling. \paragraph{Unlearning task} We apply the selected unlearning methods to correct the poisoning attack over five experimental runs with different seeds for the sampling of flipped labels. We report averaged values in Figure~\ref{fig:cnn-unlearning} for different poisoning budgets, where the dashed lines represent the accuracy and training time of the clean reference model without poisoned labels. In this scenario, up to \num{10000} data instances are affected and need to be corrected. Changing all instances in one closed-form update is difficult due to memory constraints. Thus, we process the changes in uniformly sampled batches of \num{512} instances, as detailed in \cref{sect:multiple-steps} and sketched below. Moreover, we treat the convolutional and pooling layers as a fixed feature extraction and update only the linear layers, which are mainly responsible for the classification. Both modifications help us to ensure a stable convergence of the inverse Hessian approximation (see Appendix~\ref{appdx:algorithm}) and to calculate the update on a moderately-sized GPU. \newif\ifPlotPoisoning \PlotPoisoningtrue \ifPlotPoisoning { \begin{figure}[bp] \centering \input{figures/Unlearning/poisoning/unlearning.pgf} \caption{Accuracy on held-out data after unlearning poisoned labels. The dashed line corresponds to a model retrained from scratch.} \label{fig:cnn-unlearning} \vspace{-4pt} \end{figure} } \else { \begin{table} \begin{center} \caption{Runtime performance and fidelity of unlearning methods for \num{2500} poisoned labels} \vspace{-5pt} \small \begin{tabular}{ l S[table-format = 2.1e1] S[table-format = 2.3] S[table-format = 2.3] } \toprule \bfseries Unlearning methods & {\bfseries Gradients} & {\bfseries Runtime} & {\bfseries Fidelity}\\ \midrule Retraining\xspace & 7.8e4 & 16.67 min & 0.872 \\ Fine-tuning\xspace & 7.8e2 & 12.79 s & 0.843 \\ \sharding{5} & 7.8e4 & 20.53 min & 0.798 \\ Unlearning (1st)\xspace & 5.1e2 & 12.50 s& 0.850 \\ Unlearning (2nd)\xspace & 2.3e4 & 20.60 s & 0.845 \\ \bottomrule \end{tabular} \label{tab:runtime-unlearning-poisoning} \end{center} \end{table} } \fi
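The batched application of our update can be sketched as follows; \texttt{closed\_form\_update} is an assumed interface standing for either the first-order or the second-order step, and the restriction to the linear layers is reflected only in the \texttt{params} argument.
\begin{verbatim}
import numpy as np

def unlearn_in_batches(params, changes, closed_form_update,
                       batch_size=512, seed=0):
    # params:  parameters of the linear layers only (the convolutional
    #          and pooling layers are kept fixed as feature extraction).
    # changes: list of (original_sample, corrected_sample) pairs.
    rng = np.random.default_rng(seed)
    order = rng.permutation(len(changes))
    for start in range(0, len(order), batch_size):
        batch = [changes[i] for i in order[start:start + batch_size]]
        params = closed_form_update(params, batch)  # one step per batch
    return params
\end{verbatim}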
\paragraph{Efficacy and fidelity evaluation} In this unlearning task, we do not seek to remove the influence of certain features from the training data but to mitigate the effects of poisoned labels on the resulting model. This effect manifests itself in a degraded performance for particular classes. Consequently, the efficacy and fidelity of unlearning can be measured by the same metric---the accuracy on held-out data. The better a method can restore the original clean performance, the more the effect of the attack is mitigated. If we inspect the accuracy in Figure~\ref{fig:cnn-unlearning}, we note that none of the approaches is able to completely remove the effect of the poisoning attack. Still, the best results are obtained with the first-order and second-order update of our approach as well as fine-tuning, which come close to the original performance for 2,500 poisoned labels. SISA leaves the largest gap to the original performance and, in view of its runtime, is not a suitable solution for this unlearning task. Furthermore, we observe a slight but continuous performance decline of our approach and fine-tuning when more labels are poisoned. A manipulation of 10,000 labels during training of the neural network cannot be sufficiently reverted by any of the methods, with SISA as the only exception: although it provides the lowest performance in this experiment, it is not affected by the number of poisoned labels, as all shards are simply retrained with the corrected labels. \begin{figure}[] \input{figures/Unlearning/modelsize/modelsize.pgf} \caption{Accuracy (left) and runtime (right) on held-out data after unlearning \num{5000} poisoned labels for multiple model sizes. The dashed line corresponds to the accuracy of the models on clean data.} \label{fig:modelsize} \end{figure} In addition, we present the results of a scalability experiment in~\cref{fig:modelsize}. To examine how the fidelity and efficiency change with increasing model size, we compare the model used in the previous experiments (\num{1.8e6} parameters) against larger models with up to \num{11e6} parameters. For both the first-order and second-order method, the resulting accuracy remains roughly the same, whereas the runtime increases linearly with a very shallow slope. This indicates that our approach scales to larger models and that the linear runtime bounds of the underlying algorithm hold in practice~\citep{AgaBulHaz17}. \paragraph{Efficiency evaluation} Lastly, we evaluate the unlearning runtime of each approach to quantify its efficiency. The experiments are executed on the same hardware as the previous ones. In contrast to the language model, however, we are able to perform all calculations on the GPU, which allows for a fair comparison. \cref{fig:cnn-unlearning} shows that the first-order update and fine-tuning are very efficient and can be computed in approximately ten seconds. The second-order update is slightly slower but still two orders of magnitude faster than retraining, whereas the sharding approach is the slowest. As expected, all shards are affected and therefore need to be retrained. Consequently, together with fine-tuning, our first- and second-order updates provide the best strategy for unlearning in this scenario. \section{Limitations} \label{sec:limitations} Although our approach successfully removes features and labels in different experiments, it obviously has limitations that need to be considered in practice and are discussed in the following. \paragraph{Scalability of unlearning} As shown in our empirical analysis, the efficacy of unlearning decreases with the number of affected data points, features, and labels.
While privacy leaks with hundreds of sensitive features and thousands of affected labels can be handled well with our approach, changing millions of data points in a single update likely exceeds its capabilities. If our approach allowed correcting changes of arbitrary size, it could be used as a \mbox{``drop-in''} replacement for all learning algorithms---which obviously is impossible. That is, our work does not violate the \emph{no-free-lunch theorem}~\citep{WolMac97}, and unlearning using closed-form updates complements but does not replace the variety of existing algorithms. Nevertheless, our method offers a significant speed advantage over retraining and sharding in situations where a moderate number of data points (hundreds to thousands) need to be corrected. In such situations, it is an effective tool and countermeasure to mitigate privacy leaks when the entire training data is no longer available or retraining would not solve the problem fast enough. \paragraph{Non-convex loss functions} Our approach can only guarantee certified unlearning for strongly convex loss functions that have Lipschitz-continuous gradients. While both update steps of our approach work well for neural networks with non-convex functions, as we demonstrate in the empirical evaluation, they require an additional measure to validate successful unlearning. Fortunately, such external measures are often available, as they typically provide the basis for characterizing data leakage prior to its removal. Similarly, we use the fidelity to measure how our approach corrects a poisoning attack. Moreover, the research field of Lipschitz-continuous neural networks~\mbox{\citep{VirSca18, FazRobHas+19, GuoEibPfa+20}} provides promising models that may result in better unlearning guarantees in the near~future. \paragraph{Unlearning requires detection} Finally, we point out that our method requires knowledge of the data to be removed. Detecting privacy leaks in learning models is a hard problem that is outside the scope of this work. The nature of privacy leaks depends on the considered data, learning models, and application. For example, the analysis of \citet{CarLiuErl+19,CarTraWal+21} focuses on sequential data in generative learning models and cannot be easily transferred to other learning models or image data. As a result, we limit this work to repairing leaks rather than finding them. \section{Conclusion} \label{sec:conclusions} Instance-based unlearning is concerned with removing data points from a learning model \emph{after} training---a task that becomes essential when users demand the ``right to be forgotten'' under privacy regulations such as the GDPR. However, privacy-sensitive information is often spread across multiple instances, impacting larger portions of the training data. Instance-based unlearning is limited in this setting, as it depends on a small number of affected data points. As a remedy, we propose a novel framework for unlearning features and labels based on the concept of influence functions. Our approach captures the changes to a learning model in a closed-form update, providing significant speed-ups over other approaches. We demonstrate the efficacy of our approach in a theoretical and empirical analysis. Based on the concept of differential privacy, we prove that our framework enables certified unlearning on models with a strongly convex loss function and evaluate the benefits of our unlearning strategy in empirical studies on spam classification and text generation.
In particular, for generative language models, we are able to remove unintended memorization while preserving the functionality of the models. We hope that this work fosters further research on machine unlearning and sharpens theoretical bounds on privacy in machine learning. To support this development, we make all our implementations and datasets publicly available~at \begin{gather*}\texttt{\small https://github.com/alewarne/MachineUnlearning} \end{gather*} \begin{acks} The authors acknowledge funding from the German Federal Ministry of Education and Research (BMBF) under the project BIFOLD (FKZ 01IS18025B). Furthermore, the authors acknowledge funding by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany’s Excellence Strategy EXC 2092 CASA-390781972. \end{acks} {\balance
\section{Introduction} Electronic-based precision sensing in ion traps allows for the accurate determination of the eigenfrequencies of a stored charged particle from the current it induces in the trap electrodes. This minute current is amplified by means of a tuned circuit operated at liquid-helium temperature, which is attached to several amplification stages or inductively coupled to a DC SQUID \cite{Wine1975,Corn1989,Vand1995}. This method has been implemented at several university laboratories using a single ion, or even two moving in the same magnetron orbit, to perform measurements of the mass-to-charge ratio of the stored particle with relative uncertainties of the order of $10^{-11}$. Applications range from the determination of the fine-structure constant $\alpha ^{-1} = 137.0359922(40)$ using measurements on the mass doublets $^{133}$Cs$^{3+}$-CO$_2^+$ and $^{133}$Cs$^{3+}$-C$_5$H$_6^+$ \cite{Brad1999}, to tests of Einstein's relationship $E = mc^2$ from measurements on $^{29}$Si$^+$-$^{28}$SiH$^+$ and $^{33}$S$^+$-$^{32}$SH$^+$ \cite{Rain2005}, to important contributions in metrology, such as the determination of the mass of $^{28}$Si$^+$ for the proposed silicon-ball based S.I. kilogram \cite{Reds2008}. The technique itself has allowed carrying out the most accurate measurement of the mass of the electron $m_e=0.000 548 579 909 067(14)(9)(2)$\,u using $^{12}$C$^{5+}$ \cite{Sturn2014}, and that of the proton $m_p = 1.007 276 466 583(15)(29)$\,u \cite{Heis2017}. This detection scheme has also been implemented in Penning traps operating at large scale facilities like CERN, where, already in 1999, a comparison between the mass of the proton and the mass of the antiproton was quoted with an uncertainty of $9\times 10^{-11}$ \cite{Gabr1999}. This ratio has been recently improved to $1(69)\times 10^{-12}$, representing the most precise CPT invariance test within the Standard Model \cite{Ulme2015}. \\ \noindent Despite the success of the Induced Image Current (IIC) detection technique, it has not been demonstrated yet with singly-ionized heavy or superheavy elements. Furthermore, measurements on short-lived atomic nuclei produced at accelerators might not be practicable with this technique, due to the time needed to perform the measurement, which might be considerably larger than the nuclear half-life. Besides the developments being carried out in IIC detection at several Penning traps coupled to Radioactive Ion Beam facilities, a different approach, based on using a laser-cooled $^{40}$Ca$^+$ ion{\footnote {For laser-cooling in an ion trap see e.g. \cite{Blat2003}.}} as a sensor to optically detect the motional frequencies of the ion of interest, is under development at the University of Granada \cite{Rodr2012}. The final goal will be accomplished in two steps: first by monitoring two ion species in the same Penning trap, in a similar way as was done in a Paul trap for two $^{40}$Ca$^+$ ions \cite{Drew2004}, and second by storing the laser-cooled ion in a different and adjacent trap, following a previous idea described elsewhere \cite{Wine1990}. Monitoring two ion species in the same trap might allow for fast and sufficiently precise mass measurements on Superheavy Elements produced at GSI and so far not available at the SHIPTRAP facility \cite{Bloc2010,Mina2012}.
The second approach of storing the ions in different traps \cite{Rodr2012,Corn2016} will allow probing the eigenfrequencies of the ion of interest in the absence of Coulomb interaction, while still continuously monitoring the magnetic field strength through the sensor ion. Since very small oscillation amplitudes can be detected optically \cite{Domi2017}, this might allow reducing the relative mass uncertainty to the level where the results can be utilized for neutrino mass spectrometry (see e.g. \cite{Elis2015}).\\ \noindent In this contribution we characterize analytically the motion of two Doppler-cooled trapped $^{40}$Ca$^+$ ions exposed to external fields of different nature, detailing the description of the dynamics of the two-ion crystal in the presence of white noise and the optical response of the imaging system. The observation of eigenfrequencies in a two-ion crystal through the fluorescence signal from a laser-cooled $^{40}$Ca$^+$ ion is the first approach in our on-going Penning-trap experiment. \section{The Paul-trap setup} \begin{figure}[t] \centering \includegraphics[width=1.2\linewidth]{Figure1} \vspace{-3.5cm} \caption{a) Level scheme for the photoionization of Ca atoms, b) relevant level structure for the laser cooling of $^{40}$Ca$^+$, c) three-dimensional technical drawing of the open-ring Paul trap used for the experiments reported in this publication. Three laser beams are used for Doppler cooling in the axial (1x 397~nm) and radial (1x 397~nm, 1x 866~nm) directions. Another two lasers are utilized to photoionize the calcium atoms. The cone in the figure depicts the photons from the fluorescence distribution collected with the optical system. The scheme to apply the different voltages to perform the experiments reported here is also shown.\label{fig_1}} \end{figure} The experiments reported in this publication have been carried out using $^{40}$Ca$^+$ ions confined and laser-cooled in an open-ring Paul trap. The experimental set-up was described in detail in Ref.~\cite{Corn2015} and only the main components are outlined here. It is made of two sets of three concentric rings centred on the $z$-axis, built inside a 6-way CF100 vacuum cross operated at a background pressure of $\approx 1.0\times 10^{-10}$~mbar. The $^{40}$Ca$^+$ ions are created inside the trap by two-step photoionization using near-UV diode lasers, one to drive the transition 4$^1$S$_{0}\rightarrow \,$4$^1$P$_{1}$ in the $^{40}$Ca atom ($\lambda = 422$~nm), and another one to bring the electron to the continuum ($\lambda =375$~nm). Figure~\ref{fig_1} shows a cut from a 3D CAD drawing of the Paul trap indicating the directions of the atomic calcium beam together with the laser beams for photoionization, and those utilized to perform Doppler cooling, i.e., to drive the cooling transition 4$^2$S$_{1/2}\rightarrow$4$^2$P$_{1/2}$ with $\lambda =397$~nm, and to pump the metastable 3$^2$D$_{3/2}$ state to 4$^2$P$_{1/2}$ with $\lambda = 866$~nm. The fluorescence light from the cooling transition (397~nm) is collimated using a commercial system, which provides a magnification of $\simeq 6.75(5)$ in the focal plane of an EMCCD (Electron-Multiplying Charge-Coupled Device).
An effective pixel size of $\approx 2.4$~$\mu $m has been obtained from the ratio of the minimum distance between two trapped ions due to Coulomb interaction, to the distance in pixels separating the centroids of the fluorescence distributions projected on the axial direction.\\ \noindent The trapping field is generated by a radiofrequency (RF) signal with frequency $\omega _{\scriptsize{\hbox{RF}}}=2\pi \times 1.47$~MHz, and voltage $V_{\scriptsize{\hbox{RF}}}=730$~V$_{\scriptsize{\hbox{pp}}}$ ($2 \times \Phi _0$), and an electrostatic potential $U_{\scriptsize{\hbox{DC}}}=-11.5$~V applied to the same electrodes, as shown in Fig.~\ref{fig_1}. The secular frequency is given by \begin{equation} \omega _{u}=\frac{\omega _{\scriptsize{\hbox{RF}}}}{2}\sqrt{a _{u}+\frac{q _{u}^{\scriptsize{\hbox{2}}}}{2}}, \qquad u=r,z \end{equation} where $q _{u}$ and $a_{\scriptsize{\hbox{u}}}$ are the so-called Mathieu parameters \cite{Ghos1995}. The trap (secular) frequency in the axial direction $\omega _{\scriptsize{\hbox{z}}}$ is about $2\pi \times 80$~kHz, and for these values of $V_{\scriptsize{\hbox{RF}}}$ and $U_{\scriptsize{\hbox{DC}}}$, $\omega _{\scriptsize{\hbox{z}}}<\omega _{\scriptsize{\hbox{r}}}$. The Mathieu parameter $q _{\scriptsize{\hbox{z}}} \simeq 0.25$ is within the range where the adiabatic approximation is valid~\cite{Ghos1995}. \section{Experimental results} \begin{figure}[t] \centering \includegraphics[width=1.1\linewidth]{Figure2} \vspace{-5.5cm} \caption{Fluorescence images (left) and projections in the axial direction (right) for one and two ions. The distance between the two laser-cooled ions is $30$~$\mu$m due to the balance between trapping potential and Coulomb repulsion.\label{fig_2}} \end{figure} The results presented here constitute detailed investigations of the optical response of a two-ion crystal compared to the single ion case, for two types of external driving, namely harmonic dipolar and white noise fields (Fig.~\ref{fig_1}). The baseline for these measurements is shown in Fig.~\ref{fig_2}, where one can observe the fluorescence image in the EMCCD sensor of one and two laser-cooled $^{40}$Ca$^+$ ions together with their axial distributions. A single ion is cooled down to a temperature below $10$~mK \cite{Domi2017} and performs a random walk around the equilibrium position in the axial potential well of the trap. When two ions are laser cooled below a certain temperature, they move around fixed average positions separated by a distance \cite{Jame1998} \begin{equation} \Delta z=(e^2/2\pi \epsilon _0m\omega _{{\scriptsize{\hbox{z}}}}^2)^{1/3}\approx 30~\mu \hbox{m}, \end{equation} \noindent due to the balance between the Coulomb and trapping potentials. In this scenario, the centroids of the ion motions are $15$~$\mu$m apart from the minimum of the potential well. This gives rise to some micromotion, so that the spread of the fluorescence distribution is not only caused by the finite resolution of the imaging system and the recoil heating. This can be seen from the comparison of the width of the individual distributions: $\sigma _{\scriptsize{\hbox{1-ion}}}^{\scriptsize{\hbox{single}}}=4.3$~$\mu $m versus $\sigma _{\scriptsize{\hbox{1-ion}}}^{\scriptsize{\hbox{crystal}}}=5.7$~$\mu $m.\\ \subsection{Harmonic dipolar field} \begin{figure}[t] \centering \includegraphics[width=1.1\linewidth]{Figure3} \vspace{-8.1cm} \caption{Fluorescence images of two laser-cooled ions when their COM mode is probed with an external dipolar field, driven at different frequencies close to resonance.
The resonance frequency is $79.7$~kHz. \label{fig_3}} \end{figure} The response of a single ion to a harmonic dipolar force in resonance with the ion's motion has been recently studied \cite{Domi2017}. Here, we compare this response with that of a two-ion crystal. Two ions confined in the same potential well and driven by a harmonic field of intensity above the noise level oscillate in phase following the driving force. Figure~\ref{fig_3} shows the fluorescence image of two laser-cooled ions probed at different frequencies $\omega _{\scriptsize{\hbox{dip}}}$ of the dipolar field. At $\omega _{\scriptsize{\hbox{dip}}}=2\pi \times 78.5$~kHz, the oscillation amplitude of the ions in the steady state is below half the distance between them, and therefore they can be individually observed. Increasing $\omega _{\scriptsize{\hbox{dip}}}$ towards $\omega _{\scriptsize{\hbox{z}}}$, the oscillation amplitude increases until the fluorescence distributions of the two ions start to overlap, leading to a characteristic image. The axial distribution spreads further as the resonance is approached and narrows again thereafter (when $\omega _{\scriptsize{\hbox{dip}}}>\omega _{\scriptsize{\hbox{z}}}$), as observed for $\omega _{\scriptsize{\hbox{dip}}}=2\pi \times 79.8$~kHz, $2\pi \times 80.0$~kHz and $2\pi \times 80.2$~kHz. In the last case, the oscillation amplitude is approximately half of the separation distance between the ions. Projections into the axial plane for two of those images are shown in Fig.~\ref{fig_7}, along with the fits to the model discussed in the following.\\ \noindent The image obtained on the EMCCD is a convolution of the time-averaged motion of the ion with the resolution of the optical system. Therefore, the fits to the experimental data in Fig.~\ref{fig_7} are performed using a function resulting from the convolution of a Lorentzian with the probability density function of the harmonic oscillator as presented in Ref.~\cite{Domi2017}, but now using the analytical solution of the following integral \cite{wesenberg2007}: \begin{equation} F(z)=\int \frac{\Gamma}{2\pi^2}\frac{1}{\left(\frac{\Gamma}{2}\right)^2+(f-(z-z_0))^2}\sqrt{\frac{1}{\rho _{\scriptsize{\hbox{z,max}}}^2-f^2}}df, \label{eq:int} \end{equation} \noindent where $f$ is the integration variable, $\Gamma$ is the Lorentzian width accounting for the resolution of the optical system taking into account the motion of the ion, $z$ is the position of observation, and $z_0$ is the equilibrium position of the ion. The analytical solution is given by \cite{wesenberg2007}: \begin{equation} F(z)=\frac{2}{\pi\Gamma^2}\frac{A_0}{2\rho _{\scriptsize{\hbox{z,max}}}} \text{Im}\left[\frac{i}{\sqrt{1-\Gamma^2\frac{\left(\frac{2(z-z_0)}{\Gamma}+i\right)^2}{4\rho _{\scriptsize{\hbox{z,max}}}^2}}}\right], \label{fitting} \end{equation} \noindent where $\rm Im$ stands for the imaginary part. The results from the fits to each ion's distribution following Eq.~(\ref{fitting}), $F(z_1)$, $F(z_2)$ and the sum $F(z_1)+F(z_2)$, are also shown in Fig.~\ref{fig_7}. There are five free parameters: the width of the Lorentzian distribution, the central position for each ion $z_1$ and $z_2$, with $z_0=(z_2-z_1)/2$, $\rho _{\scriptsize{\hbox{z,max}}}$($=\rho _{\scriptsize{\hbox{z,max}}}^{\scriptsize{\hbox{ion1}}}=\rho _{\scriptsize{\hbox{z,max}}}^{\scriptsize{\hbox{ion2}}}$), and a scaling parameter $A_0$.
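Equation~(\ref{fitting}) can be transcribed directly into a fitting routine. The following minimal sketch assumes SciPy; the starting values and the restriction to a single ion are purely illustrative.
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def F(z, gamma, z0, rho_max, A0):
    # Convolution of a Lorentzian with the harmonic-oscillator density,
    # direct transcription of the closed-form expression above.
    u = 2.0 * (z - z0) / gamma + 1j
    val = 1j / np.sqrt(1.0 - gamma**2 * u**2 / (4.0 * rho_max**2))
    return (2.0 / (np.pi * gamma**2)) * (A0 / (2.0 * rho_max)) * val.imag

# Example: fit the axial photon distribution of one ion (positions in um).
# popt, pcov = curve_fit(F, z_data, counts, p0=[6.0, 0.0, 10.0, 1.0])
\end{verbatim}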
We have observed that the Lorentzian width might vary for large amplitudes of $\rho _{\scriptsize{\hbox{z,max}}}$ and we assign this to the effect of the micromotion, which has an amplitude of about 0.10--0.15$\rho _{\scriptsize{\hbox{z,max}}}$. \begin{figure}[t] \centering \includegraphics[width=1.05\linewidth]{Figure4} \vspace{-2.4cm} \caption{Axial photon distributions for two of the fluorescence images shown in Fig.~\ref{fig_3}. The frequencies of the dipolar field are 78.5~kHz (left) and 80.2~kHz (right). The uncertainty for each data point corresponds to 3\,$\sigma$. The solid lines represent the fits following Eq.~(\ref{fitting}). The lower panels show the residuals from each fit. \label{fig_7}} \end{figure} \subsection{Sensitivity: one and two ions} The oscillation amplitude of the driven ions $\rho _{\scriptsize{\hbox{z,max}}}$ is a measure of the sensitivity of the system to an external force. This is obtained using the function given by Eq.~(\ref{fitting}). Figure~\ref{fig_8} shows $\rho _{\scriptsize{\hbox{z,max}}}$ as a function of the frequency of the external dipolar field ($\omega _{\scriptsize{\hbox{dip}}}/2\pi$) for one and two ions. The resulting data are fitted using the steady state solution of a driven-damped harmonic oscillator \cite{Domi2017} \begin{equation} \rho _{z,\scriptsize{\hbox{max}}}=\frac{F_e}{m} \{ (2 \gamma_z \omega_{\rm dip})^2 + (\omega_z^2 - \omega_{\rm dip}^2)^2 \}^{-1/2}, \label{fit_osc} \end{equation} \noindent where $F_e$ and $\omega_{\rm dip}$ are, respectively, the amplitude and the frequency of the harmonic driving force, and $\gamma_z$ the damping coefficient. In the single ion case $m$ corresponds to the ion's mass, whereas in the two-ion case $m$ represents the mass of the centre-of-mass motion. $F_e$ is the same for both the single ion and the two-ion crystal, and the result is in agreement with previous results quoted in \cite{Domi2017}. An outcome from the measurements is that the damping coefficient ($\gamma _z$) differs: $\gamma _z=309(21)$~s$^{-1}$ for a single ion, against $\gamma _z=354(20)$~s$^{-1}$ for two ions. This can be observed in Fig.~\ref{fig_8}, where the solid lines are the fitting curves. Assuming the same laser conditions, this difference can be assigned to the Coulomb interaction between the two ions, which causes stronger damping. \begin{figure}[ht] \centering \includegraphics[width=0.8\linewidth]{Figure5} \vspace{0cm} \caption{Oscillation amplitude of one ion as a function of the frequency of the dipolar field. The solid lines represent the fits following Eq.~(\ref{fit_osc}). Two situations have been considered: the amplitude of a single ion, and the amplitude of one ion that is part of a two-ion crystal. \label{fig_8}} \end{figure} \subsection{White noise} \begin{figure}[b] \centering \includegraphics[width=1.15\linewidth]{Figure6} \vspace{-8.5cm} \caption{Evolution of the fluorescence image of a single laser-cooled ion for different amplitudes of an applied noise signal.\label{fig_4}} \end{figure} \begin{figure}[ht] \centering \includegraphics[width=1.1\linewidth]{Figure7} \vspace{-1.8cm} \caption{Top: Evolution of the fluorescence image of two laser-cooled ions with increasing amplitudes of an applied noise signal. Bottom: Axial photon distributions.\label{fig_5}} \end{figure} Electrical noise at the surfaces of trap electrodes leads to heating of the trapped ions~\cite{noise2015RMP}.
Although this is an unwanted effect, it has been shown that a well-controlled external noisy force acting on the ions can provide valuable information about their dynamics, in particular on their statistical and thermodynamic descriptions~\cite{Mart2016,rossnagel2016single,ricci2017optically}. We have studied the dynamic response of one and two laser-cooled ions to white noise applied to the electrode, as shown in Fig.~\ref{fig_1}. Figures~\ref{fig_4} and \ref{fig_5} show the fluorescence images for different amplitudes of the white noise for one and two ions, respectively. We see that increasing levels of noise lead to wider distributions of the position, due to enhanced Brownian motion. Note that this enhancement only takes place in the axial direction, parallel to the applied field. The width, and hence the temperature, of the radial modes remains undisturbed, since there is no coupling between modes. The noise leads to an additional heating rate given by \cite{noise2015RMP} \begin{equation} \Gamma_h\simeq\frac{e^2}{4m\hbar \omega _{\scriptsize{\hbox{z}}}}S_E(\omega_{\scriptsize{\hbox{z}}}), \label{eq:heatingrate} \end{equation} \noindent where $S_E(\omega_{\scriptsize{\hbox{z}}})$ is the single-sided spectral density of the noise at frequency $\omega_{\scriptsize{\hbox{z}}}$. Considering a random signal characterized by a Gaussian white noise process, $S_E(\omega_{\scriptsize{\hbox{z}}})$ is proportional to the intensity of the electric-field fluctuations and thus $S_E(\omega_{\scriptsize{\hbox{z}}})\propto V_{\rm noise}^2$. This additional heating source adds up to the natural heating rate in the laser-cooling process due to photon recoil, defining a new Doppler limit, which in the simplest case can be written as \cite{Domi2017}: \begin{equation} T_{\scriptsize{\hbox{z,Doppler}}}=\frac{1}{\gamma _z k_B}(K+\zeta \cdot V_{\scriptsize{\hbox{noise}}}^2), \label{eq:TDoppler} \end{equation} where $\gamma_z$ is the decay time constant (damping coefficient) in the cooling process, $K$ is a constant, and $\zeta \cdot V_{\scriptsize{\hbox{noise}}}^2$ is the heating rate in units of J$\cdot$ s$^{-1}$. Of course, we recover the standard Doppler limit when $V_{\scriptsize{\hbox{noise}}} =0$. Therefore, we expect a linear relation between the temperature of the ion and the intensity of the white noise applied to the electrodes. \\ \begin{figure}[t] \begin{center} \includegraphics[width=0.8\textwidth]{Figure8} \caption{Variance ($\sigma^2$) obtained from the axial projection of the ion-fluorescence image versus noise intensity ($\hbox{V}_{\scriptsize{\hbox{pp}}}^2$). The data points correspond to one ion, when it is isolated (red circles) or it is part of a two-ion crystal (grey circles). The blue dashed lines are used to guide the eye.} \label{fig_6} \end{center} \end{figure} \noindent Figure~\ref{fig_6} shows the effects of white noise on the width of the fluorescence distributions. This is the outcome from the analysis of the measurements partially shown in Figs.~\ref{fig_4} and \ref{fig_5}. For a single ion, the linear relation given by Eq.~(\ref{eq:TDoppler}) holds true. Note that the equipartition theorem sets $T_z \propto \sigma ^2$. This linear dependence breaks down for high intensities of the field and high motional amplitudes, as expected due to the simplifications made in the derivation of Eq.~(\ref{eq:TDoppler})~\cite{Blat2003}. At high intensities, and therefore high amplitudes of motion, micromotion is increasingly relevant, leading to a higher heating rate.
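To give a feeling for the magnitudes involved, the heating rate of Eq.~(\ref{eq:heatingrate}) can be evaluated numerically; the spectral density used below is an arbitrary illustrative value, not a measured one.
\begin{verbatim}
import numpy as np

e    = 1.602176634e-19         # elementary charge (C)
hbar = 1.054571817e-34         # reduced Planck constant (J s)
m    = 40 * 1.66053906660e-27  # mass of 40Ca+ (kg)
wz   = 2 * np.pi * 80e3        # axial secular frequency (rad/s)

S_E = 1e-12  # hypothetical noise spectral density at wz (V^2 m^-2 Hz^-1)

Gamma_h = e**2 * S_E / (4 * m * hbar * wz)   # heating rate (quanta/s)
print(f"heating rate: {Gamma_h:.2e} quanta per second")  # ~1.8e3
\end{verbatim}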
The linear dependence is also observed for two ions at low intensities of the noise signal. Interestingly, this linear region is followed by a plateau in the value of $\sigma^2$, in the interval $V_{\rm noise}^2\in[1000,\,4000]$~$\rm mV^2$. After this plateau, $\sigma^2$ increases again with $V_{\rm noise}$, but with a smaller slope, and thus with a smaller heating rate, reaching a value similar to that of the single ion at those amplitudes. \\ \noindent In our experimental situation, the external noise is perfectly spatially correlated. In such a case, the heating rate for the COM mode of the crystal is proportional to the number of ions $N$ in the string, i.e. $\Gamma_h{(\hbox{COM})}=N\Gamma_h$ \cite{noise2015RMP}, so that \begin{equation} \Gamma _{h\rm 2 ions}{(\hbox{COM})}\simeq\frac{e^2}{2m\hbar\omega_{\rm COM}}S_E(\omega_{\rm COM}), \label{eq:heatingratetot} \end{equation} \noindent and therefore the COM mode of the two-ion crystal absorbs more energy from the field than an isolated ion, resulting in a higher heating rate for each individual ion. In fact, the initial slope for the ion string is almost twice that for the single ion. The beginning of the plateau coincides with the value of the noise at which the two lobes in the fluorescence distribution (Fig.~\ref{fig_5}) are not distinguishable anymore and they merge into a single one. This can be interpreted as the melting of the Coulomb crystal, or an order-to-chaos transition, expected to occur when the kinetic energy of the ions surpasses the Coulomb potential~\cite{Thom2015}. The presence of the plateau suggests there is a region of coexistence of two phases, the ion crystal and the melted cloud. Since the heating rate for the melted cloud is lower than that for the crystal, it might happen that the two ions melt and recrystallize in the region where the kinetic and repulsive energies are similar. In this situation, fluctuations are responsible for the jumps between these two states. In the images taken with the EMCCD camera (see Fig.~\ref{fig_5}), one can observe an average over crystal and melted cloud, blurring the visualization of the two Gaussian distributions. Increasing the intensity of the noise field, the kinetic energy eventually overcomes the Coulomb repulsion and the ions heat with the heating rate of the cloud phase. In this phase, the correlation between the two ions is low; they behave like two isolated particles and thus experience a heating rate similar to the single-ion case. \section{Outlook and perspectives: projection to Penning traps} In this publication, we have described a fully-developed optical detection method that will allow determining with precision the oscillation amplitude of the motion of one or two ions confined in a trap, when they are laser-cooled to temperatures of the order of several mK. We have also studied the response of trapped ions to an external and controllable source of noise, which is important for experiments where one ion is studied through the fluorescence emitted by the neighbouring one, both confined in the same potential well or in different traps.\\ \noindent In the next stage of the experiment, which is headed towards the realization of the so-called \textit{Quantum-Sensor} \cite{Rodr2012, Corn2012} technique by means of a double micro Penning-trap system \cite{Corn2016}, the method of optical ion sensing will be applied to a two-ion crystal in a Penning trap to carry out Sympathetically-Cooled Single Ion Mass Spectrometry (SCSI-MS)\cite{Staa2010}.
The crystal will consist of a single $^{40}$Ca${^+}$ ion of mass $m$, and a singly-charged heavy or super-heavy ion (e.\,g. $^{163}$Ho, $^{187}$Re, $^{187}$Os) of mass $M$, stored along the magnetic field. The SCSI-MS method relies on the determination of the mass ratio $\mu=M/m$, by measuring the eigenfrequencies of the crystal in the harmonic potential well of the trap. According to \cite{Mori2001}, the modified motional frequencies $\Omega_{\pm}$ for a two-ion Coulomb crystal read \begin{align} \Omega_{\pm}^2=\omega_{z,\,\mu=1}^2\left(1+\frac{1}{\mu}\pm\sqrt{1+\frac{1}{\mu^2}-\frac{1}{\mu}}\right), \label{Eq.1} \end{align} where $\omega_{z,\,\mu=1}=\omega_{\text{ref}}$ represents the COM eigenfrequency of two ions of equal mass. It has been shown in a linear Paul trap \cite{Drew2004} that this method provides only a mass resolution $M/\Delta M$ of $\sim 10^{2}$- $10^{3}$, due to systematic effects, like shifts in the COM eigenfrequency caused by damping effects introduced by the laser-ion interaction, or instabilities in the trapping potential. However, using this scheme in a Penning trap might allow improving $M/\Delta M$, and also extending the applicability to heavy and superheavy elements, in contrast to Paul traps, where there is a limitation arising from $0.5\lesssim \mu \lesssim 2$. Furthermore, the harmonicity of the trapping potential in a Penning trap is often orders of magnitude better than in a Paul trap, since there are ultra-stable DC voltage sources \cite{Boeh2016}, which provide the potentials with an accuracy of 1~ppm.\\ \noindent Based on the results shown in Figs.~\ref{fig_7} and \ref{fig_8}, Fig.\,\ref{fig_9} illustrates the expected axial fluorescence distribution of a two-ion crystal $^{40}$Ca$^{+}$\,--\,$^{187}$Re$^{+}$ in a Penning-trap, with $\omega_{\text{ref}}$ set to 79.7\,kHz, a resolution of the imaging system of 6\,$\mu$m (Lorentzian width) and an amplitude for the dipolar field of 1\,mV$_{\scriptsize{\hbox{pp}}}$. The mass of the dark $^{187}$Re$^{+}$ ion can be obtained from a precise determination of one of the eigenfrequencies $\Omega_{\pm}$ of the two-ion crystal. \\ \noindent In the Penning trap, further improvement in the mass resolving power, up to a level of $\sim 10^{-6}$-$ 10^{-8}$, can be obtained from a direct measurement of the cyclotron frequency $\omega_{c}=qB/m$ of the dark ion, combining the high sensitivity of the optical method with the well-established Time-Of-Flight Ion-Cyclotron-Resonance (TOF-ICR) technique \cite{Koen1995}, and later by removing the Coulomb interaction if the reference ion and the ion of interest are confined in different traps \cite{Rodr2012}. Both methods rely on controlled energy transfer between the radial and the axial motional degrees of freedom of the two ions, since the detection is done through the axial motion of the $^{40}$Ca$^+$ ion.\\ \noindent It has been shown recently that the highest sensitivity in a Penning trap reaches a level of up to 50\,pm \cite{Gilm2017}. Although this is about 5 to 6 orders of magnitude better than the results presented here, the scheme proposed in this work, i.e., optical monitoring of the ion motion, is highly valuable when short measurement times, as well as non-destructive measurements with single-ion sensitivity, are mandatory. Improving the resolution of the optical system and increasing the magnification will improve sensitivity within the same measurement times. This might be enhanced if a cooling mechanism allowing sub-mK temperatures is implemented.
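As a numerical illustration of Eq.~(\ref{Eq.1}), the expected eigenfrequencies of the $^{40}$Ca$^{+}$\,--\,$^{187}$Re$^{+}$ crystal can be evaluated directly (frequencies in kHz):
\begin{verbatim}
import numpy as np

def two_ion_frequencies(f_ref, mu):
    # Modified axial eigenfrequencies of a two-ion crystal.
    # f_ref: COM frequency for two equal-mass ions; mu = M/m.
    root = np.sqrt(1 + 1/mu**2 - 1/mu)
    f_plus  = f_ref * np.sqrt(1 + 1/mu + root)
    f_minus = f_ref * np.sqrt(1 + 1/mu - root)
    return f_minus, f_plus

f_minus, f_plus = two_ion_frequencies(79.7, 187.0 / 40.0)
print(f_minus, f_plus)  # ~43.8 kHz and ~116.2 kHz: Omega_- lies below
                        # the equal-mass COM frequency, as in Fig. 9
\end{verbatim}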
\begin{figure}[t] \centering \includegraphics[width=0.8\linewidth]{Figure9} \vspace{0cm} \caption{Motional spectra of a two-ion crystal as a function of the dipolar frequency. The spectra are modelled based on the results obtained in this work. According to Eq.~(\ref{Eq.1}), the predicted eigenfrequency $\Omega_{-}$ of the two-ion crystal $^{40}$Ca$^{+}$\,--\,$^{187}$Re$^{+}$ (c) is shifted to a lower frequency with respect to the COM frequency of $^{40}$Ca$^{+}$\,--\,$^{40}$Ca$^{+}$ (a). (b) and (d) are enlarged representations of the areas within the vertical dashed lines in (a) and (c), respectively. \label{fig_9}} \end{figure} \section*{Acknowledgement(s)} We acknowledge support from the European Research Council (ERC StG contract 278648-TRAPSENSOR); Spanish MINECO/FEDER FPA2012-32076, FPA2015-67694-P, UNGR10-1E-501, UNGR13-1E-1830, FIS2015-69983-P; Junta de Andaluc\'ia/FEDER IE-57131; Basque Government PhD grant PRE-2015-1-0394 and project IT986-16. F.D. acknowledges support from Spanish MINECO ``Programa de Garant\'ia Juvenil'' cofunded by the University of Granada. M.J.G.'s work is supported by the Spanish MECD through the grant FPU15-04679. R.A.R. acknowledges support from the ``Juan de la Cierva-Incorporaci\'on'' program from MINECO. S.S. and J.J.D.P. acknowledge support from the University of Granada through the ``Plan propio (Programa de Intensificaci\'on de la Investigaci\'on)''.
\section{Introduction} In the last decades the possibility of \textit{deforming} (rather than breaking) the relativistic symmetries of empty space at the Planck scale ($E_p\sim L_p^{-1} \sim 10^{19}\,\mathrm{GeV}$\footnote{We use units in which $\hbar = c =1$}) has received a considerable amount of attention in the Quantum Gravity community. One reason for this comes from 2+1 dimensional Quantum Gravity, which, because it lacks local propagating degrees of freedom (gravitons), can be quantized with topological QFT methods. Coupling this theory to matter and integrating away the gravitational degrees of freedom, one ends up with a nonlocal effective theory~\cite{matschull,freidel}. The spacetime symmetries that leave this theory invariant are not described by the Poincar\'e group. In its place, one finds a Hopf algebra (or ``quantum group'' \cite{presley, majid22, majid}), which is a deformation of a Lie group into a noncommutative object that depends on a scale with the dimensions of an energy (the Planck scale). Further evidence supporting the emergence of noncommutative-geometric structures in quantum gravity is provided by String Theory~\cite{SeibergWitten}, in which the B-field can take an expectation value such that the string dynamics is effectively described by a field theory on a noncommutative spacetime. The spacetime symmetries of such field theories are described by a Hopf algebra~\cite{Watts2000}.\footnote{Hopf algebras emerged in several other contexts in physics, \emph{e.g.} (quantum) integrable models~\cite{IntegrableSigmaModels}, perturbative QCD~\cite{HopfAlgebras_n_PerturbativeQCD} and AdS/CFT \cite{HopfAlgebras_n_StringTheory} to cite some that are closer to the concerns of the present paper.} The energy scale $E_p$ is introduced, in the Hopf-algebraic context, in a frame-independent way: there is still a 10-dimensional group of symmetries that allows one to connect different inertial observers, but the transformation laws are deformed in an $E_p$-dependent way. There is no breaking of Lorentz invariance, just a deformation of Special Relativity into a theory with two invariant scales ($c$ and $E_p$), sometimes dubbed \emph{Doubly Special Relativity}~\cite{secondo, oriti}. In 3+1 dimensions we cannot reproduce the results obtained in 2+1D Quantum Gravity~\cite{matschull,freidel} because we lack the same level of understanding of the quantum theory of gravity, but we can conjecture that a similar mechanism is at work, and the `ground state' of quantum gravity is not Minkowski spacetime, but rather a quantum homogeneous space whose symmetries are described by a Hopf algebra. The study of the possible Hopf algebra deformations of the Poincar\'e group in 3+1 dimensions then acquires great interest. In the present work, we will be concerned with the infinitesimal version of quantum groups: Lie bialgebras. These play, for quantum groups, the same role that Lie algebras play for Lie groups: they describe the structure of the group in an infinitesimal neighborhood of the identity. A quantum group is essentially a Lie group with an additional layer of structure that allows one to make the algebra of functions on the group noncommutative. In Lie bialgebras this noncommutative structure is linearized. So a Lie bialgebra can be seen as two Lie algebras, one playing the traditional role of commutation relations between generators, and the other playing a novel role associated with the noncommutative structure.
These two Lie algebras have to satisfy certain consistency conditions, namely, that the dual map to the Lie bracket of one algebra has to be a cocycle with respect to the Lie bracket of the other algebra. A theorem due to Drinfel'd~\cite{Drinfeld} established that, modulo global issues, a quantum group can be completely described in terms of its Lie bialgebra.\footnote{Specifically, Drinfel'd theorem states that Lie bialgebras are in one-to-one correspondence with Poisson-Lie structures on the (simply connected) group obtained via the exponential map, and the quantization of the unique Poisson-Lie structure associated to a Lie bialgebra will be the quantum group that corresponds to it~\cite{Drinfeld,Semenov,Bidegain}.} Our goal in this paper is to characterize the possible deformations of the three maximally symmetric algebras of relativistic symmetries: $\mathfrak{iso}(d-1,1)$, $\mathfrak{so}(d,1)$ and $\mathfrak{so}(d-1,2)$ in $d=2$, $3$ and $4$ spacetime dimensions. To this end, in what follows we shall use the word \textit{classification} to mean a listing of all possible deformed Lie bialgebras compatible with the constraints of covariance principles and dimensional analysis. Strictly speaking, a true classification of Lie bialgebras would involve identifying the equivalence classes under automorphisms of the Lie algebras. In this sense we do not present classification results. What we do is to identify the Lie bialgebras that satisfy certain covariance principles and certain physically-motivated dimensional analysis requirements. These constraints greatly reduce the number of `physically interesting' cases, and within this smaller set, the discussion of automorphisms becomes much simpler. In a commutative spacetime $\mathfrak{iso}(d-1,1)$, $\mathfrak{so}(d,1)$ and $\mathfrak{so}(d-1,2)$ describe the symmetry groups of Minkowski, de Sitter and Anti-de Sitter spaces, respectively. We are then considering the possible symmetries of quantum homogeneous spacetimes, which are natural candidates for the vacuum state of quantum gravity. In Section \ref{math} we will present a brief introduction to Lie bialgebras and their mathematics. In Section \ref{bia} we will describe our approach to classifying the Lie bialgebras which can be constructed from the Poincar\'e and (A)dS algebras in dimensions lower or equal to 3+1. In Section \ref{sol} we will list all the deformations we are able to find, in a systematic manner. In Section \ref{conc} we present our conclusions with future perspectives. \section{Mathematical Tools: Lie bialgebra Basics}\label{math} A Lie bialgebra\footnote{For a complete treatment we refer the interested reader to \cite{presley,104,majid}.} is a triple $(\mathfrak{g}, [\,,\,], \delta)$, where $\mathfrak{g}$ is a Lie algebra\footnote{We will work with the field of complex numbers. We will also use for convenience anti-hermitian operators: $i\mathcal{O}\to \mathcal{O}$. Einstein's convention is used everywhere, with Greek indices running from 0 to 3 and Latin indices from 1 to 3.} defined by \begin{equation}\label{cccc} [X_i, X_j]={c_{ij}}^k X_k,\,\,\,\,\,\,\, X_i \in \mathfrak{g},\,\,\,\,\,\,\,{c_{ij}}^k\in \mathbb{C}, \end{equation} where the structure constants ${c_{ij}}^k$ satisfy the Jacobi identity: \begin{equation}\label{jacc} {c_{ij}}^l{c_{lk}}^m+{c_{ki}}^l{c_{lj}}^m+{c_{jk}}^l{c_{li}}^m=0.
\end{equation} $\delta:\mathfrak{g}\to \mathfrak{g}\otimes\mathfrak{g}$ is a skew-symmetric map, the \emph{cocommutator}, obeying a \emph{cocycle condition:} \begin{equation}\label{cocond} \delta([X_i, X_j])=[\delta(X_i), X_j\otimes 1 +1\otimes X_j]+[X_i\otimes1+1\otimes X_i,\delta(X_j) ] . \end{equation} Moreover the dual map $\delta^*:\mathfrak{g}^*\otimes\mathfrak{g}^*\to \mathfrak{g}^* $ is required to be a Lie bracket, making $\mathfrak{g}^*$ into a Lie algebra: \begin{equation} [\xi^i, \xi^j]={f^{ij}}_k \xi^k, \,\,\,\,\,\,\, \xi^i \in \mathfrak{g}^*,\,\,\,\,\,\,\,{f_{ij}}^k\in \mathbb{C},\,\,\,\,\,\,\, \langle \xi^i, X_j\rangle={\delta^i}_j, \end{equation} therefore we can expand the cocommutator in a basis \begin{equation}\label{popo} \delta(X_k)= {f^{ij}}_k X_i\wedge X_j . \end{equation} The structure constants $f$ obey a co-Jacobi identity (Jacobi identity for the dual algebra $\mathfrak{g}^*$): \begin{equation}\label{cojacc} {f^{jk}}_i{f^{lm}}_j+{f^{jl}}_i{f^{mk}}_j+{f^{jm}}_i{f^{kl}}_j=0, \end{equation} we can thus write the cocycle condition \eqref{cocond} in a more explicit manner: \begin{equation}\label{ccccccc} {f^{ab}}_k{c_{ij}}^k={f^{ak}}_i{c_{kj}}^b+{f^{kb}}_i{c_{kj}}^a+{f^{ak}}_j{c_{ik}}^b+{f^{kb}}_j{c_{ik}}^a. \end{equation} This identity can be seen as a compatibility condition between $\mathfrak{g}$ and $\mathfrak{g}^*$ as Lie bialgebra elements. It so happens that if $\mathfrak{g}$ is semisimple then one can construct $\delta$ in an easier way. A Lie bialgebra is said to be a coboundary if there exists an element \begin{equation} r=r^{ij} X_i\wedge X_j ~~~ \in ~~~ \mathfrak{g}\wedge\mathfrak{g}, \end{equation} the ``$r$-matrix'', such that \begin{equation}\label{cobo} \delta(X_i)=[X_i\otimes 1 +1 \otimes X_i, r], ~~~ X_i \in \mathfrak{g}, \end{equation} which is always true when $\mathfrak{g}$ is semisimple, as a consequence of Whitehead's lemma~\cite{laurent2012poisson}. Furthermore in a coboundary Lie bialgebra $r$ is a solution of the \emph{Modified Classical Yang-Baxter equation} (mCYBE): \begin{equation}\label{mCYBE} [X_i\otimes1\otimes1+1\otimes X_i\otimes1+1\otimes1\otimes X_i, [[r, r]]\,]=0, \end{equation} where $[[r, r]]:= [r_{12}, r_{13}]+[r_{12}, r_{23}]+[r_{13}, r_{23}]\,$ is the \emph{Schouten bracket}. Here $r_{12}=r^{ij} X_i\otimes X_j \otimes 1$ and the same convention is taken for $r_{13}$ and $r_{23}$. If $[[r, r]]=0$ then $r$ is said to satisfy the \emph{Classical Yang-Baxter equation} (CYBE). As we said, Drinfel'd proved that a Hopf algebra $H$ can be specified (up to global issues) by its first order deformation, which is a Lie bialgebra~\cite{Drinfeld}. Writing the Hopf algebra coproduct $\Delta: H \to H \otimes H$ as a series in powers of the generators, one can in principle reconstruct higher order terms from the first order cocommutator $\delta$ by solving the co-associativity axiom order by order. Classifying Lie bialgebras is thus a key issue for listing all inequivalent quantum groups. This process is easier if $\mathfrak{g}$ is semisimple, because then one has to look only for all possible $r$ matrices. Lie bialgebras built from $\mathfrak{so}(d-1, 1)$ are always coboundaries since $\mathfrak{so}(d-1, 1)$ is semisimple, but in general this might not be true for any $\mathfrak{iso}(d-1, 1)$. It can be nonetheless shown that all possible deformations of $\mathfrak{iso}(d-1, 1)$ are also coboundaries, for spacetime dimensions $d > 2$ \cite{zaz}.
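For a given candidate deformation, these conditions are straightforward to verify numerically. Expanding both sides of \eqref{cocond} in the basis, the condition takes the equivalent matrix form $\sum_k {c_{ij}}^k\, d_k = (A_i d_j + d_j A_i^{T}) - (A_j d_i + d_i A_j^{T})$, with $(A_i)_{mk}={c_{ik}}^m$ the adjoint action and $d_k$ the antisymmetric matrix of cocommutator coefficients. The sketch below (in Python, with $\kappa$ set to $1$ and the convention $X\wedge Y = X\otimes Y - Y\otimes X$) checks this for the coboundary $\kappa$-deformation of the 1+1-dimensional Poincar\'e algebra, generated by $r=\frac{1}{\kappa}K\wedge P_1$:
\begin{verbatim}
import numpy as np
from itertools import combinations

n, kappa = 3, 1.0              # basis: 0 -> P0, 1 -> P1, 2 -> K

# Structure constants c[i,j,k] for [X_i, X_j] = c[i,j,k] X_k:
# [P0, K] = -P1, [P1, K] = -P0, [P0, P1] = 0
c = np.zeros((n, n, n))
c[0, 2, 1], c[2, 0, 1] = -1.0, 1.0
c[1, 2, 0], c[2, 1, 0] = -1.0, 1.0

# Cocommutators d[k][a,b]: delta(X_k) = d[k]^{ab} X_a (x) X_b
# delta(P0) = 0, delta(P1) = (1/kappa) P1 ^ P0, delta(K) = (1/kappa) K ^ P0
d = np.zeros((n, n, n))
d[1][1, 0], d[1][0, 1] = 1/kappa, -1/kappa
d[2][2, 0], d[2][0, 2] = 1/kappa, -1/kappa

A = np.einsum('iam->ima', c)   # adjoint action: (A_i)_{ma} = c[i,a,m]

for i, j in combinations(range(n), 2):
    lhs = np.einsum('k,kab->ab', c[i, j], d)
    rhs = (A[i] @ d[j] + d[j] @ A[i].T) - (A[j] @ d[i] + d[i] @ A[j].T)
    assert np.allclose(lhs, rhs), (i, j)
print("cocycle condition holds")
\end{verbatim}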
\subsection{The meaning of coalgebraic structures: quantum spacetimes} To have a better idea of the meaning of the additional structures that Lie bialgebras carry, consider a particular Lie-bialgebra deformation of the Poincar\'e Lie algebra $\mathfrak{iso}(3, 1)$ in 3+1 dimensions. In the standard basis $\{ P_0, P_i, K_i, J_i \}$ (which represent time and space translations, boosts and rotations) the commutators take the form: \begin{equation}\label{PoincareAlgebra} \begin{aligned} &[P_0, P_i]=0 \,, & &[P_0, K_i]=-P_i\,,& &[P_0, J_i]=0\,,& \\ &[P_i, P_j]=0\,,& &[J_i, P_j]=\varepsilon_{ijk}P_k\,,& &[J_i, K_j]=\varepsilon_{ijk}K_k \,,& \\ &[K_i, K_j]=-\varepsilon_{ijk}J_k\,,& &[J_i, J_j]=\varepsilon_{ijk}J_k\,& &[P_i, K_j]=-\delta_{ij}P_0\,.& \end{aligned} \end{equation} The so-called $\kappa$-deformation~\cite{lukkkk,mr,waves,Aschieri2017, kappakappa, aschierigeom} (which will feature prominently in the rest of the paper) is generated by the $r$-matrix: $ r = {\frac 1 \kappa} K_i\wedge P_i$. Using \eqref{cobo} we obtain \begin{equation}\label{kappaPoincareBialgebra} \begin{aligned} &\delta(P_0)=0,& &\delta(P_i)= {\frac 1 \kappa} P_i\wedge P_0,& &\delta(K_i)= {\frac 1 \kappa} (K_i\wedge P_0 +\varepsilon_{ijk}P_j\wedge J_k),& &\delta(J_i)=0. \end{aligned} \end{equation} What is the meaning of the above relations? To answer this, consider the dual algebra $\mathfrak{iso}(3, 1)^*$, generated by the ten basis elements $\{a^\mu, \omega^\mu{}_\nu \}$. This algebra is, in the undeformed case, the \emph{algebra of functions over an infinitesimal neighbourhood around the identity of the Poincar\'e group}~\cite{flaaaaaaa}. The basis elements are to be understood as \emph{coordinate functions} on the group manifold (\emph{i.e.} $a^\mu$ are four functions which associate to a group element the corresponding translation vector in the standard representation). The commutation relations~(\ref{PoincareAlgebra}) are dual to a set of rules $a^\mu \to a^\nu \wedge \omega^\mu{}_\nu$, $\omega^\mu{}_\nu \to\omega^\mu{}_\rho \wedge \omega^\rho{}_\nu$, which encode the way two infinitesimal Poincar\'e transformations combine (\emph{i.e.} they encode the group product). In the deformed case, we can consider the dual of the $\kappa$-deformed Lie bialgebra, and the novelty is that the cocommutator map~(\ref{kappaPoincareBialgebra}) dualizes to a set of commutation relations for the coordinate functions:\footnote{We define $\xi^i = \omega^i{}_0$ and $\omega^i= {\frac 1 2} \varepsilon^{ijk} \omega_{jk}$.} \begin{equation}\label{kappaPoincareDualBialgebra} \begin{aligned} &[x^0, x^i]=-{\frac 1 \kappa} x^i,& &[x^0, \xi^i]=- {\frac 1 \kappa}\xi^i,& &[x^0, \omega^i]=0, \\& [x^i, x^j]=0,& &[x^i, \xi^j]=0,& &[x^i, \omega^j]={\frac 1 \kappa} \varepsilon_{ijk}\xi^k, \\& [\xi^i, \xi^j]=0,& &[\xi^i, \omega^j]=0,& &[\omega^i, \omega^j]=0. \end{aligned} \end{equation} The algebra of functions on the group manifold [which was commutative, being endowed with the pointwise product between functions, $(f \cdot g) (x)= f(x) g(x) = g(x) f(x) = (g \cdot f) (x)$] is thus generalized to a noncommutative algebra. The group manifold no longer admits the interpretation of a topological manifold; it is, instead, a sort of `quantum manifold': a \emph{noncommutative geometry}.
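Since \eqref{kappaPoincareDualBialgebra} arises by dualizing a cocommutator, it must itself close into a Lie algebra, i.e. satisfy the Jacobi identity. This is easy to confirm numerically; a minimal sketch (with $\kappa$ set to $1$) is:
\begin{verbatim}
import numpy as np
from itertools import product

n, kappa = 10, 1.0  # basis: 0 -> x^0, 1-3 -> x^i, 4-6 -> xi^i, 7-9 -> om^i
f = np.zeros((n, n, n))          # dual brackets [y_a, y_b] = f[a,b,c] y_c

eps = np.zeros((3, 3, 3))        # Levi-Civita symbol
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1

for i in range(3):
    f[0, 1 + i, 1 + i] = -1 / kappa   # [x^0, x^i]  = -(1/kappa) x^i
    f[0, 4 + i, 4 + i] = -1 / kappa   # [x^0, xi^i] = -(1/kappa) xi^i
    for j, k in product(range(3), repeat=2):
        f[1 + i, 7 + j, 4 + k] = eps[i, j, k] / kappa  # [x^i, om^j]
f -= np.transpose(f, (1, 0, 2))       # antisymmetry of the bracket

# Jacobi identity: cyclic sum of [[y_a, y_b], y_c] must vanish
ok = all(
    np.allclose(sum(np.einsum('m,md->d', f[x, y], f[:, z, :])
                    for x, y, z in ((a, b, c), (c, a, b), (b, c, a))), 0)
    for a, b, c in product(range(n), repeat=3))
print("Jacobi identity holds:", ok)
\end{verbatim}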
In the case of the $\kappa$-deformation, the algebra~\eqref{kappaPoincareDualBialgebra} admits a subalgebra of translations: \begin{equation} [ x^0 , x^i ] = - \frac{1}{\kappa} x^i \,, \qquad [ x^i , x^j ] = 0 \,, \end{equation} which can be interpreted as the algebra of coordinates on a 4-dimensional noncommutative spacetime, known as \emph{$\kappa$-Minkowski}.\footnote{The fact that translation coordinates close a subalgebra is a property of the Lie bialgebra called \emph{coisotropy}. Together with other properties (which $\kappa$-Poincar\'e satisfies), it defines the notion of \emph{quantum homogenous space}~\cite{AngelFlavioUpcomingPaper}.} Notice that the Lie bialgebra structures can at best deliver a \emph{Lie algebra} of noncommutative coordinates, \emph{i.e.} the commutators have to be linear. In full generality, one can expect commutation relations which are arbitrary functions of the coordinates, and indeed that is what happens with the full quantum group in $\kappa$-Poincar\'e, of which the Lie bialgebra is but the first order in a power expansion (see also~\cite{LukierskiQuadratic}, where a noncommutative spacetime with quadratic commutation relations is shown, which in our formalism, at the Lie bialgebra level, would appear commutative). Nevertheless, the lowest order of this expansion is expected to be the least suppressed by the physical constants that control the quantum deformation (see next Section), and in this paper we are concerned with leading-order effects. \section{Lie-bialgebra Deformations of Relativistic Lie algebras}\label{bia} In this Section we shall describe our approach for seeking and classifying Lie-bialgebra deformations. We call a \textit{deformation} of a Lie algebra with structure constants ${c_{ij}}^k$ a cocommutator map satisfying \eqref{cojacc} and \eqref{ccccccc}. The goal of this Section is to investigate, through a systematic approach, what Lie bialgebra candidates can serve as generalizations of the algebras of homogeneous spacetime symmetries. We shall find that the well-known $\kappa$-Poincar\'e group \cite{Lukierski1994} (with its related algebra of coordinates $\kappa$-Minkowski) is, under certain assumptions, the unique Hopf algebra extending the classic Poincar\'e Lie group to a noncommutative framework in $(3+1)$D.
Consider the following Lie algebra $\mathfrak{g}_\Lambda$: \begin{equation}\label{Poincare/dS/AdS-algebra} \begin{aligned} &[M^{\mu\nu}, M^{\rho\sigma}] = \eta^{\nu\rho}M^{\mu\sigma}-\eta^{\mu\rho}M^{\nu\sigma}-\eta^{\sigma\mu}M^{\rho\nu}+\eta^{\sigma\nu}M^{\rho\mu} \,, \\ &[P^\mu,M^{\rho\sigma}] = \eta^{\mu\rho}P^\sigma-\eta^{\mu\sigma}P^\rho \,, \qquad [P^\mu, P^\nu] = - \Lambda ~ M^{\mu\nu} \,, \end{aligned} \end{equation} splitting space ($i,j,k,\dots=1,2,3$) and time indices, and introducing the boost and rotation generators: \begin{equation} M^{0i}=K_i \,, \qquad J_i=-\frac{1}{2}\varepsilon_{ijk}M^{jk} \,, \qquad M^{ij} = - \varepsilon^{ijk} J_k \,, \end{equation} which means that $J_1= - M^{23}$, $J_2 = - M^{31}$ and $J_3 = - M^{12}$, with the following convention for the Minkowski metric: $\eta^{00} = -1$ and $\eta^{ij} = \delta^{ij}$, we get the following commutation relations: \begin{equation}\label{(A)dS_algebra_3+1_space_indicesson} \begin{aligned} &[P_0, P_i]=-\Lambda K_i \,, & &[P_0, K_i]=-P_i\,,& &[P_0, J_i]=0\,,& \\ &[P_i, P_j]=\Lambda \varepsilon_{ijk} J_k\,,& &[J_i, P_j]=\varepsilon_{ijk}P_k\,,& &[J_i, K_j]=\varepsilon_{ijk}K_k \,,& \\ &[K_i, K_j]=-\varepsilon_{ijk}J_k\,,& &[J_i, J_j]=\varepsilon_{ijk}J_k\,& &[P_i, K_j]=-\delta_{ij}P_0\,.& \end{aligned} \end{equation} The algebra above corresponds to the Poincar\'e algebra $\mathfrak{iso}(3,1)$ when $\Lambda=0$, to the de Sitter algebra $\mathfrak{so}(4,1)$ when $\Lambda>0$ and to the Anti-de Sitter algebra $\mathfrak{so}(3,2)$ when $\Lambda<0$. In order to deform \eqref{Poincare/dS/AdS-algebra} we introduce the cocommutator map $\delta : \mathfrak{g}_\Lambda \to \mathfrak{g}_\Lambda \wedge \mathfrak{g}_\Lambda$, whose most general form we write as \begin{equation}\label{General_cocommutator} \begin{split} \delta(P_\mu) &= \mathcal{A}_\mu^{\:\: \rho\sigma}P_\rho\wedge P_\sigma+\mathcal{B}_\mu^{\:\:\rho\sigma\gamma}P_\rho\wedge M_{\sigma\gamma}+\mathcal{C}_\mu^{\:\:\rho\sigma\gamma\delta} M_{\rho\sigma}\wedge M_{\gamma\delta}, \\ \delta(M_{\mu\nu}) &= \mathcal{D}_{\mu\nu}^{\:\:\:\: \rho\sigma}P_\rho\wedge P_\sigma+\mathcal{E}_{\mu\nu}^{\:\:\:\:\rho\sigma\gamma}P_\rho\wedge M_{\sigma\gamma}+\mathcal{F}_{\mu\nu}^{\:\:\:\:\rho\sigma\gamma\delta} M_{\rho\sigma}\wedge M_{\gamma\delta}, \end{split} \end{equation} and impose algebraic conditions on $\delta$ that define a Lie bialgebra: Jacobi identity on $ \mathfrak{g}^*_\Lambda$ structure constants (co-Jacobi identity) and cocycle condition \eqref{ccccccc}. The coefficients $\{\mathcal{A, B, C, D, E, F}\}$ will be functions of the physical constants $L_p$ and $\Lambda$, chosen on the basis of certain symmetry requirements. \subsection{Dimensional Analysis}\label{DimAn} If we work in units such that $c=\hbar=1$, the translation generators $P_\mu$ of \eqref{Poincare/dS/AdS-algebra} have the dimensions of an energy, the Lorentz generators $M_{\mu\nu}$ are dimensionless and the cosmological constant $\Lambda$ has dimensions of energy squared.
We can make the $P_\mu$ generators dimensionless by dividing them by the square root of the norm of the cosmological constant: \begin{equation} P_\mu = \sqrt{|\Lambda|} \, Q_\mu\,, \end{equation} then, if we introduce \begin{equation} \lambda = \text{sign}(\Lambda) \,, \end{equation} (with the convention that $\Lambda =0$ $\Rightarrow$ $\lambda =0$) we can make the whole algebra \eqref{Poincare/dS/AdS-algebra} dimensionless: \begin{equation}\label{Dimensionlessbialgebra} \begin{split} [M^{\mu\nu}, M^{\rho\sigma}] &= \eta^{\nu\rho}M^{\mu\sigma}-\eta^{\mu\rho}M^{\nu\sigma}-\eta^{\sigma\mu}M^{\rho\nu}+\eta^{\sigma\nu}M^{\rho\mu},\\ [Q^\mu,M^{\rho\sigma}] &= \eta^{\mu\rho}Q^\sigma-\eta^{\mu\sigma}Q^\rho, \qquad [Q^\mu, Q^\nu] =-\lambda M^{\mu\nu}. \end{split} \end{equation} If we now write the most general cocommutator with the dimensionless variables $Q_\mu$, $M_{\mu\nu}$, it will be of the form: \begin{equation} \begin{split} \delta(Q_\mu) &= a_\mu^{\:\: \rho\sigma}Q_\rho\wedge Q_\sigma+b_\mu^{\:\:\rho\sigma\gamma}Q_\rho\wedge M_{\sigma\gamma}+c_\mu^{\:\:\rho\sigma\gamma\delta} M_{\rho\sigma}\wedge M_{\gamma\delta} , \\ \delta(M_{\mu\nu}) &= d_{\mu\nu}^{\:\:\:\: \rho\sigma}Q_\rho\wedge Q_\sigma+ e_{\mu\nu}^{\:\:\:\:\rho\sigma\gamma}Q_\rho\wedge M_{\sigma\gamma}+f_{\mu\nu}^{\:\:\:\:\rho\sigma\gamma\delta} M_{\rho\sigma}\wedge M_{\gamma\delta}, \end{split} \end{equation} where the coefficients ${a_\mu}^{\rho\sigma},...\,, {f_{\mu\nu}}^{\rho\sigma\gamma\delta}$ are dimensionless. We assume these coefficients to be analytic functions of the only two physical scales in the model: the Planck length $L_p$ and the cosmological radius $\frac 1 {\sqrt{|\Lambda|}}$. There is only one way to make a dimensionless constant out of these two, and it is to take the combination \begin{equation}\label{qq} q = L_p\sqrt{|\Lambda|} \,. \end{equation} The coefficients will then have to be analytic functions of the dimensionless parameter $q$ (the ratio between the Planck length and the cosmological radius). We require, as a physical input, that $\delta \xrightarrow[L_p \to 0]{} 0$, \emph{i.e.} that in the limit in which the Planck length vanishes, the coefficients ${a_\mu}^{\rho\sigma}(q),...\,, {f_{\mu\nu}}^{\rho\sigma\gamma\delta}(q)$ vanish too. The assumption of analyticity then allows us to expand the coefficients in Taylor series around zero: \begin{equation} \begin{aligned} {a_\mu}^{\rho\sigma}(q) &= q \, a_\mu^{(1)\rho\sigma} + {\frac 1 2} q^2 \, a_\mu^{(2) \rho\sigma} + \mathcal O(q^3) \,, \\ &\vdots \\ {f_{\mu\nu}}^{\rho\sigma\gamma\delta}(q) &= q \, f_{\mu\nu}^{(1)\rho\sigma\gamma\delta} + {\frac 1 2} q^2 \, f_{\mu\nu}^{(2)\rho\sigma\gamma\delta} + \mathcal O(q^3) \,, \end{aligned} \end{equation} where $a_\mu^{(i)\rho\sigma}$, \dots , $f_{\mu\nu}^{(i)\rho\sigma\gamma\delta}$ are numerical coefficients.
Then Eq.~\eqref{General_cocommutator} reads, at first order in $q$: \begin{equation} \begin{split} \delta(Q_\mu) &= q \left( a_\mu^{(1) \rho\sigma}Q_\rho\wedge Q_\sigma+b_\mu^{(1)\rho\sigma\gamma}Q_\rho\wedge M_{\sigma\gamma}+c_\mu^{(1)\rho\sigma\gamma\delta} M_{\rho\sigma}\wedge M_{\gamma\delta} \right) + \mathcal O(q^2) , \\ \delta(M_{\mu\nu}) &= q \left( d_{\mu\nu}^{(1)\rho\sigma}Q_\rho\wedge Q_\sigma+ e_{\mu\nu}^{(1)\rho\sigma\gamma}Q_\rho\wedge M_{\sigma\gamma}+f_{\mu\nu}^{(1)\rho\sigma\gamma\delta} M_{\rho\sigma}\wedge M_{\gamma\delta} \right) + \mathcal O(q^2) , \end{split} \end{equation} and reintroducing the dimensionful translation generators through $Q_\mu = \frac{P_\mu}{\sqrt{|\Lambda|}} $: \begin{equation}\label{dimm} \begin{split} \delta(P_\mu) &= L_p \left( a_\mu^{(1) \rho\sigma}P_\rho\wedge P_\sigma+ \sqrt{|\Lambda|} \, b_\mu^{(1)\rho\sigma\gamma}P_\rho\wedge M_{\sigma\gamma}+|\Lambda| \, c_\mu^{(1)\rho\sigma\gamma\delta} M_{\rho\sigma}\wedge M_{\gamma\delta} \right) + \mathcal O(q^2) , \\ \delta(M_{\mu\nu}) &= L_p \left( \frac{d_{\mu\nu}^{(1)\rho\sigma}}{ \sqrt{|\Lambda|} } P_\rho\wedge P_\sigma+ e_{\mu\nu}^{(1)\rho\sigma\gamma}P_\rho\wedge M_{\sigma\gamma}+ \sqrt{|\Lambda|} \, f_{\mu\nu}^{(1)\rho\sigma\gamma\delta} M_{\rho\sigma}\wedge M_{\gamma\delta} \right) + \mathcal O(q^2) \,. \end{split} \end{equation} This shows the relationship between the ${\mathcal{A}_\mu}^{\rho\sigma}$, \dots , ${{\mathcal{F}}_{\mu\nu}}^{\rho\sigma\gamma\delta}$ coefficients and the Taylor expansion: \begin{equation}\label{taylor} \begin{split} &\mathcal{A}_\mu^{\:\: \rho\sigma} = L_p \, a_\mu^{(1)\rho\sigma} + L_p^2 \, \sqrt{|\Lambda|} \, a_\mu^{(2)\rho\sigma} + \mathcal O(L_p^3) \,, \\ &\mathcal{B}_\mu^{\:\:\rho\sigma\gamma} = L_p \, \sqrt{|\Lambda|} \, b_\mu^{(1)\rho\sigma\gamma} + L_p^2 \, |\Lambda| \, b_\mu^{(2)\rho\sigma\gamma} + \mathcal O(L_p^3) \,, \\ &\mathcal{C}_\mu^{\:\:\rho\sigma\gamma\delta} = L_p \, |\Lambda| \, c_\mu^{(1)\rho\sigma\gamma\delta} + L_p^2 \, |\Lambda|^{\frac 3 2} c_\mu^{(2)\rho\sigma\gamma\delta}+ \mathcal O(L_p^3) \,, \\ &\mathcal{D}_{\mu\nu}^{\:\:\:\: \rho\sigma} = \frac{L_p }{\sqrt{|\Lambda|}} \, d_{\mu\nu}^{(1)\rho\sigma} + L_p^2 \, d_{\mu\nu}^{(2)\rho\sigma}+ \mathcal O(L_p^3) \,, \\ &\mathcal{E}_{\mu\nu}^{\:\:\:\:\rho\sigma\gamma} = L_p \, e_{\mu\nu}^{(1)\rho\sigma\gamma} + L_p^2 \, \sqrt{|\Lambda|} \, e_{\mu\nu}^{(2)\rho\sigma\gamma}+ \mathcal O(L_p^3) \,, \\ &\mathcal{F}_{\mu\nu}^{\:\:\:\:\rho\sigma\gamma\delta} = L_p \, \sqrt{|\Lambda|} \, f_{\mu\nu}^{(1)\rho\sigma\gamma\delta} + L_p^2 \, |\Lambda| \, f_{\mu\nu}^{(2)\rho\sigma\gamma\delta} + \mathcal O(L_p^3) \,. \end{split} \end{equation} A few observations can be deduced from the expression above. First of all, the coefficient $\mathcal{D}_{\mu\nu}^{\:\:\:\: \rho\sigma}$ \emph{does not admit a regular flat limit} $\Lambda \to 0$, unless we set $d_{\mu\nu}^{(1)\rho\sigma} =0$. This coefficient is therefore either second order in the Planck length, or it is excluded by the reasonable requirement of admitting a flat limit. Moreover, the coefficients $\mathcal{B}_\mu^{\:\:\rho\sigma\gamma}$, $\mathcal{C}_\mu^{\:\:\rho\sigma\gamma\delta} $ and $\mathcal{F}_{\mu\nu}^{\:\:\:\:\rho\sigma\gamma\delta}$ all start with a nonzero power of $ \sqrt{|\Lambda|}$. Therefore (at first order in $L_p$) they represent \emph{infrared corrections} to the deformation given by the coefficients $\mathcal{A}_\mu^{\:\: \rho\sigma}$ and $\mathcal{E}_{\mu\nu}^{\:\:\:\:\rho\sigma\gamma}$, and they all vanish in the flat limit $\Lambda \to 0$.
For these reasons, if we are interested in the Lie-bialgebra deformations of the Poincar\'e algebra at first order in $L_p$, we can focus on the coefficients $\mathcal{A}_\mu^{\:\:\rho\sigma}$ and $\mathcal{E}_{\mu\nu}^{\:\:\:\: \rho\sigma\gamma} $. However, we do not want to throw away all the other terms from the outset; for visual convenience we will color the different terms differently. From now on, we indicate in blue the coefficients $\mathcal{A}_\mu^{\:\:\rho\sigma}$, $\mathcal{E}_{\mu\nu}^{\:\:\:\: \rho\sigma\gamma} $ and their corresponding terms in $\mathfrak{g}_\Lambda \wedge \mathfrak{g}_\Lambda $. The coefficients $\mathcal{B}_\mu^{\:\:\rho\sigma\gamma}$ and $\mathcal{F}_{\mu\nu}^{\:\:\:\:\rho\sigma\gamma\delta}$ will be corrections of order $\sqrt{|\Lambda|}$, and $\mathcal{C}_\mu^{\:\:\rho\sigma\gamma\delta}$ of order $|\Lambda|$, to the previous terms, and we will color them in red. We will leave the terms $\mathcal{D}_{\mu\nu}^{\:\:\:\:\rho\sigma}$ in black. The general cocommutator will then look like: \begin{equation}\label{General_cocommutator2} \begin{split} \delta(P_\mu) &= \blue{\mathcal{A}_\mu^{\:\: \rho\sigma}P_\rho\wedge P_\sigma} + \red{\mathcal{B}_\mu^{\:\:\rho\sigma\gamma}P_\rho\wedge M_{\sigma\gamma}} + \red{\mathcal{C}_\mu^{\:\:\rho\sigma\gamma\delta} M_{\rho\sigma}\wedge M_{\gamma\delta}}, \\ \delta(M_{\mu\nu}) &= \mathcal{D}_{\mu\nu}^{\:\:\:\: \rho\sigma}P_\rho\wedge P_\sigma+\blue{\mathcal{E}_{\mu\nu}^{\:\:\:\:\rho\sigma\gamma}P_\rho\wedge M_{\sigma\gamma}}+\red{\mathcal{F}_{\mu\nu}^{\:\:\:\:\rho\sigma\gamma\delta} M_{\rho\sigma}\wedge M_{\gamma\delta}}. \end{split} \end{equation} \begin{table}[h]\centering \begin{tabular}{lll} \hline color & lowest order & next-to-lowest order \\$\,\,$ \blue{$\blacksquare$} & $L_p$ & $L_p^2 \sqrt{|\Lambda|}$ \\ $\,\,$ \red{$\blacksquare$} & $L_p \sqrt{|\Lambda|}$ & $L_p^2 |\Lambda|$ \\ $\,\,$ $\blacksquare$ & ${L_p}/{\sqrt{|\Lambda|}}$ & $L_p^2$ \\ \hline \end{tabular} \caption{Color legend for the cocommutator coefficients.} \end{table} \subsubsection{Comparison With a Different Approach} In the previous Section we were able to draw conclusions on the ``flat-limit'' properties of a Lie bialgebra by dimensional analysis. A somewhat similar result can be obtained by group contraction techniques applied to Lie bialgebras~\cite{balles95}. If $\phi_\varepsilon$ is a one-parameter family of Lie algebra automorphisms, one can define a contracted cocommutator $\delta'$ by: \begin{equation}\label{contr} \delta ' :=\lim_{\varepsilon\to 0} \varepsilon^n ({\phi^{-1}_\varepsilon}\otimes {\phi^{-1}_\varepsilon})\circ\delta\circ \phi_\varepsilon\,, \end{equation} if there is an $n$ such that this limit exists. Furthermore, there is a minimal value $n_0$ of $n$ such that for $n\geq n_0$ the limit \eqref{contr} exists, and for $n>n_0$ it is zero. The automorphisms used in \cite{balles95} are of the form $\phi_{\varepsilon} (J)= \varepsilon J$, where $\varepsilon$ is related to the algebra structure constants $f$ by $\varepsilon=\sqrt{f}$. The connection between the two techniques can be understood by identifying the $q=L_p\sqrt{|\Lambda|}$ of Eq.~\eqref{qq} with $\varepsilon$. The minimal value $n_0$ is then the smallest power of $q$ such that the flat limit $\Lambda\to 0$ in \eqref{dimm} exists. For instance, looking at \eqref{taylor}, $n_0=2$ if we do not require $ d_{\mu\nu}^{(1)\rho\sigma}=0$.
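A minimal worked example (under the simplifying assumption that the automorphisms rescale only the translation generators, $\phi_\varepsilon(P_\mu) = \varepsilon P_\mu$ and $\phi_\varepsilon(M_{\mu\nu}) = M_{\mu\nu}$) makes the correspondence transparent. For a `black' term $\delta(K_1) = d \, P_0 \wedge P_1$ one finds \begin{equation*} \varepsilon^n \, ({\phi^{-1}_\varepsilon}\otimes {\phi^{-1}_\varepsilon})\circ\delta\circ \phi_\varepsilon \, (K_1) = \varepsilon^{n-2} \, d \, P_0 \wedge P_1 \,, \end{equation*} so the contraction limit exists and is nonzero only for $n_0 = 2$, while for a `blue' term $\delta(P_1) = a \, P_0 \wedge P_1$ the same computation gives $\varepsilon^{n-1} \, a \, P_0 \wedge P_1$, which already survives at $n_0 = 1$. With the identification $\varepsilon \sim q$, this reproduces the hierarchy of powers of $q$ found by dimensional analysis.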
We now turn to introducing the simplifying ans\"atze that will allow us to reduce the number of solutions of the Lie-bialgebra conditions~\eqref{cojacc} and~\eqref{ccccccc} to a manageable amount. We begin with the strongest assumption (manifest spacetime covariance), and proceed towards weaker ones: manifest spatial isotropy, $\mathbb{P}$ and $\mathbb{T}$ covariance and, finally, we will discuss what we know of the Lie-bialgebra deformations of spacetime symmetry algebras in full generality. \subsection{Manifest spacetime covariance, spatial isotropy, $\mathbb{P}$ and $\mathbb{T}$ involutions} The strongest simplifying assumption we can make is that the cocommutator is covariant \emph{in form} under Lorentz transformations. This means that in the expression~\eqref{General_cocommutator} all the ${\mathcal{A}_\mu}^{\rho\sigma}$, \dots , ${\mathcal{F}_{\mu\nu}}^{\rho\sigma\gamma\delta}$ coefficients are combinations of invariant tensors with all indices saturated\footnote{Except the ``external'' ones, which saturate with algebra generators.}. Inspecting the algebra $\mathfrak{g}_\Lambda$ in Eq.~\eqref{Poincare/dS/AdS-algebra}, we see that, independently of the dimension and of the sign of $\Lambda$, the Lorentz transformations act on each other and on the translation generators as they do in flat space (\emph{i.e.} as infinitesimal hyperbolic rotations). Therefore, if we want to form Lorentz-covariant combinations of the generators $P_\mu$ and $M_{\mu\nu}$, we can only use the two invariant tensors on Minkowski space: the flat metric $\eta^{\mu\nu} = \text{diag}\{ -1 , 1,\dots,1 \}$, and the totally antisymmetric Levi-Civita tensor $\varepsilon^{\mu \dots \sigma}$, such that $\varepsilon^{01\dots(d-1)}=+1$. We can raise the indices with the metric, and put the general cocommutator in the following form: \begin{equation}\label{General_cocommutator3} \begin{split} \delta(P^\mu) &= \blue{\mathcal{A}^{\mu\rho\sigma}P_\rho\wedge P_\sigma} + \red{\mathcal{B}^{\mu\rho\sigma\gamma}P_\rho\wedge M_{\sigma\gamma}} + \red{\mathcal{C}^{\mu\rho\sigma\gamma\delta} M_{\rho\sigma}\wedge M_{\gamma\delta}}, \\ \delta(M^{\mu\nu}) &= \mathcal{D}^{\mu\nu\rho\sigma}P_\rho\wedge P_\sigma+\blue{\mathcal{E}^{\mu\nu\rho\sigma\gamma}P_\rho\wedge M_{\sigma\gamma}}+\red{\mathcal{F}^{\mu\nu\rho\sigma\gamma\delta} M_{\rho\sigma}\wedge M_{\gamma\delta}}, \end{split} \end{equation} then each coefficient has only contravariant indices, and inherits certain symmetry properties from the antisymmetry of the wedge product and of $M_{\mu\nu}$: $\blue{\mathcal{A}^{\mu\rho\sigma}}$ and $\mathcal{D}^{\mu\nu\rho\sigma}$ are antisymmetric in the indices $\rho$, $\sigma$; $\red{\mathcal{B}^{\mu\rho\sigma\gamma}}$ and $\blue{\mathcal{E}^{\mu\nu\rho\sigma\gamma}}$ are antisymmetric in $\sigma$, $\gamma$, while $\red{\mathcal{C}^{\mu\rho\sigma\gamma\delta}}$ and $\red{\mathcal{F}^{\mu\nu\rho\sigma\gamma\delta}}$ are completely antisymmetric in $\rho$, $\sigma$, $\gamma$ and $\delta$. For each choice of dimension we need to find all the independent invariant tensors with 3, 4, 5 and 6 indices, and write each coefficient $\mathcal{A}^{\mu\rho\sigma}$, \dots , $\mathcal{F}^{\mu\nu\rho\sigma\gamma\delta}$ as a linear combination of all the invariant tensors with the appropriate number of indices. The next ansatz we can make, in decreasing order of strength, is covariance in form under spatial rotations.
Differentiating the time index $\mu = 0$ from the spatial ones $\mu = i = 1, \dots, d$, we introduce the generators of pure boosts, $K_i = M^{0i}$, and pure rotations, $J_i = - \frac 1 2 \varepsilon_{ijk} M^{jk}$. Manifest spatial isotropy then means that all the spatial indices in the Lie bialgebra are contracted consistently and match on both sides. So, for example, the cocommutator of $P_0$ will have to be a scalar (index-wise), and will therefore contain terms like $P_i\wedge P^i$, $P_i\wedge K^i$, \emph{etc.} Depending on the spatial dimension $d$, we have at our disposal, on top of the spatial Euclidean metric $\delta_{ij}$, also a rank-$d$ tensor, the Levi-Civita tensor $\varepsilon^{i_1 \dots i_d}$, to saturate the indices on the right-hand side. The kinds of terms we may form are therefore different in 2 and 3 spatial dimensions. In what follows we will write explicitly the spatially-isotropic ansatz in 2+1 and 3+1 dimensions. Finally, one might be interested in studying the discrete symmetries of a noncommutative QFT. To this end a natural requirement is that the discrete transformations are involutions which are Lie-bialgebra automorphisms (for a detailed discussion on involutions and quantum algebras see for instance \cite{spacelike}). In a fully nonlinear Hopf-algebraic setting, one might expect these discrete transformations to get nonlinear (\emph{e.g.} energy-dependent) corrections, as was first suggested in~\cite{waves} and further developed in~\cite{cpt}. In particular, these works propose that the antipode map of the quantum group should be involved in the definition of discrete transformations.\footnote{This is not unproblematic in general: in the case considered by~\cite{cpt} (\emph{i.e.} $\kappa$-Poincar\'e), the antipode is not an involution for the full group, but only for the translation subgroup, and this seems to lead to a non-Lorentz-invariant notion of $\mathbb{T}$ transformations.} However, we are working at the Lie bialgebra level, and we are not sensitive to nonlinear structures. The antipode is a nonlinear map which can be linearized in a neighbourhood of the identity, and its linear order always reduces to the trivial involution that flips all the signs of the algebra generators (for exactly the same reason that the Lie group inverse reduces, at the linear level, to a sign change of all Lie algebra generators). So, for instance, the parity-transformed spatial momentum used in~\cite{cpt} can be written as: \begin{equation} \mathbb{P}_\kappa (P_i)=S(P_i)=-P_i+\mathcal{O}\left(\frac 1 \kappa \right), \end{equation} which is an expression that, in the low-energy/momentum limit\footnote{Or, conversely, in the limit in which the deformation parameters go to zero.} reduces to the familiar parity operator of the undeformed Poincar\'e algebra. In order for a parity/time-reversal operator to exist at the level of the quantum group, it is necessary for the corresponding zeroth-order operator to leave the Lie bialgebra invariant. This necessary condition is what we focus on in the present paper.\footnote{One could imagine a Hopf-algebra deformation of Poincar\'e/(A)dS in which the discrete symmetries reduce to the ordinary ones in the limit of vanishing deformation parameter, \emph{e.g.} $\kappa \to \infty$, but do not do so in their linearized form. For example, dimensional analysis allows us to write expressions like $\mathbb{P} (K_i)= - K_i+ \frac 1 \kappa P_i$.
This expression reduces to $\mathbb{P} (K_i)= - K_i$ in the $\kappa \to \infty$ limit, but the deformation term is not nonlinear, and survives the linearization procedure that defines the Lie bialgebra. We will postpone the investigation of this peculiar case to future work.} The transformations we consider are spatial inversions (parity, or $\mathbb{P}$) and time inversions ($\mathbb{T}$). We define $\mathbb{P}$ as \begin{equation}\label{d1} \mathbb{P}(P_0)=P_0, \qquad \mathbb{P}(J_i)=J_i, \qquad \mathbb{P}(P_i)=-P_i, \qquad \mathbb{P}(K_i)=-K_i \,; \end{equation} in what follows we will highlight $\mathbb{P}$-covariant terms in a black box. We will also look for solutions (highlighted in a red box) respecting the $\mathbb{T}$-inversion, defined as: \begin{equation}\label{d2} \mathbb{T}(P_0)=-P_0, \qquad \mathbb{T}(J_i)=J_i, \qquad \mathbb{T}(P_i)=P_i, \qquad \mathbb{T}(K_i)=-K_i. \end{equation} As we will see, these requirements impose nontrivial restrictions on the Lie bialgebra solutions in all dimensions. \section{Lie Bialgebra solutions}\label{sol} To find Lie-bialgebra deformations we have to solve Eqs.~\eqref{ccccccc} and~\eqref{cojacc} for the structure constants $ {f^{ij}}_k$, with ${c_{ij}}^k$ fixed by~\eqref{Poincare/dS/AdS-algebra}. We first solve the cocycle condition~\eqref{ccccccc}, which is linear in $ {f^{ij}}_k$ and can therefore always be solved exactly. We then plug the solution into the co-Jacobi relations~\eqref{cojacc}, which are quadratic in $ {f^{ij}}_k$. In this way we take full advantage of the exact solvability of Eqs.~\eqref{ccccccc} to reduce the number of independent variables to the minimum. In the most general 3D and 4D cases this strategy alone is not sufficient, because the nonlinearity of Eqs.~\eqref{cojacc} leads to a profusion of solutions which, although they may be tackled with a computer algebra program, are too numerous to interpret meaningfully. In order to overcome this issue we add further constraints by specifying, through the physical requirements listed above, particular ans\"atze for the variables ${f^{ij}}_k$. \subsection{Two spacetime Dimensions} \subsubsection{Manifest spacetime covariance} In 1+1 dimensions both the Levi-Civita symbol and the metric have two indices. Therefore we can only form invariant tensors of even rank. There are only five independent terms: \begin{equation} \begin{split} &\blue{\mathcal{A}^{\mu\rho\sigma}} = 0 \,, \\ &\red{\mathcal{B}^{\mu\rho\sigma\gamma}} = \red{c_1 \, \eta^{\rho\gamma} \eta^{\mu\sigma}} + \red{ c_2 \,\varepsilon^{\rho\gamma}\eta^{\mu\sigma} } \,, \\ &\red{\mathcal{C}^{\mu\rho\sigma\gamma\delta}} = 0 \,, \\ &\mathcal{D}^{\mu\nu\rho\sigma} = c_5 \, \eta^{\mu\rho} \eta^{\nu\sigma} \,, \\ &\blue{\mathcal{E}^{\mu\nu\rho\sigma\gamma}} = 0 \,, \\ &\red{\mathcal{F}^{\mu\nu \rho\sigma\gamma\delta}} = \red{ c_3 \, \eta^{\mu\rho} \eta^{\nu\delta} \eta^{\sigma\gamma}} + \red{ c_4 \, \eta^{\mu\rho} \eta^{\nu\delta} \varepsilon^{\sigma\gamma} } \,. \end{split} \end{equation} Imposing the co-Jacobi and cocycle conditions gives $\red{c_1} = \red{c_3} = \red{c_4} =0$, $\lambda \, c_5 = 0$ and $\red{c_2} c_5 = 0$.
There are then two nontrivial solutions (which reduce to one if the cosmological constant is nonzero): \begin{equation}\label{1+1_spacetime_iso} \begin{split} \delta(P_\mu) &= \red{ c_2 \,\varepsilon^{\rho\sigma} P_\rho \wedge M_{\mu\sigma}} , \\ \delta(M_{\mu\nu}) &= 0 , \end{split} \qquad \begin{split} \delta(P_\mu) &= 0, \\ \delta(M_{\mu\nu}) &= \textcolor{red}{\boxed{\textcolor{black}{\boxed{(1 - |\lambda| ) c_5 \,P_\mu \wedge P_\nu }}}} \,. \end{split} \end{equation} Recall that we highlight the $\mathbb{T}$-invariant terms with a red box, and the $\mathbb{P}$-invariant ones with a black box; the $c_5$ term is invariant under both $\mathbb{T}$ and $\mathbb{P}$, while the $\red{c_2}$ term is invariant under neither. The solutions in each case depend on only one free parameter (so they are essentially unique). \subsubsection{Coboundary case} All the Lie bialgebras we consider are coboundaries in spacetime dimension greater than 2; hence, imposing the coboundary condition is a restrictive ansatz only in the $(1+1)$D case. The most general $r$-matrix is: \begin{equation} r = b_1 \, K_1 \wedge P_0 + b_2 \, K_1 \wedge P_1 + b_3 \, P_0 \wedge P_1 \,, \end{equation} where $K_1 = M_{01}$ is the boost generator. An $r$-matrix automatically satisfies the cocycle condition. The only nontrivial conditions are the co-Jacobi relations, which in this case only impose that $b_3 = 0$ if $\lambda=0$. The resulting cocommutator is \begin{equation}\label{1+1_coboundary} \begin{split} \delta(P_0) &= \textcolor{red}{\boxed{\textcolor{black}{ \blue{b_1 \, P_0 \wedge P_1} }}}+\Lambda \, \red{b_3 \, K_1 \wedge P_0} , \\ \delta(P_1) &= \boxed{- \blue{b_2 \, P_0 \wedge P_1}} +\Lambda \,\red{b_3 \, K_1 \wedge P_1} ,\\ \delta(K_1) &= \textcolor{red}{\boxed{\textcolor{black}{ \blue{b_1 \, K_1 \wedge P_1}}}} +\boxed{ \blue{b_2 \, K_1 \wedge P_0}} . \end{split} \end{equation} \subsubsection{General case} Making a completely general ansatz: \begin{equation*} \begin{split} \delta(P_0) &= \red{c_1 \, K_1\wedge P_0} + \red{c_2 \, K_1\wedge P_1} + \blue{c_3 \, P_0\wedge P_1}, \\ \delta(P_1) &= \red{c_4 \, K_1\wedge P_0}+ \red{c_5 \, K_1\wedge P_1} + \blue{c_6 \, P_0\wedge P_1}, \\ \delta(K_1) &= \blue{c_7 \, K_1\wedge P_0} + \blue{c_8 \, K_1\wedge P_1} + c_9 \, P_0\wedge P_1, \end{split} \end{equation*} and imposing the cocycle and co-Jacobi conditions, we obtain the conditions $\red{c_2}=\blue{c_3}=\red{c_4}=\red{c_5}=\blue{c_6}=0$ and $\lambda c_9 =0$: \begin{equation} \begin{split} \delta(P_0) &= \red{c_1 \, K_1\wedge P_0} + \textcolor{red}{\boxed{\textcolor{black}{\blue{c_8 \, P_0\wedge P_1}}}}, \\ \delta(P_1) &= \red{c_1 \, K_1\wedge P_1} - \boxed{ \blue{c_7 \, P_0\wedge P_1}}, \\ \delta(K_1) &= \boxed{ \blue{c_7 \, K_1\wedge P_0}} +\textcolor{red}{\boxed{\textcolor{black}{ \blue{c_8 \, K_1\wedge P_1} + \boxed{ (1-|\lambda|) c_9 \, P_0\wedge P_1}}}} \,. \end{split} \end{equation} The above cocommutator reduces to a coboundary in the case $c_9=0$. To connect the $r$-matrix parameters written above with the parameters used here we can write: \begin{equation} \red{c_1} = \Lambda \, \red{b_3}, \qquad \blue{c_7} = \blue{b_2}, \qquad \blue{c_8} = \blue{b_1}, \qquad c_9 = 0 \,. \end{equation} This solution reduces to the manifestly spacetime-covariant case upon identifying $c_1$ and $c_9$ with the $\red{c_2}$ and $c_5$ of the covariant ansatz, and setting all other $c_i=0$. \subsection{Three spacetime Dimensions} \subsubsection{Manifest spacetime covariance} Since in 3 dimensions the basic invariant tensors are of rank 3 and 2, one can form invariant combinations of all ranks. A complete basis of invariant tensors expressed in terms of $\varepsilon_{\mu\nu\rho}$ and $\eta_{\mu\nu}$ can be found in the literature \cite{kear}.
For instance, the first two coefficients can be written as \begin{equation*}\label{eq:2} \begin{split} &\blue{\mathcal{A}^{\mu\rho\sigma}} = \blue{f_1 \, \varepsilon^{\mu\rho\sigma} } , \\ &\red{\mathcal{B}^{\mu\rho\sigma\gamma}} = \red{f_2 \, \eta^{\mu\rho}\eta^{\sigma\gamma}} + \red{f_3 \, \eta^{\mu\sigma}\eta^{\rho\gamma}} + \red{f_4 \, \eta^{\mu\gamma}\eta^{\rho\sigma}}, \end{split} \end{equation*} and analogous forms hold for the remaining terms. Imposing the cocycle and co-Jacobi conditions, it turns out that the most general solution for the cocommutators is unique and reads \begin{equation}\label{2+1-spacetime_covariant} \begin{split} \delta(P_\mu) &= \boxed{2 \, \blue{f_1\, \varepsilon_\mu^{\:\: \rho\sigma}P_\rho\wedge P_\sigma} + 2 \Lambda \red{\, f_1 \, \varepsilon^{\rho\sigma\gamma}M_{\mu\rho} \wedge M_{\sigma\gamma}}}, \\ \delta(M_{\mu\nu}) &= 0. \end{split} \end{equation} The result is $\mathbb{P}$-invariant, but not $\mathbb{T}$-invariant. \subsubsection{Manifest spatial isotropy}\label{SEC_2+1_spatially-isotropic} The most general manifestly spatially isotropic cocommutator expression in the 6-dimensional basis $\{P_0, J_3, P_i, K_i \}$ (where $K_i = M_{0i}$, $J_3 = M_{12}$) is \begin{equation}\label{2+1-spatially_isotropic} \begin{split} \delta(P_0) =& \red{f_1 \, P_0 \wedge J_3} + \red{f_2 \, P_i \wedge K^i} + ( \red{f_3 \, P_i \wedge K_j } + \blue{f_4 \, P_i \wedge P_j} + \red{f_5 \, K_i \wedge K_j} )\varepsilon_{ij}, \\ \delta(J_3) =& \blue{f_6 \, P_0 \wedge J_3} + \blue{f_7 \, P_i \wedge K^i} + ( \blue{f_8 \, P_i \wedge K_j } + f_9 \, P_i \wedge P_j + \red{f_{10} \, K_i \wedge K_j} )\varepsilon_{ij}, \\ \delta(P_i) =& \blue{f_{11} \, P_0 \wedge P_i} + \red{f_{12} \, P_0 \wedge K_i} +\red{f_{13} \, J_3 \wedge K_i} + \red{f_{14} \, J_3 \wedge P_i} + \\ & \left( \blue{f_{15} \, P_0 \wedge P_j} + \red{f_{16} \, P_0 \wedge K_j} + \red{f_{17} \, J_3 \wedge K_j} + \red{f_{18} \, J_3 \wedge P_j} \right) \varepsilon_{ij} ,\\ \delta(K_i) =& f_{19} \, P_0 \wedge P_i + \blue{f_{20} \, P_0 \wedge K_i} +\blue{f_{21} \, J_3 \wedge K_i} + \blue{f_{22} \, J_3 \wedge P_i} + \\ & \left( f_{23} \, P_0 \wedge P_j + \blue{f_{24} \, P_0 \wedge K_j} + \blue{f_{25} \, J_3 \wedge K_j} + \blue{f_{26} \, J_3 \wedge P_j} \right) \varepsilon_{ij} . \end{split} \end{equation} Imposing the cocycle and co-Jacobi conditions we get the following two independent solutions: \begin{eqnarray}\label{2+1-spatially_isotropic_sols} &&\left\{\begin{aligned} \delta(P_0) =& 0 \,,\\ \delta(J_3) =& 0 \,, \\ \begin{aligned} \delta(P_i) =& \\[5pt] \\ \delta(K_i) =& \\[15pt] \end{aligned}& \boxed{\begin{aligned} & -\Lambda \blue{f_{22}}\red{K_i \wedge J_3} -\textcolor{red}{\boxed{\textcolor{black}{ \Lambda f_{23} \varepsilon_{ij} \red{ K_j\wedge P_0}+ \Lambda f_{23} \red{P_i\wedge J_3}}}} \\& -\Lambda \blue{f_{11}} \varepsilon_{ij} \red{ K_j\wedge J_3} + \blue{f_{11}P_0\wedge P_i} +\varepsilon_{ij} \blue{f_{22}P_0\wedge P_j} ,\\ & \textcolor{red}{\boxed{\textcolor{black}{ \Lambda f_{23}\blue{ K_i \wedge J_3} + f_{23} \varepsilon_{ij}P_0\wedge P_j}}} - \blue{f_{11}K_i\wedge P_0}\\& -\blue{f_{22}} \varepsilon_{ij} \blue{K_j\wedge P_0}- \blue{f_{22}P_i\wedge J_3}- \blue{f_{11}\varepsilon_{ij}P_j\wedge J_3}. \end{aligned}} \end{aligned}\right.
\\ &&\left\{\begin{aligned} \\[-3pt] \delta(P_0) &= \\[7pt] \delta(J_3) &= \\[8pt] \delta(P_i) &= \\[8pt] \delta(K_i) &= \\[7pt] \end{aligned} \boxed{\begin{aligned} & (\pm f_{23} \sqrt{\Lambda}+\blue{f_{15}})\left(\frac{\Lambda}{2}\varepsilon_{ij}\red{K_i\wedge K_j}\mp\textcolor{red}{\boxed{\textcolor{black}{ \sqrt{\Lambda}\varepsilon_{ij}\red{K_i\wedge P_j}}}}\right)+ (\pm f_{23}\sqrt{\Lambda}+\blue{f_{15}}) \left(\frac{1}{2}\varepsilon_{ij}\blue{P_i\wedge P_j} \right) \,,\\ & 0 \,, \\ & \textcolor{red}{\boxed{\textcolor{black}{ \pm \sqrt{\Lambda} \blue{f_{15}} \red{K_j\wedge P_0} \varepsilon_{ij}+\Lambda f_{23}\red{P_i\wedge J_3}}}}+\blue{f_{15}P_0\wedge P_j}\varepsilon_{ij} -\Lambda \blue{f_{15}} \red{K_i\wedge J_3} ,\\ & \pm\sqrt{\Lambda}\varepsilon_{ij}f_{23} \blue{K_j\wedge P_0}\pm\sqrt{\Lambda}f_{23} \blue{P_i\wedge J_3}+\textcolor{red}{\boxed{\textcolor{black}{\varepsilon_{ij}f_{23}P_0\wedge P_j\mp \sqrt{\Lambda}\blue{f_{15}K_i\wedge J_3} }}} . \end{aligned}} \right. \qquad \label{2+1-spatially_isotropic_sols2} \end{eqnarray} All of the terms in the solutions~\eqref{2+1-spatially_isotropic_sols} and \eqref{2+1-spatially_isotropic_sols2} are invariant under $\mathbb{P}$-transformations (the very ansatz of manifest spatial isotropy forbids non-$\mathbb{P}$-invariant terms). However, not all terms are $\mathbb{T}$-covariant. If in \eqref{2+1-spatially_isotropic_sols} we set $\blue{f_{11}} = -z$ and $\blue{f_{22}}= f_{23}=0$, we obtain the $\kappa$-(A)dS Lie bialgebra in 2+1 dimensions~\cite{Angel_q-dS_1994,Ballesteros2017,Ballesteros2017a}: \begin{equation} \begin{aligned} &\delta(P_0) = 0 \,,& &\delta(P_i) = - z \, P_0\wedge P_i + z \, \Lambda \, \varepsilon_{ij} K_j \wedge J_3 \,,& \\ &\delta(J_3) = 0 \,,& &\delta(K_i) = - z \, P_0\wedge K_i + z \, \Lambda \, \varepsilon_{ij} P_j \wedge J_3 \,.& \end{aligned} \end{equation} This Lie bialgebra has been claimed to emerge in the context of (2+1)D Quantum Gravity coupled to point sources~\cite{rosati}. In this model the (topological) gravitational degrees of freedom are under control, and can be integrated away to give rise to an effective kinematics for the point particles, which is deformed by $L_p$-dependent modifications~\cite{matschull}. The spacetime symmetries of this effective model are described by a Hopf algebra. \subsubsection{$\mathbb{P}$\&$\mathbb{T}$ invariance alone} We can impose $\mathbb{P}\mathbb{T}$ invariance without imposing manifest spatial isotropy first, and ask whether there are $\mathbb{P}\mathbb{T}$-invariant terms that are not manifestly spatially isotropic. As we remarked in Sec.~\ref{math}, although the Poincar\'e algebra is not semisimple, in spacetime dimensions larger than 2 all of its Lie-bialgebra deformations are coboundaries. It is then convenient to discuss $\mathbb{P}\mathbb{T}$ invariance at the level of the $r$-matrix. The following is the most general $\mathbb{P}\mathbb{T}$-invariant $r$-matrix: \begin{equation} r = a \, P_1 \wedge P_2 + \red{c \, K_1 \wedge K_2} \,; \end{equation} we see that this $r$-matrix is manifestly spatially isotropic, so we fall back into a sub-case of what was discussed in Sec.~\ref{SEC_2+1_spatially-isotropic}. \subsubsection{General case} Dropping all the assumptions, in the general case too it is convenient to work at the level of the $r$-matrix. Stachura~\cite{Stachura98} classified all the $r$-matrices on the Poincar\'e algebra, and found five families of solutions, some of which have sub-cases, for a total of eight classes, each consisting of a family of solutions with a different number of free parameters.
To provide some orientation in this multiplicity of solutions, we can single out the ones that are most physically relevant because they are least suppressed by tiny physical constants. These give rise to deformations that are first order in the Planck length and are not suppressed by positive powers of the cosmological constant (the blue terms in Sec.~\ref{DimAn}). In fact, the most general $r$-matrix has the form: \begin{equation} r = a + \blue{b} + \red{c}\,, \qquad a = \alpha^{\mu\nu} P_\mu \wedge P_\nu \,, \qquad \blue{b} = \blue{\beta^{\mu\nu\rho} P_\mu \wedge M_{\nu \rho} }\,, \qquad \red{c} = \red{\gamma^{\mu\nu\rho\sigma} M_{\mu\nu} \wedge M_{\rho\sigma}} \,, \end{equation} where the color code corresponds to what each term implies for the cocommutator: \begin{equation} \delta(X) = [r , X \otimes 1 + 1 \otimes X] = [r^{(1)},X] \wedge r^{(2)} +r^{(1)} \wedge [r^{(2)},X] \,; \end{equation} the black terms are of the kind $\mathcal D^{\mu\nu\rho\sigma}$, the blue terms are of the kind $\blue{\mathcal A^{\mu\rho\sigma}}$ and $\blue{\mathcal E^{\mu\nu\rho\sigma\gamma}}$, while the red terms are of the kind $\red{\mathcal B^{\mu\rho\sigma\gamma}}$ and $\red{\mathcal F^{\mu\nu\rho\sigma\gamma\delta}}$. Now, assuming that $a=\red{c}=0$ while $\blue{b} \neq 0$ excludes two of the classes of solutions described in~\cite{Stachura98}: classes I and V. All the other ones correspond to cases in which $\red{c}=0$ and $a$ is generic, so that $a$ can be put to zero. We are still left with six classes, so our physical characterization of Lie bialgebras is not very discriminatory here. We will show below that the same requirements are much more stringent in 3+1 dimensions. \subsection{Four spacetime Dimensions} \subsubsection{Manifest spacetime covariance} A basis of invariant tensors for the coefficients must be written in terms of $\varepsilon_{\mu\nu\rho\sigma}$ and the metric, which in (3+1)D are both of even rank. So only the even-rank coefficients $\red{\mathcal{B}^{\mu\rho\sigma\gamma}}$, $\mathcal{D}^{\mu\nu\rho\sigma}$ and $\red{\mathcal{F}^{\mu\nu\rho\sigma\gamma\delta}}$ can be written as combinations of invariant tensors. To our knowledge no minimal tensor-basis classification has been made in this case, so we generate all possible terms\footnote{By doing this we generate an overcomplete basis, which is not a problem because it just means that we write each independent numerical coefficient as a linear combination of the coefficients of our overcomplete basis.} made with $\varepsilon_{\mu\nu\rho\sigma}$ and $\eta_{\mu\nu}$. Imposing the cocycle condition and co-Jacobi identities on the cocommutator written this way gives no nontrivial solution: \begin{equation*}\label{eq:5} \delta(P_\mu) = 0, \qquad \delta(M_{\mu\nu}) = 0. \end{equation*} Hence there is no spacetime-covariant Lie bialgebra deformation of the Poincar\'e or (A-)dS algebra in 3+1 dimensions.
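Although no manifestly spacetime-covariant deformation exists, less symmetric ones do. Before imposing spatial isotropy, it is instructive to see the coboundary construction above at work on the example that will turn out to be the unique isotropic solution (a standard computation, sketched here for $\Lambda = 0$; signs depend on conventions). Choosing the `blue' $r$-matrix $r = \frac{1}{\kappa} \, K_j \wedge P_j$ and using the brackets~\eqref{(A)dS_algebra_3+1_space_indicesson}, one finds \begin{equation*} \begin{aligned} \delta(P_0) &= \frac{1}{\kappa} \, P_j \wedge P_j = 0 \,, \qquad & \delta(P_i) &= -\frac{1}{\kappa} \, P_0 \wedge P_i \,, \\ \delta(J_i) &= 0 \,, \qquad & \delta(K_i) &= -\frac{1}{\kappa} \left( P_0 \wedge K_i + \varepsilon_{ijk} \, P_j \wedge J_k \right) \,, \end{aligned} \end{equation*} which matches the unique spatially isotropic solution found below, Eq.~\eqref{kappa-Poincare}, upon identifying $\blue{f_{14}} = -1/\kappa$.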
\subsubsection{Manifest spatial isotropy}\label{4D_spatial_isotropy_sec} The most general spatially isotropic cocommutator in the $10$-dimensional basis $\{P_0, J_i, P_i, K_i \}$ is \begin{equation*}\label{eq:23} \begin{split} \delta(P_0) =& \red{f_1 \, P_i \wedge J^i} + \red{f_2 \, P_i \wedge K^i}+\red{f_3 \, J_i \wedge K^i}, \\ \delta(J_i) =& P_0 \wedge (\blue{f_4 \, J_i} + f_5 \, P_i + \blue{f_6 \, K_i}) +f_7 \, P_j \wedge P_k \varepsilon_{ijk}+ \\ &( \red{f_8 \,J_j \wedge J_k} + \red{f_9 \,K_j \wedge K_k} + \blue{f_{10} \,P_j \wedge J_k} + \blue{f_{11} \,P_j \wedge K_k} +\red{f_{12} \, J_j \wedge K_k})\varepsilon_{ijk}, \\ \delta(P_i) =& P_0 \wedge (\red{f_{13} \,J_i} + \blue{f_{14} \,P_i} + \red{f_{15} \,K_i}) +\blue{f_{16} \,P_j \wedge P_k}\varepsilon_{ijk} + \\&(\red{f_{17} \,J_j \wedge J_k}+\red{f_{18} \, K_j \wedge K_k} +\red{f_{19} \, P_j \wedge J_k} + \red{f_{20} \,P_j \wedge K_k} + \red{f_{21} \, J_j \wedge K_k})\varepsilon_{ijk}, \\ \delta(K_i) =& P_0 \wedge (\blue{f_{22} \,J_i}+f_{23} \, P_i+\blue{f_{24} \,K_i}) +f_{25} \, P_j \wedge P_k \varepsilon_{ijk}+ \\ &(\red{f_{26} \,J_j \wedge J_k}+ \red{f_{27} \,K_j \wedge K_k} + \blue{f_{28} \,P_j \wedge J_k} + \blue{f_{29} \,P_j \wedge K_k}+\red{f_{30} \, J_j \wedge K_k})\varepsilon_{ijk}. \end{split} \end{equation*} Imposing the Lie bialgebra axioms one obtains the unique solution: \begin{equation}\label{kappa-Poincare} \begin{split} \delta(P_0) &= 0, \\ \delta(J_i) &= 0, \\ \delta(P_i) &= \boxed{( 1 - |\lambda|) \blue{f_{14} \, P_0\wedge P_i}}, \\ \delta(K_i) &= \boxed{( 1 - |\lambda|) \blue{f_{14}} \left( \blue{P_0 \wedge K_i} +\blue{ \varepsilon_{ijk}P_j\wedge J_k } \right) }\,. \end{split} \end{equation} This is the (timelike\footnote{The $\kappa$-Poincar\'e $r$-matrix $r= {\frac 1 \kappa} K_i\wedge P_i$ is actually a particular choice among a $4$-parameter family of solutions, parametrized by a vector $v_\mu \in \mathbb{R}^4$ (see below). The most studied version of the algebra is characterized by a timelike vector $v^\mu = \frac 1 \kappa \, \delta^\mu{}_0$ (the energy scale $\kappa$ gives the name to the algebra). It is still not entirely clear whether different choices of vector correspond to nonequivalent physics.}) $\kappa$-Poincar\'e Lie bialgebra.\footnote{For an explicit ``exponentiation'' of the $\kappa$-Poincar\'e Lie bialgebra to obtain the $\kappa$-Poincar\'e Hopf algebra see for instance \cite{ang}.} The cocommutator above is covariant under $\mathbb{P}$, but not under $\mathbb{T}$ nor $\mathbb{P} \mathbb{T}$. This is a significant and original result: it shows that in $3+1$ spacetime dimensions only $\kappa$-Poincar\'e and $\kappa$-Minkowski can serve as a manifestly isotropic quantum algebra and quantum spacetime, not only for $\mathfrak{iso}(3, 1)$ but for the $\text{(A-)dS}$ cases too. \subsubsection{$\mathbb{P}$\&$\mathbb{T}$ invariance alone} Imposing, as above, the separate $\mathbb{P}$- and $\mathbb{T}$-invariance of the $r$-matrix, we get the following: \begin{equation} \begin{aligned} r =& \red{f_1 \, K_1 \wedge K_2 + f_2 \, K_1 \wedge K_3 + f_3 \, K_2 \wedge K_3} + f_4 \, P_1 \wedge P_2 + f_5 \,P_1 \wedge P_3 + f_6 \, P_2 \wedge P_3 + \\ &\red{f_7 \, J_1 \wedge J_2 + f_8 \, J_1 \wedge J_3 + f_9 \, J_2 \wedge J_3} \,. \end{aligned} \end{equation} We see that in 4D the most general $\mathbb{P}$- and $\mathbb{T}$-invariant $r$-matrix is not necessarily manifestly space-isotropic.
Imposing the co-Jacobi conditions we get three independent nontrivial solutions in the (A)dS case $\lambda \neq 0$: \begin{equation} \red{f_7}^2+\red{f_8}^2+\red{f_9}^2=0 \,, \qquad \left\{ \begin{aligned} &\red{f_1}=-\red{f_7} \,, ~ \red{f_2}=-\red{f_8} \,, ~ \red{f_3}=-\red{f_9} \,, ~ f_4=f_5=f_6=0\,, \\ &\red{f_1}=\red{f_2}=\red{f_3}=0 \,,~ \Lambda f_4 = \red{f_7} \,, ~ \Lambda f_5 = \red{f_8} \,, ~ \Lambda f_6 = \red{f_9}\,, \\ &\red{f_1}=\red{f_2}=\red{f_3}=f_4=f_5=f_6=0 \,. \end{aligned} \right. \end{equation} The first condition, common to all three solutions, admits no real nonzero solution. In the Poincar\'e case $\Lambda=0$ we have the $\Lambda \to 0$ limit of the three solutions above, plus the additional solution \begin{equation} \red{f_1}=\red{f_2}=\red{f_3}=\red{f_7}=\red{f_8}=\red{f_9}=0 \,, \end{equation} that is, $f_4$, $f_5$ and $f_6$ are the only nonzero parameters, and they are freely specifiable. This solution is: \begin{equation} \textcolor{red}{\boxed{\textcolor{black}{\boxed{\delta(P_\mu) = 0\,, ~~~ \begin{aligned} \delta(K_1) =& f_4 \, P_0 \wedge P_2 + f_5 \, P_0 \wedge P_3\,,\\ \delta(K_2) =& f_6 \, P_0 \wedge P_3 - f_4 \, P_0 \wedge P_1\,,\\ \delta(K_3) =& -f_5 \, P_0 \wedge P_1 - f_6 \, P_0 \wedge P_2\,, \end{aligned} ~~~ \begin{aligned} \delta(J_1) =& f_4 \, P_1 \wedge P_3 - f_5 \, P_1 \wedge P_2\,,\\ \delta(J_2) =& f_4 \, P_2 \wedge P_3 - f_6 \, P_1 \wedge P_2\,,\\ \delta(J_3) =& f_5 \, P_2 \wedge P_3 - f_6 \, P_1 \wedge P_3\,.\\ \end{aligned}}}}} \end{equation} This is the only example of a $\mathbb{P}$- and $\mathbb{T}$-invariant Lie bialgebra based on the Poincar\'e algebra in 3+1 dimensions. It cannot be obtained as a contraction of an (A)dS analogue, and its three independent coefficients have the dimensions of a squared length. This case deserves further investigation. \subsubsection{General $\Lambda=0$ case and a no-go theorem}\label{gen} Just like in the 2+1-dimensional case, the most generic Lie bialgebra over the 3+1D Poincar\'e algebra is a coboundary, and the $r$-matrices have been classified in this case too. The relevant reference here is a paper by Zakrzewski~\cite{zaz}, in which the author finds 23 classes of $r$-matrices that solve the modified Yang--Baxter equation. Just like in 2+1D, the $r$-matrix can be decomposed as $r = a + \blue{b} + \red{c}$, and the most `physically interesting' terms are the blue ones, that is, terms of type $ \blue{b}$. Imposing that $ a = \red{c} =0$ results in a remarkable simplification: there is just a 7-parameter family of solutions, given by \begin{equation}\label{zzza} r = \blue{v^\mu \, M_{\mu\nu} \wedge P^\nu + v^\mu P_\mu \wedge M(u_1,u_2,u_3)\,,} \end{equation} where $v^\mu$ is a real 4-vector and $M(u_1,u_2,u_3)$ is a generic element of the stabilizer, in $\mathfrak{so}(3,1)$, of $v^\mu P_\mu$. The latter depends on three real parameters $u_1$, $u_2$ and $u_3$, because the stabilizer of a 4-vector is a 3-dimensional subalgebra: $\mathfrak{so}(3)$ in case $v^\mu$ is timelike, $\mathfrak{so}(2,1)$ if it is spacelike and $\mathfrak{iso}(2)$ if it is lightlike. If $u_1=u_2=u_3=0$ we obtain the vector-like generalization of the $\kappa$-Poincar\'e algebra~\cite{koso}: \begin{equation}\label{kappa-Poincare_vectorlike} \boxed{\begin{split} \delta(P_\mu) &= \blue{v^\nu \, P_\nu \wedge P_\mu } \,, \\ \delta(M_{\mu\nu}) &= \blue{v_\nu \, M_{\rho\mu} \wedge P^\rho} - \blue{v_\mu \, M_{\rho\nu} \wedge P^\rho} \,.
\end{split}} \end{equation} When any of the $u_i$ parameters is nonzero, the additional term in the $r$-matrix generates a \emph{Reshetikhin twist}~\cite{TwistedKappa0,TwistedKappa1,TwistedKappa2,TwistedKappa3}, which significantly changes the cocommutators. For example, in the case in which $v^\mu$ points in the time direction, $v^\mu = {\frac 1 \kappa} \delta^\mu{}_0$, the $r$-matrix can be written \begin{equation}\label{zzza2} r = \blue{ \frac 1 \kappa \left( P_i \wedge K^i + P_0 \wedge u^i J_i\right) \,,} \end{equation} which generates the following cocommutator: \begin{equation}\label{kappa-Poincare_vectorlike_twisted} \boxed{\begin{aligned} \delta(P_0) &= 0\,, \\ \delta(P_i) &= \blue{\frac 1 \kappa \left( P_0 \wedge P_i + \varepsilon_{ijk} u_j \, P_0 \wedge P_k \right)} \,, \\ \delta(J_i) &= \blue{\frac 1 \kappa \left( \varepsilon_{ijk} u_j \, P_0 \wedge J_k \right)}\,, \\ \delta(K_i) &= \blue{\frac 1 \kappa \left( P_0 \wedge K_i + \varepsilon_{ijk} u_j \, P_0 \wedge K_k + u_j \, P_i \wedge J_j - \varepsilon_{ijk} P_j \wedge J_k \right) }\,. \\ \end{aligned}} \end{equation} In summary: our work proves that, assuming manifest spatial isotropy alone, the $\kappa$-deformation is the only one extending relativistic symmetries to a 4-dimensional noncommutative setting. Zakrzewski's analysis \textit{plus} our dimensional/regularity considerations yields the full 7-parameter family of \emph{twisted vector-like generalized $\kappa$-deformations}~\cite{TwistedKappa0,TwistedKappa1,TwistedKappa2,TwistedKappa3}.\footnote{Of this 7-parameter family, if one considers Lie-bialgebra automorphisms, only three cases are truly independent: when the $v^\mu$ vector is space-, time- or light-like~\cite{nullplane,spacelike}.} This result can be seen as a no-go theorem regarding the existence of physically-meaningful Lie-bialgebra deformations other than (twisted) $\kappa$-Poincar\'e in 4D. \subsubsection{Fully general $\Lambda \neq 0$ case} If the cosmological constant is not zero, Zakrzewski's analysis does not apply. We would have to write the most general $r$-matrix, and even imposing the dimensional/regularity assumption of keeping only `blue' terms does not lead to a tractable set of equations. We thus postpone the investigation of this case in full generality to future work. \section{Conclusions}\label{conc} In this paper we addressed the issue of finding quantum deformations of the relativistic symmetry Lie algebras: Poincar\'e, de Sitter and Anti-de Sitter. Our analysis was carried out through a systematic, algorithmic approach. We classified our results through dimensional analysis, the requirement of an analytic flat limit, and manifest covariance principles. We obtain two main results, which can be understood as no-go theorems for alternatives to $\kappa$-Poincar\'e. The first result is that, under the assumption of manifest spatial isotropy, the unique deformation which generalizes the relativistic Poincar\'e algebra in (3+1)D is the so-called $\kappa$-Poincar\'e deformation, whose homogeneous spacetime is $\kappa$-Minkowski \cite{lukkk,majid22,fla}. The second result is that, requiring the Lie-bialgebra deformation to depend on structure constants that 1. are first-order in the Planck length $L_p$ and 2. admit a well-defined flat ($\Lambda \to 0$) limit, the only deformation of the Poincar\'e algebra in (3+1)D is the `twisted vector generalization' of the $\kappa$-Poincar\'e Lie bialgebra~\cite{TwistedKappa0,TwistedKappa1,TwistedKappa2,TwistedKappa3}.
This is a double modification of the original, so-called `timelike' $\kappa$-Poincar\'e Lie bialgebra: a four-parameter family of terms can be obtained from the original Lie bialgebra by taking a linear combination of the generators that `rotates' the primitive (zero-cocommutator) generator from $P_0$ to $v^\mu P_\mu$, where the $v^\mu$ are four arbitrary parameters. The rest is a 3-parameter perturbation generated by a Reshetikhin twist~\cite{TwistedKappa0,TwistedKappa1,TwistedKappa2,TwistedKappa3}, which depends on three real numbers $u_i$ that parametrize the stabilizer of a 4-vector in $\mathfrak{so}(3,1)$. These no-go theorems point out that, if we want to work at first order in the Planck length, the only viable option is (possibly twisted) $\kappa$-Poincar\'e and its possible generalizations into (Anti-)de Sitter. To elude this result, we have to consider deformations that depend on the square of the Planck length (an awfully small quantity, which makes testable predictions even harder to find than they already are). Further insights regarding generalizations of the $\kappa$-Poincar\'e algebra can be gained by comparing our results with those in~\cite{ballesteroscurved}. In that work the authors identify the minimal assumptions that one should make in order to uniquely characterize the $\kappa$-(Anti) de Sitter Lie bialgebra. Of course spatial isotropy is not among these assumptions: otherwise, as we have shown in Sec.~\ref{4D_spatial_isotropy_sec}, one would be left only with $\kappa$-Poincar\'e in the $\Lambda =0$ case, and with nothing at all when $\Lambda \neq 0$. As a result, the authors of~\cite{ballesteroscurved} find that $\Lambda \neq 0$ implies a non-trivial Planck-scale deformation of rotations. In (2+1)D, there appears to be a richer catalog of Hopf algebraic structures generalizing Poincar\'e and (A-)dS invariance. We believe that these frameworks represent the starting point for further investigations in (2+1)D Quantum Gravity. Although we do not have a full classification of these deformations (there are hundreds of solutions of the Lie-bialgebra axioms), in this paper we provided a first classification of the most physically interesting ones, based on dimensional analysis, various degrees of manifest isotropy, and discrete symmetries. Finally, the analysis of the cases in which discrete symmetries like $\mathbb{P}$ and $\mathbb{T}$ invariance are implemented at the level of the Lie bialgebra led to a new result in (3+1)D. In the flat, $\Lambda = 0$ case, there is a 3-parameter family of Lie-bialgebra deformations of the Poincar\'e algebra which implements these discrete symmetries as Lie-bialgebra automorphisms. These deformations depend quadratically on the Planck length and therefore elude the no-go theorems found above. They represent an interesting starting point for the exploration of these `order $L_p^2$' Lie bialgebras. \section*{Acknowledgements} We would like to thank \'Angel Ballesteros and Giulia Gubitosi for their useful comments and remarks. F.M. was funded by the European Union and the Istituto Italiano di Alta Matematica under a Marie Curie COFUND action.
\section{Introduction}\label{intro} Consider communicating one of $M$ messages using a finite blocklength transmission of $n$ symbols over a discrete memoryless channel (DMC) with average error probability $\epsilon$. A fundamental practical problem that arises from this setup is the computation of upper bounds on the size of $M$ (converse) in the finite-blocklength regime when both $n$ and $\epsilon$ are fixed. A hypothesis testing framework can be used to establish converse bounds for arbitrary DMCs \cite{Blahut,Altug:SP:Asymmetric,Altug:EA:Symmetric,ppv_2010}. However, these bounds are generally hard to compute at finite blocklengths due to the high dimensionality of their parameter space. Their computational cost can be significantly reduced by exploiting symmetry in some cases, such as that of the binary-symmetric channel (BSC), the binary-erasure channel (BEC)~\cite{polyanskiy_saddlepoint}, and the AWGN channel \cite{ErsegheAWGN}. Their limits in various asymptotic regimes can also be computed. These results provide no assistance at finite blocklengths for arbitrary DMCs, however. Recent advances toward this goal include an algorithm for saddle point identification \cite{elkayam2016calculation} in the ``minimax'' version of the bound~\cite{polyanskiy_saddlepoint} that is polynomial-time in the blocklength for DMCs, a linear programming formulation of the bound \cite{matthews_linear_2012} that is also polynomial-time for DMCs, and a method for generating good output distributions for general minimax converses where an optimization over a distribution on the channel output is required \cite{kosut_boosting_2015}. The complexity of these approaches is exponential in the input alphabet size, however, making them unsuitable for even moderately sized channels at longer blocklengths. See \cite{vilar2018saddlepoint} and \cite{ErsegheFull} for approximations of the bound that are more easily computed. Stochastic control methods have proven useful in deriving results for communication systems with feedback, such as a general framework for the computation of channel capacity \cite{tatikonda_capacity_2009} and the timid/bold technique to improve second-order coding performance \cite{TimidBold}. In many circumstances, stochastic control problems require the design of an optimal controller, and dynamic programming (DP) techniques are often used to solve this problem efficiently \cite{tatikonda_control_2000}. \subsection{Contributions and organization} In this paper, we develop an efficient algorithm to compute converses at finite blocklength for arbitrary DMCs. The key idea is to consider the channel with feedback, which allows for efficient computation of the resulting hypothesis testing bounds using dynamic programming and Fourier methods. We describe this algorithm, analyze its complexity, and demonstrate its utility on several channels. For the BSC and BEC, we find that it provides bounds that are close to those obtained via channel-specific methods. On the other hand, we also show that our bound scales well to DMCs with moderately large alphabets at moderate blocklengths, for which there is currently no known way of efficiently computing nontrivial converse bounds. Of course, any converse bound for a DMC with feedback also applies to the channel without feedback. The rest of the paper is organized as follows. In Section \ref{converses} we formulate the converse bound as a DP problem. In Section \ref{algorithms} we present an algorithm to compute the bound and analyze its complexity.
Section \ref{results} presents numerical results for the BSC, BEC, BAC, and quantized amplitude-constrained AWGN channel, and provides a comparison to existing bounds where possible. If no channel-specific bounds are available, we use Lemma 14 in \cite{TimidBold} with a capacity-achieving distribution to obtain a general achievability result without feedback. \section{DMC Converses via Dynamic Programming} \label{converses} \subsection{Definitions} \subsubsection{Probability of Success/Failure}\label{DefPSF} Assume a DMC with finite input alphabet $\mathcal{X}$ and finite output alphabet $\mathcal{Y}$. In this case, for an $(n,R)$ code with ideal feedback and average error probability $\epsilon\in(0,1)$ we define: \begin{align}\label{def_mfb} M^*(n,\epsilon):=\max\{\lceil\exp(nR)\rceil\in\mathbb{N}_+: \bar{\textbf{P}}_{\text{e,fb}}(n,R)\leq \epsilon\}\,, \end{align} where $\bar{\textbf{P}}_{\text{e,fb}}(n,R)$ is the minimum average error probability achievable by any $(n,R)$ code with feedback. We follow the notation of~\cite{TimidBold}. In particular, given input and output random vectors \begin{align} \textbf{X}^n &= (X_1, \dots, X_n) \in \mathcal{X}^n \\ \textbf{Y}^n &= (Y_1, \dots, Y_n) \in \mathcal{Y}^n\,, \end{align} a (stochastic) controller \begin{align} F & = F^n\\ F^k &= (F_1,\dots, F_k)\\ F_t &: (\textbf{X}^{t-1},\textbf{Y}^{t-1})\mapsto X_{t}\,, \end{align} a channel $W\in \mathcal{P}(\mathcal{Y}|\mathcal{X})$, an information density threshold $T$, and an arbitrary output distribution $Q\in\mathcal{P}(\mathcal{Y})$, we refer to the event in which the accumulated information density (AID) at the end of the block exceeds $T$ as a \emph{success}, $S$. The probability of success for any controller can be defined in terms of $\textbf{X}^n$ and $\textbf{Y}^n$ as \begin{align}\label{success_event_v2} P(S_{F,Q,T})=(F \circ W)\left(\log_2\left[\frac{W(\textbf{Y}^n|\textbf{X}^n)}{Q(\textbf{Y}^n)}\right]>T\right)\,, \end{align} where $F\circ W$ denotes the joint distribution over $\textbf{X}^n$ and $\textbf{Y}^n$ induced by the controller and the channel. Similarly, the probability of failure is defined as $P(\bar{S}_{F,Q,T})=1-P(S_{F,Q,T})$. \subsubsection{Information density}\label{DefID} Given an input distribution to a DMC $G\in\mathcal{P}(\mathcal{X})$ and an output distribution $Q\in\mathcal{P}(\mathcal{Y})$ we can define the information density change in one transmission ($\di{}$) as \begin{align}\label{DeltaI} \di{G}=\log_2\left(\frac{W(Y|X)}{Q(Y)}\right) \end{align} with range $\mathcal{I}_G$ and PMF $f_G(\Delta i)$ \begin{align}\label{ProbDeltaI} f_G(\Delta i)=\sum_{x\in\mathcal{X}}\sum_{y\in\mathcal{Y}}\mathds{1}\left(\Delta i=\log_2\left[\frac{W(y|x)}{Q(y)}\right]\right)W(y|x)G(x)\,. \end{align} We finally define the set of all possible AID changes in a single transmission as \begin{align} \mathcal{I}=\left\{\log_2\left(\frac{W(y|x)}{Q(y)}\right)\bigg|x\in\mathcal{X},y\in\mathcal{Y}\right\} \end{align} and the set of all possible AID levels attainable by any controller at timestep $k$ as \begin{align} \Lambda_k=\left\{\sum_{\gamma\in\mathcal{I}}\alpha_{\gamma}\gamma\bigg| \alpha_\gamma\in\mathbb{Z}^+\text{ and } \sum_{\gamma\in\mathcal{I}}\alpha_{\gamma}=k\right\}\, , \end{align} where $\mathbb{Z}^+$ is the set of nonnegative integers.
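To make the definitions above concrete, the support $\mathcal{I}_G$ and the PMF $f_G$ of~\eqref{ProbDeltaI} can be tabulated directly from $W$, $G$ and $Q$. The following minimal sketch (illustrative NumPy code written for this exposition, not part of any released toolbox; the BSC parameters and the uniform $Q$ are assumptions made purely for the example) does exactly that: \begin{verbatim}
import numpy as np

# Channel as an array W[y, x]; here a BSC with crossover
# probability 0.11 (chosen only for illustration).
p = 0.11
W = np.array([[1 - p, p],
              [p, 1 - p]])
G = np.array([0.5, 0.5])   # input distribution G(x)
Q = np.array([0.5, 0.5])   # output distribution Q(y)

# Tabulate the information density change and its PMF f_G.
pmf = {}
for x in range(W.shape[1]):
    for y in range(W.shape[0]):
        if W[y, x] > 0:
            di = np.log2(W[y, x] / Q[y])
            key = round(di, 12)          # merge numerically equal values
            pmf[key] = pmf.get(key, 0.0) + W[y, x] * G[x]

support = sorted(pmf)                    # the set I_G
probs = [pmf[v] for v in support]        # f_G(Delta i) for each value
\end{verbatim}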
\subsection{Mathematical formulation} \subsubsection{General setup}\label{GS} From Lemma 15 in \cite{TimidBold} we know that given parameters $W,n,\rho>0,\epsilon>0$ and $T=\log_2(\rho)$, \begin{align}\label{TB_Converse} R^*_b(n,\epsilon)\leq \sup_{F}\inf_{Q}\frac{1}{n}\left[ T-\log_2\left(\left[P(\bar{S}_{F,Q,T})-\epsilon\right]^+\right)\right]\,, \end{align} where $R^*_b(n,\epsilon)$ is the best achievable code rate in bits for the given blocklength $n$ and error probability $\epsilon$. \subsubsection{Computation of maximum probability of success}\label{TheoCompPSF} From the supremum in (\ref{TB_Converse}) we see that we are interested in minimizing $P(\bar{S}_{F,Q,T})$ or, equivalently, maximizing $P(S_{F,Q,T})$. In order to translate this problem into a dynamic programming framework we will focus on the probability of success of the optimal controller at timestep $k$ and a specific AID level, $S^*_k(\Gamma)$, where references to $Q$ and $T$ are omitted for simplicity. With this setup every $S^*_k(\Gamma)$ can be defined in terms of the following recurrence relation, as long as the chosen $Q$ is a product distribution: \begin{align} \label{TrivDef} S^*_n(\Gamma)&=\mathds{1}\left(\Gamma\geq T\right)\\ \label{RecRel1} S^*_k(\Gamma)&=\max_{G\in\mathcal{P}(\mathcal{X})}\sum_{\Delta i\in\mathcal{I}_{G}}S^*_{k+1}(\Gamma+\Delta i)f_G(\Delta i),\,k<n\,. \end{align} However, the cardinality of the set of all achievable AID levels grows polynomially with the blocklength (with degree up to $|\mathcal{I}|-1$), which makes direct computation infeasible even for moderate values of $n$. In order to address this problem we quantize the AID into a sequence of $N$ bins $\beta_{i}=(\tau_{i}^L,\tau_{i}^U]$, $i\in \{L_L, L_L+1, \dots, L_U\}$ of size $\delta$ such that \begin{align} \frac{\tau_{L_U}^U}{n}=\di{{max}}&=\max_{(x,y)\in\mathcal{X}\times\mathcal{Y}}\log_2\left(\frac{W(y|x)}{Q(y)}\right)\\ \frac{\tau_{L_L}^L}{n}=\di{{min}}&=\min_{(x,y)\in\mathcal{X}\times\mathcal{Y}:W(y|x) > 0}\log_2\left(\frac{W(y|x)}{Q(y)}\right)\\ N&=\left\lceil \frac{\di{{max}}-\di{{min}}}{\delta}\right\rceil\,. \end{align} This definition guarantees that all achievable AID levels are contained in this range, as no single walk can exceed the bin limits. Our scheme also rounds up any AID realization to the upper bound of its corresponding bin (denoted as $\lceil t\rceil_{\delta} = \delta \left\lceil\frac{t}{\delta}\right\rceil$). Equations (\ref{TrivDef}) and (\ref{RecRel1}) thus become \begin{align} \hat{S}^*_n(\Gamma)&=\mathds{1}\left(\Gamma\geq T\right)\\ \hat{S}^*_k(\Gamma)&=\max_{G\in\mathcal{P}(\mathcal{X})}\sum_{\Delta i\in\mathcal{I}_{G}}\hat{S}^*_{k+1}(\Gamma+\lceil\Delta i\rceil_\delta)f_G(\Delta i) \end{align} for $\Gamma\in\{\tau^U_{L_L},\tau^U_{L_L+1},\cdots,\tau^U_{L_U}\}$. Denote the AID random variable \begin{equation}\label{DeltaIk} \di{k} = \log_2\left(\frac{W(Y_k|X_k)}{Q(Y_k)}\right)\,, \end{equation} whose distribution depends on the controller $F^k$.
We can now show that this rounding strategy provides an upper bound on the probability of success of the optimal controller, and thus preserves the validity of the converse: \begingroup \begin{align} P&(S_{F^*,Q,T}) = \sup_{F} (F\circ W)\left(\sum_{j=1}^n \di{j} \geq T\right)\\ \begin{split} =& \sup_{F^{n-1}, F_n}\sum_{\Gamma_{n-1}\in\Lambda_{n-1}} \Biggl[ (F^{n-1}\circ W)\left(\sum_{j=1}^{n-1} \di{j} = \Gamma_{n-1}\right) \\&\cdot (F_{n}\circ W)\left( \di{n} \geq T - \Gamma_{n-1}\middle|\sum_{j=1}^{n-1} \di{j} = \Gamma_{n-1}\right)\Biggr] \end{split}\\ \begin{split} =& \sup_{F^{n-1}}\sum_{\Gamma_{n-1}\in\Lambda_{n-1}} \Biggl[ (F^{n-1}\circ W)\left(\sum_{j=1}^{n-1} \di{j} = \Gamma_{n-1}\right) \\&\cdot \sup_{F_n}(F_{n}\circ W)\left( \di{n} \geq T - \Gamma_{n-1}\right)\Biggr] \end{split}\\ \begin{split} \leq & \sup_{F^{n-1}}\sum_{\Gamma_{n-1}\in\Lambda_{n-1}} \Biggl[ (F^{n-1}\circ W) \left(\sum_{j=1}^{n-1} \di{j} = \Gamma_{n-1}\right) \\&\cdot \sup_{F_n}(F_{n}\circ W)\left( \di{n} \geq T - \lceil\Gamma_{n-1}\rceil_{\delta}\right)\Biggr] \end{split}\\ \begin{split} \leq &\sup_{F^{n-1}}\sum_{\ell=L_L}^{L_U} \Biggl[ (F^{n-1}\circ W)\left(\left\lceil\sum_{j=1}^{n-1} \di{j}\right\rceil_{\delta} = \ell\delta\right) \\&\cdot \sup_{P}\sum_{x}P(x)\sum_y W(y|x) \mathds{1}\left(\left\lceil \di{n} + \ell\delta\right\rceil_{\delta}\geq T\right)\Biggr] \end{split}\\ \intertext{where the second equality holds because the channel is memoryless, so the optimal last-step controller can be chosen separately for each value of $\Gamma_{n-1}$. Since the maximization over $P$ is a linear program over a simplex, the maximum is achieved at a vertex, \emph{i.e.} at a deterministic distribution concentrated on some input $x$:} \begin{split} =&\sup_{F^{n-1}}\sum_{\ell=L_L}^{L_U} \Biggl[ (F^{n-1}\circ W)\left(\left\lceil\sum_{j=1}^{n-1} \di{j}\right\rceil_{\delta} = \ell\delta\right) \\&\cdot \max_{x}\sum_y W(y|x) \hat{S}^*_n\left(\left\lceil \di{n} + \ell\delta\right\rceil_{\delta}\right)\Biggr] \end{split}\\ \begin{split} =&\sup_{F^{n-1}}\sum_{\ell=L_L}^{L_U} \Biggl[ (F^{n-1}\circ W)\left(\left\lceil\sum_{j=1}^{n-1} \di{j}\right\rceil_{\delta} = \ell\delta\right){\hat{S}^*_{n-1}(\ell\delta)\Biggr]} \end{split}\\ \intertext{Iterating the same argument down to the first transmission,} =&\sup_{F_1}\sum_{\ell=L_L}^{L_U} \left[ (F_1\circ W)\left(\left\lceil \di{1}\right\rceil_{\delta} = \ell\delta\right)\hat{S}^*_{1}(\ell\delta)\right]\\ =&\,\hat{S}^*_0(0)\,. \end{align} \endgroup Thus it suffices to compute $\hat{S}^*_0(0)$, which can be done recursively. Note that we are not computing the upper bound in~\cite{TimidBold} exactly, but rather an upper bound on it. The parameter $\delta$ controls the tradeoff between the complexity of the algorithm and the weakening of the bound. \section{Algorithms} \label{algorithms} \subsection{Algorithm for computation of $\hat{S}_0^*(0)$} \subsubsection{General setup} The computation of $\hat{S}_0^*(0)$ can be carried out efficiently by modeling the transition from one bin to another under the $j$-th input as a discrete time-invariant Markov process whose state space is the set of all bins. With this definition, the probability of moving to bin $\beta_m$, given that we start at the upper limit of bin $\beta_l$, under the AID-change distribution induced by the $j$-th input depends only on the difference $(m-l)\delta$, and we can thus denote this probability by $p_j^{m-l}$.
We can now define the Toeplitz transition matrix for the $j$-th input as
\begin{align}\label{TransitionMatrix}
M_j=
\begin{bmatrix}
p_{j}^{0} & p_j^1 & \cdots & p_j^{N-2} & p_j^{N-1}\\
p_{j}^{-1} & p_j^0 & \cdots & p_j^{N-3} & p_{j}^{N-2}\\
\vdots & \vdots & \ddots & \vdots & \vdots \\
p_{j}^{-N+2} & p_j^{-N+3} & \cdots & p_j^0 & p_{j}^{1}\\
p_j^{-N+1} & p_j^{-N+2} & \cdots & p_j^{-1} & p_{j}^{0}\\
\end{bmatrix}\,.
\end{align}
Since by construction the AID can never fall out of the defined range, this stochastic matrix can be used to compute the probability of success vector $\mathbf{\hat{S}}^*_k=[\hat{S}_{k}^*(L_L\delta),\cdots,\hat{S}_k^*(L_U\delta)]$ for all bins simultaneously as
\begin{gather}
\mathbf{\hat{S}}^*_{k}=\max_{j\in\{1,\cdots,|\mathcal{X}|\}}M_j\mathbf{\hat{S}}^*_{k+1}\,,
\end{gather}
where the maximum operation is an element-wise maximum over all $M_j\mathbf{\hat{S}}^*_{k+1}$ vectors. We can now easily compute $\hat{S}^*_0(0)$ as shown in Algorithm \ref{DP_PS_Optim}.
\begin{algorithm}[hbt!]\label{DP_PS_Optim}
\caption{Computation of probability of success for optimal controller}
\KwIn{$W$, $Q$, $n$, $T$, $\delta$}
\KwOut{$\hat{S}^*_0(0)$}
Compute $\di{{max}}$ and $\di{{min}}$\\
Construct bins with spacing $\delta$ in $[n\di{{min}},n\di{{max}}]$\\
Construct matrices $M_j$ for $j\in\{1,\cdots,|\mathcal{X}|\}$\\
$\mathbf{\hat{S}}^*_n\leftarrow$ 1 if bin upper bound $\geq T$ and 0 otherwise\\
\For{$k\in\{n-1,\cdots,0\}$}{
$\mathbf{\hat{S}}^*_k=\mathbf{0}$\\
\For{$j\in\{1,\cdots,|\mathcal{X}|\}$}{
$\mathbf{\hat{S}}^*_k\leftarrow$ Element-wise max $(\mathbf{\hat{S}}^*_k,M_j\mathbf{\hat{S}}^*_{k+1})$
}
}
$\hat{S}^*_0(0)\leftarrow$ Entry of $\mathbf{\hat{S}}^*_0$ whose bin contains 0\\
\textbf{return} $\hat{S}^*_0(0)$
\end{algorithm}
\subsubsection{Algorithm complexity}
The complexity of Algorithm \ref{DP_PS_Optim} depends on the technique used to compute the product $M_j\mathbf{\hat{S}}^*_{k+1}$. One efficient method relies on the fact that all matrices $M_j$ in (\ref{TransitionMatrix}) are Toeplitz, which allows the use of the FFT to speed up the computation. In this case the algorithm has time complexity $O(nN\log(N)|\mathcal{X}|)$ and space complexity $O(N|\mathcal{X}|)$. Another possible method derives from the fact that all matrices $M_j$ are generally sparse: when fixing an input, the information density change can take at most $|\mathcal{Y}|$ values, which translates to at most $|\mathcal{Y}|$ non-zero elements in each row of the matrix. In this case the algorithm has time complexity $O(nN|\mathcal{Y}||\mathcal{X}|)$ and space complexity $O(N|\mathcal{X}|)$.
\subsection{Threshold optimization}
\subsubsection{Properties of the optimization landscape}
Using Algorithm \ref{DP_PS_Optim} we can now obtain a general converse for any DMC by choosing an arbitrary $Q$ that enforces independence between timesteps, and solving the optimization problem derived from equation (\ref{TB_Converse})
\begin{align}\label{ExplicitOptim}
R^*_b\leq \min_{T}\frac{1}{n}\left[ T-\log_2\left(\left[P(\bar{S}_{F,Q,T})-\epsilon\right]^+\right)\right]\,.
\end{align}
This problem can be difficult to solve using traditional solvers because the optimization landscape is piecewise linear and discontinuous, as shown in Fig.~\ref{fig:BEC_OptLandscape}.
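Before analysing this landscape, we note that Algorithm~\ref{DP_PS_Optim} itself admits a compact implementation. The sketch below (Python, assuming the helper \texttt{step\_probabilities} above and SciPy's FFT-based Toeplitz matrix-vector product; an illustration of the first, $O(nN\log(N)|\mathcal{X}|)$, method rather than the exact code used for our results):
\begin{verbatim}
import numpy as np
from scipy.linalg import matmul_toeplitz

def dp_success(step_dists, edges, n, T):
    # step_dists[j]: dict d -> p_j^d for input j; edges: bin upper bounds.
    N = len(edges)
    gens = []
    for p in step_dists:
        c = np.array([p.get(-i, 0.0) for i in range(N)])  # first column
        r = np.array([p.get(+i, 0.0) for i in range(N)])  # first row
        gens.append((c, r))
    S = (edges >= T).astype(float)   # terminal condition 1(Gamma >= T)
    for _ in range(n):               # backward induction over timesteps
        S = np.max([matmul_toeplitz(g, S) for g in gens], axis=0)
    # Assumes the grid straddles the AID level 0.
    return S[np.searchsorted(edges, 0.0)]
\end{verbatim}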
Given a controller $F$ and a threshold $T$ we can express its probability of failure as a sum over all the possible AID outcomes for all controllers, using $\Delta I_{k}$ as defined in (\ref{DeltaIk}):
\begin{align}\label{ExplicitPfail}
P(\bar{S}_{F,Q,T})&=\sum_{\Gamma\in\Lambda_n}\mathds{1}(\Gamma< T)\cdot (F\circ W)\left(\sum_{j=1}^{n}\Delta I_{j}=\Gamma\right)
\end{align}
and note that this quantity must be monotonically non-decreasing in $T$ for any controller. Therefore, given an optimal controller $F^*$ for a threshold $T$ just above a mass-point $\lambda_m$, if the value of $T$ is increased while staying below the next mass-point with non-zero probability, $\lambda_k$, the value of (\ref{ExplicitPfail}) remains fixed for the original optimal controller, as the sum is unchanged, and that controller remains optimal. Thus the bound grows linearly until $T$ exceeds $\lambda_k$, where the probability of failure experiences a discrete increase, causing the overall bound to potentially drop discontinuously.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.48\textwidth]{Figures/BEC_30_4_OptimLandscape_32.pdf}
\caption{Optimization landscape for BEC with erasure probability $p_e=0.3$ and probability of error $\epsilon=10^{-4}$ at $n=32$}
\label{fig:BEC_OptLandscape}
\end{figure}
\subsubsection{Heuristics}
It is possible to use the characteristics of the optimization landscape to develop two heuristics that improve the convergence of solvers in certain conditions:
\begin{enumerate}
\item Our previous discussion shows that a global minimum can only be achieved slightly above a mass-point in $\Lambda$, which makes it possible to restrict the solver to only evaluate the function slightly above valid mass-points.
\item Intuition and empirical results indicate that the optimal threshold grows approximately linearly with the blocklength. This property allows faster computation when evaluating the converse for an increasing sequence of blocklengths $[n_1,n_2,\cdots]$: the converses for the first blocklengths can be computed using the full range of thresholds, and subsequent converses can be computed by only analyzing a neighborhood around the threshold predicted by a linear regression fitted to the previous values.
\end{enumerate}
Since the converse bound in (\ref{TB_Converse}) is valid for any threshold $T$, the bounds obtained using this process are always valid even if the solver fails to find a minimizing $T$.
\section{Numerical Results and Comparisons} \label{results}
\subsection{Comparison of DP Converse to Existing Converses} \label{sec:BSCBEC}
In this section we compare the results from our DP converse (DPC) to the existing converse and achievability results in the SPECTRE toolbox \cite{spectre} for the BSC and the BEC. These channels are good baselines for the algorithm as they have small input and output alphabets, well-known converse and achievability results, and known optimal output distributions \cite{polyanskiy_saddlepoint}. In the BSC case our algorithm produces very good results (Fig.~\ref{fig:BSC_Conv}): for blocklengths above 300 the result is less than 1\% above the channel-specific converse, and this gap closes rapidly as the blocklength increases. The BEC case provides a challenge to the DPC algorithm as \cite{polyanskiy_saddlepoint} shows that the optimal output distribution is not a product distribution, while our algorithm is restricted to product distributions. However, Fig.
\ref{fig:BEC_Conv} shows that the DPC result is still very close to the channel-specific converse, with a gap of less than 2.4\% for blocklengths larger than 300. The fact that the optimal output distribution for the BEC is non-product is a possible explanation for the added looseness in the BEC converse when compared to our BSC results, but our algorithm provides good converse results even in this case.
\begin{figure}[htbp!]
\centering
\includegraphics[width=\columnwidth]{Figures/BSC_11_4.pdf}
\caption{DPC results for BSC with $\epsilon=10^{-4}$ and crossover probability $p=0.11$. The capacity of the channel is approximately 0.5 bits.}
\label{fig:BSC_Conv}
\end{figure}
\begin{figure}[htbp!]
\centering
\includegraphics[width=\columnwidth]{Figures/BEC_30_4_OptOutput.pdf}
\caption{DPC results for BEC with $\epsilon=10^{-4}$ and erasure probability $p_e=0.3$}
\label{fig:BEC_Conv}
\end{figure}
\subsection{New Converse Bounds Using the DP Approach}
Section \ref{sec:BSCBEC} shows that the performance of our DP algorithm is comparable to channel-specific converses. This section applies the new algorithm to two DMCs not covered by the SPECTRE toolbox. Fig.~\ref{fig:BAC_Conv} shows that the DPC produces a reasonable converse for the BAC. For blocklengths above 500 the result is less than 9.9\% above the achievable rate computed using Lemma 14 of \cite{TimidBold} with a capacity-achieving distribution. It is hard to determine whether this gap originates from looseness of the achievability or converse results, but the behavior of our converse is consistent with the BSC and BEC results. Recall that existing algorithms for computing converses have complexity that scales with the blocklength $n$ and input alphabet size $|\mathcal{X}|$ as $n^{|\mathcal{X}|}$. Next we show that our DP algorithm can be used to provide bounds at moderately large $n$ and $|\mathcal{X}|$. Consider a DMC with $|\mathcal{X}| = |\mathcal{Y}| = 8$ corresponding to using an AWGN channel with an optimized input alphabet and quantized outputs as described in \cite{madhow_awgnq}. The input alphabet is a fixed set of constellation points $\mathcal{X}$. The output is quantized using thresholds $-\infty = q_0 <q_1<q_2<\dots<q_7<q_8=\infty$. With input $x$, the output is $i$ if $x+Z \in (q_{i-1}, q_i]$, where $Z\sim \mathcal{N}(0,1)$. Such channels are of practical interest, and it is known that the optimal input distribution has finite support \cite{madhow_awgnq} given some fixed quantization bins $\{q_i\}$. Dynamic Assignment Blahut-Arimoto \cite{dab_awgn, dab_pic} can be used to find the optimal $\mathcal{X}$ and input distribution when restricting the input of a channel to be finite-support. Modifying the algorithm for the AWGN channel with amplitude constraint from \cite{dab_awgn} to account for output quantization and performing alternating optimization with $\{q_i\}$ gives a choice of $\mathcal{X}$ and $\{q_i\}$. Setting $|\mathcal{Y}| = 8$ and the amplitude constraint to $10$ yields an optimized $\mathcal{X}$ of cardinality $8$ as shown in Fig.~\ref{fig:DABQ_const}, which yields the DMC described by the transition matrix $W$ shown in Fig.~\ref{fig:BigChannel}.
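For reference, such a $W$ is simple to tabulate from the constellation and thresholds. A minimal sketch in Python (the \texttt{points} and \texttt{thresholds} arguments are whatever the DAB optimization returns; this is an illustration, not the optimization itself):
\begin{verbatim}
import numpy as np
from scipy.stats import norm

def quantized_awgn_dmc(points, thresholds):
    # W[i, j] = P(q_{j-1} < x_i + Z <= q_j) with Z ~ N(0, 1).
    q = np.concatenate(([-np.inf], thresholds, [np.inf]))
    cdf = norm.cdf(q[None, :] - np.asarray(points)[:, None])
    return np.diff(cdf, axis=1)
\end{verbatim}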
\begin{figure}[htbp] \begin{align*} \label{eq:dabq_W} \small W \approx \begin{bmatrix} 0.96& 0.04& 0.00& 0.00& 0.00& 0.00& 0.00& 0.00\\ 0.06& 0.84& 0.10& 0.00& 0.00& 0.00& 0.00& 0.00\\ 0.00& 0.11& 0.79& 0.10& 0.00& 0.00& 0.00& 0.00\\ 0.00& 0.00& 0.08& 0.85& 0.07& 0.00& 0.00& 0.00\\ 0.00& 0.00& 0.00& 0.07& 0.85& 0.08& 0.00& 0.00\\ 0.00& 0.00& 0.00& 0.00& 0.10& 0.79& 0.11& 0.00\\ 0.00& 0.00& 0.00& 0.00& 0.00& 0.10& 0.84& 0.06\\ 0.00& 0.00& 0.00& 0.00& 0.00& 0.00& 0.04& 0.96 \end{bmatrix} \end{align*} \caption{Equivalent DMC representation for the quantized output amplitude constrained AWGN channel.} \label{fig:BigChannel} \end{figure} Fig. \ref{fig:DABQ_Conv} shows that the DPC also provides a reasonable converse in this case. For blocklengths above 300 the converse is less than 6.8\% above the achievable rate computed using Lemma 14 of \cite{TimidBold} with a capacity-achieving distribution. \begin{figure}[htbp!] \centering \includegraphics[width=\columnwidth]{Figures/BAC_1_2_4.pdf} \caption{DPC results for BAC with $\epsilon=10^{-4}$ and crossover probabilities $p_1=0.01$, $p_2=0.02$. The achievability curve is derived from using Lemma 14 in \cite{TimidBold} with a capacity achieving distribution.} \label{fig:BAC_Conv} \end{figure} \begin{figure}[htbp!] \centering \includegraphics[width=\columnwidth]{Figures/DABQ_constellation.pdf} \caption{Constellation and quantization thresholds that maximize mutual information for the unit noise AWGN channel with quantized output under amplitude constraint 10. These induce the DMC given by $W$ shown above.} \label{fig:DABQ_const} \end{figure} \begin{figure}[htbp!] \centering \includegraphics[width=\columnwidth]{Figures/DABQ_AC.pdf} \caption{DPC results for finite-alphabet, quantized-output AWGN channel described by the $W$ shown above with $\epsilon=10^{-4}$. The achievability curve is derived using Lemma 14 in \cite{TimidBold} with a capacity achieving distribution.} \label{fig:DABQ_Conv} \end{figure} \section{Conclusion} \label{conclusion} This paper presents a general technique for computing finite-blocklength converse bounds for an arbitrary DMC. The converse uses dynamic programming to solve a stochastic control formulation of the converse problem. The utility of the approach was verified for the BSC, BEC, BAC and a DMC resulting from an amplitude constrained AWGN channel with quantized output. Since the technique presented provides a valid bound for any product output distribution, the bound can be further improved by optimizing the output distribution over this set. Identifying the best such output distribution is a topic for future work, as is the extension to channels with cost constraints. \newpage \bibliographystyle{IEEEtran}
\section{Introduction}\label{sec:1intro}
Capillary waves on a viscous fluid interface have recently been observed \citep{Aarts2004} to induce the spontaneous breakup of a thin liquid film and to control the inherently stochastic process of the sub-micron rupture event. Unlike gravity waves, these capillary waves have a short wavelength at which the restoring force of surface tension dominates over the influence of gravity; they arise in the study of small-lengthscale interfacial phenomena, for instance thin liquid films \citep{Scheludko1967} and droplet coalescence \citep{Blanchette2006}. It is apparent that variations in surface tension can have dramatic knock-on effects on the dynamics of the capillary waves, with applications found both in surface chemistry \citep{Edwards1991} and interfacial fluid dynamics \citep{Levich1969}. Surface-active materials, or surfactants, often lead to the formation of foams and emulsions by lowering the surface tension of a liquid interface \citep{Batchelor2003,Levich1969,Edwards1991}. Gradients of surfactant concentration (and therefore of the surface tension coefficient) caused by dilatational deformations induce the Marangoni stress, which acts to oppose changes in surface area and slows down the drainage and rupture processes of a thin liquid film. Moreover, the two-dimensional surfactant monolayer displays a rheological response whereby shearing deformations can introduce an extra surface shear viscosity. In addition, a source of surface dilatational viscosity can result from the inherent compressibility of the two-dimensional surfactant monolayer \citep{Zell2014}, in direct contrast with the incompressible Newtonian bulk fluid, which can be characterised entirely by a single viscosity parameter $\mu$. Furthermore, the dissipative nature of the surfactant adsorption-desorption kinetic process can also contribute towards the effective surface dilatational viscosity \citep{Lucassen1966}. With multiple sources of surface viscosities, we henceforth denote the effective surface dilatational and shear viscosities by $\mu_{d}$ and $\mu_{s}$, respectively. Finally, we note that the magnitudes of $\mu_{d}$ and $\mu_{s}$ need not be comparable \citep{Djabbarah1982}, since they are each responsible for different physical processes. The physical manifestation of surface viscosity and its measurement remain controversial and subtle: decades of literature cannot agree on measurements of $\mu_{s}$ and $\mu_{d}$. The chief difficulty lies with the fact that \cchanges{not only are surface viscosity and Marangoni effects intimately intertwined \citep{Levich1969,Scheid2010, Langevin2014},} but experiments also give the total characteristics of both the surface and bulk phases simultaneously, and it is not trivial to extract the surface information prior to the establishment of a particular surface model. For insoluble surface-active (surfactant) solutions, the intrinsic surface shear viscosity is clearly defined. However, for soluble surfactant solutions, in particular sodium dodecyl sulfate (SDS), the presence of a three-dimensional sublayer adjacent to the surface alters the rates of surface deformation \citep{Stevenson2005}, which may explain the numerous inconsistencies in the reported literature on the magnitude of the surface shear viscosity of surfactants.
Some progress has been made recently, namely by the experimental work of \citet{Zell2014}, in which the use of microbutton surface rheometry appears to yield relatively unambiguous measurements of the surface shear viscosity $\mu_s$ of SDS. They report an upper bound of $\mu_s\sim O(10^{-8}\mathrm{Nsm}^{-1})$, which suggests that surface shear viscosity need not be the dominant surface phenomenon and that Marangoni effects and surface dilatational viscosity may also be in effect. In insoluble surfactant solutions, the surface shear viscosity is often much higher than $O(10^{-8}\mathrm{Nsm}^{-1})$; in particular, in the case of 1-eicosanol, it is found \citep{Zell2014,Gavranovic2006} to be at least $10^3$-$10^4$ times higher than that of soluble SDS solutions. Moreover, recent numerical \citep{Gounley2016} and experimental studies conclude that surface viscosity effects in insoluble surfactants can give rise to noticeable effects on the resulting dynamics, which cannot otherwise be fully understood by considering the Marangoni effect alone \citep{Ponce-Torres2017}. In this paper, we shall investigate both the effects of Marangoni and surface viscosity in insoluble surfactant solutions \changes{with a particular focus on the dynamics of very thin films with capillary waves close to critical damping. For such a thin-film geometry of high wavenumber, we may consider a two-dimensional flow structure as well as a low Reynolds number in the Stokes limit. Guided by previous numerical work \citep{Gounley2016,Ponce-Torres2017}, we anticipate a similar importance of the Marangoni and surface viscosity effects on the capillary wave in the two-dimensional thin-film case in the Stokes limit.} In \S\ref{sec:2surf_vis}-\ref{sec:4solution}, we extend the previous work of \citet{Prosperetti1976} and \citet{Shen2017} to incorporate both surface shear and dilatational viscosity to leading order, as described by the Boussinesq-Scriven model, into the dynamics of small-amplitude capillary waves. We delineate the effects of the convective-diffusive Marangoni stresses from those of surface viscosity in \S 5. \changes{In \S 6, we obtain an analytical form of the critical damping wavelength for the clean case, considering only the bulk fluid viscosity. In \S 7, we outline a numerical method to calculate the damping ratio of a general higher-order system and construct a minimal pole matrix to encode the information of the poles of the system with significant residue. Under this approach, we identify the transition point of the wave from an underdamped to an overdamped state of a general system and obtain numerically the correction to this critical wavelength by surface viscosity and Marangoni effects.} The article is concluded in \S 8.
\section{Boussinesq-Scriven surface viscosity}\label{sec:2surf_vis}
Under the Boussinesq-Scriven model of surface viscosity \citep{Scriven1960,Aris1963,Slattery2007}, the surface stress boundary conditions at the interface between two Newtonian fluids can be written as
\changes{
\begin{equation}
[\mathbf{n}\cdot\mathbf{T}]=\boldsymbol{\nabla}_{s}\cdot\boldsymbol{\sigma}_{s},
\end{equation}
where $\mathbf{T}$ is the viscous stress tensor, $\boldsymbol{\nabla}_{s}=\mathbf{P}\cdot\boldsymbol{\nabla}$ is the surface gradient operator for the projection tensor $\mathbf{P}=\mathbf{I}-\mathbf{nn}^\mathrm{T}$ with normal vector $\mathbf{n}$, $[\cdot]$ denotes the jump in value across the interface and $\boldsymbol{\sigma}_{s}$ is the surface viscous stress tensor defined by
\begin{equation}
\boldsymbol{\sigma}_{s}=\sigma \mathbf{P}+(\mu_{d}-\mu_{s})(\boldsymbol{\nabla}_{s}\cdot\mathbf{u}_s)\mathbf{P}+2\mu_{s}\mathbf{D}_{s},
\end{equation}
and
\begin{equation}
\mathbf{D}_{s}=\frac{1}{2}\left(\mathbf{P}\cdot\boldsymbol{\nabla}_{s}\mathbf{u}_{s}\cdot\mathbf{P}+\mathbf{P}\cdot(\boldsymbol{\nabla}_{s}\mathbf{u}_s)^{\mathrm{T}}\cdot\mathbf{P}\right)
\end{equation}
is the surface rate of deformation tensor. The divergence of $\boldsymbol{\sigma}_{s}$ may be written \citep{Scriven1960} in the form
\begin{eqnarray}
[\mathbf{n}\cdot\mathbf{T}] & =& \boldsymbol{\nabla}_{s}\sigma+(\mu_{d}+\mu_{s})\boldsymbol{\nabla}_{s}(\boldsymbol{\nabla}_{s}\cdot\mathbf{u}_s)\nonumber\\
& +& \mu_{s}\left[2K\mathbf{u}_{s}+\mathbf{n}\times\boldsymbol{\nabla}_{s}(\mathbf{n}\cdot\boldsymbol{\nabla}_{s}\times\mathbf{u}_s)+2(\mathbf{n}\times\boldsymbol{\nabla}_{s}\mathbf{n}\times\mathbf{n})\cdot\boldsymbol{\nabla}_{s}(\mathbf{u}\cdot\mathbf{n})\right]\nonumber \\
& +& \mathbf{n}\left[2H\sigma+2H(\mu_{d}+\mu_{s})\boldsymbol{\nabla}_{s}\cdot\mathbf{u}_s-2\mu_{s}(\mathbf{n}\times\boldsymbol{\nabla}_{s}\mathbf{n}\times\mathbf{n}):\boldsymbol{\nabla}_{s}\mathbf{u}_s\right],\label{eq:SCRIVEN}
\end{eqnarray}
where
\begin{eqnarray}
2H & = & -\boldsymbol{\nabla}_{s}\cdot\mathbf{n},\\
2K & = & -(\mathbf{n}\times\boldsymbol{\nabla}_{s}\mathbf{n}\times\mathbf{n}):\boldsymbol{\nabla}_{s}\mathbf{n},
\end{eqnarray}
are the mean and Gaussian curvatures of the surface, respectively, and $\sigma$ is the surface tension coefficient.}
Neglecting higher-order terms, the leading-order surface stress boundary condition in the context of small-amplitude capillary waves takes the reduced form
\begin{equation}
[\mathbf{n}\cdot\mathbf{T}]=\boldsymbol{\nabla}_{s}\tilde{\sigma}+2H\tilde{\sigma}\mathbf{n},\label{eq:LIN-STRESS}
\end{equation}
where $\tilde{\sigma}$ is the surface tension augmented with the leading-order surface viscosity contribution given by
\begin{equation}
\tilde{\sigma}=\sigma+(\mu_{d}+\mu_{s})\boldsymbol{\nabla}_{s}\cdot\mathbf{u}_s.
\end{equation}
Using the equation of motion derived in \S\ref{sec:3EoM}, \changes{the leading-order surface viscosity effect} on the small-amplitude capillary wave can naturally be characterised \citep{Lopez1998} by the \changes{non-dimensional Boussinesq number
\begin{equation}
\mathrm{B}\equiv \mathrm{Bq}_d+\mathrm{Bq}_s = \left(\frac{\mu_d+\mu_s}{\mu}\right)k,
\end{equation}
}where $\mathrm{Bq}_{d}=\mu_d k/\mu$ and $\mathrm{Bq}_{s}=\mu_s k/\mu$ are the Boussinesq dilatational and shear numbers, respectively, for dynamic viscosity $\mu$ and wavenumber $k$. More explicitly, surface viscosity can be modelled to be proportional to the surfactant concentration \citep{Ponce-Torres2017}.
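For a rough sense of magnitude (an illustrative calculation only, taking the upper-bound surface shear viscosity quoted in \S\ref{sec:1intro} for water and neglecting $\mu_d$):
\begin{verbatim}
# Illustrative magnitudes of B = (mu_d + mu_s) k / mu for water,
# with mu_s at the upper bound of Zell et al. (2014) and mu_d = 0.
mu = 8.9e-4              # bulk dynamic viscosity [Pa s]
mu_s, mu_d = 1e-8, 0.0   # surface viscosities [N s/m]
for k in (1e4, 1e5, 1e6):            # sample wavenumbers [1/m]
    print(k, (mu_d + mu_s) * k / mu)
# B ~ 0.11, 1.1, 11: order unity is reached for k of order 1e5.
\end{verbatim}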
In the case of detergents, experimental work by \citet{Brown1953} suggests a bipartite action of the special solute pairs present in the detergent, where the primary constituent provides a large reservoir of surface-active material while the secondary constituent, lesser in amount, forms surface films of high viscosity. \changes{However, in the leading-order dynamics of the Boussinesq-Scriven formulation, surface viscosity is shown in \S 3 not to depend explicitly on the surfactant concentration; it enters only implicitly via the surface tension coefficient.} \changes{The other non-dimensional numbers of the system which arise naturally in the equation of motion are the viscosity ($\epsilon$), surfactant diffusivity ($\varsigma$) and surfactant strength ($\beta$) parameters given by
\begin{equation}
\left(\epsilon,\,\varsigma,\,\beta\right)=\frac{k}{\omega}\left(\nu k,\, D_s k,\,\frac{\alpha\Gamma_{0}}{\mu}\right),
\end{equation}
where $D_s$ denotes the coefficient of surface diffusivity, $\nu=\mu/\rho$ is the kinematic viscosity for fluid density $\rho$, $\alpha=\left|\mathrm{d}\sigma/\mathrm{d}\Gamma\right|$ is the gradient of the surface tension coefficient, $\omega$ is the frequency of the capillary wave and $\Gamma_0$ is the initial surfactant concentration, which is assumed to be much less than the critical micelle concentration (cmc). In this system, these parameters act as the effective Reynolds, Schmidt and Marangoni numbers, respectively.}
\section{Equations of motion}\label{sec:3EoM}
The dynamics of an incompressible fluid of viscosity $\mu$ and density $\rho$ in regions of Reynolds number $\mathrm{Re}=U\lambda/\nu\ll1$ satisfies the Stokes equations
\begin{eqnarray}
\rho\left(\mathbf{u}_t-\mathbf{F}\right) & = & -\boldsymbol{\nabla}p+\mu\nabla^{2}\mathbf{u}\\
\boldsymbol{\nabla}\cdot\mathbf{u} & = & 0
\end{eqnarray}
where $\mathbf{u}=(u,v)$ is the two-dimensional fluid velocity field, $p$ is the pressure and $\mathbf{F}=-g\mathbf{j}$ is the external (gravitational) force, with $\mathbf{j}$ denoting the upward unit vector in the $y$-direction, and $g$ is the gravitational acceleration. The small-amplitude capillary wave is given at the free surface $F$ by the standing wave
\begin{equation}
F(x,y,t)=y-a(t)\cos kx,
\end{equation}
where $a(t)$ is the non-linear, time-dependent wave amplitude which satisfies the small-amplitude conditions that $a \ll\lambda\,= 2\pi/k$ and $\mathrm{d}a/\mathrm{d}t\ll v_{\mathrm{c}}=\omega/k$, where $\lambda$ is the wavelength and $v_{\mathrm{c}}$ is the phase velocity. For vanishing Gaussian curvature in a two-dimensional space, the leading-order tangential and normal stress components, $\mathrm{T}_{\parallel}$ and $\mathrm{T}_{\perp}$, and the kinematic condition are given by
\begin{eqnarray}
\mathrm{T}_{\parallel}\equiv \frac{1}{2}\mu\left(v_x+u_y\right) & = &\boldsymbol{\nabla}_{s}\tilde{\sigma},\\
\mathrm{T}_{\perp}\equiv-p+2\mu v_y & =&\tilde{\sigma}\boldsymbol{\nabla}_{s}\cdot\mathbf{n},\label{eq:normalstress}\\
F_t+v F_y & =&0,
\end{eqnarray}
respectively. The leading-order normal and tangent vectors are
\begin{eqnarray}
\mathbf{n} & \simeq & (ak\sin kx,1),\\
\mathbf{t} & \simeq & (1,-ak\sin kx).
\end{eqnarray}
\changes{Similar to the small-amplitude condition, we consider a small departure from the equilibrium surface tension and let the coefficient of surface tension $\sigma$ be defined} via a linear equation of state
\begin{equation}
\sigma(x,t)=\sigma_{0}-\alpha\Gamma(x,t),
\end{equation}
where $\sigma_0$ is the initial surface tension coefficient and $\Gamma(x,t)$ is the (dilute) concentration of a surfactant solution for which adsorptive-desorptive processes are neglected. Using the wave-form $\Gamma(x,t)-\Gamma_{0}=\tilde{\Gamma}(t)\cos kx,$ the governing equation for the surfactant concentration along a two-dimensional deforming surface \citep{Stone1990} is given, to leading order, by
\begin{equation}
\tilde{\Gamma}_t+k^{2}D_s\tilde{\Gamma}=k\left(a_t+\nu k\Omega(0,t)*\mathcal{F}(t)\right)\label{eq:conveceqnfull}
\end{equation}
where $\omega_{z}(x,y,t)=\Omega(y,t)\sin kx$ is the \changes{$z$-component of the} vorticity, $*$ is the convolution operator and $\mathcal{F}(t)$ is the auxiliary function
\begin{equation}
\mathcal{F}(t)=\frac{1}{\sqrt{\pi\nu k^{2}t}}\mathrm{e}^{-\nu k^{2}t}-\mathrm{erfc}\sqrt{\nu k^{2}t}\,.
\end{equation}
The velocity and the pressure can be decomposed into inviscid and viscous parts, i.e. $(\mathbf{u},p)=(\mathbf{u}'+\mathbf{u}'',\,p'+p''),$ where the inviscid part $(\mathbf{u}',p')$ satisfies the Euler problem
\begin{eqnarray}
\rho\left(\mathbf{u}'_t-\mathbf{F}\right) & = &-\boldsymbol{\nabla}p',\quad\label{eq:EulerStokes}\\
\left.\left(F_t+v'F_y\right)\right|_{y=0} & =& 0
\end{eqnarray}
with well-known solutions \citep{Lamb1932}
\begin{equation}
\left(\phi,\,p'\right)=\left(\frac{1}{k}\frac{\mathrm{d}a}{\mathrm{d}t}\mathrm{e}^{ky}\cos kx,\,-\rho gy+\frac{\rho}{k}\frac{\mathrm{d}^{2}a}{\mathrm{d}t^{2}}\mathrm{e}^{ky}\cos kx\right),
\end{equation}
where $\mathbf{u}'=\boldsymbol{\nabla}\phi$. The viscous component $(\mathbf{u}'',p'')$ satisfies the Stokes problem
\begin{eqnarray}
\rho\mathbf{u}''_t & = &\boldsymbol{\nabla}p''+\mu\nabla^{2}\mathbf{u}''\label{eq:viscousStokes}\\
0 & = & \left.v' F_y\right|_{y=0},
\end{eqnarray}
which is solved by introducing the streamfunction $\psi$ defined by $\mathbf{u}''=(\psi_y,-\psi_x)$. Taking the curl of Eq.\,(\ref{eq:viscousStokes}) gives the bi-harmonic equation
\begin{equation}
(\partial_t\nabla^{2}-\nabla^4)\psi=0.
\end{equation}
Writing $\psi=\Psi(y,t)\sin kx$, the Stokes problem yields the solution
\begin{eqnarray}
2k\Psi & = & -\mathrm{e}^{-ky}\int_{-\infty}^{y}\Omega\mathrm{e}^{ky'}\mathrm{d}y'+ \mathrm{e}^{ky}\left(\int_{-\infty}^{0}\Omega\mathrm{e}^{ky'}\mathrm{d}y'+\int_{0}^{y}\Omega\mathrm{e}^{-ky'}\mathrm{d}y'\right)\\
\Omega & = & \Omega(0,t)*\frac{y}{\sqrt{\pi\nu t^3}}\mathrm{exp}\left(-\nu k^{2}t-\frac{y^2}{4\nu t}\right)
\end{eqnarray}
where we have the viscous pressure correction $p''(x,y,t)=\mu\Omega(0,t)\mathrm{e}^{ky}\cos kx$ and the boundary vorticity
\begin{equation}
\Omega(0,t)=2\left(-\frac{\mathrm{T}_{\parallel}}{\mu}+v_x\right).
\end{equation}
Henceforth, using the non-dimensional variables $\tau=\omega t,$ $\epsilon=\nu k^{2}/\omega$ and $\tilde{\Omega}=\Omega(0,t)/\omega$, the boundary vorticity becomes the integral equation
\begin{equation}
\tilde{\Omega}(0,\tau) = \mathrm{f}(\tau)+2\epsilon\,\mathrm{B}\,\tilde{\Omega}(0,\tau)*\mathcal{F}(\tau)\label{eq:VORT-EQN}
\end{equation}
where
\begin{equation}
\mathrm{f}(\tau)=-2[\beta\tilde{\Gamma}(\tau)+\delta(1+\mathrm{B})\dot{A}],
\end{equation}
where $A=a/a_0$ is the dimensionless amplitude, $\delta=a_{0}k$ and $\dot{}=\mathrm{d}/\mathrm{d}\tau$ denotes the non-dimensional temporal derivative. Substituting the pressure and the velocity into Eq.\,(\ref{eq:normalstress}) and Eq.\,(\ref{eq:conveceqnfull}) gives the simultaneous equations
\begin{eqnarray}
\ddot{A}+2\epsilon\dot{A}+A & = & \epsilon\tilde{\Omega}(0,\tau)-2\epsilon^{2}\tilde{\Omega}(0,\tau)*\mathcal{F}(\tau),\label{eq:AMPLITUDE-EQN}\\
\dot{\tilde{\Gamma}}+\varsigma\tilde{\Gamma} & = & \delta\dot{A}+\epsilon\tilde{\Omega}(0,\tau)*\mathcal{F}(\tau).\label{eq:SURFACTANT-EQN}
\end{eqnarray}
Equations (\ref{eq:AMPLITUDE-EQN}) and (\ref{eq:SURFACTANT-EQN}) provide us with a dynamic equation system for the amplitude and the surfactant concentration, the solution of which we outline in the next section.
\section{Solution of the simultaneous integro-differential equation}\label{sec:4solution}
Let $F(s)=\mathcal{L}[A](s)$, $G(s)=\mathcal{L}[\tilde{\Gamma}](s)$ \changes{and $\hat{\Pi}(s)=sF(s)-A_0$} be the Laplace transforms of $A(\tau),\,\tilde{\Gamma}(\tau)$ and $\dot{A}(\tau)$, and define the polynomial expressions $\Theta_{\epsilon}^{(i)}\equiv\Theta_{\epsilon}^{(i)}(s+\epsilon)$ \changes{ for $1\leqslant i\leqslant6$ as
\begin{eqnarray}
\Theta_{\epsilon}^{(1)} & = &2\left[(s+\epsilon)^{1/2}-\epsilon^{1/2}\right],\\
\Theta_{\epsilon}^{(2)} & = &s\epsilon^{1/2}+\mathrm{B}\epsilon\Theta^{(1)},\\
\Theta_{\epsilon}^{(3)} & = &(s+\varsigma)\Theta^{(2)}+\beta\epsilon\Theta^{(1)},\\
\Theta_{\epsilon}^{(4)} & = &(s^{2}+2\epsilon s+1+2\mathrm{B}'\epsilon s)\Theta^{(2)}-2\epsilon^{2}s\mathrm{B}'^{2}\Theta^{(1)},\\
\Theta_{\epsilon}^{(5)} & = &2\epsilon s\beta\left(\mathrm{B}'\epsilon\Theta^{(1)}-\Theta^{(2)}\right),\\
\Theta_{\epsilon}^{(6)} & = &\delta\Theta^{(2)}-\mathrm{B}'\epsilon\Theta^{(1)},
\end{eqnarray}
where $\mathrm{B}'=1+\mathrm{B}$.} The rational function $\hat{\Pi}_{\epsilon}=\hat{\Pi}_{\epsilon}(s+\epsilon)=\mathrm{P}(s^{1/2})/\mathrm{Q}(s^{1/2})$ is decomposed into its partial fractions
\begin{eqnarray}
\hat{\Pi}_{\epsilon}(s+\epsilon) & \equiv&\sum_{i=1}^{10}\frac{c_{i}}{s^{1/2}+z_{i}}\label{eq:partialdecomp} \\
& = & \frac{(U_{0}s-A_{0})\Theta^{(2)}\Theta^{(3)}+\tilde{\Gamma}_{0}\Theta^{(2)}\Theta^{(5)}}{\Theta^{(4)}\Theta^{(3)}-\tilde{\Gamma}_{0}\Theta^{(5)}\Theta^{(6)}},\label{eq:PI}
\end{eqnarray}
where $-z_{i}$ are the roots of the polynomial $\mathrm{Q}(s^{1/2})$. \changes{ In the absence of the Marangoni effect, the surface viscosity case is given by
\begin{equation}
\hat{\Pi}_{\epsilon}(s+\epsilon)=\frac{(U_{0}s-A_{0})\Theta^{(2)}}{(s^{2}+2\epsilon s+1+2\mathrm{B}'\epsilon s)\Theta^{(2)}-2\epsilon^{2}s\mathrm{B}'^{2}\Theta^{(1)}}.
\end{equation}
}
By comparison with Lagrange polynomial \changes{interpolation, we have
\begin{eqnarray}
\frac{\mathrm{P}(s^{1/2})}{\mathrm{Q}(s^{1/2})}&\equiv & \mathrm{P}(s^{1/2})\prod_{i=1}^{k}\frac{1}{s^{1/2}+z_{i}}\\
&=&\sum_{i=1}^{k}\frac{\mathrm{P}(-z_{i})}{\sigma_{i}^{(k)}}\frac{1}{s^{1/2}+z_{i}},
\label{eq:lagrangeinterp}
\end{eqnarray}}
where $\sigma_{i}^{(n)}$ is the $n$-th order cyclic polynomial given by
\begin{equation}
\sigma_{j}^{(n)}=\prod_{i=1}^{n-1}\left(z_{j+i\,\mathrm{mod}(n)}-z_{j}\right).
\end{equation}
It follows by comparing Eq.\,(\ref{eq:partialdecomp}) and Eq.\,(\ref{eq:lagrangeinterp}) that the coefficients $c_{i}$ are $c_{i}=\mathrm{P}(-z_{i})/\sigma_{i}^{(10)}$. \changes{Let
\begin{equation}
\mathrm{Z}(n,j)=\sum_{i=1}^{n}\frac{\mathrm{P}(-z_{i})}{\sigma_{i}^{(n)}}(-z_{i})^{j};
\end{equation}
it follows (see Appendix A) that the condition
\begin{equation}
\deg\mathrm{Q}-\deg\mathrm{P}=2
\end{equation}
implies $\mathrm{Z}(n,0)=0$, where $\deg X$ is the degree of the polynomial $X$. }Taking the inverse Laplace transform of Eq. (\ref{eq:PI}) gives \changes{
\begin{equation}
\Pi_{\epsilon}(\tau) =\frac{\mathrm{Z}(10,0)}{\sqrt{\pi \tau}}-\sum_{i=1}^{10}\frac{\mathrm{P}(-z_{i})}{\sigma_{i}^{(10)}}z_{i}\mathrm{e}^{z_{i}^{2}\tau}\mathrm{erfc}(z_{i}\tau^{1/2}).
\end{equation}}
Finally, the non-dimensional amplitude is given by
\begin{equation}
A(\tau)=1+\sum_{i=1}^{10}\frac{z_{i}}{\sigma_{i}^{(10)}}\mathrm{P}(-z_{i})\varphi(z_{i},\tau;\epsilon),\label{eq:AMP-SOL}
\end{equation}
where $\varphi=\varphi(z_{i},\tau;\epsilon)$ satisfies
\begin{equation}
\varphi(z_{i},\tau;\epsilon)=\frac{1}{z_i^2-\epsilon}\left(\mathrm{e}^{(z_{i}^{2}-\epsilon)\tau}\mathrm{erfc}(z_{i}\tau^{1/2})+\frac{z_{i}}{\epsilon^{1/2}}\mathrm{erf}[(\epsilon\,\tau)^{1/2}]-1\right).
\end{equation}
\section{Surface viscosity effects on the wave amplitude}\label{sec:5wavedisperse}
\begin{figure}
\subfloat[$\lambda=0.95\lambda_c^{(0)}$]{\includegraphics[width=3.25cm]{fig1a.eps}}
\subfloat[$\lambda=9.5\lambda_c^{(0)}$]{\includegraphics[width=3.25cm]{fig1b.eps}}
\subfloat[$\lambda=95\lambda_c^{(0)}$]{\includegraphics[width=3.25cm]{fig1c.eps}}
\subfloat[$\lambda=950\lambda_c^{(0)}$]{\includegraphics[width=3.25cm]{fig1d.eps}}
\subfloat[$\lambda=0.95\lambda_c^{(0)}$]{\includegraphics[width=3.25cm]{fig1e.eps}}
\subfloat[$\lambda=9.5\lambda_c^{(0)}$]{\includegraphics[width=3.25cm]{fig1f.eps}}
\subfloat[$\lambda=95\lambda_c^{(0)}$]{\includegraphics[width=3.25cm]{fig1g.eps}}
\subfloat[$\lambda=950\lambda_c^{(0)}$]{\includegraphics[width=3.25cm]{fig1h.eps}}
\caption{Wave amplitude as a function of dimensionless time $\tau$ for water at approximate room temperature and pressure for \changes{$\mathrm{Sc}=10^4$}, comparing the influence of the surface viscosity and Marangoni effects for various values of $\mathrm{B}$ (with $\beta=0$) and $\beta$ (with $\mathrm{B}=0$). Here $\nu_0=8.9\times10^{-7}\mathrm{m^2s^{-1}}$ and $\sigma_0=0.072\mathrm{Nm^{-1}}$ denote the baseline kinematic viscosity and surface tension, respectively. The modified surface tension in subfigures (e)-(h) represents a clean system (i.e.
$\beta=\mathrm{B}=0$) with $\sigma=\sigma_0-\alpha\Gamma_0$.}
\end{figure}
\begin{figure}
\subfloat[$\lambda=0.95\lambda_c^{(0)}$]{\includegraphics[width=3.25cm]{fig2a.eps}}
\subfloat[$\lambda=0.95\lambda_c^{(0)}$]{\includegraphics[width=3.25cm]{fig2b.eps}}
\subfloat[$\lambda=9.5\lambda_c^{(0)}$]{\includegraphics[width=3.25cm]{fig2c.eps}}
\subfloat[$\lambda=9.5\lambda_c^{(0)}$]{\includegraphics[width=3.25cm]{fig2d.eps}}
\subfloat[$\lambda=95\lambda_c^{(0)}$]{\includegraphics[width=6.5cm]{fig2e.eps}}
\subfloat[$\lambda=95\lambda_c^{(0)}$]{\includegraphics[width=6.5cm]{fig2f.eps}}
\caption{Wave amplitude as a function of dimensionless time $\tau$ for water at approximate room temperature and pressure for \changes{$\mathrm{Sc}=10^4$}, showing the influence of the surface viscosity and Marangoni effects for various values of $\mathrm{B}$ (with $\beta=0$) and $\beta$ (with $\mathrm{B}=0$) for different wavelengths $\lambda$. }
\end{figure}
As shown in \S 4, the leading-order surface viscosity effects on the dynamics of small-amplitude capillary waves are characterised by the parameter $\mathrm{B}$; similarly, the Marangoni effect can be reduced to the non-dimensional variables $\varsigma$ and $\beta$ given in \S 2. In what follows, we use water under room temperature and pressure (rtp), i.e. at 25$^\circ$C, with density $\rho=10^{3}\mathrm{kgm}^{-3}$, surface tension $\sigma=7.2\times10^{-2}\mathrm{Nm}^{-1}$ and viscosity \changes{$\mu=8.9\times10^{-4}\mathrm{Pa\,s}$, as a test system with surfactant Schmidt number $\mathrm{Sc}=\nu/D_s=10^4$}. We define $\lambda_c^{(0)}$ to be the wavelength at which the capillary wave undergoes critical damping, henceforth known as the critical wavelength. The superscript denotes the clean case, which we understand as a system without the addition of surface-active material, i.e. $\beta=\mathrm{B}=0$. We will look at this critical wavelength in more detail in \S 7, but here we note a harmonic oscillator approximation of $\lambda_c^{(0)}$ \citep{Denner2016b} whereby
\begin{equation}
\lambda_{\mathrm{c}}^{(0)}=2^{1/3}\pi l_{\mathrm{vc}}/\Theta,\label{eq:HARMONIC-Crit}
\end{equation}
for $\Theta=1.0625$ and the viscocapillary lengthscale
\begin{equation}
l_{\mathrm{vc}}=\frac{\mu^{2}}{\rho\sigma}.
\end{equation}
\changes{In figure 1, we compare the effect of surface viscosity with that of the equivalent bulk viscosity $\nu$, and the Marangoni effect with that of simple reductions in the surface tension coefficient $\sigma=\sigma_0-\alpha\Gamma_0$. Here we use the phrase \textit{equivalent} to denote the quantity of bulk viscosity, or the simple reduction in surface tension, which results in an identical effect on the overall wave amplitude to that of surface viscosity or the Marangoni effect, respectively, in the limit of either a large or a small wavelength. For wavelengths near critical damping, in figure 1(a), increasing $\mathrm{B}$ exhibits a relatively large difference to equivalent increases in $\nu$, as compared to the case for $\lambda=950\lambda_c^{(0)}$ in figure 1(d). This difference decreases as we increase the wavelength into less damped regions, as shown in figures 1(b) and 1(c).
In contrast, the picture is reversed for the Marangoni effect: equivalent changes in the surface tension coefficient are almost identical to increases in surfactant concentration (through $\beta$) near critical damping in figure 1(e), and their difference increases with larger wavelength from figure 1(f) to figure 1(h), where the dynamic Marangoni effect vastly overshadows the static changes in the surface tension coefficient. Henceforth, we can approximate the Marangoni effect on the wave amplitude near the critical wavelength with the equivalent reduction in $\sigma$. In figure 2, we see in more detail the mechanisms by which surface viscosity and the Marangoni effect alter wave amplitudes for different wavelengths. Both effects admit relatively self-similar solutions, and increasing either the surface viscosity or the surfactant concentration increases damping, as expected. One of the explicit differences is that the Marangoni effect reduces surface tension and, thus, lowers the frequency $\omega$, while surface viscosity leaves $\omega$ unchanged. This is because surface viscosity depends on the surfactant concentration only at linear order and does not feature explicitly in the leading-order nonlinear amplitude equation, Eq.\,(\ref{eq:AMPLITUDE-EQN}), as noted previously. The consequence of this $\omega$-lowering on the amplitude is a horizontal drift of the waveform for systems with the Marangoni effect, as evident from figures 2(d) and 2(f), as opposed to the relatively centred waveforms in the cases of surface viscosity in figures 2(c) and 2(e). We also observe that the surface viscosity effect weakens for large wavelengths $\lambda \gg \lambda_c$ but is very potent for small wavelengths $\lambda \sim \lambda_c$. A particular consequence of this potency is its ability to alter the onset behaviour in interfacial phenomena, a number of which occur near the region of the critical wavelength. Many interfacial instabilities are inherently stochastic \citep{Aarts2004} and are often kickstarted by small-amplitude local disturbances on the interface. Hence a small change in surface material, and thus in the wave damping, can have a significant effect in starting or delaying the initialisation of more complex phenomena. Furthermore, it would be of interest to obtain the modifications to the critical wavelength upon the addition of a small amount of surface-active material. However, we first need to consider the definition of the critical wavelength for a higher-order system, as it is not as readily defined as in a second-order system. }
\section{Capillary wave dispersion and the critical wavelength}
To quantify the changes to the critical wavelength due to the presence of the Marangoni and surface viscosity effects, we must first obtain the critical wavelength in the clean case. \changes{Following \citet{Lamb1932}, the general dispersion relation for an interface with both the Marangoni effect and surface viscosity can be found (derivation in Appendix B) to take the form
\begin{equation}
W_{0}(Z;\epsilon)\left(1+Z+\mathrm{B}+\frac{\beta}{\epsilon^{2}(Z^{2}-1)}\right)+\left(\mathrm{B}+\frac{\beta}{\epsilon^{2}(Z^{2}-1)}\right)(Z-1)^{3}=0\label{eq:generaldisp}
\end{equation}
where
\begin{equation}
\frac{\mathrm{i}\omega}{\epsilon}=Z^{2}-1
\end{equation}
and
\begin{equation}
W_{0}(Z;\epsilon)=Z^{4}+2Z^{2}-4Z+1+\frac{1}{\epsilon^{2}}.
\end{equation}}
Specialising to the clean case, Eq.\,(\ref{eq:generaldisp}) reduces to
\begin{equation}
(\mathrm{i}\omega'+2\epsilon)^{2}-4\epsilon^2\left(1+\frac{\mathrm{i}\omega'}{\epsilon}\right)^{1/2} + 1 =0,\label{eq:dispersion-relation}
\end{equation}
as derived from linearised hydrodynamics \citep{Levich1962,Lamb1932}. The dispersion relation admits solutions of the form
\begin{equation}
\omega'=2\mathrm{i}\epsilon-\frac{1}{2}h-\left(1-\frac{1}{4}h^{2}-\frac{8\mathrm{i}\epsilon^{3}}{h}\right)^{1/2},\label{eq:omega-root}
\end{equation}
where $h^2 = \frac{1}{3} - J_{+}^{1/3} - J_{-}^{1/3}$ for
\begin{equation}
J_{\pm}= \frac{1}{27}-\frac{2}{3}\epsilon^{4}+2\epsilon^{6} \pm \frac{2}{3\sqrt{3}}\epsilon^{3}\sqrt{\mathrm{f}(\epsilon)},
\end{equation}
where the polynomial $\mathrm{f}(\epsilon)$ is given by
\begin{equation}
\mathrm{f}(\epsilon)=11\epsilon^{6}-18\epsilon^{4}-\epsilon^{2}-1.\label{eq:transitionpoly}
\end{equation}
For $\epsilon\ll\epsilon^{\star},$ the wave frequency can be written as
\begin{equation}
\omega' \sim\frac{1}{2}h^2-h+2\mathrm{i}\epsilon\left(1+\frac{\epsilon^{2}}{h}\right),
\end{equation}
and the damping coefficient can be extracted as $\mathrm{Im}(\omega')\sim2\epsilon,\label{eq:damping-low-viscosity}$ where $\epsilon^{\star}\in\mathbb{R}^{+}$ is the transition value defined by the (largest positive) root of the polynomial $\mathrm{f},$ i.e.
\begin{equation}
\epsilon^{\star}=\sup_{\mathbb{R}^{+}}\left\{ \epsilon:\mathrm{f}(\epsilon)=0\right\}.
\end{equation}
Solving Eq.\,(\ref{eq:transitionpoly}) exactly gives
\begin{equation}
\epsilon^\star = \left(\frac{6}{11}+\frac{1}{33}\left(\frac{3}{2}\left(5571-341\sqrt{93}\right)\right)^{1/3}+\frac{1}{33}\left(\frac{3}{2}\left(5571+341\sqrt{93}\right)\right)^{1/3}\right)^{1/2}
\end{equation}
with the numerical value $\epsilon^{\star}\simeq1.3115.$ \changes{Reintroducing dimensional variables, define $P$, $Q$ and the variable $K$ by}
\changes{\begin{eqnarray}
K & = & k^{\star}-\frac{\sigma\epsilon^{\star2}}{3\nu^{2}\rho}\\
P & = & -\frac{\sigma^{2}\epsilon^{\star4}}{3\rho^{2}\nu^{4}}\\
Q & = & -\left(\frac{\mathrm{g}\epsilon^{\star2}}{\nu^{2}}+\frac{2\sigma^{3}\epsilon^{\star6}}{27\rho^{3}\nu^{6}}\right).
\end{eqnarray}}It follows from the definition of $\epsilon$ that the critical wavenumber $k^{\star}$ satisfies the cubic equation
\begin{equation}
K^{3}+PK+Q=0.
\end{equation}
We require the real solution given by
\begin{equation}
K=\left\{ -\frac{1}{2}Q+\Delta^{1/2}\right\} ^{1/3}+\left\{ -\frac{1}{2}Q-\Delta^{1/2}\right\} ^{1/3},\label{eq:k-star-solution}
\end{equation}
where
\begin{eqnarray}
\Delta^{1/2} & \equiv & \left(\frac{Q^{2}}{4}+\frac{P^{3}}{27}\right)^{1/2}\\
& = & \frac{\mathrm{g}\epsilon^{\star2}}{2\nu^{2}}\left(1+\frac{4\sigma^{3}\epsilon^{\star4}}{27\rho^{3}\nu^{4}\mathrm{g}}\right)^{1/2}.
\end{eqnarray}
By inspecting Eq.\,(\ref{eq:k-star-solution}), the critical wavenumber under the limit $k\ll (\rho \mathrm{g}/\sigma)^{1/2}$ is $k^{\star}\sim\left(\mathrm{g}\epsilon^{\star2}/\nu^{2}\right)^{1/3},$ corresponding to the gravity-dominated regime with $\omega_{0}\sim(\mathrm{g}k)^{1/2}$. For $k\geqslant(\rho\mathrm{g}/\sigma)^{1/2}$, the critical wavenumber reduces to
\begin{equation}
k^{\star}\sim\epsilon^{\star2}\frac{\sigma\rho}{\mu^{2}}=\frac{\epsilon^{\star2}}{l_{\mathrm{vc}}},\label{eq:CRIT-ANALYTIC}
\end{equation}
corresponding to the capillary-dominated regime with $\omega_{0}\sim(\sigma k^{3}/\rho)^{1/2}$. For $\epsilon\gg\epsilon^{\star}$, we note that $\mathrm{Re}(\omega')=0$ and the system is in an overall overdamped regime.
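These quantities are easily verified numerically; for instance (a quick check in Python of Eq.\,(\ref{eq:transitionpoly}) and the capillary-dominated limit of Eq.\,(\ref{eq:CRIT-ANALYTIC}), using the water parameters of \S\ref{sec:5wavedisperse}):
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

f = lambda e: 11*e**6 - 18*e**4 - e**2 - 1   # transition polynomial
eps_star = brentq(f, 1.0, 2.0)               # ~ 1.3115

rho, mu, sigma = 1e3, 8.9e-4, 7.2e-2         # water at rtp
l_vc = mu**2 / (rho * sigma)                 # viscocapillary lengthscale
k_star = eps_star**2 / l_vc                  # capillary-dominated regime
print(eps_star, 2*np.pi/k_star)              # ~ 1.3115, ~ 4.02e-8 m
\end{verbatim}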
The damping ratio is given by expanding $\omega_{-}'$ (since $\omega_{+}'\gg\omega_{-}'$ and would thus rapidly damp the motion) in Eq.\,(\ref{eq:omega-root}) in ascending powers of $\epsilon^{-1}$, i.e.
\begin{equation}
\mathrm{Im}(\omega')\sim\frac{1}{2\epsilon}+O\left(\frac{1}{\epsilon^{2}}\right).\label{eq:damping-high-viscosity}
\end{equation}
This agrees with the asymptotic approximations by \citet{Levich1962}, which suggest that increasing viscosity would decrease damping for $\epsilon\gg\epsilon^{\star}$. Using Eq.\,(\ref{eq:CRIT-ANALYTIC}), the analytical critical wavelength (the value of $\lambda$ with associated damping ratio $\zeta=1$) of the capillary wave is given by
\begin{equation}
\lambda_c^\star = \frac{2\pi}{\epsilon^{\star2}}l_\mathrm{vc}.\label{eq:crit-wavelength}
\end{equation}
For water under rtp., we have $\lambda_c^\star\doteq 40.1894\mathrm{nm}$. In comparison, a harmonic oscillator approximation \citep{Denner2016b} in Eq.\,(\ref{eq:HARMONIC-Crit}) gives the result $\lambda_c^{(0)}\doteq40.9838\mathrm{nm}.$ \changes{Considering the relative error between the harmonic oscillator and the analytical critical wavelengths,
\begin{equation}
\left|\frac{2^{1/3}}{\Theta}-\frac{2}{\epsilon^{\star2}}\right|\frac{\epsilon^{\star2}}{2}\simeq0.01977,
\end{equation}
we observe that the system is largely second-order in the neighbourhood of the critical wavelength, as the harmonic oscillator value of $\lambda_c$ is within 2\% of the analytical value from the wave dispersion.}
\section{Damping ratio for a generalised system}
\changes{For systems of a higher order, the damping ratio $\zeta$ is not naturally defined, and we usually inspect the root-locus diagram in order to decompose the system into a sum of first- and second-order systems to provide an estimate. Here we consider a numerical method to obtain an equivalent damping ratio, whereby $\zeta\geqslant1$ when the area of the amplitude below the settling value vanishes for all time almost everywhere. The critical wavelength is then the supremum of the set of wavelengths such that the above property holds. We express this as
\begin{equation}
\lambda_{c}=\sup_{\lambda\in\mathbb{R}^{+}}\left\{ \lambda:\lim_{T\rightarrow\infty}\int_{0}^{T}\mathrm{A}(\lambda,\tau)\left[1-H(\mathrm{A}(\lambda,\tau))\right]\,\mathrm{d}\tau\rightarrow0\,\mathrm{a.e.}\right\}, \label{eq:area-method}
\end{equation}
where $H(x)$ is the Heaviside step function. For underdamped waves, even for second-order systems, the logarithmic decrement or fractional overshoot methods tend to break down or become less accurate near regions of critical damping. Hence, to determine the damping ratios in the neighbourhood of $\zeta\approx1$, we adapt the area method in Eq.\,(\ref{eq:area-method}). Consider that the area under the $t$-axis is given by the function
\begin{equation}
\Xi(\zeta)=\int_{0}^{\infty}\mathrm{d}t\,\left\{ \mathrm{\Lambda}(\zeta,t)\,\left[1-H(\mathrm{\Lambda}(\zeta,t))\right]\right\}
\end{equation}
where $\Lambda(\zeta,t)$ satisfies the normalised harmonic oscillator equation
\begin{equation}
\frac{\mathrm{d}^{2}\Lambda}{\mathrm{d}t^{2}}+2\zeta\frac{\mathrm{d}\Lambda}{\mathrm{d}t}+\Lambda=0\quad\mathrm{subject\ to\ }\Lambda(\zeta,0)=1,\ \frac{\mathrm{d}\Lambda}{\mathrm{d}t}(\zeta,0)=0.
\end{equation}
The generalised (numerical) damping ratio for $\zeta\leqslant1$ can then be obtained by the inverse operation
\begin{equation}
\zeta=\Xi^{-1}(X),
\end{equation}
where $X$ is the area under the $t$-axis of a generalised system.
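For reference, in the underdamped range the normalised oscillator above admits the standard closed form
\begin{equation}
\Lambda(\zeta,t)=\mathrm{e}^{-\zeta t}\left(\cos\omega_{d}t+\frac{\zeta}{\omega_{d}}\sin\omega_{d}t\right),\qquad \omega_{d}=(1-\zeta^{2})^{1/2},
\end{equation}
which satisfies $\Lambda(\zeta,t+\pi/\omega_{d})=-\mathrm{e}^{-\zeta\pi/\omega_{d}}\Lambda(\zeta,t)$, so that successive lobes of $\Lambda$ below the axis shrink geometrically; $\Xi(\zeta)$ is therefore a finite integral that can be tabulated by quadrature and inverted numerically (by bisection, for instance).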
This numerical method agrees well with logarithmic decrement and fractional overshoot schemes in the relevant underdamped regimes. In cases where the higher-order system can readily be approximated locally by a second-order system, we note that its dominant poles have a larger residue and time constant $t_c=1/(\zeta_n\omega_n)$ relative to the other poles, where $\zeta_n$ and $\omega_n$ are the damping ratio and frequency associated with each pole. In cases that are not clear cut, i.e. where all the poles are closer together with $t_c$ and residues of a similar magnitude, the numerical definition of the damping ratio above only provides an estimate of the true damping ratio and we need to examine the poles in more detail. To encode such information into a convenient form, we construct the \textit{minimal pole matrix} $\{\zeta(\lambda)\}$. To decompose the system, we first consider the minimal realisation of the transfer function. For a general system, let $\Theta_{1}=\{q_{i}\in\mathbb{C}:\ \mathrm{Q}(-q_{i})=0\}$ and $\Theta_{2}=\{p_{j}\in\mathbb{C}:\mathrm{P}(-p_{j})=0\}$ be the sets of poles and zeros of the transfer function $\mathrm{tf}(s)\equiv\mathrm{P}(s)/\mathrm{Q}(s)$, which can be written in the form
\begin{equation}
\mathrm{tf}(s) = \prod_{q_{i}\in\Theta_{1},p_{j}\in\Theta_{2}}\left(\frac{s+p_{j}}{s+q_{i}}\right).
\end{equation}
Applying the pole-zero cancellation procedure, we obtain the minimal realisation of the transfer function, henceforth known as the \textit{minimal transfer function} \textit{$\mathrm{mtf}(s)$}, defined by
\begin{eqnarray}
\mathrm{mtf}(s,\vartheta) & = & \frac{\mathrm{P}_{m}(s)}{\mathrm{Q}_{m}(s)}\\
& = & \sum_{q_{i,m}\in\Theta_{1,m}(\vartheta)}\frac{\mathrm{res}(-q_{i,m})}{s+q_{i,m}}
\end{eqnarray}
where the polynomials $\mathrm{P}_m(s)$ and $\mathrm{Q}_m(s)$ satisfy
\begin{equation}
\deg\mathrm{Q}_{m}+\deg\mathrm{P}_{m}\leqslant\deg\mathrm{Q}+\deg\mathrm{P}
\end{equation}
and $\Theta_{1,m}(\vartheta) \subseteq\Theta_{1}$ is the set of poles of $\mathrm{Q}$ with significant residues (tolerance of order $\vartheta$)
\begin{equation}
\Theta_{1,m}(\vartheta)=\left\{ q_{i,m}\in\mathbb{C}:\ \mathrm{Q}(-q_{i,m})=0,\ \frac{\left|\mathrm{res}(q_{i,m})\right|}{\max_{q_{i}\in\Theta_{1}}\left|\mathrm{res}(q_{i})\right|}= O(\vartheta)\right\}.
\end{equation}
Returning to the construction of the minimal pole matrix, we let the first column of the matrix illustrate the order of the poles of the minimal transfer function in dot form; the second column considers the relative magnitudes of their time constants $t_c$; the third column compares their relative residues; and the fourth column gives the damping ratios $\zeta_n$ associated with each pole. Furthermore, to the right of the line separator, we provide an estimated equivalent second-order damping ratio of the entire system using the numerical area method described previously. We say that a system is second-order dominant if one set of complex conjugate poles dominates the other poles (i.e. has the largest $t_c$ and residue). In diagrammatic form for $\vartheta=1$, we have
\begin{eqnarray}
\{\zeta(\lambda;\beta,\mathrm{B})\}=\left\{ \left.
\begin{array}{cccc}
\mathrm{pole} & \mathrm{relative} & \mathrm{relative} & \mathrm{associated }\\
\mathrm{type} & t_c & \mathrm{residue} & \zeta_n \\
\{\bullet,\bullet\bullet\} & (0,10) & (0,10) & [0,\infty)
\end{array}\right|
\begin{array}{c}
\mathrm{numerical }\\
\mathrm{\zeta}\\{}
[0,\infty)
\end{array}\right\}
\end{eqnarray}
For example, for $\lambda=6.22\lambda_c^\star$ at rtp., the clean case for water exhibits the following minimal pole matrix
\begin{eqnarray}
{\scriptstyle \left\{ \left.\begin{array}{cccc}
\bullet\bullet & 1 & 1 & 0.75\\
\bullet\bullet & 1 & 2.30 & 0.45
\end{array}\right|{\textstyle 0.45}\right\} },
\end{eqnarray}
from which we observe that the system is second-order dominant, since the set of complex conjugate poles with associated damping ratio 0.45 dominates (in the sense of residue) the other and, thus, we can deduce that the true damping ratio of the system is close to the approximate second-order value. We take this analysis to the region near critical damping, i.e. for $\zeta\rightarrow1^{+}$ and $\zeta\rightarrow1^{-}$. In the clean case, we take the analytical result $\lambda_{c}^{\star}=2\pi l_{\mathrm{vc}}/\epsilon^{\star2}$ to be the definition of the critical (damping) wavelength. The relevant minimal pole matrices take the form
\begin{eqnarray}
\{\zeta(\lambda\rightarrow\lambda_{c}^{\star+})\} & ={\scriptstyle \left\{ \begin{array}{cccc}
\bullet\bullet & 1 & 1 & 1\\
\bullet\bullet & \ \,1^{-} & \ \,1^{+} & < 1
\end{array}\right\} },\\
\{\zeta(\lambda\rightarrow\lambda_{c}^{\star-})\} & ={\scriptstyle \left\{ \begin{array}{cccc}
\bullet & \ \, 1^{-} & 1 & 1\\
\bullet & 1 & \ \,1^{-} & 1
\end{array}\right\} .}
\end{eqnarray}
We can see that this transition from $\lambda_{c}^{\star+}\rightarrow\lambda_{c}^{\star-}$ for the clean case involves a transformation of the complex-conjugate pair into two separate first-order poles. In diagrammatic form, we have
\begin{equation}
\begin{array}{c}
\bullet\bullet\\
\bullet\bullet
\end{array}\rightarrow\begin{array}{c}
\bullet\\
\bullet\\
\end{array},
\end{equation}
i.e. a $2^{2}$ to $1^{2}$ transition. Extending to contaminated systems, we summarise the results of the minimal pole matrices of the system at the critical transition in table 1, \cchanges{where the notation $1^a2^b$ denotes a system with $a$ first-order and $b$ second-order poles with significant relative residue.
}
\begin{table}
\begin{centering}
\begin{tabular}{cccccccc}
 & $\beta=\mathrm{B}=0$ & & $\beta>0,\,\mathrm{B}=0$ & & $\beta=0,\,\mathrm{B}>0$ & & $\beta,\mathrm{B}>0$\tabularnewline
 & $\begin{array}{c}
\bullet\bullet\\
\bullet\bullet
\end{array}\rightarrow\begin{array}{c}
\bullet\\
\bullet
\end{array}$ & & $\begin{array}{c}
\bullet\bullet\\
\bullet\bullet
\end{array}\rightarrow\begin{array}{c}
\bullet\\
\bullet
\end{array}$ & & $\begin{array}{c}
\bullet\\
\bullet\bullet\\
\bullet\bullet
\end{array}\rightarrow\begin{array}{c}
\bullet\\
\bullet
\end{array}$ & & $\begin{array}{c}
\bullet\\
\bullet\bullet\\
\bullet\bullet
\end{array}\rightarrow\begin{array}{c}
\bullet\\
\bullet
\end{array}$\tabularnewline
 & $2^{2}\rightarrow1^{2}$ & & $2^{2}\rightarrow1^{2}$ & & $1^{1}2^{2}\rightarrow1^{2}$ & & $1^{1}2^{2}\rightarrow1^{2}$\tabularnewline
\end{tabular}
\par\end{centering}
\caption{Schematic of poles at the transition from $\lambda_{c}^{\star+}\rightarrow\lambda_{c}^{\star-}$ for water at room temperature and pressure.}
\end{table}
We note that the effect of surface viscosity is to introduce an extra first-order pole (with unit damping ratio) to the system, while the Marangoni effect does not change the pole composition for the underdamped region before the critical damping transition. Moreover, we observe at this critical damping transition that the system enters the overdamped regime if its minimal pole matrix is of the $1^2$-type, irrespective of its type in the underdamped regime prior to the transition. Henceforth, we shall define a generalised higher-order (capillary) wave to be in overall overdamped motion if its minimal pole matrix is of the $1^2$-type. Using this definition of the overdamped regime for a generalised higher-order system, we determine numerically, in figure 3, the corrections to the critical wavelength for increasing surface viscosity (through $\mathrm{B}$), surfactant concentration (through $\beta$) and bulk viscosity \cchanges{(through $\theta$, where
\begin{equation}
\nu=(1+\theta)\nu_0
\end{equation}
)}. We note that the curves denoting surface viscosity and the Marangoni effect intersect near $\mathrm{B}\simeq 0.65$ and that, while the Marangoni curve is roughly exponential, the surface viscosity curve is the sum of two exponential functions. Moreover, we observe that a small amount of surface viscosity has an amplified effect on the system: the 7-fold increase in the critical wavelength at $\mathrm{B}=1$ results in very different dynamics and mechanisms, as the sub-100\,nm regime brings forward the possibility of long-range molecular interactions alongside the hydrodynamics. Also of consideration is the proximity of the critical wavelength to the wavelengths of the visible spectrum of light, which allows thin-film behaviours to be captured by light-scattering methods. Hence the presence of surfactants could determine whether or not the system lies within the range accessible to interferometry techniques.
\begin{figure}
\begin{center}
\includegraphics[width=13cm]{fig3.eps}
\end{center}
\caption{\cchanges{Independent} corrections to the critical wavelength for increasing surface viscosity (via $\mathrm{B}$, \cchanges{with $\theta=\beta=0$}), bulk viscosity (via $\theta$, where $\nu/\nu_0=1+\theta$ \cchanges{and $\beta=\mathrm{B}=0$}) and Marangoni effect (through $\beta$\cchanges{, with $\theta=\mathrm{B}=0$}).
} \end{figure} Comparisons of the range under consideration with experimental and computational results on surface viscosity can be made using values reported previously in the literature. Experimentally, \citet{Kanner1969} summarises (in their table 2) the surface viscosities for both surfactant and polymeric films, where, for a dilute amount of sodium lauryl sulfate and polydimethylsiloxane in particular, the surface viscosity corresponds to a lower bound of $\mathrm{B}=O(1)$ for $k= O(10^6)$. A similar correspondence can be found with the upper-bound surface \cchanges{shear} viscosity of $O(10^{-8}\mathrm{Nsm^{-1}})$ found in \citet{Zell2014} for soluble surfactants. \cchanges{We note that this measurement does not include surface dilatational viscosity and so corresponds to the case \citep{Lucassen1966} where diffusional transport between the surface and the bulk is neglected, and assumes that the bulk viscosity and density are constant right up to the interface.} More recently, \citet{Gounley2016} characterises the influence of \cchanges{both shear and dilatational} surface viscosity on droplets in shear flow in the range $\mathrm{B}=O(10^0)$ to $O(10^1)$ for a range of capillary numbers. Beyond the cmc value, the Marangoni effect should in principle have no overall contribution to the capillary wave; the dotted curve in figure 3 would end abruptly at the cmc value. The combination of surface viscosity with the Marangoni effect is, however, not straightforward; as in previous experimental studies \citep{Brown1953, Kanner1969}, surface viscosity also appears to alter the ability of the Marangoni effect to lower surface tension. It would therefore be fruitful in a future contribution to investigate this surfactant interference mechanism through a more systematic experimental and theoretical study. In particular, a numerical approach similar to that of \citet{Sinclair2018} could include the usage of a nonlinear equation of state for the surface tension coefficient $\sigma$. The effect of the deviation from the linear equation of state on the amplitude of the capillary wave would aid the analysis near the cmc value of the surfactant solution. } \section{Conclusion} In this work, the surface viscosity effect has been incorporated into the integro-differential initial value problem describing the wave dynamics of small-amplitude capillary waves via the Boussinesq-Scriven surface model. We have shown that, particularly at lengthscales close to the critical damping wavelength, a very small amount of surface viscosity can dramatically increase the critical wavelength of the capillary waves, \cchanges{in contrast with the Marangoni effect, which becomes prominent at larger wavelengths.} In view of the important role that capillary waves play in inducing the rupture process of thin films \citep{Aarts2004}, we anticipate the various interfacial phenomena controlling the wave dynamics at these minute lengthscales to contribute towards the understanding of the stability of foams with non-trivial surface viscosity. \changes{In particular, the correction of the critical wavelength due to surface viscosity and Marangoni effects, which we summarised in figure 3 using numerical methods, is bound to alter the onset of fluid instabilities for very thin liquid films. It is also useful towards the optimisation of additives to achieve the desired increases in the critical wavelength. 
Finally, we expect the concept of a critical damping wavelength and its correction by surface material to be useful for a number of further general interfacial phenomena, such as the onset of thin-film quasi-elastic wrinkling and Faraday-like instabilities at the same lengthscale.} \begin{acknowledgements} The authors acknowledge the financial support of the Shell University Technology Centre for fuels and lubricants and the Engineering and Physical Sciences Research Council (EPSRC) through grants EP/M021556/1 and EP/N025954/1. \end{acknowledgements}
\section{Introduction} \label{sec:intro} \begin{sloppypar} Sample compression is a natural learning strategy, whereby the learner seeks to retain a small subset of the training examples, which (if successful) may then be decoded as a hypothesis with low empirical error. Overfitting is controlled by the size of this learner-selected ``compression set''. Part of a more general {\em Occam learning} paradigm, such results are commonly summarized by ``compression implies learning''. A fundamental question, posed by \citet{Littlestone86relatingdata}, concerns the reverse implication: Can every learner be converted into a sample compression scheme? Or, in a more quantitative formulation: Does every VC class admit a constant-size sample compression scheme? A series of partial results \citep{floyd1989space,helmbold1992learning,DBLP:journals/ml/FloydW95,ben1998combinatorial,DBLP:journals/jmlr/KuzminW07,DBLP:journals/jcss/RubinsteinBR09,DBLP:journals/jmlr/RubinsteinR12,MR3047077,livni2013honest,moran2017teaching} culminated in \citet{DBLP:journals/jacm/MoranY16} which resolved the latter question\footnote{ The refined conjecture of \citet{Littlestone86relatingdata}, that any concept class with VC-dimension $d$ admits a compression scheme of size $O(d)$, remains open.}. \end{sloppypar} \citeauthor{DBLP:journals/jacm/MoranY16}'s solution involved a clever use of von Neumann's minimax theorem, which allows one to make the leap from the existence of a weak learner uniformly over all {\em distributions on examples} to the existence of a {\em distribution on weak hypotheses} under which they achieve a certain performance simultaneously over all of the examples. Although their paper can be understood without any knowledge of boosting, \citeauthor{DBLP:journals/jacm/MoranY16} note the well-known connection between boosting and compression. Indeed, boosting may be used to obtain a constructive proof of the minimax theorem \citep{freund1996game,MR1729311} --- and this connection was what motivated us to seek an efficient algorithm implementing \citeauthor{DBLP:journals/jacm/MoranY16}'s existence proof. Having obtained an efficient conversion procedure from consistent PAC learners to bounded-size sample compression schemes, we turned our attention to the case of real-valued hypotheses. It turned out that a virtually identical boosting framework could be made to work for this case as well, although a novel analysis was required. \paragraph{Our contribution.} \begin{sloppypar} Our point of departure is the simple but powerful observation \citep{MR2920188} that many boosting algorithms (e.g., AdaBoost, $\alpha$-Boost) are capable of outputting a family of $O(\log(m)/\gamma^2)$ hypotheses such that not only does their (weighted) majority vote yield a sample-consistent classifier, but in fact a $\approx(\frac12+\gamma)$ super-majority does as well. This fact implies that after boosting, we can sub-sample a constant (i.e., independent of sample size $m$) number of classifiers and thereby efficiently recover the sample compression bounds of \citet{DBLP:journals/jacm/MoranY16}. \end{sloppypar} Our chief technical contribution, however, is in the real-valued case. As we discuss below, extending the boosting framework from classification to regression presents a host of technical challenges, and there is currently no off-the-shelf general-purpose analogue of AdaBoost for real-valued hypotheses. 
One of our insights is to impose distinct error metrics on the weak and strong learners: a ``stronger'' one on the latter and a ``weaker'' one on the former. This allows us to achieve two goals simultaneously: \begin{itemize} \item[(a)] We give apparently the first generic construction for our weak learner, demonstrating that the object is natural and abundantly available. This is in contrast with many previously proposed weak regressors, whose stringent or exotic definitions made them unwieldy to construct or verify as such. The construction is novel and may be of independent interest. \item[(b)] We show that the output of a certain real-valued boosting algorithm may be sparsified so as to yield a constant-size sample compression analogue of the \citeauthor{DBLP:journals/jacm/MoranY16} result for classification. This gives the first general constant-size sample compression scheme having uniform approximation guarantees on the data. \end{itemize} \section{Definitions and notation} \label{sec:defnot} We will write $[k]:=\set{1,\ldots,k}$. An {\em instance space} is an abstract set $\mathcal{X}$. For a concept class $\mathcal{C}\subset\set{0,1}^\mathcal{X}$, we say that $\mathcal{C}$ {\em shatters} a set $S=\set{x_1,\ldots,x_k}\subset\mathcal{X}$ if $$\mathcal{C}(S) = \set{ (f(x_1),f(x_2),\ldots,f(x_k)) : f\in\mathcal{C}} = \set{0,1}^k . $$ The VC-dimension $d=d_\mathcal{C}$ of $\mathcal{C}$ is the size of the largest shattered set (or $\infty$ if $\mathcal{C}$ shatters sets of arbitrary size) \citep{MR0288823}. When the roles of $\mathcal{X}$ and $\mathcal{C}$ are exchanged --- that is, an $x\in\mathcal{X}$ acts on $f\in\mathcal{C}$ via $x(f)=f(x)$ --- we refer to $\mathcal{X}=\mathcal{C}^*$ as the {\em dual} class of $\mathcal{C}$. Its VC-dimension is then $d^*=d^*_\mathcal{C}:=d_{\mathcal{C}^*}$, and referred to as the \emph{dual VC dimension}. \citet{MR723955} showed that $d^*\le2^{d+1}$. For $\mathcal{F}\subset\mathbb{R}^\mathcal{X}$ and $t>0$, we say that $\mathcal{F}$ $t$-shatters a set $S=\set{x_1,\ldots,x_k}\subset\mathcal{X}$ if $$\mathcal{F}(S) = \set{ (f(x_1),f(x_2),\ldots,f(x_k)) : f\in\mathcal{F}}\subseteq\mathbb{R}^k$$ contains the translated cube $\set{-t,t}^k+r$ for some $r\in\mathbb{R}^k$. The $t$-fat-shattering dimension $d(t)=d_\mathcal{F}(t)$ is the size of the largest $t$-shattered set (possibly $\infty$) \citep{alon97scalesensitive}. Again, the roles of $\mathcal{X}$ and $\mathcal{F}$ may be switched, in which case $\mathcal{X}=\mathcal{F}^*$ becomes the dual class of $\mathcal{F}$. Its $t$-fat-shattering dimension is then $d^*(t)$, and \citeauthor{MR723955}'s argument shows that $d^*(t)\le2^{d(t)+1}$. A {\em sample compression scheme} $(\kappa,\rho)$ for a hypothesis class $\mathcal{F}\subset\mathcal{Y}^\mathcal{X}$ is defined as follows. A $k$-{\em compression} function $\kappa$ maps sequences $((x_1,y_1),\ldots,(x_m,y_m))\in\bigcup_{\ell\ge1}(\mathcal{X}\times\mathcal{Y})^\ell$ to elements in $\mathcal{K}=\bigcup_{\ell\le k'}(\mathcal{X}\times\mathcal{Y})^\ell \times \bigcup_{\ell\le k''}\set{0,1}^\ell$, where $k'+k''\le k$. A {\em reconstruction} is a function $\rho:\mathcal{K}\to\mathcal{Y}^\mathcal{X}$. 
We say that $(\kappa,\rho)$ is a $k$-size sample compression scheme for $\mathcal{F}$ if $\kappa$ is a $k$-compression and for all $h^*\in\mathcal{F}$ and all $S=((x_1,h^{*}(x_1)),\ldots,(x_m,h^{*}(x_m)))$, we have $\hat h:=\rho(\kappa(S))$ satisfies $\hat h(x_i)=h^*(x_i)$ for all $i\in[m]$. For real-valued functions, we say it is a \emph{uniformly $\varepsilon$-approximate} compression scheme if \[ \max_{1 \leq i \leq m} | \hat{h}(x_i) - h^*(x_i) | \leq \varepsilon. \] \section{Main results} \label{sec:main-res} Throughout the paper, we implicitly assume that all hypothesis classes are {\em admissible} in the sense of satisfying mild measure-theoretic conditions, such as those specified in \citet[Section 10.3.1]{MR876079} or \citet[Appendix C]{pollard84}. We begin with an algorithmically efficient version of the learner-to-compression scheme conversion in \citet{DBLP:journals/jacm/MoranY16}: \begin{theorem}[Efficient compression for classification] \label{thm:classification} Let $\mathcal{C}$ be a concept class over some instance space $\mathcal{X}$ with VC-dimension $d$, dual VC-dimension $d^*$, and suppose that $\mathcal{A}$ is a (proper, consistent) PAC-learner for $\mathcal{C}$: For all $0<\varepsilon,\delta<1/2$, all $f^*\in\mathcal{C}$, and all distributions $D$ over $\mathcal{X}$, if $\mathcal{A}$ receives $m\ge m_\mathcal{C}(\varepsilon,\delta) $ points $S=\set{x_i}$ drawn iid from $D$ and labeled with $y_i=f^*(x_i)$, then $\mathcal{A}$ outputs an $\hat f\in\mathcal{C}$ such that \begin{eqnarray*} \P_{S\sim D^m}\paren{ \P_{X\sim D}\paren{\hat f(X)\neq f^*(X)\, | \, S} >\varepsilon}<\delta. \end{eqnarray*} For every such $\mathcal{A}$, there is a randomized sample compression scheme for $\mathcal{C}$ of size $O(k\log k)$, where $k=O(dd^*)$. Furthermore, on a sample of any size $m$, the compression set may be computed in expected time \begin{eqnarray*} O\paren{ (m + T_{\calA}(cd))\log m + m T_{\calE}(cd) (d^* + \log m) }, \end{eqnarray*} where $T_{\calA}(\ell)$ is the runtime of $\mathcal{A}$ to compute $\hat f$ on a sample of size $\ell$, $T_{\calE}(\ell)$ is the runtime required to evaluate $\hat f$ on a single $x\in\mathcal{X}$, and $c$ is a universal constant. \end{theorem} Although for our purposes the existence of a distribution-free sample complexity $m_\mathcal{C}$ is more important than its concrete form, we may take $ m_\mathcal{C}(\varepsilon,\delta)= O(\frac{d}{\varepsilon}\log\frac{1}{\varepsilon} + \frac{1}{\varepsilon}\log\frac{1}{\delta})$ \citep{MR0474638,MR1072253}, known to bound the sample complexity of empirical risk minimization; indeed, this loses no generality, as there is a well-known efficient reduction from empirical risk minimization to any proper learner having a polynomial sample complexity \citep{pitt1988computational,haussler1991equivalence}. We allow the evaluation time of $\hat f$ to depend on the size of the training sample in order to account for non-parametric learners, such as nearest-neighbor classifiers. A naive implementation of the \citet{DBLP:journals/jacm/MoranY16} existence proof yields a runtime of order $ m^{cd}T_{\calA}(c'd) + m^{cd^*}$ (for some universal constants $c,c'$), which can be doubly exponential when $d^*=2^d$; this is without taking into account the cost of computing the minimax distribution on the $m^{cd}\times m$ game matrix. 
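Before moving on, we illustrate the definition of a sample compression scheme on a deliberately simple class. The following sketch is our toy example, not the scheme constructed in the proof: a size-one scheme for one-dimensional thresholds $f_\theta(x)=\mathbb{I}[x\ge\theta]$, with one side-information bit flagging an all-negative sample.
\begin{verbatim}
# Toy size-1 sample compression scheme for thresholds f(x) = 1[x >= theta].
# kappa keeps the smallest positive example plus one side-information bit;
# rho reconstructs a sample-consistent threshold hypothesis from them.

def kappa(sample):                        # sample: list of (x, y), y in {0,1}
    positives = [x for x, y in sample if y == 1]
    if positives:
        return [(min(positives), 1)], [1]     # compression set, side bits
    return [], [0]                            # all labels were 0

def rho(compressed):
    points, bits = compressed
    if bits == [0]:
        return lambda x: 0                    # constant-0 hypothesis
    theta = points[0][0]
    return lambda x: int(x >= theta)

sample = [(0.2, 0), (0.7, 1), (0.9, 1), (0.1, 0)]
h_hat = rho(kappa(sample))
assert all(h_hat(x) == y for x, y in sample)  # consistency on the sample
\end{verbatim}
Every negative example lies below the true threshold, which in turn is at most the smallest positive example, so the reconstructed hypothesis is consistent with the whole sample while retaining a single labeled point.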
Next, we extend the result in Theorem~\ref{thm:classification} from classification to regression: \begin{theorem}[Efficient compression for regression] \label{thm:regression} Let $\mathcal{F}\subset[0,1]^\mathcal{X}$ be a function class with $t$-fat-shattering dimension $d(t)$, dual $t$-fat-shattering dimension $d^*(t)$, and suppose that $\mathcal{A}$ is an ERM (i.e., proper, consistent) learner for $\mathcal{F}$: For all $f^*\in\mathcal{F}$, and all distributions $D$ over $\mathcal{X}$, if $\mathcal{A}$ receives $m$ points $S=\set{x_i}$ drawn iid from $D$ and labeled with $y_i=f^*(x_i)$, then $\mathcal{A}$ outputs an $\hat f\in\mathcal{F}$ such that $\max_{i\in[m]}\abs{\hat f(x_i)-f^*(x_i)}=0$. For every such $\mathcal{A}$, there is a randomized uniformly $\varepsilon$-approximate sample compression scheme for $\mathcal{F}$ of size $O(k\tilde{m} \log( k \tilde{m} ) )$, where $\tilde m=O(d(c\varepsilon)\log(1/\varepsilon))$ and $k=O(d^*(c\varepsilon)\log(d^*(c\varepsilon)/\varepsilon))$. Furthermore, on a sample of any size $m$, the compression set may be computed in expected time \begin{eqnarray*} O(m T_{\calE}(\tilde m) (k + \log m) + T_{\calA}(\tilde m)\log(m)), \end{eqnarray*} where $T_{\calA}(\ell)$ is the runtime of $\mathcal{A}$ to compute $\hat f$ on a sample of size $\ell$, $T_{\calE}(\ell)$ is the runtime required to evaluate $\hat f$ on a single $x\in\mathcal{X}$, and $c$ is a universal constant. \end{theorem} A key component in the above result is our construction of a generic $(\eta,\gamma)$-weak learner. \begin{definition} \label{def:weak} For $\eta \in [0,1]$ and $\gamma \in [0,1/2]$, we say that $f:\mathcal{X}\to\mathbb{R}$ is an \emph{$(\eta,\gamma)$-weak hypothesis} (with respect to distribution $D$ and target $f^*\in\mathcal{F}$) if $$\P_{X\sim D}(|f(X)-f^*(X)|>\eta)\le \frac12-\gamma.$$ \end{definition} \begin{theorem}[Generic weak learner] \label{thm:gen-weak-learn} Let $\mathcal{F}\subset[0,1]^\mathcal{X}$ be a function class with $t$-fat-shattering dimension $d(t)$. For some universal numerical constants $c_{1},c_{2},c_{3} \in (0,\infty)$, for any $\eta,\delta \in (0,1)$ and $\gamma \in (0,1/4)$, any $f^*\in\mathcal{F}$, and any distribution $D$, letting $X_{1},\ldots,X_{m}$ be drawn iid from $D$, where \begin{equation*} m = \left\lceil c_{1} \left( d(c_{2} \eta ) \ln\!\left( \frac{c_{3}}{\eta} \right) + \ln\!\left(\frac{1}{\delta}\right) \right) \right\rceil, \end{equation*} with probability at least $1-\delta$, every $f \in \mathcal{F}$ with $ \max_{i\in[m]} |f(X_{i}) - f^{*}(X_{i})| = 0$ is an $(\eta,\gamma)$-weak hypothesis with respect to $D$ and $f^*$. \end{theorem} In fact, our results would also allow us to use any hypothesis $f \in \mathcal{F}$ with $\max_{i \in [m]} |f(X_{i}) - f^{*}(X_{i})|$ bounded below $\eta$: for instance, bounded by $\eta / 2$. This can then also be plugged into the construction of the compression scheme, and this criterion can be used in place of consistency in Theorem~\ref{thm:regression}. In Sections~\ref{sec:NN} and \ref{sec:BV} we give applications to sample compression for nearest-neighbor and bounded-variation regression. \section{Related work} \label{sec:rel-work} It appears that generalization bounds based on sample compression were independently discovered by \citet{Littlestone86relatingdata} and \citet{MR1383093} and further elaborated upon by \citet{DBLP:journals/ml/GraepelHS05}; see \citet{DBLP:journals/ml/FloydW95} for background and discussion. A more general kind of Occam learning was discussed in \citet{MR1072253}. 
Computational lower bounds on sample compression were obtained in \citet{gottlieb2014near}, and some communication-based lower bounds were given in \citet{DBLP:journals/corr/abs-1711-05893}. \begin{sloppypar} Beginning with \citet{FreundSchapire97}'s \algname{AdaBoost.R} algorithm, there have been numerous attempts to extend AdaBoost to the real-valued case \citep{bertoni1997boosting,Drucker:1997:IRU:645526.657132,avnimelech1999boosting,Karakoulas00,DuffyHelmbold02,kegl2003robust,NOCK200725} along with various theoretical and heuristic constructions of particular weak regressors \citep{Mason1999,MR1873328,DBLP:journals/ml/MannorM02}; see also the survey \citet{Mendes-Moreira2012}. \end{sloppypar} \citet[Remark 2.1]{DuffyHelmbold02} spell out a central technical challenge: no boosting algorithm can ``always force the base regressor to output a useful function by simply modifying the distribution over the sample''. This is because unlike a binary classifier, which localizes errors on specific examples, a real-valued hypothesis can spread its error evenly over the entire sample, and it will not be affected by reweighting. The $(\eta,\gamma)$-weak learner, which has appeared, among other works, in \citet{DBLP:journals/cpc/AnthonyBIS96,DBLP:journals/siamcomp/Simon97,avnimelech1999boosting,kegl2003robust}, gets around this difficulty --- but provable general constructions of such learners have been lacking. Likewise, the heart of our sample compression engine, \algname{MedBoost}, has been in wide use since \citet{FreundSchapire97} in various guises. Our Theorem~\ref{thm:gen-weak-learn} supplies the remaining piece of the puzzle: {\em any} sample-consistent regressor applied to some random sample of bounded size yields an $(\eta,\gamma)$-weak hypothesis. The closest analogue we were able to find was \citet[Theorem 3]{DBLP:journals/cpc/AnthonyBIS96}, which is non-trivial only for function classes with finite pseudo-dimension, and is inapplicable, e.g., to classes of $1$-Lipschitz or bounded variation functions. The literature on general sample compression schemes for real-valued functions is quite sparse. There are well-known narrowly tailored results on specifying functions or approximate versions of functions using a finite number of points, such as the classical fact that a polynomial of degree $p$ can be perfectly recovered from $p+1$ points. To our knowledge, the only \emph{general} result on sample compression for real-valued functions (applicable to \emph{all} learnable function classes) is Theorem 4.3 of \citet*{david2016supervised}. They propose a general technique to convert any learning algorithm achieving an arbitrary sample complexity $M(\varepsilon,\delta)$ into a compression scheme of size $O(M(\varepsilon,\delta) \log(M(\varepsilon,\delta)))$, where $\delta$ may approach $1$. However, their notion of compression scheme is significantly weaker than ours: namely, they allow $\hat{h} = \rho(\kappa(S))$ to satisfy merely $\frac{1}{m} \sum_{i=1}^{m} | \hat{h}(x_i) - h^{*}(x_i) | \leq \varepsilon$, rather than our \emph{uniform} $\varepsilon$-approximation requirement $\max_{1 \leq i \leq m} | \hat{h}(x_i) - h^{*}(x_i) | \leq \varepsilon$. In particular, in the special case of $\mathcal{F}$ a family of \emph{binary}-valued functions, their notion of sample compression does \emph{not} recover the usual notion of sample compression schemes for classification, whereas our uniform $\varepsilon$-approximate compression notion \emph{does} recover it as a special case. 
We therefore consider our notion to be a more fitting generalization of the definition of sample compression to the real-valued case. \section{Boosting Real-Valued Functions} \label{subsec:real-boosting} As mentioned above, the notion of a \emph{weak learner} for learning real-valued functions must be formulated carefully. A na\"{i}ve choice, such as any learner guaranteeing absolute loss at most $\frac{1}{2}-\gamma$, is known not to be strong enough to enable boosting to $\varepsilon$ loss. However, if we make the requirement too strong, such as in \citet{FreundSchapire97} for {\tt AdaBoost.R}, then the sample complexity of weak learning will be so high that weak learners cannot be expected to exist for large classes of functions. By contrast, our Definition~\ref{def:weak}, which has been proposed independently by \citet{DBLP:journals/siamcomp/Simon97} and \citet{kegl2003robust}, appears to yield the appropriate notion of \emph{weak learner} for boosting real-valued functions. In the context of boosting for real-valued functions, the notion of an $(\eta,\gamma)$-weak hypothesis plays a role analogous to the usual notion of a weak hypothesis in boosting for classification. Specifically, the following boosting algorithm was proposed by \citet{kegl2003robust}. As it will be convenient for our later results, we express its output as a sequence of functions and weights; the boosting guarantee from \citet{kegl2003robust} applies to the weighted quantiles (and in particular, the weighted median) of these function values. \RestyleAlgo{ruled} \begin{algorithm}[h] \label{alg:medboost} \SetAlgoLined \caption{\algname{MedBoost}($\{(x_i,y_i)\}_{i\in[m]}$,$T$,$\gamma$,$\eta$)} \begin{algorithmic}[1] \STATE Define $P_{1}$ as the uniform distribution over $\{1,\ldots,m\}$ \FOR {$t=1,\ldots,T$} \STATE Call the weak learner to get $h_{t}$, an $(\eta/2,\gamma)$-weak hypothesis with respect to $(x_{i},y_{i})\! :\! i \!\sim\! P_{t}$\\ (repeat until it succeeds) \FOR {$i = 1,\ldots,m$} \STATE $\theta_{i}^{(t)} \gets 1 - 2 \mathbb{I}[ |h_{t}(x_{i}) - y_{i}| > \eta/2 ]$ \ENDFOR \STATE $\alpha_{t} \gets \frac{1}{2} \ln\!\left( \frac{ (1-\gamma) \sum_{i=1}^{m} P_{t}(i) \mathbb{I}[ \theta_{i}^{(t)} = 1 ] }{ (1+\gamma) \sum_{i=1}^{m} P_{t}(i) \mathbb{I}[ \theta_{i}^{(t)} = -1 ] } \right)$ \IF {$\alpha_{t} = \infty$} \STATE Return $T$ copies of $h_{t}$, and $(1,\ldots,1)$ \ENDIF \FOR {$i = 1,\ldots,m$} \STATE $P_{t+1}(i) \gets P_{t}(i) \frac{\exp\{-\alpha_{t}\theta_{i}^{(t)}\}}{\sum_{j=1}^{m} P_{t}(j) \exp\{-\alpha_{t}\theta_{j}^{(t)}\}}$ \ENDFOR \ENDFOR \STATE Return $(h_{1},\ldots,h_{T})$ and $(\alpha_{1},\ldots,\alpha_{T})$ \end{algorithmic} \end{algorithm} \begin{sloppypar} Here we define the weighted median as \begin{equation*} {\rm Median}(y_{1},\ldots,y_{T};\alpha_{1},\ldots,\alpha_{T}) = \min\!\left\{ y_{j} : \frac{\sum_{t=1}^{T} \alpha_{t} \mathbb{I}[ y_{j} < y_{t} ] }{\sum_{t=1}^{T} \alpha_{t}} < \frac{1}{2} \right\}. 
\end{equation*} Also define the weighted \emph{quantiles}, for $\gamma \in [0,1/2]$, as \begin{align*} Q_{\gamma}^{+}(y_{1},\ldots,y_{T};\alpha_{1},\ldots,\alpha_{T}) & = \min\!\left\{ y_{j} : \frac{\sum_{t=1}^{T} \alpha_{t} \mathbb{I}[ y_{j} < y_{t} ] }{\sum_{t=1}^{T} \alpha_{t}} < \frac{1}{2} - \gamma \right\} \\ Q_{\gamma}^{-}(y_{1},\ldots,y_{T};\alpha_{1},\ldots,\alpha_{T}) & = \max\!\left\{ y_{j} : \frac{\sum_{t=1}^{T} \alpha_{t} \mathbb{I}[ y_{j} > y_{t} ] }{\sum_{t=1}^{T} \alpha_{t}} < \frac{1}{2} - \gamma \right\}, \end{align*} and abbreviate $Q_{\gamma}^{+}(x) = Q_{\gamma}^{+}(h_{1}(x),\ldots,h_{T}(x);\alpha_{1},\ldots,\alpha_{T})$ and $Q_{\gamma}^{-}(x) = Q_{\gamma}^{-}(h_{1}(x),\ldots,h_{T}(x);\alpha_{1},\ldots,\alpha_{T})$ for $h_{1},\ldots,h_{T}$ and $\alpha_{1},\ldots,\alpha_{T}$ the values returned by \algname{MedBoost}. \end{sloppypar} Then \citet{kegl2003robust} proves the following result. \begin{lemma} \label{lem:kegl} (\citet{kegl2003robust}) For a training set $Z = \{(x_{1},y_{1}),\ldots,(x_{m},y_{m})\}$ of size $m$, the return values of \algname{MedBoost} satisfy \begin{equation*} \frac{1}{m} \sum_{i=1}^{m} \mathbb{I}\!\left[ \max\!\left\{ \left| Q_{\gamma/2}^{+}(x_{i}) - y_{i} \right|, \left| Q_{\gamma/2}^{-}(x_{i}) - y_{i} \right| \right\} > \eta/2 \right] \leq \prod_{t=1}^{T} e^{\gamma \alpha_{t}} \sum_{i=1}^{m} P_{t}(i) e^{-\alpha_{t} \theta_{i}^{(t)}}. \end{equation*} \end{lemma} We note that, in the special case of binary classification, \algname{MedBoost} is closely related to the well-known AdaBoost algorithm \citep{FreundSchapire97}, and the above results correspond to a standard margin-based analysis of \citet{MR1673273}. For our purposes, we will need the following immediate corollary of this, which follows from plugging in the values of $\alpha_{t}$ and using the weak learning assumption, which implies $\sum_{i=1}^{m} P_{t}(i) \mathbb{I}[ \theta_{i}^{(t)} = 1 ] \geq \frac{1}{2} + \gamma$ for all $t$. \begin{corollary} \label{cor:kegl-T-size} For $T = \Theta\!\left(\frac{1}{\gamma^{2}} \ln(m)\right)$, every $i \in \{1,\ldots,m\}$ has \begin{equation*} \max\!\left\{ \left| Q_{\gamma/2}^{+}(x_{i}) - y_{i} \right|, \left| Q_{\gamma/2}^{-}(x_{i}) - y_{i} \right| \right\} \leq \eta/2. \end{equation*} \end{corollary} \section{The Sample Complexity of Learning Real-Valued Functions} \label{subsec:weak-learning} This section reveals our intention in choosing this notion of weak hypothesis, rather than using, say, an $\varepsilon$-good strong learner under absolute loss. In addition to being a strong enough notion for boosting to work, we show here that it is also a weak enough notion for the sample complexity of weak learning to be of reasonable size: namely, a size quantified by the fat-shattering dimension. This result is also relevant to an open question posed by \citet{DBLP:journals/siamcomp/Simon97}, who proved a lower bound for the sample complexity of finding an $(\eta,\gamma)$-weak hypothesis, expressed in terms of a related complexity measure, and asked whether a related upper bound might also hold. We establish a general upper bound here, witnessing the same dependence on the parameters $\eta$ and $\gamma$ as observed in Simon's lower bound (up to a log factor) aside from a difference in the key complexity measure appearing in the bounds. Define $\rho_{\eta}(f,g) = P_{2m}( x : |f(x) - g(x)| > \eta )$, where $P_{2m}$ is the empirical measure induced by $X_{1},\ldots,X_{2m}$ iid $P$-distributed random variables (the $m$ data points and $m$ ghost points). 
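For concreteness, the weighted median and quantile operators defined in the previous section admit a direct transcription into code. The following minimal Python sketch mirrors the displayed definitions verbatim; it is an $O(T^2)$ illustration written for clarity, not an optimised implementation.
\begin{verbatim}
# Minimal sketch of the weighted median and gamma-quantiles defined above.
def weighted_median(ys, alphas):
    total = sum(alphas)
    return min(yj for yj in ys
               if sum(a for yt, a in zip(ys, alphas) if yj < yt) / total
               < 0.5)

def q_plus(ys, alphas, gamma):
    total = sum(alphas)
    return min(yj for yj in ys
               if sum(a for yt, a in zip(ys, alphas) if yj < yt) / total
               < 0.5 - gamma)

def q_minus(ys, alphas, gamma):
    total = sum(alphas)
    return max(yj for yj in ys
               if sum(a for yt, a in zip(ys, alphas) if yj > yt) / total
               < 0.5 - gamma)
\end{verbatim}
For example, with equal weights and values $(1,2,3)$, \texttt{weighted\_median} returns the usual median $2$; as $\gamma$ approaches $1/2$, the quantiles $Q^{+}_{\gamma}$ and $Q^{-}_{\gamma}$ move out to the maximum and minimum of the values, respectively.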
Define $N_{\eta}(\beta)$ as the $\beta$-covering numbers of $\mathcal{F}$ under the $\rho_{\eta}$ pseudo-metric. \begin{theorem} \label{thm:weak-learning-bound} Fix any $\eta,\beta \in (0,1)$, $\alpha \in [0,1)$, and $m \in \mathbb{N}$. For $X_{1},\ldots,X_{m}$ iid $P$-distributed, with probability at least $1 - \mathbb{E}\!\left[ N_{\eta (1-\alpha)/2}(\beta/8) \right] 2 e^{-m \beta / 96}$, every $f \in \mathcal{F}$ with $\max_{1 \leq i \leq m} |f(X_{i}) - f^{*}(X_{i})| \leq \alpha \eta$ satisfies $P( x : | f(x) - f^{*}(x) | > \eta ) \leq \beta$. \end{theorem} \begin{proof} This proof roughly follows the usual symmetrization argument for uniform convergence \citep{MR0288823,DBLP:journals/iandc/Haussler92}, with a few important modifications to account for this $(\eta,\beta)$-based criterion. If $\mathbb{E}\!\left[ N_{\eta (1-\alpha)/2}(\beta/8) \right]$ is infinite, then the result is trivial, so let us suppose it is finite for the remainder of the proof. Similarly, if $m < 8/\beta$, then $2 e^{-m \beta/96} > 1$ and hence the claim trivially holds, so let us suppose $m \geq 8/\beta$ for the remainder of the proof. Without loss of generality, suppose $f^{*}(x) = 0$ everywhere and every $f \in \mathcal{F}$ is non-negative (otherwise subtract $f^{*}$ from every $f \in \mathcal{F}$ and redefine $\mathcal{F}$ as the absolute values of the differences; note that this transformation does not increase the value of $N_{\eta (1-\alpha)/2}(\beta/8)$, since applying it to the functions of a cover of the original class yields a cover of the transformed class). Let $X_{1},\ldots,X_{2m}$ be iid $P$-distributed. Denote by $P_{m}$ the empirical measure induced by $X_{1},\ldots,X_{m}$, and by $P_{m}^{\prime}$ the empirical measure induced by $X_{m+1},\ldots,X_{2m}$. We have \begin{align*} & \P\!\left( \exists f \in \mathcal{F} : P_{m}^{\prime}( x : f(x) > \eta ) > \beta/2 \text{ and } P_{m}( x : f(x) \leq \alpha \eta ) = 1 \right) \\ & \geq \P\left( \exists f \in \mathcal{F} : P( x : f(x) > \eta ) > \beta \text{ and } P_{m}( x : f(x) \leq \alpha \eta ) = 1 \text{ and } P_{m}^{\prime}( x : f(x) > \eta ) > \beta/2 \right). \end{align*} Denote by $A_{m}$ the event that there exists $f \in \mathcal{F}$ satisfying $P( x : f(x) > \eta ) > \beta$ and $P_{m}( x : f(x) \leq \alpha \eta ) = 1$, and on this event let $\tilde{f}$ denote such an $f \in \mathcal{F}$ (chosen solely based on $X_{1},\ldots,X_{m}$); when $A_{m}$ fails to hold, take $\tilde{f}$ to be some arbitrary fixed element of $\mathcal{F}$. Then the expression on the right hand side above is at least as large as \begin{equation*} \P\left( A_{m} \text{ and } P_{m}^{\prime}( x : \tilde{f}(x) > \eta ) > \beta/2 \right), \end{equation*} and noting that the event $A_{m}$ is independent of $X_{m+1},\ldots,X_{2m}$, this equals \begin{equation} \label{eqn:uc-factored-expression} \mathbb{E}\!\left[ \mathbb{I}_{A_{m}} \cdot \P\!\left( P_{m}^{\prime}( x : \tilde{f}(x) > \eta ) > \beta/2 \middle| X_{1},\ldots,X_{m} \right) \right]. \end{equation} Then note that for any $f \in \mathcal{F}$ with $P( x : f(x) > \eta ) > \beta$, a Chernoff bound implies \begin{align*} & \P\Big( P_{m}^{\prime}( x : f(x) > \eta ) > \beta/2 \Big) \\ & = 1 - \P\Big( P_{m}^{\prime}( x : f(x) > \eta ) \leq \beta/2 \Big) \geq 1 - \exp\!\left\{ - m \beta / 8 \right\} \geq \frac{1}{2}, \end{align*} where we have used the assumption that $m \geq \frac{8}{\beta}$ here. 
In particular, this implies that the expression in \eqref{eqn:uc-factored-expression} is no smaller than $\frac{1}{2} \P(A_{m})$. Altogether, we have established that \begin{align} & \P\!\left( \exists f \in \mathcal{F} : P( x : f(x) > \eta ) > \beta \text{ and } P_{m}( x : f(x) \leq \alpha \eta ) =1 \right) \notag \\ & \leq 2 \P\!\left( \exists f \in \mathcal{F} : P_{m}^{\prime}( x : f(x) > \eta ) > \beta/2 \text{ and } P_{m}( x : f(x) \leq \alpha \eta ) = 1 \right). \label{eqn:uc-ghost-ub} \end{align} Now let $\sigma(1),\ldots,\sigma(m)$ be independent random variables (also independent of the data), with $\sigma(i) \sim {\rm Uniform}(\{i,m+i\})$, and denote $\sigma(m+i)$ as the sole element of $\{i,m+i\} \setminus \{\sigma(i)\}$ for each $i \leq m$. Also denote by $P_{m,\sigma}$ the empirical measure induced by $X_{\sigma(1)},\ldots,X_{\sigma(m)}$, and by $P_{m,\sigma}^{\prime}$ the empirical measure induced by $X_{\sigma(m+1)},\ldots,X_{\sigma(2m)}$. By exchangeability of $(X_{1},\ldots,X_{2m})$, the right hand side of \eqref{eqn:uc-ghost-ub} is equal to \begin{equation*} \P\!\left( \exists f \in \mathcal{F} : P_{m,\sigma}^{\prime}( x : f(x) > \eta ) > \beta/2 \text{ and } P_{m,\sigma}( x : f(x) \leq \alpha \eta ) = 1 \right). \end{equation*} Now let $\hat{\mathcal{F}} \subseteq \mathcal{F}$ be a minimal subset of $\mathcal{F}$ such that $\max\limits_{f \in \mathcal{F}} \min\limits_{\hat{f} \in \hat{\mathcal{F}}} \rho_{\eta (1-\alpha)/2}(\hat{f},f) \leq \beta/8$. The size of $\hat{\mathcal{F}}$ is at most $N_{\eta(1-\alpha)/2}(\beta/8)$, which is finite almost surely (since we have assumed above that its expectation is finite). Then note that (denoting by $X_{[2m]} = (X_{1},\ldots,X_{2m})$) the above expression is at most \begin{align} & \P\!\left( \exists f \in \hat{\mathcal{F}} : P_{m,\sigma}^{\prime}( x : f(x) > \eta (1+\alpha)/2 ) > (3/8)\beta \text{ and } P_{m,\sigma}( x : f(x) > \eta (1+\alpha)/2 ) \leq \beta/8 \right) \notag \\ & \leq \mathbb{E}\!\left[ N_{\eta (1-\alpha)/2}(\beta/8) \max_{f \in \hat{\mathcal{F}}} \P\!\left( P_{m,\sigma}^{\prime}( x \!:\! f(x) \!>\! \eta (1+\alpha)/2 ) > (3/8)\beta \right.\right. \notag \\ & {\hskip 4.5cm} \left. \phantom{\max_{f \in \hat{\mathcal{F}}}} \left. \text{ and } P_{m,\sigma}( x \!:\! f(x) \!>\! \eta (1+\alpha)/2 ) \leq \beta/8 \middle| X_{[2m]} \right) \right]\!. \label{eqn:uc-condition-on-data} \end{align} Then note that for any $f \in \mathcal{F}$, we have almost surely \begin{align*} & \P\!\left( P_{m,\sigma}^{\prime}( x : f(x) > \eta (1+\alpha)/2 ) > (3/8)\beta \text{ and } P_{m,\sigma}( x : f(x) > \eta (1+\alpha)/2 ) \leq \beta/8 \middle| X_{[2m]} \right) \\ & \leq \P\!\left( P_{2m}( x : f(x) > \eta (1+\alpha)/2 ) > (3/16) \beta \text{ and } P_{m,\sigma}( x : f(x) > \eta (1+\alpha)/2 ) \leq \beta/8 \middle| X_{[2m]} \right) \\ & \leq \exp\!\left\{ - m \beta / 96 \right\}, \end{align*} where the last inequality is by a Chernoff bound, which (as noted by \citet*{hoeffding}) remains valid even when sampling without replacement. Together with \eqref{eqn:uc-ghost-ub} and \eqref{eqn:uc-condition-on-data}, we have that \begin{align*} & \P\!\left( \exists f \in \mathcal{F} : P( x : f(x) > \eta ) > \beta \text{ and } P_{m}( x : f(x) \leq \alpha \eta ) =1 \right) \\ & \leq 2 \mathbb{E}\!\left[ N_{\eta (1-\alpha)/2}(\beta/8) \right] e^{- m \beta / 96}. 
\end{align*} \end{proof} \begin{lemma} \label{lem:eps-gam-covering-numbers} There exist universal numerical constants $c,c^{\prime} \in (0,\infty)$ such that $\forall \eta,\beta \in (0,1)$, \begin{equation*} N_{\eta}(\beta) \leq \left(\frac{2}{\eta\beta}\right)^{c d(c^{\prime} \eta \beta)}, \end{equation*} where $d(\cdot)$ is the fat-shattering dimension. \end{lemma} \begin{proof} \citet[Theorem 1]{MR1965359} establishes that the $\eta\beta$-covering number of $\mathcal{F}$ under the $L_{2}(P_{2m})$ pseudo-metric is at most \begin{equation} \label{eqn:mendelson-vershynin-bound} \left(\frac{2}{\eta\beta}\right)^{c d(c^{\prime} \eta \beta)} \end{equation} for some universal numerical constants $c,c^{\prime} \in (0,\infty)$. Then note that for any $f,g \in \mathcal{F}$, Markov's and Jensen's inequalities imply $\rho_{\eta}(f,g) \leq \frac{1}{\eta} \| f - g \|_{L_{1}(P_{2m})} \leq \frac{1}{\eta} \| f - g \|_{L_{2}(P_{2m})}$. Thus, any $\eta\beta$-cover of $\mathcal{F}$ under $L_{2}(P_{2m})$ is also a $\beta$-cover of $\mathcal{F}$ under $\rho_{\eta}$, and therefore \eqref{eqn:mendelson-vershynin-bound} is also a bound on $N_{\eta}(\beta)$. \end{proof} Combining the above two results yields the following theorem. \begin{theorem} \label{thm:eps-gam-weak-sample-complexity} For some universal numerical constants $c_{1},c_{2},c_{3} \in (0,\infty)$, for any $\eta,\delta,\beta \in (0,1)$ and $\alpha \in [0,1)$, letting $X_{1},\ldots,X_{m}$ be iid $P$-distributed, where \begin{equation*} m = \left\lceil \frac{c_{1}}{\beta} \left( d(c_{2} \eta \beta (1-\alpha)) \ln\!\left( \frac{c_{3}}{\eta \beta (1-\alpha)} \right) + \ln\!\left(\frac{1}{\delta}\right) \right) \right\rceil, \end{equation*} with probability at least $1-\delta$, every $f \in \mathcal{F}$ with $\max_{i \in [m]} |f(X_{i}) - f^{*}(X_{i})| \leq \alpha \eta$ satisfies $P( x : | f(x) - f^{*}(x) | > \eta ) \leq \beta$. \end{theorem} \begin{proof} The result follows immediately from combining Theorem~\ref{thm:weak-learning-bound} and Lemma~\ref{lem:eps-gam-covering-numbers}. \end{proof} In particular, Theorem~\ref{thm:gen-weak-learn} follows immediately from this result by taking $\beta = 1/2 - \gamma$ and $\alpha = \gamma/2$. To discuss tightness of Theorem~\ref{thm:eps-gam-weak-sample-complexity}, we note that \citet{DBLP:journals/siamcomp/Simon97} proved a sample complexity lower bound for the same criterion of \begin{equation*} \Omega\!\left( \frac{d^{\prime}(c \eta)}{\beta} + \frac{1}{\beta} \log \frac{1}{\delta} \right), \end{equation*} where $d^{\prime}(\cdot)$ is a quantity somewhat smaller than the fat-shattering dimension, essentially representing a fat Natarajan dimension. Thus, aside from the differences in the complexity measure (and a logarithmic factor), we establish an upper bound of a similar form to Simon's lower bound. \section{From Boosting to Compression} \label{subsec:boosting-to-compression} Generally, our strategy for converting the boosting algorithm \algname{MedBoost} into a sample compression scheme of smaller size follows the approach of Moran and Yehudayoff for binary classification, based on arguing that because the ensemble makes its predictions with a \emph{margin} (corresponding to the results on \emph{quantiles} in Corollary~\ref{cor:kegl-T-size}), it is possible to recover the same proximity guarantees for the predictions while using only a smaller \emph{subset} of the functions from the original ensemble. Specifically, we use the following general \emph{sparsification} strategy. 
For $\alpha_{1},\ldots,\alpha_{T} \in [0,1]$ with $\sum_{t=1}^{T} \alpha_{t} = 1$, denote by ${\rm Cat}(\alpha_{1},\ldots,\alpha_{T})$ the \emph{categorical distribution}: i.e., the discrete probability distribution on $\{1,\ldots,T\}$ with probability mass $\alpha_{t}$ on $t$. \begin{algorithm}[h] \caption{\algname{Sparsify}($\{(x_i,y_i)\}_{i\in[m]},\gamma,T,n$)} \begin{algorithmic}[1] \STATE Run \algname{MedBoost}($\{(x_i,y_i)\}_{i\in[m]},T,\gamma,\eta$) \STATE Let $h_{1},\ldots,h_{T}$ and $\alpha_{1},\ldots,\alpha_{T}$ be its return values \STATE Denote $\alpha_{t}^{\prime} = \alpha_{t} / \sum_{t^{\prime}=1}^{T} \alpha_{t^{\prime}}$ for each $t\in[T]$ \REPEAT \STATE Sample $(J_{1},\ldots,J_{n}) \sim {\rm Cat}(\alpha_{1}^{\prime},\ldots,\alpha_{T}^{\prime})^{n}$ \STATE Let $F = \{ h_{J_{1}},\ldots, h_{J_{n}} \}$ \UNTIL {$\max_{1 \leq i \leq m} |\{f \in F: |f(x_{i}) - y_{i}| > \eta\}| < n/2$} \STATE Return $F$ \end{algorithmic} \end{algorithm} For any values $a_{1},\ldots,a_{n}$, denote the (unweighted) median \begin{equation*} {\rm Med}(a_{1},\ldots,a_{n}) = {\rm Median}(a_{1},\ldots,a_{n}; 1,\ldots,1). \end{equation*} Our intention in discussing the above algorithm is to argue that, for a sufficiently large choice of $n$, the above procedure returns a set $\{f_{1},\ldots,f_{n}\}$ such that \[ \forall i\in[m], | {\rm Med}(f_{1}(x_{i}),\ldots,f_{n}(x_{i})) - y_{i} | \leq \eta. \] We analyze this strategy separately for binary classification and real-valued functions, since the argument in the binary case is much simpler (and demonstrates more directly the connection to the original argument of Moran and Yehudayoff), and also because we arrive at a tighter result for binary functions than for real-valued functions. \subsection{Binary Classification} \label{subsubsec:binary-classification-compression} We begin with a simple observation about binary classification (i.e., where the functions in $\mathcal{F}$ all map into $\{0,1\}$). The technique here is quite simple, and follows a similar line of reasoning to the original argument of Moran and Yehudayoff. The argument for real-valued functions below will diverge from this argument in several important ways, but the high level ideas remain the same. The compression function is essentially the one introduced by Moran and Yehudayoff, except applied to the classifiers produced by the above \algname{Sparsify} procedure, rather than a set of functions selected by a minimax distribution over all classifiers produced by $O(d)$ samples each. The weak hypotheses in \algname{MedBoost} for binary classification can be obtained using samples of size $O(d)$. Thus, if the \algname{Sparsify} procedure is successful in finding $n$ such classifiers whose median predictions are within $\eta$ of the target $y_{i}$ values for all $i$, then we may encode these $n$ classifiers as a compression set, consisting of the set of $k = O(nd)$ samples used to train these classifiers, together with $k \log k$ extra bits to encode the order of the samples.\footnote{In fact, $k \log n$ bits would suffice if the weak learner is permutation-invariant in its data set.} To obtain Theorem~\ref{thm:classification}, it then suffices to argue that $n=\Theta(d^{*})$ suffices. The proof follows. \begin{proof}[Proof of Theorem~\ref{thm:classification}] Recall that $d^{*}$ bounds the VC dimension of the class of sets $\{ \{ h_{t} : t \leq T, h_{t}(x_i) = 1 \} : 1 \leq i \leq m \}$. 
Thus for the iid samples $h_{J_{1}},\ldots,h_{J_{n}}$ obtained in \algname{Sparsify}, for $n = 64(2309 + 16d^*) > \frac{2304 + 16d^*+\log(2)}{1/8}$, by the VC uniform convergence inequality of \citet{MR0288823}, with probability at least $1/2$ we get that \begin{equation*} \max_{1 \leq i \leq m} \left| \left( \frac{1}{n} \sum_{j=1}^{n} h_{J_{j}}(x_i) \right) - \left( \sum_{t=1}^{T} \alpha_{t}^{\prime} h_{t}(x_i) \right) \right| < 1/8. \end{equation*} In particular, if we choose $\gamma = 1/8$, $\eta = 1$, and $T = \Theta(\log(m))$ appropriately, then Corollary~\ref{cor:kegl-T-size} implies that every $y_i = \mathbb{I}\!\left[ \sum_{t=1}^{T} \alpha_{t}^{\prime} h_{t}(x_i) \geq 1/2 \right]$ and $\left| \frac{1}{2} - \sum_{t=1}^{T} \alpha_{t}^{\prime} h_{t}(x_i) \right| \geq 1/8$ so that the above event would imply every $y_i = \mathbb{I}\!\left[ \frac{1}{n} \sum_{j=1}^{n} h_{J_{j}}(x_i) \geq 1/2 \right] = {\rm Med}(h_{J_{1}}(x_i),\ldots,h_{J_{n}}(x_i))$. Note that the \algname{Sparsify} algorithm need only try this sampling $\log_{2}(1/\delta)$ times to find such a set of $n$ functions. Combined with the description above (from \citealp{DBLP:journals/jacm/MoranY16}) of how to encode this collection of $h_{J_{i}}$ functions as a sample compression set plus side information, this completes the construction of the sample compression scheme. \end{proof} \subsection{Real-Valued Functions} \label{subsubsec:real-valued-compression} Next we turn to the general case of real-valued functions (where the functions in $\mathcal{F}$ may generally map into $[0,1]$). We have the following result, which says that the \algname{Sparsify} procedure can reduce the ensemble of functions from one with $T=O(\log(m)/\gamma^2)$ functions in it, down to one with a number of functions \emph{independent of $m$}. \begin{theorem} \label{thm:real-sparsification} Choosing $$n = \Theta\!\left( \frac{1}{\gamma^{2}} d^{*}(c\eta) \log^{2}( d^{*}(c\eta)/\eta ) \right)$$ suffices for the \algname{Sparsify} procedure to return $\{f_{1},\ldots,f_{n}\}$ with $$\max_{1 \leq i \leq m} | {\rm Med}(f_{1}(x_{i}),\ldots,f_{n}(x_{i})) - y_{i} | \leq \eta.$$ \end{theorem} \begin{proof} Recall from Corollary~\ref{cor:kegl-T-size} that \algname{MedBoost} returns functions $h_{1},\ldots,h_{T} \in \mathcal{F}$ and $\alpha_{1},\ldots,\alpha_{T} \geq 0$ such that $\forall i \in \{1,\ldots,m\}$, \begin{equation*} \max\!\left\{ \left| Q_{\gamma/2}^{+}(x_{i}) - y_{i} \right|, \left| Q_{\gamma/2}^{-}(x_{i}) - y_{i} \right| \right\} \leq \eta/2, \end{equation*} where $\{(x_{i},y_{i})\}_{i=1}^{m}$ is the training data set. We use this property to sparsify $h_{1},\ldots,h_{T}$ from $T=O(\log(m)/\gamma^2)$ down to $k$ elements, where $k$ will depend on $\eta,\gamma$, and the dual fat-shattering dimension of $\mathcal{F}$ (actually, just of $H = \{h_{1},\ldots,h_{T}\}\subseteq\mathcal{F}$) --- but {\bf not} sample size $m$. Letting $\alpha_{j}^{\prime} = \alpha_{j} / \sum_{t=1}^{T} \alpha_{t}$ for each $j \leq T$, we will sample $k$ hypotheses $\set{\tilde h_1,\ldots,\tilde h_k}=:\tilde H\subseteq H$ with each $\tilde{h}_{i} = h_{J_{i}}$, where $(J_{1},\ldots,J_{k}) \sim {\rm Cat}(\alpha_{1}^{\prime},\ldots,\alpha_{T}^{\prime})^{k}$ as in \algname{Sparsify}. Define a function $\hat{h}(x) = {\rm Med}(\tilde{h}_{1}(x),\ldots,\tilde{h}_{k}(x))$. We claim that for any fixed $i\in[m]$, with high probability \begin{equation} \label{eq:Htil-med} |\hat{h}(x_i)-f^*(x_i)|\le\eta/2. 
\end{equation} \begin{sloppypar} Indeed, partition the indices $[T]$ into the disjoint sets \begin{align*} L(x) &= \set{ j \in[ T] : h_{j}(x) < Q_{\gamma}^{-}(h_{1}(x),\ldots,h_{T}(x);\alpha_{1},\ldots,\alpha_{T}) },\\ M(x) &= \set{ j \in[ T] : Q_{\gamma}^{-}(h_{1}(x),...,h_{T}(x);\alpha_{1},...,\alpha_{T}) \leq\! h_{j}(x) \!\leq Q_{\gamma}^{+}(h_{1}(x),...,h_{T}(x);\alpha_{1},...,\alpha_{T}) },\\ R(x) &= \set{ j \in[ T] : h_{j}(x) > Q_{\gamma}^{+}(h_{1}(x),\ldots,h_{T}(x);\alpha_{1},\ldots,\alpha_{T}) }. \end{align*} Then the only way \eqref{eq:Htil-med} can fail is if half or more indices $J_{1},\ldots,J_{k}$ sampled fall into $R(x_i)$ --- or if half or more fall into $L(x_i)$. Since the sampling distribution puts mass less than $1/2-\gamma$ on each of $R(x_i)$ and $L(x_i)$, Chernoff's bound puts an upper estimate of $\exp(-2k\gamma^2)$ on either event. Hence, \begin{equation} \label{eq:Htil-med-prob} \mathbb{P}\paren{|\hat{h}(x_i)-f^*(x_i)|>\eta/2} \le2\exp(-2k\gamma^2). \end{equation} \end{sloppypar} Next, our goal is to ensure that with high probability, \eqref{eq:Htil-med} holds simultaneously for all ${i\in[m]}$. Define the map $\boldsymbol{\xi}:[m]\to\mathbb{R}^k$ by $\boldsymbol{\xi}(i)=(\tilde{h}_{1}(x_i),\ldots,\tilde{h}_{k}(x_i))$. Let $G \subseteq[m]$ be a minimal subset of $[m]$ such that $$\max\limits_{i \in [m]} \min\limits_{j \in {G}} \nrm{\boldsymbol{\xi}(i)-\boldsymbol{\xi}(j)}_\infty\le\eta/2 . $$ This is just a minimal $\ell_\infty$ covering of $[m]$. Then \begin{align*} &\P\paren{ \exists i \in[m] : |{\rm Med}(\boldsymbol{\xi}(i))-f^*(x_i)|>\eta }\le\\ &\sum_{j\in G} \P\paren{ \exists i : |{\rm Med}(\boldsymbol{\xi}(i))-f^*(x_i)|>\eta , \nrm{\boldsymbol{\xi}(i)-\boldsymbol{\xi}(j)}_\infty\le\eta/2 } \le\\ & \sum_{j\in G} \P\paren{ |{\rm Med}(\boldsymbol{\xi}(j))-f^*(x_j)|>\eta /2 } \le 2N_\infty([m],\eta/2)\exp(-2k\gamma^2), \end{align*} where $N_\infty([m],\eta/2)$ is the $\eta /2$-covering number (under $\ell_\infty$) of $[m]$, and we used the fact that $$|{\rm Med}(\boldsymbol{\xi}(i))-{\rm Med}(\boldsymbol{\xi}(j))|\le\nrm{\boldsymbol{\xi}(i)-\boldsymbol{\xi}(j)}_\infty.$$ Finally, to bound $N_\infty([m],\eta/2)$, note that $\boldsymbol{\xi}$ embeds $[m]$ into the dual class $\mathcal{F}^*$. Thus, we may apply the bound in \cite[Display (1.4)]{MR2247969}: $$ \log N_\infty([m],\eta/2)\le C d^*(c\eta)\log^2(k/\eta), $$ where $C,c$ are universal constants and $d^*(\cdot)$ is the dual fat-shattering dimension of $\mathcal{F}$. It now only remains to choose a $k$ that makes $\exp\paren{C d^*(c\eta)\log^2(k/\eta)-2k\gamma^2}$ as small as desired. \end{proof} To establish Theorem~\ref{thm:regression}, we use the weak learner from above, with the booster \algname{MedBoost} from K\'{e}gl, and then apply the \algname{Sparsify} procedure. Combining the corresponding theorems, together with the same technique for converting to a compression scheme discussed above for classification (i.e., encoding the functions with the set of training examples they were obtained from, plus extra bits to record the order and which examples each weak hypothesis was trained on), this immediately yields the result claimed in Theorem~\ref{thm:regression}, which represents our main new result for sample compression of general families of real-valued functions. \subsubsection*{Acknowledgments} We thank Shay Moran and Roi Livni for insightful conversations. 
\bibliographystyle{plainnat} \section{Sample compression for BV functions} \label{sec:BV} The function class $\mathrm{BV}(v)$ consists of all $f:[0,1]\to\mathbb{R}$ for which \begin{eqnarray*} V(f):=\sup_{n\in\mathbb{N}}\sup_{0=x_0<x_1<\ldots<x_n=1}\sum_{i=0}^{n-1}|f(x_{i+1})-f(x_i)| \le v. \end{eqnarray*} It is known \citep[Theorem 11.12]{MR1741038} that $d_{\mathrm{BV}(v)}(t)=1+\floor{v/(2t)}$. In Theorem~\ref{thm:bv-dual} below, we show that the dual class has $d^*_{\mathrm{BV}(v)}(t)=\Theta\left(\log(v/t)\right)$. \citet{DBLP:journals/iandc/Long04} presented an efficient, proper, consistent learner for the class $\mathcal{F}=\mathrm{BV}(1)$ with range restricted to $[0,1]$, with sample complexity $m_\mathcal{F}(\varepsilon,\delta)=O(\frac1\varepsilon\log\frac1\delta)$. Combined with Theorem~\ref{thm:regression}, this yields \begin{corollary} Let $\mathcal{F}=\mathrm{BV}(1)\cap[0,1]^{[0,1]}$ be the class of all $f:[0,1] \to [0,1]$ with $V(f)\le1$. Then the proper, consistent learner $\mathcal{L}$ of \citet{DBLP:journals/iandc/Long04}, with target generalization error $\varepsilon$, admits a sample compression scheme of size $O(k\log k)$, where \[k = \bigO{ \uf{\varepsilon} \log^2 \uf{\varepsilon} \cdot \log \left(\uf{\varepsilon} \log \uf{\varepsilon}\right) }.\] The compression set is computable in expected runtime \[ \bigO{ n \uf{\varepsilon^{3.38}} \log^{3.38} \uf{\varepsilon} \left( \log n + \log \uf{\varepsilon} \log \left( \uf{\varepsilon} \log \uf{\varepsilon} \right) \right) } . \] \end{corollary} The remainder of this section is devoted to proving \begin{theorem} \label{thm:bv-dual} For $\mathcal{F}=\mathrm{BV}(v)$ and $t<v$, we have $d^*_\mathcal{F}(t)=\Theta\left(\log(v/t)\right)$. \end{theorem} First, we define some preliminary notions: \begin{definition} For a binary $m \times n$ matrix $M$, define \begin{eqnarray*} V(M, i) &:=& \sum_{j=1}^{m-1} \mathbb{I}[M_{j,i} \neq M_{j+1,i}],\\ G(M) &:=& \sum_{i=1}^n V(M,i),\\ V(M) &:=& \max_{i \in [n]}V(M,i). \end{eqnarray*} \end{definition} \begin{lemma} \label{lemma:1} Let $M$ be a binary $2^n \times n$ matrix. If for each $b \in \{0,1\}^n$ there is a row $j$ in $M$ equal to $b$, then \[V(M) \geq \frac{2^n-1}{n}.\] In particular, for at least one column $i$, we have $V(M,i) \geq (2^n-1)/n$. 
\end{lemma} \begin{proof} Let $M$ be a $2^n \times n$ binary matrix such that for each $b \in \{0,1\}^n$ there is a row $j$ in $M$ equal to $b$. Given $M$'s dimensions, every $b \in \{0,1\}^n$ appears in exactly one row of $M$, and hence the minimal Hamming distance between two adjacent rows is 1. Summing over the $2^n-1$ adjacent row pairs, we have \[G(M) = \sum_{i=1}^n V(M,i) = \sum_{i=1}^n\sum_{j=1}^{m-1} \mathbb{I}[M_{j,i} \neq M_{j+1,i}] \geq 2^n-1,\] which averages to \[\frac{1}{n} \sum_{i=1}^n V(M,i) = \frac{G(M)}{n} \geq \frac{2^n-1}{n} .\] By the pigeon-hole principle, there must be a column $i \in [n]$ for which $V(M,i) \geq \frac{2^n-1}{n}$, which implies $V(M) \geq \frac{2^n-1}{n}$. \end{proof} We split the proof of Theorem~\ref{thm:bv-dual} into two estimates: \begin{lemma} \label{lem:bv-ub} For $\mathcal{F}=\mathrm{BV}(v)$ and $t<v$, $d^*_\mathcal{F}(t)\le2\log_2(v/t)$. \end{lemma} \begin{lemma} \label{lem:bv-lb} For $\mathcal{F}=\mathrm{BV}(v)$ and $4t<v$, $d^*_\mathcal{F}(t)\ge\floor{\log_2(v/t)}$. \end{lemma} \begin{proof}[Proof of Lemma~\ref{lem:bv-ub}] Let $\set{f_1,\ldots,f_n}\subset \mathcal{F}$ be a set of functions that are $t$-shattered by $\mathcal{F}^*$. In other words, there is an $r\in\mathbb{R}^n$ such that for each $b \in \{0,1\}^n$ there is an $x_b \in\mathcal{F}^*$ such that \[\forall i \in [n] , x_b(f_i) \begin{cases} \geq r_i + t, & b_i = 1 \\ \leq r_i - t, & b_i = 0 \end{cases} . \] Let us order the $x_b$s by magnitude $x_1<x_2<\ldots<x_{2^n}$, denoting this sequence by $(x_j)_{j=1}^{2^n}$. Let $M \in \{0,1\}^{2^n \times n}$ be the matrix whose $j$th row is the vector $b$ for which $x_b=x_j$. By Lemma~\ref{lemma:1}, there is an $i \in [n]$ s.t. \[\sum_{j=1}^{2^n-1} \mathbb{I}[M(j,i) \neq M(j+1,i)] \geq \frac{2^n-1}{n}.\] Note that if $M(j,i) \neq M(j+1,i)$, shattering implies that \[x_j(f_i) \geq r_i + t \text{ and } x_{j+1}(f_i) \leq r_i - t\] or \[x_j(f_i) \leq r_i - t \text{ and } x_{j+1}(f_i) \geq r_i + t;\] either way, \[\abs*{f_i(x_j) - f_i(x_{j+1})} = \abs*{x_j(f_i) - x_{j+1}(f_i)} \geq 2t.\] So for the function $f_i$, we have \begin{eqnarray*} \sum_{j=1}^{2^n-1} \abs*{f_i(x_j) - f_i(x_{j+1})} = \sum_{j=1}^{2^n-1} \abs*{x_j(f_i) - x_{j+1}(f_i)} \geq \sum_{j=1}^{2^n-1} \mathbb{I}[M(j,i) \neq M(j+1,i)] \cdot 2t \geq \frac{2^n-1}{n} \cdot 2t. \end{eqnarray*} As $x_1<\ldots<x_{2^n}$ are ordered points in $[0,1]$, we get \[v \geq \sum_{j=1}^{2^n-1} \abs*{f_i(x_j) - f_i(x_{j+1})} \geq \frac{2t(2^n-1)}{n} \geq t 2^{n/2}\] and hence \[v/t \geq 2^{n/2}\] \[\Rightarrow 2\log_2(v/t) \geq n.\] \end{proof} \begin{proof}[Proof of Lemma~\ref{lem:bv-lb}] We construct a set of $n = \floor{\log_2(v/t)}$ functions that are $t$-shattered by $\mathcal{F}^*$. First, we build a balanced Gray code \citep{flahive2007balancing} with $n$ bits, which we arrange into the rows of $M$. Divide the unit interval into $2^n $ segments and define, for each $j\in[2^n]$, \[ x_j := \frac{j}{2^n} . \] Define the functions $f_1,\ldots,f_{n}$ as follows: \[f_i(x_j) = \begin{cases} t, & M(j,i) = 1 \\ -t, & M(j,i) = 0 \end{cases}, \] extended to the rest of $[0,1]$ as step functions, which does not increase the variation. We claim that each $ f_i \in \mathcal{F}$. Since $M$ is a balanced Gray code, \[V(M) = \frac{2^n}{n} \le \frac{v}{tn} \leq \frac{v}{2t},\] where the final inequality uses $n\geq2$, which is guaranteed by $4t<v$. Hence, for each $f_i$, we have \[V(f_i) \leq 2 t\, V(M,i) \leq 2t \cdot \frac{v}{2t} = v.\] Next, we show that this set is $t$-shattered by $\mathcal{F}^*$. Fix the trivial offset $r_1=\ldots=r_n = 0$. For every $b \in \{0,1\}^n$ there is a $j \in [2^n]$ s.t. $b$ equals the $j$th row of $M$. 
By construction, for every $i \in [n]$, we have \[x_j(f_i) = f_i(x_j) = \begin{cases} t \geq r_i + t, & M(j,i) = 1 \\ -t \leq r_i - t, & M(j,i) = 0 \end{cases}. \] \end{proof} \section{Sample compression for nearest-neighbor regression} \label{sec:NN} Let $(\mathcal{X},\rho)$ be a metric space and define, for $L\ge0$, the collection $\mathcal{F}_L$ of all $f:\mathcal{X}\to[0,1]$ satisfying $$ |f(x)-f(x')|\le L\rho(x,x');$$ these are the $L$-Lipschitz functions. \citet{GottliebKK17_IEEE} showed that \begin{eqnarray*} d_{\mathcal{F}_L}(t) = O\paren{ \ceil{L\operatorname{diam}(\mathcal{X})/t}^{\operatorname{ddim}(\mathcal{X})}}, \end{eqnarray*} where $\operatorname{diam}(\mathcal{X})$ is the diameter and $\operatorname{ddim}$ is the {\em doubling dimension}, defined therein. The proof is achieved via a packing argument, which also shows that the estimate is tight. Below we show that $ d^*_{\mathcal{F}_L}(t) = \Theta(\log\left({\mathcal{M}}(\mathcal{X},2t/L)\right))$, where $\mathcal{M}(\mathcal{X},\cdot)$ is the packing number of $(\mathcal{X},\rho)$. Applying this to the efficient nearest-neighbor regressor\footnote{ In fact, the technical machinery in \citet{DBLP:journals/tit/GottliebKK17} was aimed at achieving {\em approximate} Lipschitz-extension, so as to gain a considerable runtime speedup. An {\em exact} Lipschitz extension is much simpler to achieve. It is more computationally costly but still polynomial-time in sample size. } of \citet{DBLP:journals/tit/GottliebKK17}, we obtain \begin{corollary} Let $(\mathcal{X},\rho)$ be a metric space with hypothesis class $\mathcal{F}_L$, and let $\mathcal{L}$ be a consistent, proper learner for $\mathcal{F}_L$ with target generalization error $\varepsilon$. Then $\mathcal{L}$ admits a compression scheme of size $O(k\log k)$, where \[k = \bigO{ D(\varepsilon) \log \uf{\varepsilon} \cdot \log D(\varepsilon) \log \left(\uf{\varepsilon} \log D(\varepsilon) \right) }\] and \[D(\varepsilon) = \ceil{\frac{L\operatorname{diam}(\mathcal{X})}{\varepsilon}}^{\operatorname{ddim}(\mathcal{X})}.\] \end{corollary} 
We now prove our estimate on the dual fat-shattering dimension of $\mathcal{F}$:
\begin{lemma}
For $\mathcal{F}=\mathcal{F}_L$, $d^*_\mathcal{F}(t) \leq \log_2\left(\mathcal{M}(\mathcal{X},2t/L)\right)$.
\end{lemma}
\begin{proof}
Let $\set{f_1,\ldots,f_n}\subset \mathcal{F}_L$ be a set that is $t$-shattered by $\mathcal{F}^*_L$. For $b \neq b' \in \{0,1\}^n$, let $i$ be the first index for which $b_i \neq b'_i$, say, $b_i = 1\neq0= b'_i$. By shattering, there are points $x_b,x_{b'} \in \mathcal{F}^*_L$ such that $x_b(f_i) \geq r_i + t$ and $x_{b'}(f_i) \leq r_i - t$, whence
\[f_i(x_b) - f_i(x_{b'}) \geq 2t\]
and
\[L \rho(x_b,x_{b'}) \geq f_i(x_b) - f_i(x_{b'}) \geq 2t.\]
It follows that for $b \neq b' \in \{0,1\}^n$, we have $ \rho(x_b,x_{b'}) \geq 2t / L$. Denoting by $\mathcal{M}(\mathcal{X}, \varepsilon)$ the $\varepsilon$-packing number of $\mathcal{X}$, we get
\[2^n = |\{x_b \mid b \in \{0,1\}^n\}| \leq \mathcal{M}(\mathcal{X}, 2t / L), \]
and hence $n \leq \log_2\left(\mathcal{M}(\mathcal{X},2t/L)\right)$.
\end{proof}
\begin{lemma}
For $\mathcal{F}=\mathcal{F}_L$ and $t<L$, $d^*_\mathcal{F}(t) \geq \floor{\log_2\left(\mathcal{M}(\mathcal{X},2t/L)\right)}$.
\end{lemma}
\begin{proof}
Let $S=\{x_1,...,x_m\} \subseteq \mathcal{X}$ be a maximal $2t/L$-packing of $\mathcal{X}$, so that $m = \mathcal{M}(\mathcal{X},2t/L)$. Put $n = \floor{\log_2 m}$, fix a subset $S' \subseteq S$ of size $2^n$, and let $c: S' \to \{0,1\}^{n}$ be a bijection. Define the set of functions $F = \{f_1,\ldots,f_{n}\} \subseteq \mathcal{F}_L$ by
\[ f_i(x_j) = \begin{cases} t, & c(x_j)_i = 1 \\ -t, & c(x_j)_i = 0 \end{cases}. \]
For every $f \in F$ and every two points $x,x' \in S'$ it holds that
\[\abs{f(x) - f(x')} \leq 2t = L \cdot 2t / L \leq L \rho(x,x'),\]
so each $f_i$ is $L$-Lipschitz on $S'$ and extends to an $L$-Lipschitz function on all of $\mathcal{X}$, e.g., via the McShane extension. This set of functions is $t$-shattered by $\mathcal{F}^*_L$ (using the evaluation functionals $x \in S'$ and the trivial offset $r=0$) and is of size $n = \floor{\log_2 m} = \floor{\log_2\left(\mathcal{M}(\mathcal{X},2t/L)\right)}$.
\end{proof}
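To make the last construction concrete, the following short Python sketch (our illustration, not part of the formal development; the choice $\mathcal{X}=[0,1]^2$ with the Euclidean metric and all numerical values are assumptions) builds a greedy $2t/L$-packing, labels a subset of size $2^n$ with distinct bit strings, and verifies the Lipschitz condition for the resulting $f_i$:
\begin{verbatim}
import numpy as np

# Illustrative sketch: X = [0,1]^2 with the Euclidean metric.
# Greedy (2t/L)-packing S, subset S' of size 2^n, and functions
# f_i(x) = +t or -t according to bit i of the label of x.
rng = np.random.default_rng(0)
L, t = 1.0, 0.05
sep = 2 * t / L                        # minimal pairwise distance

S = []
for p in rng.uniform(size=(5000, 2)):  # greedy packing of random points
    if all(np.linalg.norm(p - q) >= sep for q in S):
        S.append(p)
n = int(np.floor(np.log2(len(S))))
Sp = S[:2**n]                          # subset S' of size 2^n

# f_i on S': +t or -t according to the i-th bit of the point's label
F = np.array([[t if (j >> i) & 1 else -t for i in range(n)]
              for j in range(2**n)])

# check |f_i(x) - f_i(x')| <= L * rho(x, x') pairwise on S'
for j in range(2**n):
    for k in range(j + 1, 2**n):
        d = np.linalg.norm(Sp[j] - Sp[k])
        assert np.abs(F[j] - F[k]).max() <= L * d + 1e-12
print(len(S), n)                       # packing size, number of functions
\end{verbatim}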
\section{Introduction}
The KM3NeT research infrastructure consists of two neutrino detectors under construction at the bottom of the Mediterranean Sea: ARCA (Astroparticle Research with Cosmics in the Abyss) located off-shore Portopalo di Capo Passero, Sicily, Italy, at a depth of 3500\,m and ORCA (Oscillation Research with Cosmics in the Abyss) off-shore Toulon, France, at a depth of 2450\,m. ARCA is designed to observe TeV-PeV cosmic neutrinos and identify their astrophysical sources. ORCA focuses on the study of GeV atmospheric neutrino oscillations, in order to determine the neutrino mass hierarchy (NMH) \cite{LoI}. This translates roughly to similar muon energies and to primary cosmic ray (CR) energies in the PeV-EeV range for ARCA and the TeV range for ORCA (although the detectors are in fact sensitive to a wider energy range). The angular acceptances of the full ARCA and ORCA detectors are best for horizontal muons and slightly decrease towards the vertical direction. Both detectors consist of vertically aligned detection units (DUs), with 18 digital optical modules (DOMs) on each DU \cite{LoI}. Every DOM contains 31 3-inch photomultiplier tubes (PMTs), calibration and positioning instruments and readout electronics boards. ARCA and ORCA have different horizontal (90\,m and 20\,m respectively) and vertical (36\,m and 9\,m respectively) spacing between the DOMs, since they are optimised for different energy ranges. In their final configuration, there will be 115 DUs at the ORCA site and 2x115 DUs (in two blocks) at the ARCA site.
The KM3NeT Monte Carlo (MC) simulations of CR-muons are based on two event generators: MUPAGE \cite{MUPAGE} and CORSIKA \cite{CORSIKA}. Propagation of CORSIKA muons from the sea level to the active volume of the detector is done by the gSeaGen code \cite{gSeaGen} with a chosen 3-dimensional muon propagator, PROPOSAL \cite{PROPOSAL}. Next, the Cherenkov photons induced by the passage of the muons through seawater, and their detection by the DOMs, are simulated with a custom application called JSirene \cite{JSirene}. Afterwards, the environmental optical background (photons due to bioluminescence and to \(^{40}\mathsf{K}\) decays) is added and the front-end electronics response is simulated. The MUPAGE simulation is performed in a run-by-run mode, i.e. a simulated run is produced for each data run. Conversely, average settings over all runs are used for the CORSIKA MC. Trigger algorithms, as used for real data, are applied to identify possible interesting events in the simulated sample \cite{AtmMuRate}. Finally, the direction and energy of the selected events are reconstructed based on the hit information from the PMTs with the algorithm "JGandalf", which is used for the real data stream as well \cite{reco}. For muons above 10 TeV, the angular resolution is 0.2\(^{\circ}\) and the energy resolution is 0.28 in units of \(\log_{10}(E_{\mu})\) \cite{LoI}. At this point the simulated MC events are compared to the calibrated experimental data.
Atmospheric muons are the most abundant signal in a neutrino telescope and pose a major background for physics analyses with neutrinos. Moreover, muon data is invaluable for testing the detector performance and for the validation of the Monte Carlo simulations. In this work, the first data from ARCA2 (ARCA with 2 DUs) and ORCA4 (ORCA with 4 DUs) are compared to MC simulations. Muons convey information about extensive air showers (EAS) and the CR primaries causing them.
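As a rough numerical illustration of the quoted reconstruction performance (our sketch, not KM3NeT software; the mock spectrum and all parameter values are assumptions), one can emulate the reconstruction by Gaussian smearing of the true muon parameters:
\begin{verbatim}
import numpy as np

# Illustrative sketch: smear true values with the resolutions quoted
# above for muons with E > 10 TeV: sigma = 0.28 in log10(E) and
# sigma = 0.2 degrees in the reconstructed direction.
rng = np.random.default_rng(1)
E_true = 10**rng.uniform(4.0, 7.0, 100000)   # mock true energies in GeV
logE_reco = np.log10(E_true) + rng.normal(0.0, 0.28, E_true.size)
E_reco = 10**logE_reco                       # smeared (reconstructed) energy

cos_zen = rng.uniform(0.0, 1.0, E_true.size) # mock downgoing zenith angles
zen_reco = np.degrees(np.arccos(cos_zen)) + rng.normal(0.0, 0.2, E_true.size)
\end{verbatim}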
For muons with PeV and larger energies, the dominant muon yield is expected to come from the decay of short-lived hadrons. Such muons are called prompt muons, as opposed to the conventional muons produced mostly in decays of charged pions and kaons. The existence of the prompt muon flux has not yet been experimentally confirmed; however, there is some evidence from IceCube (strongly dependent on the choice of the primary CR flux model) \cite{ICECUBE_PROMPT}. The first results of a simulation study of the prompt muon flux component at the KM3NeT detectors are shown in Section \ref{prompt-ana}.
The KM3NeT detectors often observe not single, but multiple muons. Hence, in the following we will use the concept of muon bundles, which are multi-muon events. A single muon event is a bundle with multiplicity equal to one.
\section{Data vs MC comparisons}
\label{sec:data-vs-MC}
\begin{figure} \centering \begin{subfigure}{0.49\textwidth} \centering \includegraphics[width=\textwidth]{plots/ARCA2_reco_costheta.png} \caption{Distribution for ARCA2.} \label{fig:zenith-1} \end{subfigure} \hfill \begin{subfigure}{0.49\textwidth} \centering \includegraphics[width=\textwidth]{plots/ORCA4_reco_costheta.png} \caption{Distribution for ORCA4.} \label{fig:zenith-2} \end{subfigure} \caption{Rate of atmospheric muons as a function of the reconstructed zenith angle for data and MC simulation for the ARCA2 and ORCA4 detectors. Most of the upgoing events (and all of the upgoing MC events) are downgoing events that are badly reconstructed as upgoing. No quality cuts have been applied to remove the badly reconstructed tracks.} \label{fig:zeniths} \end{figure}
\begin{figure} \centering \begin{subfigure}[b]{0.49\textwidth} \centering \includegraphics[width=\textwidth]{plots/ARCA2_reco_energy.png} \caption{Distribution for ARCA2.} \label{fig:energy-1} \end{subfigure} \hfill \begin{subfigure}[b]{0.49\textwidth} \centering \includegraphics[width=\textwidth]{plots/ORCA4_reco_energy.png} \caption{Distribution for ORCA4.} \label{fig:energy-2} \end{subfigure} \caption{Rate of atmospheric muons as a function of the reconstructed energy for data and MC simulation for the ARCA2 and ORCA4 detectors. More details on the energy reconstruction can be found in \cite{reco}.} \label{fig:energies} \end{figure}
A selection of data runs (each run lasts 6 hours) collected with ARCA2 from the 23rd of December 2016 to the 2nd of March 2017 and with ORCA4 from the 23rd of July to the 16th of December 2019 has been compared with the expectation of an MC simulation based on identical triggering and reconstruction programs. A sample of muon bundles with energies above 10 GeV and multiplicities up to 100 muons per bundle was produced with MUPAGE. The equivalent livetime is about 20\,days for ARCA2 and 10\,days for ORCA4, which is similar to the true detector livetimes. In total, \(2.5\cdot10^9\) showers have been simulated with CORSIKA, using SIBYLL-2.3c as the high-energy hadronic interaction model \cite{SIBYLL}. The simulated CR primaries were: \(p\), \(He\), \(C\), \(O\) and \(Fe\) nuclei with energies between 1\,TeV and 1\,EeV. The events were weighted assuming the GST3 CR composition model \cite{GST}. All events, real and simulated, were reconstructed with the same algorithm (JGandalf). In Figures \ref{fig:zeniths} and \ref{fig:energies} the results of the simulation with MUPAGE and CORSIKA are represented in red and blue, respectively. Data are shown in black. Only statistical errors are indicated.
No systematic uncertainties are considered in the plots; however, they are expected to be larger than the statistical errors. For CORSIKA, statistical errors are evaluated as \(\Delta x=\sqrt{{\sum}w_{i}^{2}}\), where \(w_{i}\) is the weight of the \(i\)-th MC event. The presented results show that the MC simulations match the data where the muon flux peaks for each detector (downgoing, 1-10 TeV for ARCA and 10 GeV - 1 TeV for ORCA) and reproduce the general shape of the distributions, although there is certainly room for improvement. The muons reconstructed as upgoing in Figure \ref{fig:zeniths} are misreconstructed downgoing muons. No quality selection has been applied to remove these poorly reconstructed tracks. The difference between muons from CORSIKA and data/MUPAGE reconstructed as upgoing was traced to errors in the gSeaGen processing and will be fixed in the future. The energy distributions in Figure \ref{fig:energies} show a significant deviation from the data at the highest bundle energies. The origin of this deviation, which appears only for CORSIKA in the ARCA2 plot and only for MUPAGE in the ORCA4 plot, is under investigation.
\section{Prompt muon analysis}\label{prompt-ana}
The muon flux is commonly divided into conventional and prompt components. The majority of conventional muons come from $\pi^{\pm}$ and $K^{\pm}$ decays, whereas the prompt muons are created in decays of heavy hadrons and light vector mesons. The potential of KM3NeT to observe the prompt muon flux is investigated in this work. The prompt muon analysis is performed with a CORSIKA MC sample containing \(5.3\cdot10^6\) simulated showers and SIBYLL-2.3d as the high-energy hadronic interaction model \cite{SIBYLL-2.3d} (version 2.3d includes corrections potentially relevant to the analysis). The same groups of CR nuclei as in Section \ref{sec:data-vs-MC} are used as primaries, however in the energy range from 0.9\,PeV up to 40\,EeV. Primaries with lower energies produce muons well below the expected signal region. Prompt muons are defined by requiring that all muon parent particles have lifetimes shorter than that of the $K_{\mathsf{S}}^{0}$ ($8.95\cdot10^{-11}$\,s) and that the muon has no more than two parent generations (mother and grandmother). The signal (SIG) for the analysis is defined as muon bundles with at least one prompt muon. The bundles with no prompt muons are considered background (BGD). All muon bundles together are referred to as TOTAL. The distributions of muon bundle energies and multiplicities for BGD and TOTAL at the detector volume for ARCA115 (one building block of ARCA; 115 Detection Units installed) are shown in Figure \ref{fig:prompt}. There is a clear excess above BGD starting around 1\,PeV for the energy distribution and above \(10^3\) in the case of the multiplicity distribution. This is a promising prediction, even though the differences between BGD and TOTAL are not large. Further investigation is required at the reconstruction level and the impact of the systematic uncertainties has to be assessed.
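For completeness, a minimal Python sketch of the weighted-histogram bookkeeping used above (our illustration with mock data; variable names and numbers are assumptions, not analysis code):
\begin{verbatim}
import numpy as np

# Weighted MC histogram with per-bin statistical errors
# Delta x = sqrt(sum_i w_i^2), as applied to the CORSIKA sample.
rng = np.random.default_rng(2)
E = 10**rng.uniform(1.0, 7.0, 50000)   # mock bundle energies in GeV
w = rng.uniform(0.1, 2.0, E.size)      # mock per-event weights w_i

bins = np.logspace(1, 7, 31)
rate = np.histogram(E, bins=bins, weights=w)[0]
err = np.sqrt(np.histogram(E, bins=bins, weights=w**2)[0])
\end{verbatim}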
\begin{figure} \centering \begin{subfigure}[b]{0.49\textwidth} \centering \includegraphics[width=\textwidth]{plots/ARCA115_can_energy.png} \caption{Distribution of muon bundle energies (sum of energies of individual muons).} \label{fig:prompt-1} \end{subfigure} \hfill \begin{subfigure}[b]{0.49\textwidth} \centering \includegraphics[width=\textwidth]{plots/ARCA115_can_multiplicity.png} \caption{Distribution of muon multiplicities (number of muons in a bundle).} \label{fig:prompt-2} \end{subfigure} \caption{Rate of atmospheric muon bundles per detector volume as a function of the bundle energy or multiplicity for BGD and TOTAL for the ARCA115 detector.} \label{fig:prompt} \end{figure}
\section{Conclusions}
A comparison of the measured and the expected muon rate is possible using data from the first ARCA and ORCA detection units in operation. The comparisons between data and MC provide consistent results; however, there are ongoing efforts to further improve their agreement. The analysis will be repeated with ARCA6 and ORCA6 (ARCA/ORCA with six Detection Units installed) data and with new MUPAGE and CORSIKA simulations. Several improvements are foreseen in the MC chain: optimisation of the muon propagation step, improved definition of the geometry of muon propagation, a more accurate event weight calculation and inclusion of the delta ray contribution in the energy reconstruction. A systematic uncertainty study and an increase of the statistics of the sample are foreseen for the CORSIKA MC production. The first estimation of the expected signal from prompt muons is encouraging and suggests that KM3NeT may be sensitive enough to measure the prompt component. Figure \ref{fig:prompt} presents the expected result for ARCA115. Further investigations at the reconstruction level are required before starting a comparison with IceCube results \cite{ICECUBE_PROMPT} using the most recent prompt muon flux models.
\section{Introduction}
\label{sec:Intro}
Inner-shell excitation or ionization of an atom or a molecule produces metastable electronic states which can decay by autoionization. The most common such relaxation process is Auger decay\cite{Auger1925jpr} (AD), in which the vacancy is refilled by a valence electron and another electron is emitted into the continuum. A closely related process is the interatomic Coulombic decay\cite{Cederbaum1997prl,Jahnke2015jpb} (ICD), involving energy transfer from the ionized or excited species to its environment and the emission of a secondary electron from a neighboring atom or molecule. Less frequently, electron correlation leads to higher-order multi-electron transitions. Basic examples comprise the simultaneous emission of two electrons in the double Auger decay\cite{Carlson1965prl,Journel2008pra,Roos2018scirep} (DAD) or double ICD (DICD)\cite{Averbukh2006prl} processes, or the collective recombination of two or more vacancies with emission of a single electron\cite{Feifel2016prl,Zitnik2016pra,Averbukh2009prl}. Interest in these higher-order processes intensified recently due to their significance in multiply ionized or excited systems produced, e.g., after irradiation by high-intensity free-electron lasers.
The decay width belongs among the fundamental characteristics of a metastable state (resonance); its knowledge is essential for understanding the dynamics of the system following the excitation. Although a wide range of theoretical approaches is available for its computation, transitions involving continuum electrons are inherently more complicated to describe than processes involving only bound states. A number of studies of AD employ many-body perturbation theory\cite{Kelly1975pra,Amusia1992pra} or various many-electron wave function models combined with the golden rule formula for transition probabilities\cite{Pindzola1987pra,Chen1991pra}. To account for the higher-order multi-electron transitions, approximate formulas corresponding to different decay mechanisms such as shake-off or knock-out are often used\cite{Zeng2013pra}.
A distinct class of \emph{ab initio} methods builds on the highly developed computational quantum chemistry for bound states and describes the decay processes using square integrable ($\mathcal{L}^2$) basis sets. One possible approach is to introduce a complex absorbing potential\cite{Riss1993jpb} (CAP) into the electronic Hamiltonian in order to transform the divergent Siegert states associated with resonances into $\mathcal{L}^2$ wave functions. In this sense, the CAP approach is closely related to the exterior complex scaling transformation\cite{Moiseyev2011book}. The resulting non-Hermitian Hamiltonian can then be represented employing various quantum chemical models for excited states, such as configuration interaction\chng{\cite{Santra2002prep,Peng2016jcp,Zhang2012pra}}, algebraic diagrammatic construction\cite{Schirmer1991pra,Vaval2007jcp} (ADC), or equation-of-motion coupled-cluster\cite{Ghosh2013pccp}. Another family of $\mathcal{L}^2$ \emph{ab initio} methods for the computation of intra- and inter-atomic non-radiative decay widths, stemming from the Fano-Feshbach theory of resonances, are the Fano-ADC techniques\cite{Averbukh2005jcp,Gokhberg2007jcp,Kolorenc2008jcp}.
They rely on the Fano theory of resonances\cite{Fano1961pr,Howat1978jpb}, ADC in the intermediate state representation\cite{Mertins1996pra,SchirmerMBM2018} (ISR) for the many-electron wave functions, and the Stieltjes imaging technique\cite{Langhoff1979empmc} to recover the correct normalization of the discretized $\mathcal{L}^2$ representation of the continuum. Over the last decade, Fano-ADC was established as possibly the most efficient approach for calculations of intra- and interatomic decay widths. It has been utilized in a series of studies of ICD\cite{Sisourat2010nphys,Gokhberg2014nat} and made possible the prediction of new collective three-electron decay processes\cite{Averbukh2009prl,Feifel2016prl}. Among the attractive properties of the method are the size consistency of the ADC scheme, the capability of producing converged decay widths over many orders of magnitude, and the possibility to estimate the partial decay widths despite the lack of proper continuum wave functions\cite{Averbukh2005jcp}. The high efficiency of the method is connected with the so-called compactness of the ADC expansion, which involves the smallest possible explicit configuration space needed at a given order of perturbation theory and the lack of need to diagonalize the full \chng{ADC(n) Hamiltonian matrix} (as is required in the complex scaling and CAP-based techniques), but rather only the restricted Hamiltonian represented \chng{in the subsets of configurations corresponding to the initial and final state subspaces, respectively.}
With the emerging time-domain spectroscopy techniques with attosecond resolution\cite{Drescher2002nat,Huetten2018ncomm} and the increasing interest in multiple-ionization processes, the demands on the accuracy and scope of the theoretical methods increase rapidly. At present, the available implementations of the Fano-ADC method are limited to the extended second order ADC schemes, ADC(2)x. For singly ionized systems, ADC(2)x comprises only the two lowest excitation classes, which can be characterized (with respect to the neutral ground state configuration) as one-hole ($1h$) and two-hole-one-particle ($2h1p$). At this level, the expansion of the correlated $1h$-like main ionic states is complete through the second order of perturbation theory (PT) while that of the $2h1p$-like ionization satellite states is complete only through the first order. Since a typical AD or ICD process corresponds to a transition from a $1h$-like inner-shell vacancy state to a $2h1p$-like state, representing the resulting doubly ionized system plus an electron in the continuum, such correlation imbalance can affect the accuracy of the computation of the relevant coupling matrix elements. Furthermore, first order PT is often simply insufficient to describe the correlation in the satellite states adequately, severely limiting the usability of the method for studying the decay of the $2h1p$-like ionized-excited states\cite{Rist2017chp}. Finally, processes involving the emission of two electrons into the continuum, such as DAD and DICD, are qualitatively not accessible within ADC(2)x approximations because of the absence of the main excitation classes of the corresponding final states of the decay, e.g.\ $3h2p$ for DAD of single core vacancies. To remedy these issues, we have exploited the flexibility of the ISR approach to develop the ISR-ADC(2,2) scheme in which the satellite states are treated consistently with the main ionic states through the second order of PT.
The resulting expansion involves also the $3h2p$ excitation class, opening the possibility to study decay processes accompanied by the emission of two electrons (DAD, DICD). In this work, we present the Fano-ADC(2,2) method and test its capabilities to produce accurate decay widths on various examples of intra- and interatomic decay processes, with emphasis on the multi-electron transitions.
The paper is organized as follows: in Sec.\ \ref{sec:ADCmain}, we give an overview of the ADC procedure, the intermediate state representation, and infer the structure of the ISR-ADC(2,2) scheme. In Sec.\ \ref{sec:FanoADC22} we summarize the Fano theory of resonances and describe in detail the Fano-ADC methodology. In Sec.\ \ref{sec:IPs}, we demonstrate the balanced description of the main and satellite ionization states by computing atomic ionization potentials. Results of the decay widths calculations are given and discussed in Secs.\ \ref{sec:Auger}--\ref{sec:ICD}, and the paper is concluded in Sec.\ \ref{sec:conclusions}.
\section{Algebraic Diagrammatic Construction for electron propagator}
\label{sec:ADCmain}
In energy representation, the electron propagator reads
\begin{eqnarray}
\label{eq:elprop}
G_{pq}(\omega) & = & \langle\Psi_0|c_p\left(\omega-H+E_0+i\eta\right)^{-1}c^\dag_q|\Psi_0\rangle + \langle\Psi_0|c^\dag_q\left(\omega+H-E_0-i\eta\right)^{-1}c_p|\Psi_0\rangle \nonumber \\
& = & G^+_{pq}(\omega)+G^-_{pq}(\omega).
\end{eqnarray}
Here, $H$ is the Hamiltonian operator, $|\Psi_0\rangle$ is the exact $N$-electron ground state wave function and $E_0$ the ground state energy. $c^\dag_p$ and $c_p$ are electron creation and annihilation operators associated with a basis of one-particle states $|p\rangle$ -- usually Hartree-Fock (HF) orbitals -- and $\eta$ is a positive infinitesimal convergence factor. We will focus in particular on the $(N-1)$-electron part $G^-_{pq}(\omega)$, which is relevant for the description of the ionization process. It can be cast into the Lehmann spectral representation,
\begin{equation}
\label{eq:Lehmann}
G^-_{pq}(\omega) = \sum_n\frac{\langle\Psi_0|c^\dag_q|\Psi_n^{N-1}\rangle\langle\Psi_n^{N-1}|c_p|\Psi_0\rangle}{\omega+E_n^{N-1}-E_0-i\eta},
\end{equation}
via introduction of a complete set of exact $(N-1)$-electron states $|\Psi_n^{N-1}\rangle$. In this representation, the electron propagator is given as a sum of simple poles located at the negative ionization energies,
\begin{equation}
\label{eq:ione}
-I_n = E_0-E_n^{N-1}.
\end{equation}
The direct ADC procedure \cite{Schirmer1982pra,Schirmer1983pra} is an approach which enables one to systematically derive a hierarchy of approximations ADC($n$) to the electron propagator (or other types of many-electron Green's functions), which are complete up to order $n$ of PT and include the infinite partial summations needed to recover the characteristic simple-pole structure of the propagator, highlighted in the Lehmann representation. First, a closed-form algebraic ansatz is imposed on the matrix $\mathbf{G}^-$,
\begin{equation}
\label{eq:ADCansatz}
\mathbf{G}^-(\omega)=\mathbf{f}^\dag(\omega-\mathbf{K}-\mathbf{C})^{-1}\mathbf{f},
\end{equation}
where the \emph{ADC secular matrix} $\mathbf{K}+\mathbf{C}$ consists of the diagonal matrix $\mathbf{K}$ of zero-order ionization energies and the hermitian \emph{effective interaction matrix} $\mathbf{C}$. $\mathbf{f}$ are \emph{effective transition amplitudes}. The infinitesimal $-i\eta$ is not essential in the following and will be omitted.
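As a minimal numerical illustration of this pole structure (our sketch; the small random matrices are placeholders, not an actual ADC parametrization of any system), the poles of the ansatz coincide with the eigenvalues of $\mathbf{K}+\mathbf{C}$:
\begin{verbatim}
import numpy as np

# Toy model: for G^-(w) = f^T (w - K - C)^{-1} f with Hermitian K + C,
# the poles of G^- are the eigenvalues of the secular matrix, mirroring
# the simple-pole structure of the Lehmann representation.
rng = np.random.default_rng(3)
n = 5
K = np.diag(np.sort(rng.uniform(-2.0, -0.5, n)))  # zero-order energies
A = rng.normal(size=(n, n))
C = 0.1 * (A + A.T)                               # model effective interaction
f = rng.normal(size=n)

omega, X = np.linalg.eigh(K + C)                  # secular problem
g = X.T @ f                                       # amplitudes in the eigenbasis
G_minus = lambda w: np.sum(g**2 / (w - omega))    # sum of simple poles
print(omega)                                      # pole positions = -I_n
\end{verbatim}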
The ADC form \eqref{eq:ADCansatz} can be expanded in a formal perturbation series, assuming the existence of perturbation expansions of the ADC secular matrix and transition amplitudes,
\begin{eqnarray}
\label{eq:PTforC}
\mathbf{K}+\mathbf{C}= && \mathbf{K}^{(0)}+\mathbf{C}^{(1)}+\mathbf{C}^{(2)}+\dots \\
\label{eq:PTforf}
\mathbf{f} = && \mathbf{f}^{(0)}+\mathbf{f}^{(1)}+\mathbf{f}^{(2)}+\dots
\end{eqnarray}
The ADC($n$) approximation scheme (i.e., explicit expressions for $\mathbf{K}$, $\mathbf{C}^{(i)}$ and $\mathbf{f}^{(i)}$) is then obtained through comparison of the formal perturbation expansion of Eq.\ \eqref{eq:ADCansatz} with the standard diagrammatic perturbation expansion for $\mathbf{G}^-$, truncated after the PT order $n$. Once the (exact or approximate) ADC secular matrix is available, its diagonalization
\begin{equation}
\label{eq:ADCsecular}
(\mathbf{K}+\mathbf{C})\mathbf{X}=\mathbf{X}\mathbf{\Omega},\qquad \mathbf{X}^T\mathbf{X}=\mathbf{1}
\end{equation}
provides the physical information contained in the propagator $\mathbf{G}^-$. In particular, $\mathbf{\Omega}$ is the diagonal matrix of eigenvalues $\omega_n$ which correspond to the negative ionization energies $-I_n$. An important characteristic of the ADC approach is that the secular problem \eqref{eq:ADCsecular} is Hermitian. Furthermore, it can be shown that its PT expansion is regular, i.e., the energy denominators appearing in the expansion of the elements of $\mathbf{C}$ are larger than the energy gap between occupied and virtual HF orbitals. For a detailed and pedagogical account of ADC and related methods we refer the reader to the book of J.\ Schirmer\cite{SchirmerMBM2018}.
\subsection{Intermediate state representation}
\label{sec:ISR}
As an alternative to the diagrammatic derivation, ADC can be formulated in the so-called intermediate state representation (ISR) \cite{Schirmer1991pra,Mertins1996pra}. ISR-ADC provides a closed-form version of the ADC secular problem \eqref{eq:ADCsecular}, which is equivalent to the direct procedure but is in fact a wave-function method, which significantly extends its scope of applicability. In particular, the availability of an explicit representation of the $(N-1)$-electron states $|\Psi_n^{N-1}\rangle$ makes it possible to evaluate matrix elements of general operators\cite{Schirmer2004jcp} or to compute the coupling matrix elements driving the decay of metastable states. Furthermore, the ISR approach is more flexible in the construction of the perturbation expansion of the effective interaction matrix $\mathbf{C}$, which allows us to devise the desired ISR-ADC(2,2) scheme with a balanced representation of main and satellite ionization states.
The ISR-ADC approach is based on the observation that the non-diagonal ADC representation \eqref{eq:ADCansatz} of the electron propagator can be obtained from the general formula \eqref{eq:elprop} by using a complete basis of some $(N-1)$-electron intermediate states $\isr{k}$ (ISs) instead of the exact Hamiltonian eigenstates $|\Psi_n^{N-1}\rangle$. This suggests that the ADC secular matrix $\mathbf{M}=-(\mathbf{K}+\mathbf{C})$ can be interpreted as a representation of the shifted Hamiltonian $H-E_0$ in a basis of appropriate ISs. The particular set of ISs leading to a representation of the electron propagator equivalent to the direct ADC approach described above can be constructed explicitly without reference to diagrammatic PT \cite{Mertins1996pra,SchirmerMBM2018}.
The procedure starts by introducing the so-called \emph{correlated excited states} (CESs)
\begin{equation}
\label{eq:CES}
\CE{J} = C_J|\Psi_0\rangle,
\end{equation}
where $C_J$ are the physical excitation operators associated with the one-particle basis of HF orbitals,
\begin{equation}
\label{eq:CJ}
\{C_J\} = \left\{c_k; c^\dag_a c_k c_l,k<l;c_a^\dag c_b^\dag c_j c_k c_l,a<b,j<k<l;\dots\right\}.
\end{equation}
The indices $j,k,l,\dots$ and $a,b,\dots$ correspond to occupied and virtual HF orbitals, respectively. The CESs can therefore be classified into excitation classes as $1h$, $2h1p$, $3h2p$, and so on. In the following, $J$ corresponds to an individual configuration while $[J]=\mu$ denotes the whole $\mu h-(\mu-1)p$ class. Unlike the configurations
\begin{equation}
\label{eq:HFconf}
|\Phi_J\rangle = C_J|\Phi_0\rangle
\end{equation}
derived from the HF ground state $|\Phi_0\rangle$, which form the basis of the standard Configuration Interaction (CI) expansion, CESs are not orthonormal. \chng{The non-orthogonality stems from perturbation corrections to the ground state wave function. For instance, the second order correction brings about an admixture of $1h1p$ excitations into $|\Psi_0\rangle$\cite{SchirmerMBM2018}, which upon action of $c_k$ translates into $2h1p$ contributions to the $1h$ CESs. In turn, CESs from the $1h$ and $2h1p$ excitation classes are no longer orthogonal.}
It is the specific \emph{excitation class orthogonalization} (ECO) procedure which leads to the ISR-ADC representation. ECO proceeds iteratively as follows. Assuming ISs $\isr{K}$ belonging to excitation classes $[K]=1,\dots,\nu-1$ are available, \emph{precursor states} $|\Psi_J^\#\rangle$ belonging to the class $[J]=\nu$ are constructed through Gram-Schmidt orthogonalization of the CESs $|\Psi_J^0\rangle$ with respect to the ISs belonging to all lower excitation classes as
\begin{equation}
\label{eq:precursor}
|\Psi_J^\#\rangle = \CE{J}-\sum_{[K]<[J]}\isr{K}\langle\tilde{\Psi}_K\CE{J}.
\end{equation}
The ISs of the class $[J]=\nu$ are then obtained via symmetric orthogonalization of the precursor states within the excitation class,
\begin{equation}
\label{eq:ISRsymOG}
\isr{J} = \sum_{[I]=\nu}|\Psi^\#_I\rangle(\mathbf{S}_\nu^{-1/2})_{IJ}.
\end{equation}
Here, $\mathbf{S}_\nu$ is the overlap matrix
\begin{equation}
\label{eq:precursorS}
(\mathbf{S}_\nu)_{IJ} = \langle\Psi_I^\#|\Psi_J^\#\rangle
\end{equation}
of the precursor states belonging to the excitation class $[I]=[J]=\nu$. Note that the procedure can be initiated correctly starting from the lowest $[J]=1$ class, as the precursor states $|\Psi^\#_{[J]=1}\rangle$ are equal to the CESs $|\Psi_{[J]=1}^0\rangle$ and the Gram-Schmidt orthogonalization step \eqref{eq:precursor} does not apply.
Starting from the exact ground state $|\Psi_0\rangle$ and the complete manifold of excitation operators~\eqref{eq:CJ}, the ECO procedure leads to an exact representation of the shifted Hamiltonian (secular matrix),
\begin{equation}
\label{eq:MIJ}
M_{IJ} = \langle\tilde{\Psi}_I|H-E_0\isr{J}.
\end{equation}
A practical computation scheme is obtained by using a truncated PT expansion for the ground state (note the \emph{intermediate normalization} $\langle\Phi_0|\Psi_0\rangle=1$),
\begin{equation}
\label{eq:GSPTn}
|\Psi_0\rangle = |\Phi_0\rangle + |\Psi_0^{(1)}\rangle + \cdots +|\Psi_0^{(n)}\rangle + O(n+1).
\end{equation}
This in turn leads naturally to a PT expansion of the ISs,
\begin{equation}
\label{eq:ISpt}
\isr{J} = |\Phi_J\rangle+|\tilde{\Psi}_J^{(1)}\rangle+\cdots+|\tilde{\Psi}_J^{(n)}\rangle+O(n+1),
\end{equation}
and, together with the standard PT expansion for the ground state energy,
\begin{equation}
\label{eq:E0pt}
E_0 = E_0^{(0)}+E_0^{(1)}+\dots+E_0^{(n)}+O(n+1),
\end{equation}
to an expansion of the secular matrix elements
\begin{equation}
\label{eq:MIJpt}
M_{IJ} = M_{IJ}^{(0)}+M_{IJ}^{(1)}+\cdots+M_{IJ}^{(n)} + O(n+1).
\end{equation}
The corresponding PT expansion of the transition amplitudes $\mathbf{f}$ can be derived in the same manner but is not needed for our purposes. An $n$-th order ISR-ADC approximation equivalent to ADC($n$) derived by the direct procedure is obtained by truncating the expansion \eqref{eq:MIJpt} for each $IJ$ block at an appropriate order. A fundamental example is the second order ADC($2$) scheme. The explicit configuration space is spanned by $1h$ and $2h1p$ ISs and the secular matrix has the block structure\cite{SchirmerMBM2018}
\begin{eqnarray}
\label{eq:ADC2blocks}
M_{1h,1h} = && M_{1h,1h}^{(0)} + M_{1h,1h}^{(2)} \nonumber \\
M_{1h,2h1p} = && M_{1h,2h1p}^{(1)} \\
M_{2h1p,2h1p} = && M_{2h1p,2h1p}^{(0)}.\nonumber
\end{eqnarray}
Together with the corresponding approximation of the effective transition amplitudes $\mathbf{f}$, this scheme provides a complete second order representation of $\mathbf{G}^-$. However, the description of the correlated $1h$- and $2h1p$-like states is inconsistent, as will be shown in the following subsection.
\subsection{Canonical order relations}
The equivalence of the direct ADC and ISR-ADC formulations rests on two common features\cite{SchirmerMBM2018}. One is the separability of the secular matrix with respect to non-interacting subsystems, which leads to size-consistency of the approximation at any given order. The other are the \emph{canonical order relations} (COR) fulfilled by the secular matrix $\mathbf{M}$: the PT expansions \eqref{eq:MIJpt} of the off-diagonal ($[I]\neq[J]$) matrix elements do not begin at the zeroth order but rather follow the general rule\cite{Mertins1996pra,SchirmerMBM2018}
\begin{equation}
\label{eq:COR}
M_{IJ} \sim O(|[I]-[J]|).
\end{equation}
For instance, the lowest order contributions to the matrix elements $M_{1h,3h2p}$, which couple the $1h$ and $3h2p$ excitation classes, are of the second order, $M_{1h,3h2p}\sim O(2)$; see Table~12.1 of Ref.~\onlinecite{SchirmerMBM2018} for further details.
From the COR \eqref{eq:COR} it is possible to determine the PT order of the error of ionization energies computed using any given truncated ADC scheme. Let us assume that an eigenvector $X_I$ of the secular matrix can be classified as belonging to the excitation class $\nu$, that is, its expansion is dominated by class $\nu$ ISs. The corresponding eigenvalue, $\omega_I$, is then linear in the matrix elements $M_{IJ}$ belonging to the respective diagonal block of the secular matrix ($[I]=[J]=\nu$) and quadratic in the matrix elements $M_{IK}$ belonging to the off-diagonal blocks directly coupling classes $[I]=\nu$ and $[K]\ne\nu$. The PT order of the error of the eigenvalue is then given by the lowest order correction missing in the secular matrix. As an example, consider the ADC(2) scheme \eqref{eq:ADC2blocks}. For the $1h$ class, the lowest missing correction is the third-order $M_{1h,1h}^{(3)}$; therefore, the corresponding eigenenergies are correct through the second order of PT.
As noted above, the coupling to the absent $3h2p$ excitation class is of the second order and would contribute to the $1h$ state energies only by a fourth-order correction, together with the neglected second-order $M_{1h,2h1p}^{(2)}$ coupling to the $2h1p$ class. In contrast, energies of the $2h1p$ states are complete only through the zeroth order. Adding the first order elements $M_{2h1p,2h1p}^{(1)}$ to the $2h1p/2h1p$ block leads to the so-called extended second-order [ADC(2)x] scheme and improves the $2h1p$-state energies by one order of PT.
The COR lie behind the \emph{compactness} property\cite{Mertins1996pra} of the ADC secular matrix. It is best demonstrated in comparison with the CI method, in which the $|\Psi_n^{N-1}\rangle$ states are expanded in terms of the HF configurations \eqref{eq:HFconf}. In the resulting matrix representation of the $(N-1)$-electron Hamiltonian, $\mathbf{H}$, each excitation class $\mu$ is coupled with up to four adjacent classes, $\mu\pm 1$ and $\mu\pm 2$, by first order matrix elements. In particular, $H_{1h,3h2p}\sim O(1)$ and the $3h2p$ excitation class has to be included explicitly in the configuration space in order to recover all second order contributions to the $1h$-state ionization energies. At the ADC(2) level, on the other hand, the first order coupling between the $1h$ and $3h2p$ classes is taken into account implicitly through the second order $M_{1h,1h}^{(2)}$ matrix elements, which leads to a significant reduction of the computational demands.
\chng{ The ISR concept and the related compactness property are not unique to the ADC methodology. Within the coupled-cluster (CC) framework, the treatment of excited or ionized states is based on CESs derived from the CC ground state parametrization. It leads to the so-called biorthogonal coupled-cluster (BCC) representation, in which the Hamiltonian is given by a non-hermitian secular matrix. Due to the different quality of the two underlying (left and right) biorthogonal sets of states, the BCC secular matrix combines COR and CI-type order structure. The errors of excitation or ionization energies computed using BCC schemes thus lie between those of the CI and ADC expansions truncated after the same excitation class. A detailed analysis of the relation between ISR-ADC and BCC can be found in the literature.\cite{Mertins1996pra,Schirmer2009tca} }
\subsection{ISR-ADC(2,2) approximation scheme}
\label{sec:ADC22}
From the above discussion of the COR, it is now straightforward to design an ISR-ADC(2,2) scheme in which both the $1h$- and $2h1p$-state energies will be determined consistently through the second order of PT. Starting from the ISR-ADC(2) of Eq.\ \eqref{eq:ADC2blocks}, it follows that the ISR-ADC(2,2) secular matrix has to include also the first- and second-order matrix elements $M_{2h1p,2h1p}^{(1)}$, $M_{2h1p,2h1p}^{(2)}$ in the diagonal $2h1p/2h1p$ block. Furthermore, the first-order $M_{2h1p,3h2p}^{(1)}$ matrix elements directly coupling the $2h1p$ and $3h2p$ ISs are needed, as they contribute to the $2h1p$-state energies by a second-order correction. Therefore, the $3h2p$ excitation class has to be included in the explicit configuration space. The corresponding diagonal block of the secular matrix must contain at least the zeroth-order contribution $M_{3h2p,3h2p}^{(0)}$, which again contributes to the $2h1p$ energies at second order. This structure of the secular matrix constitutes the minimal scheme fulfilling the requirements.
However, to extend the applicability of the method, it is desirable to improve the description of the $3h2p$ states to at least first order of PT. This is achieved by including the first order matrix elements in the $3h2p/3h2p$ block. These terms contribute to the $2h1p$ state energies at third order through corrections of the type
\begin{equation}
\label{eq:3h2p_to_2h1p}
\omega_{2h1p}\propto (M^{(1)}_{2h1p,3h2p})^2 M^{(1)}_{3h2p,3h2p}.
\end{equation}
The corresponding correction to the $1h$ state energies is only of fifth order,
\begin{equation}
\label{eq:3h2p_to_1h}
\omega_{1h}\propto (M^{(1)}_{1h,2h1p})^2 (M^{(1)}_{2h1p,3h2p})^2 M^{(1)}_{3h2p,3h2p}.
\end{equation}
This extension significantly improves the consistency between the $1h$- and $2h1p$-state energies since the correction \eqref{eq:3h2p_to_2h1p} compensates the respective third order contribution of $M^{(1)}_{2h1p,2h1p}$ to the $1h$ state energies,
\begin{equation}
\label{eq:2h1p_to_1h}
\omega_{1h}\propto (M^{(1)}_{1h,2h1p})^2 M^{(1)}_{2h1p,2h1p}.
\end{equation}
Furthermore, we will show in Sec.\ \ref{sec:Auger} that inclusion of the second order contributions $M_{1h,2h1p}^{(2)}$ to the $1h/2h1p$ coupling matrix elements is necessary to obtain accurate Auger decay widths. Concerning the eigenenergies, these terms contribute to both the $1h$- and $2h1p$-state energies at third order,
\begin{equation}
\label{eq:1h2h1p2_to_1h,2h1p}
\omega_{1h},\omega_{2h1p} \propto M^{(1)}_{1h,2h1p} M^{(2)}_{1h,2h1p}.
\end{equation}
However, as both the $1h$ and $2h1p$ ISs have to be expanded through second order to arrive at the ISR-ADC(2,2) approximation, it appears vital to account for the direct coupling at the same level. Throughout the rest of the paper, we will drop the ISR prefix and denote the minimal scheme as ADC(2,2)$_m${}, the scheme extended by the first-order $3h2p/3h2p$ matrix elements as ADC(2,2)$_x${}, and the full scheme including also the second-order $1h/2h1p$ couplings as ADC(2,2)$_f${}. For clarity, the block structure of the different variants of the proposed ADC(2,2) scheme, together with the commonly used ADC($n$) schemes from the standard ADC hierarchy, is summarized in Tab.\ \ref{tab:ADC22secular}.
\chng{ The computational cost of ADC(2,2)$_f${} and ADC(2,2)$_x${} scales with the system size as $n_{occ}^3n_{virt}^4$, $n_{occ}$ and $n_{virt}$ being the numbers of occupied and virtual molecular orbitals, respectively. This scaling is determined by the number of nonzero first-order $3h2p/3h2p$ matrix elements and thus applies both to the matrix construction and the matrix-vector multiplication. The latter determines the cost of iterative diagonalization methods. ADC(2,2)$_m${} scales more favorably, as $n_{occ}^4 n_{virt}^3$ for matrix construction and as $n_{occ}^4 n_{virt}^2$ for matrix-vector multiplication. For comparison, the scaling of the commonly used ADC(2)x is $n_{occ}^3 n_{virt}^2$.
}
\begin{table}[htb]
\renewcommand{\arraystretch}{1.6}
\begin{tabular}{c|C{4em}|C{4em}|C{4em}|}
\mc{}& \mc{$1h$} & \mc{$2h1p$} & \mc{$3h2p$} \\
\cline{2-4}
$1h$ & $M_{11}^{(\mu)}$ & $M_{12}^{(\nu)}$ & -- \\
\cline{2-4}
$2h1p$ & $M_{21}^{(\nu)}$ & $M_{22}^{(\xi)}$ & $M_{23}^{(\sigma)}$ \\
\cline{2-4}
$3h2p$ & -- & $M_{32}^{(\sigma)}$ & $M_{33}^{(\chi)}$ \\
\cline{2-4}
\end{tabular}\\[3ex]
\renewcommand{\arraystretch}{1.0}
\begin{tabular}{|c||C{3em}|C{3em}|C{3em}|C{3em}|C{3em}|}
\hline
scheme & $M_{11}^{(\mu)}$ & $M_{12}^{(\nu)}$ & $M_{22}^{(\xi)}$ & $M_{23}^{(\sigma)}$ & $M_{33}^{(\chi)}$ \\
\hline \hline
ADC(2) & 0,2 & 1 & 0 & - & - \\
\hline
ADC(2)x & 0,2 & 1 & 0,1 & - & - \\
\hline
ADC(3) & 0,2,3 & 1,2 & 0,1 & - & - \\
\hline
ADC(2,2)$_m$ & 0,2 & 1 & 0,1,2 & 1 & 0 \\
\hline
ADC(2,2)$_x$ & 0,2 & 1 & 0,1,2 & 1 & 0,1 \\
\hline
ADC(2,2)$_f$ & 0,2 & 1,2 & 0,1,2 & 1 & 0,1\\
\hline
\end{tabular}
\caption{\label{tab:ADC22secular}Block structure (perturbation theory orders) of the secular matrix $\mathbf{M}$ in the standard ADC($n$) schemes and in the three proposed variants of ADC(2,2).}
\end{table}
So far, we have only considered the PT expansion of the eigenvalues of the ADC(2,2) secular matrix. Before concluding this section, a comment on the resulting representation of the $(N-1)$-electron wave functions $|\Psi^{N-1}_n\rangle$ is in order. In the ISR-ADC scheme, these are given in terms of ISs as the eigenvectors of the secular matrix, i.e., by the matrix $\mathbf{X}$ in Eq.\ \eqref{eq:ADCsecular}. Assuming $|\Psi^{N-1}_n\rangle$ can be classified as belonging to the excitation class $[n]$, the COR structure \eqref{eq:COR} of the secular matrix $\mathbf{M}$ implies order relations for the eigenvectors in the form\cite{SchirmerMBM2018}
\begin{equation}
\label{eq:CORX}
X_{Jn}=\langle\tilde{\Psi}_J|\Psi^{N-1}_n\rangle\sim O(|[J]-[n]|).
\end{equation}
It follows that in the ADC(2,2)$_f${} scheme given in Tab.\ \ref{tab:ADC22secular}, the $|\Psi^{N-1}_n\rangle$ belonging to both the $1h$ and $2h1p$ excitation classes are represented fully only through the first order of PT. For the $1h$ states, the error is determined by the neglected second order direct coupling $M_{1h,3h2p}^{(2)}$ to the $3h2p$ class. For the $2h1p$ states, the $M_{2h1p,3h2p}^{(2)}$ and $M_{2h1p,4h3p}^{(2)}$ matrix elements are necessary to account for all second order contributions to the respective wave functions. Inclusion of the $M_{1h,3h2p}^{(2)}$ matrix elements would in principle be possible. However, the explicit appearance of the $4h3p$ excitation class would result in an impractical method, computationally intractable even for the smallest atomic systems.
\section{Fano-ADC(2,2) method}
\label{sec:FanoADC22}
In this section, we describe the Fano-ADC methodology for the computation of non-radiative decay rates and point out the necessary modifications of the previous implementations related to the use of the ADC(2,2) scheme. We start by reviewing the Fano theory of resonances, followed by a description of how the theory can be combined with the ISR-ADC representation of the many-electron wave functions employing an $\mathcal{L}^2$ one-electron basis set.
\subsection{Fano theory of resonances}
\label{sec:Fano}
In the theory of resonances developed by Fano\cite{Fano1961pr} and formulated in a convenient pro\-jec\-ti\-on-operator formalism by Feshbach\cite{Feshbach1964rmp}, the solution $|\Psi_{E,\alpha}\rangle$ of the time-independent Schr\"odinger equation (TISE)
\begin{equation}
\label{eq:TISE}
H|\Psi_{E,\alpha}\rangle = E|\Psi_{E,\alpha}\rangle
\end{equation}
at some (real) energy $E$ near a resonance is represented as a superposition of a bound-state-like $\mathcal{L}^2$ discrete state $|\Phi\rangle$ and background continuum components $|\chi_{\beta,\epsilon}\rangle$,
\begin{equation}
\label{eq:Fano_ansatz}
|\Psi_{E,\alpha}\rangle = a_\alpha(E)|\Phi\rangle+\sum_{\beta=1}^{N_c}\int C_{\beta,\alpha}(E,\epsilon)|\chi_{\beta,\epsilon}\rangle d\epsilon.
\end{equation}
Here, $H$ is the full electronic Hamiltonian, $N_c$ is the number of available decay channels, the index $\alpha=1,\dots,N_c$ numbers the independent solutions and $\epsilon$ is the energy of the emitted particle. The decomposition \eqref{eq:Fano_ansatz} corresponds to a partitioning of the Hilbert space into the continuum subspace $\mathcal{P}$ and the subspace $\mathcal{Q}$ which contains the bound-like discrete state. It can be realized through introduction of the corresponding projection operators,
\begin{equation}
\label{QPprojectors}
Q= |\Phi\rangle\langle\Phi|\quad \text{and}\quad P=\sum_{\beta=1}^{N_c}\int|\chi_{\beta,\epsilon}\rangle\langle\chi_{\beta,\epsilon}| d\epsilon.
\end{equation}
Typically, the two projectors are constructed as complementary, $P=\mathds{1} -Q$. However, an important feature of the Fano theory is that orthogonality between the two projectors is not strictly required. In some applications, particularly in connection with quantum-chemical calculations carried out in an $\mathcal{L}^2$ basis set, non-orthogonal bound-like and continuum-like subspaces arise naturally \cite{Yun2018jcp}.
When applied to a decay process, the discrete state $|\Phi\rangle$ and the background continuum states $|\chi_{\beta,\epsilon}\rangle$ can be associated with the initial and final states, respectively. The discrete state is characterized by its mean energy
\begin{equation}
\label{eq:Ed}
E_\Phi=\langle\Phi|H|\Phi\rangle
\end{equation}
and the background continuum functions $|\chi_{\beta,\epsilon}\rangle$ are assumed to diagonalize the Hamiltonian to a good approximation,
\begin{equation}
\label{eq:bg_continuum}
\langle\chi_{\beta',\epsilon'}|H-E|\chi_{\beta,\epsilon}\rangle \approx (E_\beta+\epsilon-E)\delta_{\beta',\beta}\delta(E_{\beta'}+\epsilon'-E_\beta-\epsilon).
\end{equation}
Here, $E_\beta$ is the energy of the resulting ionic state, which has one electron fewer than $|\Phi\rangle$, and the $N_c$ open decay channels in Eq.\ \eqref{eq:Fano_ansatz} then correspond to the energetically accessible states with energies $E_\beta<E_\Phi$. Defining the width function\cite{Howat1978jpb} as a sum of partial widths,
\begin{equation}
\label{eq:GammaE}
\Gamma(E) = \sum_{\beta=1}^{N_c}\Gamma_\beta(E)=2\pi\sum_{\beta=1}^{N_c}\left|\langle\Phi|H-E|\chi_{\beta,E-E_\beta}\rangle\right|^2,
\end{equation}
and solving the TISE \eqref{eq:TISE}, the coefficient $a_\alpha(E)$ can be expressed as a generalized Lorentzian,
\begin{equation}
\label{eq:aalpha}
|a_\alpha(E)|^2=\frac{1}{2\pi}\frac{\Gamma_\alpha(E)}{(E-E_r)^2+\Gamma(E)^2/4},
\end{equation}
with the energy
\begin{equation}
\label{eq:Eres}
E_r = E_\Phi+\Delta(E) = E_\Phi+\sum_{\beta=1}^{N_c}\!
\mathscr{P}\!\!\int\frac{|\langle\Phi|H-E|\chi_{\beta,\epsilon}\rangle|^2}{E-E_\beta-\epsilon}d\epsilon
\end{equation}
defining the position of the resonance and $\Gamma=\Gamma(E_r)$ its width ($\mathscr{P}$ stands for principal value integration). In practical applications, in particular those involving an $\mathcal{L}^2$ basis, only a discretized approximation of the width function $\Gamma(E)$ is available and the evaluation of the level shift function $\Delta(E)$ is not feasible. Therefore, the resonance energy and width are usually approximated as $E_r\approx E_\Phi$ and $\Gamma\approx\Gamma(E_\Phi)$.
In the case of strong channel mixing, Eq.\ \eqref{eq:bg_continuum} is not satisfied for the continuum states $|\chi_{\beta,\epsilon}\rangle$ associated with the decay channels. In such a case, a unitary transformation (prediagonalization) of the background continuum is necessary,
\begin{equation}
\label{eq:chi_prediag}
|\chi^-_{\lambda,\epsilon}\rangle = \sum_{\beta=1}^{N_c}\int D_{\lambda,\beta}(\epsilon,\epsilon')|\chi_{\beta,\epsilon'}\rangle d\epsilon'.
\end{equation}
The rest of the procedure is completely analogous, including the formula for the total width; only the expression for the partial widths associated with the original channels $\beta$ becomes more involved \cite{Howat1978jpb}.
\subsection{Fano theory in the framework of ISR-ADC}
\label{sec:FanoISR}
Since we are interested in the decay of single inner-shell vacancy states, the wave function $|\Psi_{E,\alpha}\rangle$ \eqref{eq:Fano_ansatz} is an $(N-1)$-electron wave function. In this section we describe how the bound-like discrete component $|\Phi\rangle$ and the continuum components $|\chi_{\beta,\epsilon}\rangle$ can be approximated in the framework of an ISR-ADC scheme implemented using an $\mathcal{L}^2$ basis. To this end, we need to divide the configuration space spanned by the ISs into the continuum subspace $\mathcal{P}$, containing the final states of the decay, and the subspace $\mathcal{Q}$, containing the bound states and the bound-like discrete components associated with the metastable states.
In a rigorous theory, the fundamental difference between the two subspaces is that the $\mathcal{P}$ subspace contains states with at least one electron in the continuum while $\mathcal{Q}$ contains strictly $\mathcal{L}^2$ wave functions. In any method employing $\mathcal{L}^2$ one-particle basis sets, however, this distinction is lost as the continuum is discretized and approximated by $\mathcal{L}^2$ wave functions. Therefore, other criteria for the classification of ISs have to be devised. The essential requirement is that, within the $\mathcal{Q}$ subspace itself, the discrete component is bound and can only decay through coupling to the background continuum of the $\mathcal{P}$ subspace. Therefore, any representation of $|\Phi\rangle$ must not contain contributions corresponding to the possible final states of the decay.
Within the ADC(2,2) configuration space, the ISs corresponding to the final states of the decay of singly ionized states are to be found in the $2h1p$ and $3h2p$ excitation classes, as the continuum electron has to be described by a virtual orbital. To identify the specific ISs belonging to the $\mathcal{P}$ subspace, various schemes applicable in different situations are possible. The simplest approach, based on the lowest-order estimates for the energies $E_\beta$ of the $2h$ ($[\beta]=2$) and $3h$ ($[\beta]=3$) decay channels, is described in the original work of Averbukh and Cederbaum\cite{Averbukh2005jcp}.
Due to its rather limited applicability, this approach was not implemented in the present work. Instead, we employ two different schemes, one based on hole localization\cite{Averbukh2005jcp} and the other on the adaptation (prediagonalization) of ISs\cite{Kolorenc2015jcp}.
\begin{enumerate}[(A)]
\item \emph{Selection scheme based on hole localization}\\
In many cases, $2h1p$ and $3h2p$ ISs can be straightforwardly classified as corresponding to open or closed decay channels based on either the core/valence character or the spatial localization of the vacancies defining the $2h$ and $3h$ configurations, respectively. In the case of AD or DAD of a core vacancy, the energetically accessible final states are typically characterized by all $2h$ or $3h$ configurations that do not contain the initial or deeper-lying vacancies. Consider, for example, the decay of the Ne($1s$) vacancy. Any dicationic state containing only valence ($2s$ and $2p$) vacancies is a possible final state of AD. Similarly, all valence $\text{Ne}^{3+}$ states are accessible through DAD. Therefore, the $\mathcal{Q}$ subspace can be defined by the ISs characterized by at least one $1s$ hole, while all other ISs are included in the $\mathcal{P}$ subspace. Further examples are given in the Supplementary material (SM).
A similar strategy can be adopted when studying interatomic decay processes in heteronuclear clusters, where the MOs are spatially localized on specific atoms, such as
\begin{equation}
(A^+)^*B\rightarrow\left\{\begin{array}{ll} A^+ + B^+ + e^- & \text{ICD} \\ A + B^{2+}+e^- & \text{ETMD}\cite{Zobeley2001jcp} \end{array}\right.
\end{equation}
Here, the final states are distinguished by at least one vacancy being localized on the initially neutral cluster constituent $B$. Therefore, the $\mathcal{Q}$ subspace is spanned by the $2h1p$ and $3h2p$ ISs characterized by all holes being localized on the initially ionized subunit $A$. In this way, the intra-atomic relaxation and correlation effects inside the subunit $A$ are taken into account in the discrete state, whereas any kind of interatomic decay can be described only through coupling to the complementary $\mathcal{P}$ subspace.
\item \emph{Selection scheme based on adapted ISs}\\
The above scheme relies on the fact that there is a direct one-to-one correspondence between the $2h1p$- and $3h2p$-like ISs and the open or closed channels of the decay process. In the most general situation, this is not the case. As an example, consider ICD in the neon dimer,
\begin{equation}
\text{Ne}_2^+(2\sigma_{g/u}^{-1})\rightarrow \text{Ne}^+ + \text{Ne}^+ + e^-.
\end{equation}
The final states of the decay process are of two-site character, with each hole localized on a different atom. However, the Ne$_2$ MOs are delocalized over both atoms due to inversion symmetry. In turn, $2h$ configurations derived from those MOs, such as $3\sigma_g^{-1}3\sigma_g^{-1}$, are neither two- nor one-site (both holes localized on the same atom) and the corresponding $2h1p$ ISs cannot be directly associated with either the $\mathcal{P}$ or the $\mathcal{Q}$ subspace. The issue can be resolved by a localization procedure described in Ref.~\onlinecite{Averbukh2006jcp} and generalized in Ref.~\onlinecite{Kolorenc2015jcp} as follows.
To restore the localized (one- or two-site) character of the $2h$ configurations, correlated $2h$ wave functions can be constructed through diagonalization of the lowest-order Hamiltonian matrix corresponding to the doubly-ionized system,
\begin{equation}
\label{eq:2hHam}
H_{ij,i',j'} = \langle\Phi_0|c_{j'}^\dag c_{i'}^\dag H c_i c_j|\Phi_0\rangle.
\end{equation}
The same procedure can be applied directly to ISs. First, the $2h1p$ ISs are divided into subsets characterized by the particle orbital $p$. When the corresponding small blocks of the ADC Hamiltonian $\mathbf{M}$ are diagonalized, the spectrum reflects that of the matrix \eqref{eq:2hHam}, with the eigenvalues being essentially only uniformly shifted by the presence of an extra electron in the virtual orbital $p$. The corresponding eigenstates represent \emph{adapted} ISs, which can be directly associated with the correlated $2h$ wave functions and, in turn, with open or closed decay channels. The adapted ISs can therefore be readily sorted into $\mathcal{P}$ and $\mathcal{Q}$, respectively. An identical procedure can be applied also to the $3h2p$ space -- adapted ISs associated with correlated $3h$ wave functions are obtained by diagonalization of the ADC Hamiltonian sub-blocks corresponding to $3h2p$ ISs characterized by the pair of virtual orbitals $p$, $p'$.
\end{enumerate}
The classification scheme B is in principle completely general. The only input required for the calculations is the number of di- and tri-cationic states accessible in the decay process. For each virtual orbital $p$, the corresponding number of lowest-lying adapted $2h1p$ ISs is included in the $\mathcal{P}$ subspace, and similarly for the $3h2p$ ISs. However, special care is needed if the energy gap dividing open and closed channels is very narrow. The extra electron in the virtual orbital can then lead to strong mixing of open and closed channels in the adapted $2h1p$ ISs, breaking the strictly bound character of the $\mathcal{Q}$ subspace. In such a case, it might be necessary to further restrict this subspace by excluding the affected ISs. It is also advisable to associate the adapted ISs with the decay channels by comparing their $2h$ components with the eigenvectors of \eqref{eq:2hHam} rather than relying solely on the energy ordering, as the extra electron can swap closely-lying levels. The same holds also for the $3h2p$ class of adapted ISs. The principal advantage of scheme A is, on the other hand, its simplicity. In general, it is advisable to compare both schemes whenever applicable in order to verify the reliability of the results.
Once the $\mathcal{Q}$ and $\mathcal{P}$ subspaces are defined within the full configuration space, the initial state of the decay process is represented by a discrete state $|\Phi\rangle$ selected among the eigenstates of the Hamiltonian matrix $\mathbf{QMQ}$ projected onto the $\mathcal{Q}$ subspace. The selection criterion is typically the leading $1h$ configuration. The decay continuum spanned by the components $|\chi_{\beta,\epsilon}\rangle$ is approximated by the eigenstates $|\chi_i\rangle$ corresponding to the discrete eigenvalues $\epsilon_i$ of the $\mathbf{PMP}$ Hamiltonian projected onto the $\mathcal{P}$ subspace.
\chng{ The computational cost of the Fano-ADC(2,2) method with selection scheme A scales formally as that of the standard ADC(2,2) calculations, i.e., $n_{occ}^3n_{virt}^4$ for ADC(2,2)$_f${}.
Separation into the $\mathcal{Q}$ and $\mathcal{P}$ subspaces does not involve any transformation of the IS basis and thus merely reduces the number of occupied orbitals active in each subspace. Diagonalization of the two projected matrices is typically cheaper than diagonalization of the full matrix, but it should be noted that Fano-ADC calculations often require considerably larger basis sets than standard calculations of IPs. Selection scheme B is more expensive, as the subspaces are defined through a basis set transformation. It is not advisable to evaluate the projected $\mathbf{QMQ}$ and $\mathbf{PMP}$ matrices explicitly, as they do not inherit the sparse character of $\mathbf{M}$ and the two projectors; rather, the three matrices should be applied sequentially in each matrix $\times$ vector operation. The whole procedure thus consists of one full $n_{occ}^3n_{virt}^4$ multiplication and two projections of $n_{occ}^3 n_{virt}^2 n_{3h}$ complexity, where $n_{3h}$ is the number of closed or open triply ionized decay channels for the $\mathcal{Q}$ and $\mathcal{P}$ subspace, respectively. $n_{3h}$ scales approximately as the cube of the number of core or valence orbitals, but in a typical calculation the inequality $n_{virt}^2\gg n_{3h}$ holds and the relative cost of the projections is negligible. It should also be noted that even though a full matrix $\times$ vector operation is performed, the projected matrices being diagonalized are smaller, resulting in a lower number of iterations required to reach the desired accuracy. } \chng{ \subsection{Stieltjes imaging} } The discrete character of the $\mathbf{PMP}$ spectrum prevents the straightforward use of the eigenfunctions $|\chi_i\rangle$ as an approximation of the background continuum functions in the decay width formula \eqref{eq:GammaE}. First, these wave functions do not satisfy the correct scattering boundary conditions and are normalized to unity rather than energy, \begin{equation} \langle\chi_i|\chi_j\rangle = \delta_{ij}. \end{equation} Second, the discretized spectrum has to be interpolated in order to evaluate the resonance width $\Gamma=\Gamma(E_\Phi)$ at the desired energy, as the condition $\epsilon_i=E_\Phi$ is in general not fulfilled for any of the discrete levels, except by coincidence. Both issues can be efficiently resolved using the so-called Stieltjes imaging \cite{Langhoff1979empmc,Hazi1979empmc} technique. The approach relies on the fact that while the wave functions $|\chi_i\rangle$ cannot be used to evaluate the decay width function \eqref{eq:GammaE} directly, they provide good approximations of its spectral moments, \begin{equation} \label{eq:GammaSk} S_k = \int E^k \Gamma(E) dE = 2\pi\sum_{\beta=1}^{N_c}\int E^k |\langle\Phi|H-E|\chi_{\beta,E-E_\beta}\rangle|^2 dE \approx 2\pi\sum_i (\epsilon_i)^k |\langle\Phi|H-E|\chi_i\rangle|^2. \end{equation} Here, we have used the assumption that, within the region defined by the spatial extent of the discrete state $|\Phi\rangle$, the solutions $|\chi_i\rangle$ can replace the basis formed by the exact background continuum wave functions, \begin{equation} \label{eq:discretized_unity} \sum_{\beta=1}^{N_c}\int |\chi_{\beta,\epsilon_\beta}\rangle\langle\chi_{\beta,\epsilon_\beta}| \approx \sum_i |\chi_i\rangle\langle\chi_i|. \end{equation} Using the lowest $2n_S$ spectral moments \eqref{eq:GammaSk} ($k=0,\dots,2n_S-1$), an approximation of order $n_S$ of the decay width function can be recovered using moment theory.
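A minimal numerical sketch of this moment-theory step is given below (Python, for illustration only). It assumes that the arrays \texttt{eps} and \texttt{gamma} hold the pseudo-spectrum $\epsilon_i$ and the pointwise strengths $\gamma_i=2\pi|\langle\Phi|H-E|\chi_i\rangle|^2$, and it omits the negative-moment stabilization discussed below; it is not the production implementation of Ref.~\onlinecite{Muller1989pra}.
\begin{verbatim}
import numpy as np

def stieltjes_width(eps, gamma, e_res, n_s):
    """Order-n_s Stieltjes imaging of a discrete pseudo-spectrum (a.u.)."""
    # Three-term recurrence coefficients of the polynomials orthogonal
    # with respect to the discrete measure {eps_i, gamma_i}
    # (Stieltjes procedure; numerically safer than using raw moments).
    a = np.zeros(n_s)
    b = np.zeros(n_s - 1)
    p_prev = np.zeros_like(eps)
    p_cur = np.ones_like(eps) / np.sqrt(gamma.sum())
    for k in range(n_s):
        a[k] = np.sum(gamma * eps * p_cur**2)
        if k == n_s - 1:
            break
        q = (eps - a[k]) * p_cur - (b[k - 1] * p_prev if k > 0 else 0.0)
        b[k] = np.sqrt(np.sum(gamma * q**2))
        p_prev, p_cur = p_cur, q / b[k]
    # n_s-point generalized Gauss quadrature from the Jacobi matrix:
    # nodes = eigenvalues, weights = m_0 * (first eigenvector component)^2.
    jac = np.diag(a) + np.diag(b, 1) + np.diag(b, -1)
    nodes, vecs = np.linalg.eigh(jac)
    weights = gamma.sum() * vecs[0, :]**2
    # Gamma(E) as the histogram derivative of the cumulative strength,
    # interpolated at the resonance energy E_Phi.
    mids = 0.5 * (nodes[1:] + nodes[:-1])
    dens = 0.5 * (weights[1:] + weights[:-1]) / np.diff(nodes)
    return np.interp(e_res, mids, dens)
\end{verbatim}
In practice, such a routine would be called for a sequence of increasing orders $n_S$ and the results inspected for a plateau, mirroring the convergence control described next.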
At each order, the decay width is obtained in terms of an $n_S$-point integration quadrature with an \emph{a priori} unknown weight function $\Gamma(E)$. An efficient implementation of the Stieltjes imaging procedure is described in Ref.~\onlinecite{Muller1989pra}. In this approach, negative moments $S_{-k}$ are used to improve numerical stability. Convergence of the calculations can be controlled by performing a series of approximations of increasing order $n_S$. It follows from Eq.\ \eqref{eq:discretized_unity} that it is not possible to formulate a rigorous procedure for the calculation of the partial decay widths $\Gamma_\beta(E)$. This problem is common to all $\mathcal{L}^2$ methods, as the decay channels are defined only asymptotically with respect to the position of the outgoing particle and, therefore, true continuum functions are needed. However, partial decay widths can still be estimated by constructing approximate channel projectors $P_\beta$ in terms of the $\mathcal{L}^2$ ISs and repeating the Stieltjes imaging procedure with the projected functions $P_\beta|\chi_i\rangle$. The method is detailed in Refs.~\onlinecite{Cacelli1986mp,Averbukh2005jcp}. In the present work, we only use this approach to estimate the three-electron collective Auger decay branching ratio in Kr in Sec.\ \ref{sec:KrCAD}. In this particular case, the $\mathcal{P}$ subspace is spanned by $2h1p$ ISs only, and the definition of the projectors corresponding to the two- and three-electron decay pathways is straightforward. A general discussion of partial widths in the framework of the Fano-ADC(2,2) method will be the subject of a future publication. \section{Results and discussion} \label{sec:Results} \subsection{Ionization energies} \label{sec:IPs} Before applying the Fano-ADC(2,2) method to calculations of intra- and inter-atomic electronic decay widths, we demonstrate the accuracy of the ADC(2,2) approximation scheme for ionization energies. Since the main goal of this work is to develop a method which treats the main $1h$ and satellite $2h1p$ states consistently, we are interested particularly in the relative energy positions of the two classes of states. In this section, we present results of benchmark calculations of atomic and molecular vertical ionization potentials (IPs), which can be directly compared to available experimental and theoretical data. In Tabs.\ \ref{tab:BeIP} and \ref{tab:NeIP}, energies of main and satellite ionization states of the Be and Ne atoms, computed using different variants of the ADC(2,2) scheme, are compared with the standard CI-SD and ADC(2)x methods. Values from the NIST database \cite{NIST_ASD} are used as the reference.
\begin{table}[ht] \begin{tabular}{l|c|c|c|ccc} configuration & NIST & CI-SD & ADC(2)x & ADC(2,2)$_f$ & ADC(2,2)$_x$ & ADC(2,2)$_m$ \\ \hline \multicolumn{7}{c}{frozen $1s$ core} \\ \hline $1s^2 2s^2$ GS & & -397.81 & \multicolumn{4}{c}{-397.37 (MBPT2)} \\ \hline $1s^2 2s$ & \textbf{9.32} & 9.30$_{-0.02}$ & 8.83$_{-0.49}$ & 8.92$_{-0.40}$ & 8.86$_{-0.46}$ & 8.86$_{-0.46}$ \\ $1s^2 2p$ & \textbf{13.28} & 13.29$_{+0.01}$ & 12.03$_{-1.25}$ & 12.85$_{-0.43}$ & 12.85$_{-0.43}$ & 12.85$_{-0.43}$ \\ $1s^2 3d$ & \textbf{21.48} & 21.38$_{-0.10}$ & 20.12$_{-1.36}$ & 20.94$_{-0.54}$ & 20.94$_{-0.54}$ & 20.94$_{-0.54}$ \\ $1s^2 4s$ & \textbf{23.64} & 23.54$_{-0.10}$ & 22.28$_{-1.35}$ & 23.10$_{-0.54}$ & 23.10$_{-0.54}$ & 23.10$_{-0.54}$ \\ \hline \multicolumn{7}{c}{$1s$, $2s$ active} \\ \hline $1s^2 2s^2$ GS & & -398.51 & \multicolumn{4}{c}{-398.08 (MBPT2)} \\ \hline $1s^2 2s$ & \textbf{9.32} & 9.26$_{-0.06}$ & 8.87$_{-0.46}$ & 8.95$_{-0.37}$ & 8.90$_{-0.43}$ & 8.90$_{-0.43}$ \\ $1s^2 2p$ & \textbf{13.28} & 13.84$_{+0.55}$ & 12.01$_{-1.27}$ & 12.82$_{-0.46}$ & 12.82$_{-0.46}$ & 12.83$_{-0.45}$ \\ $1s^2 3d$ & \textbf{21.48} & 22.03$_{+0.55}$ & 20.12$_{-1.36}$ & 21.01$_{-0.47}$ & 21.01$_{-0.47}$ & 21.01$_{-0.47}$ \\ $1s^2 4s$ & \textbf{23.64} & 24.19$_{+0.55}$ & 22.28$_{-1.36}$ & 23.16$_{-0.48}$ & 23.17$_{-0.47}$ & 23.17$_{-0.47}$ \\ \hline \end{tabular} \caption{\label{tab:BeIP}Be ionization potentials (in eV). Lower indices at the computed energies indicate differences from the reference NIST values. In the upper part of the table, results obtained with the $1s$ core orbital frozen in all post-HF methods are given, while for the lower part, both the $1s$ and $2s$ orbitals were kept active. Neutral ground state energies were computed using the corresponding CI-SD and MBPT2 methods with the same sets of active orbitals.} \end{table} For Tab.\ \ref{tab:BeIP}, IPs of Be were computed using the aug-cc-pV5Z basis set further augmented by $4s4p4d$ Rydberg-like Gaussian functions\cite{Kaufmann1989jpb}. The upper half shows results obtained with the $1s$ orbital frozen in the post-HF methods (i.e., only $2s$ vacancies were allowed in the configuration spaces). The data show a significant improvement of the $2h1p$ satellite state IPs obtained using ADC(2,2) in comparison to the ADC(2)x method. The IP of the $1s^22s^1$ main state remains basically unchanged. As a result, the whole spectrum is consistently shifted by about 0.5\,eV towards lower energies relative to the experimental benchmark, regardless of the character of the state. In this particular example, all variants of ADC(2,2) yield equivalent results. For only one active occupied orbital, the CI-SD method constitutes a complete expansion and is, therefore, clearly superior to ADC. It should be noted, however, that to obtain the IPs shown in Tab.\ \ref{tab:BeIP}, an independent CI-SD calculation of the neutral ground state energy has to be performed. In contrast, in ADC schemes the ground state correlation is included directly in the secular matrix $\mathbf{M}$ through the higher-order matrix elements. Therefore, ionization energies are obtained through a single matrix diagonalization. MBPT2 ground state energies are given for completeness and do not enter the calculation of the IPs. The second half of Tab.\ \ref{tab:BeIP} demonstrates the size-extensivity of the ADC method. In Be, the energy separation of the $1s$ and $2s$ orbitals is about 120\,eV and, therefore, core and valence excitations constitute essentially separated subsystems.
Consequently, ionization energies computed using the ADC methods remain unaltered when the $1s$ orbital is included in the active space. CI-SD, on the other hand, is no longer a complete expansion, and the inconsistency of the resulting truncated scheme is reflected in the lower accuracy of the satellite state energies. \begin{table}[ht] \begin{tabular}{l|c|c|c|ccc} configuration & NIST & CI-SD & ADC(2)x & ADC(2,2)$_f$ & ADC(2,2)$_x$ & ADC(2,2)$_m$ \\ \hline $2s^2 2p^6$ GS & & -3506.09 & \multicolumn{4}{c}{-3506.33 (MBPT2)} \\ \hline $2s^2 2p^5\,{}^2P$ & \textbf{21.56} & 21.69$_{+0.13}$ & 20.73$_{-0.83}$ & 21.48$_{-0.09}$ & 20.23$_{-1.34}$ & 20.14$_{-1.43}$ \\ $2s^1 2p^6\,{}^2S$ & \textbf{48.48} & 48.78$_{+0.30}$ & 47.46$_{-1.01}$ & 48.19$_{-0.28}$ & 46.59$_{-1.88}$ & 46.43$_{-2.05}$ \\ $2s^2 2p^4 3s\,{}^2P$ & \textbf{49.35} & 52.64$_{+3.29}$ & 56.40$_{+7.05}$ & 48.43$_{-0.92}$ & 48.43$_{-0.92}$ & 45.75$_{-3.60}$ \\ $2s^2 2p^4 3s\,{}^2D$ & \textbf{52.11} & 55.45$_{+3.34}$ & 58.26$_{+6.15}$ & 51.10$_{-1.01}$ & 51.10$_{-1.01}$ & 48.29$_{-3.82}$ \\ $2s^2 2p^4 3p\,{}^2D$ & \textbf{52.69} & 55.89$_{+3.21}$ & 59.07$_{+6.38}$ & 51.77$_{-0.91}$ & \chng{51.77$_{-0.91}$} & 48.67$_{-4.02}$ \\ $2s^2 2p^4 3p\,{}^2S$ & \textbf{52.91} & 56.08$_{+3.18}$ & 59.30$_{+6.39}$ & 51.96$_{-0.94}$ & 51.96$_{-0.94}$ & 48.78$_{-4.12}$ \\ \hline \end{tabular} \caption{\label{tab:NeIP}Ne ionization potentials (in eV) computed using the aug-cc-pV5Z basis set further augmented by $4s4p4d$ Rydberg-like Gaussian functions\cite{Kaufmann1989jpb}, with the $1s$ core orbital frozen. Lower indices at the computed energies indicate differences from the reference NIST values.} \end{table} Tab.\ \ref{tab:NeIP} shows the lowest six IPs of the Ne atom computed by the same methods, employing the aug-cc-pV5Z basis set further augmented by $4s4p4d$ Rydberg-like Gaussian functions\cite{Kaufmann1989jpb}. In this case, both ADC(2)x and CI-SD are clearly inadequate for describing the correlation in satellite states, yielding IPs that are too high by more than 6\,eV and 3\,eV, respectively. The minimal ADC(2,2)$_m${} scheme provides some improvement of the satellite states over ADC(2)x, but the magnitude of the error is still large, comparable to CI-SD. The accuracy of the main state IPs is even lower than at the ADC(2)x level. Moreover, in contrast to CI-SD and ADC(2)x, the $2h1p$ state IPs are significantly underestimated, to the extent that the lowest $2s^22p^43s\,{}^2P$ satellite state falls below the $2s^1 2p^6\,{}^2S$ main state. Going to the ADC(2,2)$_x${} scheme leads to considerably higher accuracy of the satellite state IPs. The main state IPs, on the other hand, are virtually unchanged compared to ADC(2,2)$_m${} and show even larger errors than the satellite states. \chng{Tab.\ \ref{tab:NeIP} might suggest that ADC(2,2)$_x${} provides the most balanced treatment of main and satellite states. However, application to molecules shows that the main state energies in both the ADC(2,2)$_x${} and ADC(2,2)$_m${} schemes are not shifted uniformly but rather carry seemingly random errors as large as 1.5\,eV. This inconsistency is introduced by the first-order $2h1p/3h2p$ coupling and is only corrected by the second-order $1h/2h1p$ matrix elements in the ADC(2,2)$_f${} scheme, which yields $1h$ main state IPs comparable to CI-SD and clearly superior to ADC(2)x.} In Tab.\ \ref{tab:22fvs3}, we compare main state vertical IPs of selected small closed-shell molecules computed using the hierarchy of the ADC(2), ADC(2)x, ADC(2,2)$_f${} and ADC(3)\cite{Trofimov2005jcp} approximation schemes.
The ADC(2,2)$_f${} calculations were performed using the aug-cc-pVDZ basis set (cc-pVDZ for H), as in Ref.~\onlinecite{Trofimov2005jcp}; all other data are taken from Tab.~V therein (including the experimental values). The comparison shows that the ADC(2,2)$_f${} main state energies are of similar quality to those of the third-order method. Compared to the ADC(3) scheme, only the third-order $M^{(3)}_{11}$ contribution to the $1h/1h$ block is missing in ADC(2,2)$_f${}, which, at least for the listed molecules, does not play a significant role. Of course, when only main ionization state energies are required, the ADC(3) scheme is clearly superior due to its much smaller explicit configuration space, spanned only by the $1h$ and $2h1p$ ISs. \begin{table}[ht] \begin{tabular}{lr|c|c|c|c|c} \multicolumn{2}{c|}{vacancy} & Expt. & ADC(2) & ADC(2)x & ADC(3) & ADC(2,2)$_f$ \\ \hline C$_2$H$_4$ & $1b_{2u}$ & \textbf{10.95} & 10.15$_{-0.80}$ & 10.09$_{-0.86}$ & 10.46$_{-0.49}$ & 10.45$_{-0.50}$ \\ & $1b_{2g}$ & \textbf{12.95} & 12.79$_{-0.16}$ & 12.57$_{-0.38}$ & 13.19$_{+0.25}$ & 12.91$_{-0.04}$ \\ & $3a_{g}$ & \textbf{14.88} & 13.79$_{-1.09}$ & 13.67$_{-1.21}$ & 14.36$_{-0.52}$ & 14.71$_{-0.17}$ \\ & $1b_{3u}$ & \textbf{16.34} & 16.13$_{-0.21}$ & 15.61$_{-0.73}$ & 16.49$_{+0.15}$ & 15.85$_{-0.49}$ \\ & & \textbf{17.80} & & 18.08$_{+0.28}$ & 18.12$_{+0.32}$ & \chng{17.43$_{-0.37}$} \\ & $2b_{1u}$ & \textbf{19.40} & 18.96$_{-0.44}$ & 18.08$_{-1.32}$ & 19.00$_{-0.40}$ & 18.84$_{-0.56}$ \\ & & \textbf{20.45} & & 19.92$_{-0.53}$ & 20.02$_{-0.43}$ & 19.49$_{-0.96}$ \\ CO & $5\sigma$ & \textbf{14.01} & 13.78$_{-0.23}$ & 13.43$_{-0.58}$ & 13.80$_{-0.21}$ & 14.02$_{+0.01}$ \\ & $1\pi$ & \textbf{16.91} & 16.23$_{-0.68}$ & 16.30$_{-0.61}$ & 16.88$_{-0.03}$ & 16.82$_{-0.09}$ \\ & $4\sigma$ & \textbf{19.72} & 18.30$_{-1.42}$ & 18.42$_{-1.30}$ & 20.10$_{+0.38}$ & 19.42$_{-0.30}$ \\ F$_2$ & $1\pi_g$ & \textbf{15.80} & 13.88$_{-1.92}$ & 13.97$_{-1.83}$ & 15.87$_{+0.07}$ & 15.47$_{-0.33}$ \\ & $1\pi_u$ & \textbf{18.80} & 17.03$_{-1.77}$ & 16.84$_{-1.96}$ & 19.11$_{+0.31}$ & 18.59$_{-0.21}$ \\ & $3\sigma_g$ & \textbf{21.10} & 20.24$_{-0.86}$ & 20.48$_{-0.62}$ & 21.01$_{-0.09}$ & 20.83$_{-0.27}$ \\ HF & $1\pi$ & \textbf{16.05} & 14.39$_{-1.66}$ & 14.93$_{-1.12}$ & 16.41$_{+0.36}$ & 15.73$_{-0.32}$ \\ & $3\sigma$ & \textbf{20.00} & 18.67$_{-1.33}$ & 19.11$_{-0.89}$ & 20.30$_{+0.30}$ & 19.72$_{-0.28}$ \\ N$_2$ & $3\sigma_g$ & \textbf{15.60} & 14.79$_{-0.81}$ & 14.72$_{-0.88}$ & 15.60$_{+0.00}$ & 15.78$_{+0.18}$ \\ & $1\pi_u$ & \textbf{16.98} & 16.99$_{+0.01}$ & 16.90$_{-0.08}$ & 16.77$_{-0.21}$ & 17.17$_{+0.19}$ \\ & $2\sigma_u$ & \textbf{18.78} & 17.99$_{-0.79}$ & 17.62$_{-1.16}$ & 18.93$_{+0.15}$ & 18.76$_{-0.02}$ \\ \hline \multicolumn{2}{c|}{$\bar{\Delta}_\text{abs}$} & & 0.89 & 0.91 & 0.26 & 0.29 \\ \multicolumn{2}{c|}{$\Delta_\text{max}$} & & 1.92 & 1.96 & 0.52 & 0.96 \\ \end{tabular} \caption{\label{tab:22fvs3}Molecular vertical ionization potentials (in eV) computed using the ADC(2), ADC(2)x, ADC(3) and ADC(2,2)$_f$\ methods with the aug-cc-pVDZ basis set. ADC(2), ADC(2)x, ADC(3) and experimental values are taken from Ref.~\onlinecite{Trofimov2005jcp}. Lower indices give errors of the computed energies as compared to the experiment. In the last two lines, the mean absolute error $\bar{\Delta}_\text{abs}$ and the maximum absolute error $\Delta_\text{max}$ are given.
} \end{table} \chng{ A comparison between the hierarchy of ADC schemes and equation-of-motion coupled-cluster methods (IP-EOM-CC) for main ionization states can be found in Ref.~\onlinecite{Trofimov2005jcp}. For CO, F$_2$ and N$_2$, both CCSD and CCSDT are more accurate than ADC(3), but both are also more computationally expensive. CCSDT provides particularly accurate results, with an average error below 0.1\,eV, but the required solution of the neutral ground state CCSDT equations scales as\cite{Musial2003jcp} $n_{occ}^3n_{virt}^5$. A comparison of satellite state IPs, computed using ADC(2,2)$_f${}, ADC(2)x and different variants of IP-EOM-CC, is shown in Tab.\ \ref{tab:CCsatellites}. The CCSDTQ level is taken as the reference. The computationally cheapest method, CCSD, scaling as $n_{occ}^2 n_{virt}^4$, is clearly insufficient for the representation of satellite states. Considering solely the average error, ADC(2)x is surprisingly accurate, but this is likely coincidental (note also the large spread of the errors). In comparison to the CCSDTQ reference, CCSDT and ADC(2,2)$_f${} produce identical average absolute errors, the two methods consistently over- and underestimating the IPs, respectively. Owing to the large spread of the available data, the comparison to experiment is also inconclusive. } \begin{table}[ht] \chng{ \begin{tabular}{ll|c|c|c|c|c|c} \multicolumn{2}{c|}{} & Exp. & CCSDTQ & CCSD & CCSDT & ADC(2)x & ADC(2,2)$_f${} \\ \hline CO & $\tilde{D}\, {}^2\Pi$ & 22.7, 22.0 & \textbf{22.87} & 26.34$_{+3.47}$ & 23.23$_{+0.36}$ & 22.87$_{+0.00}$ & 22.19$_{-0.68}$ \\ & $3\, {}^2\Sigma^+$ & 23.4, 23.6, 24.1 & \textbf{23.74} & 26.36$_{+2.62}$ & 24.00$_{+0.26}$ & 22.72$_{-1.02}$ & 23.28$_{-0.46}$\\ N$_2$ & $\tilde{D}\, {}^2\Pi_g$ & 24.79, 25.0 & \textbf{24.22} & 28.27$_{+4.05}$ & 24.78$_{+0.56}$ & 24.06$_{-0.16}$ & 23.80$_{-0.42}$\\ & $\tilde{C}\, {}^2\Sigma_u^+$ & 25.51 & \textbf{24.99} & 28.78$_{+3.79}$ & 25.28$_{+0.29}$ & 24.76$_{-0.23}$ & 24.55$_{-0.44}$\\ & $2s\sigma\,\, {}^2\Sigma_g^+$ & & \textbf{37.96}$^a$ & 38.58$_{+0.62}^a$ & 38.58$_{+0.62}^a$ & 36.63$_{-1.33}$ & 37.65$_{-0.31}$\\ \hline \multicolumn{2}{c|}{$\bar{\Delta}_\mathrm{abs}$} & & & 2.91 & 0.42 & 0.55 & 0.46 \\ \multicolumn{2}{c|}{$\Delta_\mathrm{max}$} & & & 4.05 & 0.62 & 1.33 & 0.68 \end{tabular}\\ \caption{\label{tab:CCsatellites}Comparison of satellite state IPs in CO and N$_2$ computed using the ADC(2,2)$_f${} and IP-EOM-CC methods. Unless indicated otherwise by a superscript, all calculations were carried out employing the cc-pVDZ basis set. CC and experimental data are reprinted from Ref.~\onlinecite{Kamiya2006jcp}, except the $2s\sigma$ vacancy state of N$_2$, which is taken from Ref.~\onlinecite{Matthews2016jcp}.\\ ${}^a$ANO0 basis set\cite{Almlof1987jcp} } } \end{table} To summarize, ADC(2,2) constitutes a significant step towards the desired balanced treatment of main and satellite states without losing the benefit of size-extensivity. From the point of view of the IPs alone, the results of the minimal ADC(2,2)$_m${} variant are not satisfactory. ADC(2,2)$_x${} and ADC(2,2)$_f${} provide the expected accuracy of the satellite state IPs -- the errors correspond in magnitude exactly to those of the main state energies in ADC(2)x. \chng{However, the accuracy of the main state energies deteriorates in ADC(2,2)$_m${} and ADC(2,2)$_x${} and must be corrected by the second-order $1h/2h1p$ couplings included in the ADC(2,2)$_f${} scheme.} In fact, the ADC(2,2)$_f${} main state IPs are then comparable to those of the higher-order ADC(3) scheme.
Neither of the ADC(2,2) variants thus provides a perfectly balanced scheme, \chng{but at least in atoms} the relative shift of the main and satellite state IPs is reduced by nearly 90\% in ADC(2,2)$_f${}. \subsection{Auger decay widths} \label{sec:Auger} In this and the following section, we present results of applying the Fano-ADC(2,2) method to intra-atomic Auger decay widths. We focus on processes in which higher-order three-electron transitions\cite{Kolorenc2016jpb} play a measurable role. In particular, we consider double Auger decay\cite{Journel2008pra,Zeng2013pra} (DAD) of a single core vacancy, in which two electrons are ejected into the continuum. The production of Ne$^{3+}$ in the decay of the $1s$ core vacancy in neon\cite{Carlson1965prl} can be considered a prototype of the DAD process, \begin{equation} \label{eq:Ne1sDAD} \text{Ne}^+(1s^{-1})\,\rightarrow\,\left\{\begin{array}{ll} \text{Ne}^{2+} + e^- & \quad\text{AD} \\ \text{Ne}^{3+} +e_1^- + e_2^- & \quad\text{DAD} \end{array}\right. . \end{equation} In this case, DAD proceeds exclusively in a simultaneous manner -- both secondary electrons are ejected at the same time, without the involvement of any intermediate state. The most recent experiments estimate the DAD branching ratio at 6\% of the total decay rate\cite{Kolorenc2016jpb}. For heavier atoms, the importance of DAD increases, together with that of the cascade decay pathway. In the cascade decay, the secondary electrons are emitted sequentially, with an intermediate dicationic metastable state being populated between the two steps. For the Kr $3d$ vacancy, for instance, the DAD branching ratio was determined to lie between 20\%\cite{Palaudoux2010pra} and 30\%\cite{Tamenori2004jpb}, and the process is strongly dominated by the sequential decay. Another relevant three-electron process is shake-up during Auger decay, represented here by the \begin{equation} \label{eq:Mg2s} \text{Mg}^+(2s^1\,2p^6\,3s^2)\,\rightarrow\,\text{Mg}^{2+}(2s^2\,2p^5\, (3p,4s)^1)+e^- \end{equation} transition in the decay of the Mg $2s$ vacancy. Tab.\ \ref{tab:DAD} lists the total decay widths of the Mg$^+(2s^{-1})$, Ne$^+(1s^{-1})$, Ar$^+(2p^{-1})$ and Kr$^+(3d^{-1})$ Auger-active states, obtained using the different variants of the Fano-ADC method. Available experimental or theoretical values are given for comparison, together with the branching ratios of the DAD or shake-up processes. For each ADC(2,2) variant, the relative difference from the Fano-ADC(2)x result is evaluated as \begin{equation} \label{eq:GammaDelta} \Delta = (\Gamma_\text{ADC(2,2)}-\Gamma_\text{ADC(2)x})/\Gamma_\text{ADC(2,2)}. \end{equation} The uncertainties given for the Fano-ADC values are determined as statistical standard deviations related solely to the Stieltjes imaging procedure -- the decay widths are evaluated by averaging nine consecutive orders $n_S$ of the imaging procedure in the region of best convergence. Hence, the error margins do not attempt to reflect any systematic errors connected with the Fano-ADC methodology.
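As a simple consistency check, the relative difference \eqref{eq:GammaDelta} and its margin can be reproduced from the tabulated widths by first-order error propagation; the sketch below (Python, illustrative only) uses the Ne$^+(1s^{-1})$ entries of Tab.~\ref{tab:DAD}, and residual differences with respect to the tabulated values reflect rounding of the inputs.
\begin{verbatim}
import numpy as np

def rel_diff(g22, s22, g2x, s2x):
    """Delta = (G_22 - G_2x)/G_22 with first-order error propagation."""
    delta = (g22 - g2x) / g22
    sigma = np.hypot(g2x / g22**2 * s22, s2x / g22)
    return delta, sigma

# Ne+(1s^-1): Fano-ADC(2,2)_f 271(5) meV vs Fano-ADC(2)x 244(4) meV
d, s = rel_diff(271.0, 5.0, 244.0, 4.0)
print(f"Delta = ({100*d:.0f} +/- {100*s:.0f})%")  # compare (10 +/- 3)% in the table
\end{verbatim}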
\begin{table}[ht] {\footnotesize \begin{tabular}{l|cc|c|cc|cc|cc} \multicolumn{1}{c|}{}& \multicolumn{2}{c|}{Reference} & \multicolumn{1}{c|}{ADC(2)x} & \multicolumn{2}{c|}{ADC(2,2)$_f$} & \multicolumn{2}{c|}{ADC(2,2)$_x$} & \multicolumn{2}{c}{ADC(2,2)$_m$} \\ & $\Gamma$\,(meV) & DAD/shake-up & $\Gamma$\,(meV) & $\Gamma$\,(meV) & $\Delta$ & $\Gamma$\,(meV) & $\Delta$ & $\Gamma$\,(meV) & $\Delta$ \\ \hline \hline Mg$^+$($2s^{-1}$) & 680\cite{Kochur2001aa} & 13\% & $585\pm3$ & $621\pm9$ & $(6\pm 2)\%$ & $684\pm 7$ & $(14\pm 1)\%$ & $674\pm 7$ & $(13\pm 1)\%$ \\ \hline Ne$^+$($1s^{-1}$) & $257\pm6$\cite{Mueller2017aj} & 6\%\cite{Kolorenc2016jpb} & $244\pm 4$ & $271\pm 5$ & $(10\pm 3)\%$ & $318\pm 4$ & $(23\pm 2)\%$ & $283\pm 10$ & $(14\pm 4)\%$ \\ \hline Ar$^+$($2p^{-1}$) & $125\pm 5$\cite{Zeng2013jpb} & $(13\pm 2)\%$\cite{Zeng2013jpb} & $114 \pm 5$ & $125\pm 3$ & $(10\pm 6)\%$ & $149\pm 4$ & $(24\pm 5)\%$ & $134\pm 4$ & $(18\pm 6)\%$ \\ \hline Kr$^+$($3d^{-1}$) & $88\pm 4$\cite{Jurvansuu2001pra} & 20-30\%\cite{Tamenori2004jpb,Palaudoux2010pra} & $68\pm 2$ & $94\pm 2$ & $(27\pm 4)\%$ & $109\pm 2$ & $(37\pm 4)\%$ & $96\pm 3$ & $(29\pm 4)\%$ \\ \hline \end{tabular} } \caption{\label{tab:DAD}Total Auger decay widths and DAD/shake-up contributions available in the literature (second and third columns) for atomic core vacancies. The fourth column shows the total decay widths computed by the Fano-ADC(2)x method; columns 5-10 contain the total decay widths obtained using the variants of Fano-ADC(2,2) and their relative differences $\Delta$ from the Fano-ADC(2)x values. For details on the basis sets and the $\mathcal{Q/P}$ classification procedure, see text and SM. } \end{table} Compared to the reference values, the Fano-ADC(2)x method underestimates the decay widths in all listed cases. Using the new ADC(2,2)$_f${} scheme, the agreement is improved significantly, in particular for the Ar and Kr atoms. For Ne, ADC(2,2)$_f${} somewhat overestimates the reference value. However, considering the error margins, the agreement is in fact good for both the ADC(2)x and ADC(2,2)$_f${} schemes. Only for the Mg$^+(2s^{-1})$ vacancy does even the ADC(2,2)$_f${} value remain underestimated, by nearly $9\%$, with respect to the reference. The other two variants, ADC(2,2)$_m${} and ADC(2,2)$_x${}, yield total decay widths systematically larger than ADC(2,2)$_f${} (on average by 9\% and 20\%, respectively). With the exception of the Mg$^+(2s^{-1})$ vacancy, this leads to substantially overestimated results compared to the reference values. The errors of the ADC(2)x results with respect to the reference values correspond approximately to the DAD or shake-up contributions. The differences \eqref{eq:GammaDelta} therefore provide good estimates of the three-electron branching ratios and can be used as an indication of the extent to which the higher-order processes are accounted for in the ADC(2,2) schemes. The results show that, for DAD, particularly the ADC(2,2)$_f${} scheme performs very well. It should be pointed out, however, that the difference $\Delta$ is a composite effect of \chng{the DAD contribution and the improved description of the single AD transitions due to the better representation of both the initial $1h$-like and final $2h1p$-like states.} Partial decay widths determined using suitable channel projectors\cite{Averbukh2005jcp} are needed for a reliable analysis of the DAD and shake-up contributions and will be discussed in a follow-up work.
The agreement between our results and the literature decay width of Mg$^+(2s^{-1})$, which has a significant contribution from the shake-up process, is less satisfactory. In this case, ADC(2,2)$_x${} apparently provides the best result, but the agreement is likely coincidental. Shake-up transitions are, in principle, more difficult to describe properly within the Fano-ADC method than the DAD process. Considering the particular transition of Eq.\ \eqref{eq:Mg2s}, $3h2p$ ISs derived from the $(2p^{-1}\,3s^{-2})$ $3h$ configurations belong to both the $\mathcal{P}$ and $\mathcal{Q}$ subspaces, to account for the open $\text{Mg}^{2+}(2s^2\,2p^5\, \{3p,4s\}^1)$ shake-up final channels and the closed $\text{Mg}^{3+}(2s^2\,2p^5)$ triply-ionized channels, respectively. In the framework of an $\mathcal{L}^2$ method, however, it is not possible to rigorously distinguish between those two types of final states. In the present calculations, $3h2p$ ISs derived from the $2p^{-1}\,3s^{-2}$ configurations were included only in the $\mathcal{P}$ subspace, as it is paramount in the Fano theory to fully eliminate the continuum from the $\mathcal{Q}$ subspace. This may come at the expense of missing some correlation in the initial state, which might be the reason for the worse agreement with the literature value. It should also be pointed out that another source of uncertainty lies in the literature value itself. It was computed \emph{ab initio} using the $R$-matrix method\cite{Kochur2001aa} and, to the best of our knowledge, has not been verified experimentally. The present calculations call for revisiting the Mg$^+(2s^{-1})$ decay width both theoretically and experimentally. To conclude, the Fano-ADC(2,2)$_f${} method represents a substantial improvement over the original Fano-ADC(2)x approach. The ADC(2,2)$_m${} and, in particular, ADC(2,2)$_x${} variants are inferior, confirming the inconsistencies observed already in the ionization energies of Ne and the arguments motivating the ADC(2,2)$_f${} scheme given in Sec.\ \ref{sec:ADC22}. \subsection{Collective three-electron Auger decay} \label{sec:KrCAD} Another relevant three-electron decay process is the so-called collective Auger decay (CAD) of two vacancies, which recombine simultaneously to eject a single secondary electron. To date, the strongest known atomic three-electron CAD is the relaxation of the $\text{Kr}^+(3d^{10}\,4s^{0}\,4p^6\,5p)$ excited state of ionized Kr, which was conclusively observed in a cascade process initiated by the $3d\rightarrow 5p$ excitation of Kr, \begin{equation} \text{Kr}^*(3d^9\,4s^2\,4p^6\,5p)\,\rightarrow\,\text{Kr}^+(3d^{10}\,4s^{0}\,4p^6\,5p)+e^-\,\rightarrow\,\left\{ \begin{array}{ll} \text{Kr}^{2+}(3d^{10}\,4s^1\,4p^5)+e^- & \text{AD} \\ \text{Kr}^{2+}(3d^{10}\,4s^2\,4p^4)+e^- & \text{CAD} \end{array} \right. . \end{equation} In the coincidence experiment carried out by Eland \emph{et al.}\cite{Eland2015njp}, the three-electron process was found to be about 40 times weaker than the competing two-electron process, corresponding to a branching ratio of 2.4\%. As such, the decay of the $\text{Kr}^+(3d^{10}\,4s^{0}\,4p^6\,5p)$ state is a perfect case for testing the new Fano-ADC(2,2) method. In terms of excitation classes, the above CAD process corresponds to a $2h1p\,\rightarrow\,2h1p$ transition and can, at least in principle, be described already in the Fano-ADC(2)x method. The calculations were carried out using the same basis set as for the Kr$^+(3d^{-1})$ decay in the previous section (see SM). To correctly determine the CAD contribution, partial decay widths are needed.
In this particular case, however, the problem can be handled already at the present stage of development. Since no $3h2p$ ISs enter the final $\mathcal{P}$ subspace, the approach previously implemented\cite{Averbukh2005jcp,Kolorenc2015jcp,Stumpf2017chp} within the Fano-ADC(2)x framework can be directly applied. In particular, the projector onto the AD channel is defined by $2h1p$ ISs containing the $4s$ hole, while the CAD channel is defined by two $4p$ vacancies. The results are collected in Tab.\ \ref{tab:KrCAD}. While Fano-ADC(2)x estimates the three-electron branching ratio to be only 0.2\%, which accounts for only about 8\% of the measured intensity, the Fano-ADC(2,2)$_f${} scheme recovers 88\% of the experimental value. While the ADC(2,2)$_x${} scheme leads to essentially the same results as ADC(2,2)$_f${}, ADC(2,2)$_m${} fails in this case. Specifically, the spectrum of the projected $\mathbf{QMQ}$ Hamiltonian does not show the expected structure of $\text{Kr}^+(3d^{10}\,4s^{0}\,4p^6\,nl)$ states and, therefore, it is not possible to unambiguously identify the resonance of interest among the $\mathbf{QMQ}$ eigenvectors. The total decay widths calculated using the ADC(2)x and ADC(2,2)$_f${} schemes differ by a factor of 2.2. Such a large difference cannot be attributed to the higher-order transitions but rather to the poor representation of the $2h1p$-like decaying state in the ADC(2)x scheme. A direct comparison with experiment is not available. Owing to the low resolution of the experimental data, it is not possible to reliably determine the lifetime broadening of the observed spectral lines. A rough estimate suggests an upper limit of about 250\,meV, indicating that even the ADC(2,2)$_f${} result might still be too high. \begin{table}[ht] \begin{tabular}{lcc} method & $\quad\Gamma$\,(meV)$\quad$ & CAD BR \\ \hline experiment\cite{Eland2015njp} & - & 2.4\% \\ ADC(2)x & 744 & 0.2\% \\ ADC(2,2)$_f${} & 331 & 2.1\% \end{tabular} \caption{\label{tab:KrCAD} Calculated decay widths of the $\text{Kr}^+(3d^{10}\,4s^{0}\,4p^6\,5p)$ state and three-electron CAD branching ratios.} \end{table} \subsection{Interatomic decay widths} \label{sec:ICD} In this section, we demonstrate the applicability of the Fano-ADC(2,2) method to the calculation of interatomic decay rates. We focus on simple, previously studied problems, namely ICD of the $2s$ vacancy in $\text{Ne}_2$ and of the $(1s^{-2}\,2p^1)$ resonances in ionized-excited $\text{He}_2$ dimers. A characteristic feature of these dipole-allowed ICD transitions is the $R^{-6}$ dependence\cite{Santra2001prb,Gokhberg2010pra} of the decay widths on the interatomic distance $R$, valid for large separations. The asymptotic decay widths can then be well approximated by the virtual photon transfer model due to Matthew and Komninos\cite{Matthew1975surfsci,Averbukh2004prl}, \begin{equation} \label{eq:vphm} \Gamma(R) = \frac{3\hbar}{4\pi}\left(\frac{c}{\omega}\right)^4\frac{\tau_\text{rad}^{-1}\,\sigma}{R^6}, \end{equation} where $\tau_\text{rad}$ is the radiative lifetime of the initial vacancy in the isolated donor atom and $\sigma$ is the ionization cross section (at the virtual photon energy $\hbar\omega$) of the acceptor atom. Reproducing this behavior is therefore a fundamental test of the method.
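For orientation, Eq.~\eqref{eq:vphm} is straightforward to evaluate numerically. In the sketch below (Python, atomic units internally), only the virtual-photon energy follows from data given in this paper ($\hbar\omega \approx \mathrm{IP}(2s)-\mathrm{IP}(2p)\approx 26.9$\,eV for Ne, cf.\ Tab.~\ref{tab:NeIP}); the radiative lifetime and cross section are rough placeholders for the literature data\cite{Griffin2001jpb,Samson2002jesrp} and serve only to illustrate the unit handling.
\begin{verbatim}
import numpy as np

EV2AU  = 1.0 / 27.211386           # eV -> hartree
S2AU   = 1.0 / 2.4188843e-17       # second -> a.u. of time
MB2AU  = 1.0e-18 / 2.8002852e-17   # megabarn -> bohr^2
ANG2AU = 1.0 / 0.52917721          # angstrom -> bohr
C_AU   = 137.035999                # speed of light in a.u.

def gamma_vph_meV(r_ang, e_ph_ev, tau_rad_s, sigma_mb):
    """Virtual-photon-model ICD width, Eq. (vphm), in meV (hbar = 1)."""
    omega = e_ph_ev * EV2AU
    tau   = tau_rad_s * S2AU
    sigma = sigma_mb * MB2AU
    r     = r_ang * ANG2AU
    g_au  = 3.0 / (4.0 * np.pi) * (C_AU / omega)**4 * sigma / (tau * r**6)
    return g_au / EV2AU * 1.0e3

# Ne2 at R = 4 angstrom; tau_rad and sigma are placeholder values only.
print(gamma_vph_meV(r_ang=4.0, e_ph_ev=26.9, tau_rad_s=1.5e-10, sigma_mb=8.0))
\end{verbatim}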
Decay of the Ne $2s$ vacancy in the neon dimer, \begin{equation} \label{eq:Ne2ICD} \text{Ne}^+(2s^{-1})\text{Ne}\,\rightarrow\,\text{Ne}^+(2p^{-1})+\text{Ne}^+(2p^{-1})+e^-, \end{equation} is one of the most thoroughly investigated examples of ICD, both theoretically\cite{Santra2000prl,Scheit2004jcp, Averbukh2006jcp} and experimentally\cite{Jahnke2004prl,Schnorr2013prl}. Due to the inversion symmetry of the homonuclear dimer, the $2s$ vacancy splits into a pair of states delocalized over the whole dimer, $(2\sigma_g^{-1})\,{}^2\Sigma_g^+$ and $(2\sigma_u^{-1})\,{}^2\Sigma_u^+$. Fig.\ \ref{fig:Ne2ICD} shows the dependence of the total ICD widths on the internuclear distance $R$ for both the gerade and ungerade states, calculated using Fano-ADC(2,2)$_f${} (full lines) and Fano-ADC(2)x (dashed-dotted lines). Details about the present calculations (basis sets, $\mathcal{Q/P}$ partitioning) can be found in the SM. \begin{figure}[ht] \includegraphics[width=\linewidth]{Ne2.pdf} \caption{\label{fig:Ne2ICD}Doubly-logarithmic plot of the total ICD widths of the $(2\sigma_g^{-1})\,{}^2\Sigma_g^+$ (dark red) and $(2\sigma_u^{-1})\,{}^2\Sigma_u^+$ (light green) vacancy states in Ne$_2$, calculated using Fano-ADC(2,2)$_f${} (full lines) and Fano-ADC(2)x (dashed-dotted lines). The dotted line corresponds to the virtual photon model, Eq.\ \eqref{eq:vphm} (atomic data for Ne are taken from Refs.~\onlinecite{Griffin2001jpb,Samson2002jesrp}). The inset shows a magnified comparison of the \emph{ab initio} methods and the virtual photon model at large interatomic separations.} \end{figure} Qualitatively, both Fano-ADC(2,2)$_f${} and Fano-ADC(2)x yield very similar results. The decay width of the gerade initial state follows the $R^{-6}$ trend over the whole range of internuclear distances, while that of the ungerade state is somewhat enhanced at short distances (by a factor of about 2 at the equilibrium interatomic distance). Quantitatively, however, the Fano-ADC(2,2)$_f${} decay widths are smaller by a factor of approximately 1.4 for both initial states and over the whole range of interatomic distances. Comparison with Eq.\ \eqref{eq:vphm} at large interatomic distances (see the inset of Fig.\ \ref{fig:Ne2ICD}) indicates that the Fano-ADC(2,2)$_f${} results are more accurate. Asymptotically, the decay widths obtained by Fano-ADC(2)x are larger by a factor of 1.5 than the prediction of the virtual photon model. This discrepancy was attributed to the inaccuracies of the ADC(2)x theoretical description\cite{Averbukh2006jcp}. Indeed, the improved description of the $2h1p$-like final states reduces this error by more than 70\%. \begin{figure}[ht] \includegraphics[width=\linewidth]{He2totals_g.pdf} \caption{\label{fig:He2g}Total ICD widths of the ${}^2\Sigma_g$ (dark red) and ${}^2\Pi_g$ (light green) states of the $\text{He}^+(1s^0\,2p^1)$ type. Results of the Fano-ADC(2,2)$_f${} and Fano-ADC(2)x methods are shown by full and dashed-dotted lines, respectively. Results of $R$-matrix calculations\cite{Sisourat2017jcp} are indicated by crosses (${}^2\Sigma_g$) and circles (${}^2\Pi_g$). The $R^{-6}$ dependence (dotted line) is fitted to the ${}^2\Pi_g$ state to guide the eye.} \end{figure} \begin{figure}[ht] \includegraphics[width=\linewidth]{He2totals_u.pdf} \caption{\label{fig:He2u}Total ICD widths of the ${}^2\Sigma_u$ (dark red) and ${}^2\Pi_u$ (light green) states of the $\text{He}^+(1s^0\,2p^1)$ type. Results of the Fano-ADC(2,2)$_f${} and Fano-ADC(2)x methods are shown by full and dashed-dotted lines, respectively.
Results of $R$-matrix calculations\cite{Sisourat2017jcp} are indicated by crosses (${}^2\Sigma_u$) and circles (${}^2\Pi_u$). The $R^{-6}$ dependence (dotted line) is fitted to the ${}^2\Pi_u$ state to guide the eye.} \end{figure} \begin{figure} \centering \includegraphics[width=\linewidth]{He2virtual.pdf} \caption{\label{fig:He2virt}Comparison of the state-averaged ICD decay widths \eqref{eq:GammaHe2_aver} for $\text{He}_2$ with the virtual photon model \eqref{eq:vphm}. The radiative lifetime $\tau$ of the $\text{He}^+(2p^1)$ state and the He photoionization cross section $\sigma$ are taken from Refs.~\onlinecite{Marr1976adndt,Drake1992pra}.} \end{figure} ICD in the helium dimer, \begin{equation} \label{eq:He2ICD} \text{He}^+(1s^0\,2p^1)\text{He}\,\rightarrow\,\text{He}^+(1s)+\text{He}^+(1s)+e^-, \end{equation} is another thoroughly researched interatomic decay process. Its experimental realization\cite{Havermeier2010prl,Sisourat2010nphys} even provided a direct visualization of the nodal structure of the vibrational wave function of the decaying state. The corresponding decay widths were studied extensively using the Fano-ADC(2)x method\cite{Kolorenc2010pra,Kolorenc2015jcp} and more recently also with the $R$-matrix method\cite{Sisourat2017jcp}. As in the case of the neon dimer, the initial metastable states are delocalized over the dimer, giving rise to gerade-ungerade pairs derived from atomic $2p$ orbitals oriented parallel (${}^2\Sigma_g$, ${}^2\Sigma_u$) and perpendicular (${}^2\Pi_g$, ${}^2\Pi_u$) to the dimer axis. Decay widths of the gerade states are shown in Fig.\ \ref{fig:He2g} and those of the ungerade states in Fig.\ \ref{fig:He2u}. Results of Fano-ADC(2,2)$_f${} and Fano-ADC(2)x are shown as full and dashed-dotted lines, respectively. Values obtained using the $R$-matrix method at the CI-SD level (cf.\ scheme 2 in Ref.~\onlinecite{Sisourat2017jcp}) are shown at the two interatomic distances at which they were computed. Details of the present calculations are given in the SM. For both methods, the decay widths follow the expected $R^{-6}$ trend for internuclear separations larger than 4\,\AA. The $\Gamma_\Sigma/\Gamma_\Pi$ ratio also quickly approaches the value of 4, which can be deduced from the dipole orientation of the states involved in the transition\cite{Gokhberg2010pra}. Quantitatively, however, the two methods differ considerably more than in the case of the neon dimer. Comparison with the virtual photon model \eqref{eq:vphm}, which is in this case provided by the state-averaged decay width, \begin{equation} \label{eq:GammaHe2_aver} \bar{\Gamma}=\frac{1}{6}(\Gamma_{\Sigma_g}+2\Gamma_{\Pi_g}+\Gamma_{\Sigma_u}+2\Gamma_{\Pi_u}), \end{equation} is shown in Fig.\ \ref{fig:He2virt}. While the ADC(2,2)$_f${} result agrees with the asymptotic formula within 5\% for $R>5\,\text{\AA}$, ADC(2)x yields decay widths that are too large by a factor of 3.1. Towards smaller interatomic separations, the difference decreases, as the Fano-ADC(2,2)$_f${} decay widths are significantly enhanced relative to the $R^{-6}$ trend while the Fano-ADC(2)x widths deviate much less. As a result, the level of agreement with the $R$-matrix calculations\cite{Sisourat2017jcp} below $R=2\,\text{\AA}$ is similar for both methods. The large asymptotic discrepancy is to be attributed to the $2h1p\,\rightarrow\,2h1p$ character of the interatomic transition. The first-order representation provided by the ADC(2)x scheme for both the initial and final states is clearly insufficient in this case.
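The two asymptotic statements above -- the $R^{-6}$ power law and the degeneracy-weighted average of Eq.~\eqref{eq:GammaHe2_aver} -- are easy to verify numerically; the sketch below (Python, illustrative only) uses synthetic placeholder widths, not the computed curves of Figs.~\ref{fig:He2g} and \ref{fig:He2u}.
\begin{verbatim}
import numpy as np

def loglog_slope(r, gamma):
    """Least-squares slope of log(Gamma) vs log(R); -6 is expected
    for dipole-allowed ICD at large interatomic separations."""
    slope, _ = np.polyfit(np.log(r), np.log(gamma), 1)
    return slope

def state_average(g_sig_g, g_pi_g, g_sig_u, g_pi_u):
    """Degeneracy-weighted average width, Eq. (GammaHe2_aver)."""
    return (g_sig_g + 2.0 * g_pi_g + g_sig_u + 2.0 * g_pi_u) / 6.0

# Synthetic widths obeying Gamma = A * R^-6 (placeholders):
r = np.linspace(5.0, 10.0, 6)
print(loglog_slope(r, 2.0e3 * r**-6.0))   # -> -6.0
\end{verbatim}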
\section{Conclusions} \label{sec:conclusions} In the framework of the well-established Fano-ADC methodology, we have presented a method for the computation of intra- and interatomic nonradiative decay widths, based on the Fano theory of resonances and the new ADC(2,2) scheme for the representation of the many-electron wave functions. The principal asset of the new scheme is the balanced representation of both the initial and final states of the decay processes, with both treated correctly up to second order of perturbation theory. The ability of the new method to provide accurate decay widths is demonstrated on a number of Auger decay and ICD processes. In the case of ICD, the superiority of the Fano-ADC(2,2) method is exemplified by the near-perfect agreement of the \emph{ab initio} decay widths with the virtual photon transfer model in the region of large interatomic distances. The Fano-ADC(2,2) method allows us, for the first time within the Fano-ADC approach, to take into account the higher-order three-electron transitions, such as DAD, which contribute significantly to the total decay widths and represent basic manifestations of electron correlation\cite{Feifel2018SciRep}. Comparison of our results to the available benchmarks indicates that Fano-ADC(2,2) provides a quantitatively correct description of such higher-order transitions. \chng{ The increased accuracy and broader capability of Fano-ADC(2,2) come at a price, namely the significant growth of computational demands connected with the $3h2p$ excitation class. Compared to Fano-ADC(2)x, the computational cost increases from $n_{occ}^3 n_{virt}^2$ to $n_{occ}^3 n_{virt}^4$. The use of the new method is thus likely to be limited to small systems when high accuracy is required or when higher-order decay processes are to be investigated, while Fano-ADC(2)x remains the method of choice for larger polyatomic systems. } \section*{Supplementary material} See the supplementary material for information about basis sets and other computational details. \begin{acknowledgments} Financial support by the Czech Science Foundation (Project GA\v{C}R No.\ 17-10866S) is gratefully acknowledged. V.A. acknowledges support by EPSRC/DSTL MURI grant EP/N018680/1. \end{acknowledgments} \section*{Data availability} The data that support the findings of this study are available from the corresponding author upon reasonable request. \section*{Basis sets and other computational details} Here we provide information on the basis sets and other details of the calculations reported in the present work. All basis sets were downloaded from the basis set exchange library \cite{pritchard2019a}. The MOLCAS\cite{molcas74} package was used for the HF calculations and Coulomb integral evaluations.
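In connection with the computational scaling discussed in the conclusions, the dimensions of the individual excitation classes can be estimated by simple counting; the sketch below (Python; spin-orbital counts, no spin or point-group adaptation) is a rough illustration only and is not the counting used in the actual implementation.
\begin{verbatim}
from math import comb

def adc_space_dims(n_occ, n_virt):
    """Rough configuration counts per excitation class (spin orbitals)."""
    n_1h   = n_occ
    n_2h1p = comb(n_occ, 2) * n_virt
    n_3h2p = comb(n_occ, 3) * comb(n_virt, 2)
    return n_1h, n_2h1p, n_3h2p

# e.g. 10 occupied and 200 virtual spin orbitals:
for label, n in zip(("1h", "2h1p", "3h2p"), adc_space_dims(10, 200)):
    print(f"{label:>4}: {n:,}")
# The 3h2p/2h1p ratio is (n_occ-2)(n_virt-1)/6, i.e. the 3h2p class
# dominates the ADC(2,2) space and drives the n_virt^4 cost growth.
\end{verbatim}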
\subsection*{Sec.\ IV\,B, Tab.\ V} \begin{itemize} \item Mg$^+(2s^{-1})$\\[0.4em] \underline{basis:} uncontracted cc-pCV5Z\cite{feller1996a,schuchardt2007a} augmented by 10s10p8d continuum-like Gaussian basis functions of the Kaufmann-Baumeister-Jungen (KBJ) type\cite{Kaufmann1989jpb}\\[0.4em] \underline{configuration space restrictions:} $1s$ orbital frozen, virtual orbitals restricted to energies below 80\,Ha\\[0.4em] \underline{$Q/P$ selection scheme A:} $\mathcal{P}$ subspace spanned by $2h1p$ ISs with at least one $3s$ hole and by $3h2p$ ISs with two $3s$ holes, with the exception of ISs with at least one $2s$ vacancy, which belong naturally to the $\mathcal{Q}$ subspace \item Ne$^+(1s^{-1})$\\[0.4em] \underline{basis:} uncontracted aug-cc-pCV5Z\cite{dunning1989jcp,kendall1992jcp} augmented by 10s10p8d even-tempered Gaussian functions\\[0.4em] \underline{configuration space restrictions:} virtual orbitals restricted to energies below 460\,Ha\\[0.4em] \underline{$Q/P$ selection scheme A:} all ISs characterized by at least one $1s$ vacancy belong to the $\mathcal{Q}$ subspace \item Ar$^+(2p^{-1})$ \\[0.4em] \underline{basis:} uncontracted aug-cc-pwCV5Z\cite{peterson2002jcp,woon1993jcp} augmented by 4s3p even-tempered Gaussian functions\\[0.4em] \underline{configuration space restrictions:} $1s$ orbital frozen, virtual orbitals restricted to energies below 30\,Ha\\[0.4em] \underline{$Q/P$ selection scheme A:} ISs characterized by at least one $2s$ or $2p$ vacancy belong to the $\mathcal{Q}$ subspace \item Kr$^+(3d^{-1})$ \\[0.4em] \underline{basis:} cc-pwCV5Z-PP\cite{peterson2003jcp,peterson2010jcp} (with Stuttgart-Koeln MCDHF RSC ECP for [Ne]), augmented by 5s5p5d2f continuum-like KBJ Gaussian functions; H- and I-type functions removed from the basis\\[0.4em] \underline{configuration space restrictions:} $3s$ and $3p$ orbitals frozen, virtual orbitals restricted to energies below 11\,Ha \\[0.4em] \underline{$Q/P$ selection scheme A:} the $\mathcal{Q}$ subspace is defined by configurations with at least one $3d$ vacancy plus all $3h2p$ ISs associated with Kr$^+(4s^{-2}\,4p^{-1}\,nl^{+2})$ configurations \end{itemize} Convergence of the decay widths with respect to the basis (in particular, the augmentation and the restriction of virtual orbitals) was determined using the Fano-ADC(2)x scheme -- extensive convergence studies based on the ADC(2,2) schemes are impractical due to the large $3h2p$ class of intermediate states. The approach was tested by running Fano-ADC(2,2)$_f${} calculations for Kr($3d^{-1}$) and Mg($2s^{-1}$) using two different augmentations and without any restriction on the virtual orbitals. The choice between the even-tempered and KBJ-type augmentations is based on the quality of convergence of the Stieltjes imaging procedure. In Tab.\ V, $Q/P$ selection scheme A was used for all atoms. Tab.\ \ref{tab:DAD_QPAB} compares the Fano-ADC(2,2)$_f${} results with decay widths obtained using scheme B with the same number of open decay channels. Both schemes yield results which agree within the error margins deduced from the convergence of the Stieltjes imaging procedure.
\begin{table}[ht] \begin{tabular}{c|c|c} & scheme A & scheme B \\[0.4em] \hline \hline Mg$^+$($2s^{-1}$) & $621\pm9$ & $618\pm 3$ \\[0.4em] \hline Ne$^+$($1s^{-1}$) & $271\pm 5$ & $272\pm 4$ \\[0.4em] \hline Ar$^+$($2p^{-1}$) & $125\pm 3$ & $127\pm 1$ \\[0.4em] \hline Kr$^+$($3d^{-1}$) & $94\pm 2$ & $93\pm 2$ \\[0.4em] \hline \end{tabular} \caption{\label{tab:DAD_QPAB} Comparison of total Auger decay widths (in meV) computed using the Fano-ADC(2,2)$_f${} method employing $Q/P$ selection schemes A and B.} \end{table} \subsection*{Sec.\ IV\,C, Tab.\ VI} Decay widths of the $\text{Kr}^+(3d^{10}\,4s^{0}\,4p^6\,5p)$ resonance were computed using $\mathcal{Q/P}$ classification scheme B, with the $\mathcal{P}$ subspace constructed from the adapted $2h1p$ ISs associated with the $\text{Kr}^{2+}(3d^{10}\,4s^1\,4p^5\,{}^1P,{}^3P)$ and $\text{Kr}^{2+}(3d^{10}\,4s^2\,4p^4\,{}^1S,{}^1D,{}^3P)$ open channels. All $3h2p$ ISs are included in the $\mathcal{Q}$ subspace. The same basis set and configuration space restrictions as in Sec.\ IV\,B were employed. \subsection*{Sec.\ IV\,D} \begin{itemize} \item ICD in the neon dimer (Fig.\ 1)\\[0.4em] \underline{basis:} cc-pVTZ augmented by $(5s,5p,5d)$ continuum-like KBJ Gaussian functions on the atomic centers, with additional sets of $(4s,4p,4d)$ continuum-like KBJ Gaussian functions placed at three ghost centers -- the midpoint between the atoms and $\pm 3R/2$, where $R$ is the internuclear distance\\[0.4em] \underline{configuration space restrictions:} $1s$ orbital frozen\\[0.4em] \underline{$Q/P$ selection scheme B:} $\mathcal{P}$ subspace comprises all adapted $2h1p$ ISs associated with the 9 singlet and 9 triplet two-site states derived (in terms of AOs) from the $\text{Ne}(2p^{-1})-\text{Ne}(2p^{-1})$ configurations \item ICD in the helium dimer (Figs.\ 2-4)\\[0.4em] \underline{basis:} cc-pV6Z basis augmented by $(9s,9p,9d)$ continuum-like KBJ Gaussian functions on the atomic centers, with an additional set of $(5s,5p,5d)$ continuum-like KBJ functions at the midpoint between the atoms\\[0.4em] \underline{configuration space restrictions:} virtual orbitals restricted to energies below 20\,Ha\\[0.4em] \underline{$Q/P$ selection scheme B:} $\mathcal{P}$ subspace comprises adapted $2h1p$ ISs associated with the two (${}^1\Sigma_g^+$ and ${}^3\Sigma_u^+$) $\text{He}^+(1s)\text{He}^+(1s)$ open channels. All $3h2p$ ISs were included in both subspaces in order to ensure an equivalent correlation description of the initial and final states of the process, leading to non-orthogonal $Q$ and $P$ projectors. \end{itemize}
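Both the even-tempered and KBJ-type augmentations used above follow simple analytic prescriptions; for the even-tempered case, the exponents form a geometric progression, as in the sketch below (Python; \texttt{alpha0}, \texttt{beta} and \texttt{n} are free placeholder parameters, not the values used in the present calculations).
\begin{verbatim}
import numpy as np

def even_tempered(alpha0, beta, n):
    """Even-tempered Gaussian exponents alpha0 * beta**k, k = 0..n-1."""
    return alpha0 * beta ** np.arange(n)

# e.g. ten diffuse s-type exponents descending from 0.05 a.u.:
print(even_tempered(alpha0=0.05, beta=0.6, n=10))
\end{verbatim}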
\section{Introduction} \label{sec:intro} The nature of the source class colloquially known as Anomalous X-ray Pulsars (AXPs) has been well explained by the so-called magnetar model \citep{td96a}. In this model, AXPs, which appear to be isolated, have pulse periods in the narrow range of 5--12~s, and have X-ray luminosities that greatly exceed their spin-down luminosities, are young, ultra-magnetized neutron stars powered by their decaying magnetic fields. The magnetar model was first introduced to explain the enigmatic properties of the Soft Gamma Repeaters (SGRs). The SGRs were first identified by their emission of long-lived, super-Eddington flares in soft $\gamma$-rays and by their much more frequent and less energetic short bursts in hard X-rays \citep{mgi+79,hcm+99}. Since their discovery, the quiescent emission of SGRs has been shown to possess numerous similarities with the persistent emission of AXPs. Specifically, the SGRs were shown to have similar pulse periods and spin-down rates \citep{kds+98, ksh+99}. For a recent review of AXPs and SGRs see \citet{wt04}. The distinction between AXPs and SGRs was blurred even further when the \textit{Rossi X-ray Timing Explorer} ({\it RXTE}) discovered two magnetar-like X-ray bursts from the direction of AXP \tfe\ \citep{gkw02}. The temporal, energetic and spectral properties of these bursts were very similar to those of SGR bursts. Due to the large field of view of {\it RXTE}, the AXP could not be unambiguously confirmed as the source of the bursts; however, the burst properties argued strongly that the AXP was indeed their origin. The issue of whether AXPs emit bursts was settled unambiguously when another AXP, 1E~2259+586, underwent a major SGR-like outburst involving 80 bursts accompanied by severe changes to every aspect of the pulsed emission \citep{kgw+03, wkt+04}. The distributions of durations and fluences of these bursts were very similar to those seen in SGRs \citep{gkw04}. Despite the many similarities between SGR and AXP bursts, there were some differences. For instance, the first burst detected from the direction of \tfe\ exhibited an unusual spectral feature near $\sim14$~keV during the first $\sim 1$~s of the burst. Furthermore, in AXPs the more luminous bursts had harder spectra, the opposite of what is seen in SGRs. Besides being able to account for the quiescent emission of SGRs and AXPs, the magnetar model provides a clear explanation for the bursts \citep{td95}. The highly twisted interior magnetic field of a magnetar diffuses out and unwinds, exerting enormous stresses on the neutron star's crust. When the crust yields to these internal stresses, the well-anchored foot-points of the external field are displaced, sending electromagnetic disturbances, i.e., Alfv\'en waves, into the magnetosphere. The Alfv\'en waves provide the energy and momentum required for photons in the magnetosphere to pair-produce, yielding an electron-positron and photon fireball. This fireball is contained by closed field lines and is gradually radiated away. The magnetic field strengths required to contain the energy released in SGR bursts are consistent with the dipolar field strengths inferred from their spin-down \citep{kds+98, ksh+99}. We report here, using data from our continuing {\it RXTE}\ monitoring program, the discovery of another X-ray burst from AXP \tfe. We show that the pulsed emission increased in the tail of the burst, indicating that the AXP was unambiguously the source of the burst.
The identification of \tfe\ as the burst source for this event, and the similarities between this burst and the previous two, lend further support to the AXP having also been the emitter of the two bursts reported by \citet{gkw02}. \section{Results} \label{sec:results} \subsection{{\it RXTE}\ Observations} The results presented here were obtained using the Proportional Counter Array \citep[PCA;][]{jsg+96} on board {\it RXTE}. The PCA consists of an array of five collimated xenon/methane multi-anode proportional counter units (PCUs) operating in the 2--60~keV range, with a total effective area of approximately $\rm{6500~cm^2}$ and a field of view of $\rm{\sim 1^o}$~FWHM. We use {\it RXTE}\ to monitor all five known AXPs on a regular basis as part of a long-term monitoring campaign \citep[see][and references therein]{gk02}. On 2004 June 29, during one of our regular monitoring observations ({\it RXTE}\ observation identification 90076-02-09-02) that commenced at UT 06:29:28, the AXP \tfe\ exhibited an SGR-like burst. \subsubsection{Burst Search} The burst was identified using the burst-searching algorithm introduced in \citet{gkw02} and described in detail in \citet{gkw04}. To summarize briefly, time series were created separately for each PCU using all xenon layers. Light curves with time bin widths of 1/32~s were created. The \texttt{FTOOL}s \texttt{xtefilt} and \texttt{maketime} were used to determine the intervals over which each PCU was off. We further restricted the data set by including only events in the energy range 2--20~keV. Time bins with significant excursions from a running mean were flagged and subjected to further investigation. The observation had a total on-source integration time of 2.0~ks. Exactly three PCUs were operational at all times, and the burst was equally significant in all three. Data were collected in the \texttt{GoodXenonwithPropane} mode, which records the arrival time (with 1-$\mu$s resolution) and energy (with 256-channel resolution) of every unrejected xenon event as well as all the propane-layer events. Photon arrival times were adjusted to the solar system barycenter using a source position of (J2000) $\mathrm{RA} = 10^{\rm h}\ 50^{\rm m}\ 07\fs 12$, $\mathrm{DEC} = -59\degr 53\arcmin 21\farcs 37$ \citep{ics+02} and the JPL DE200 planetary ephemeris. Note that following the burst we initiated a 20.2~ks long {\it RXTE}/PCA Target of Opportunity (ToO) observation on 2004 July 8. Similarly, on 2004 July 10, we initiated three more {\it RXTE}/PCA ToO observations, which had integration times of 3.6~ks, 15.0~ks and 20.2~ks, respectively. No further bursts or other unusual behavior were seen in the ToO observations or in any of the monitoring observations since the burst. \subsubsection{Burst Temporal Properties} \label{sec:temporal properties} We analyzed the temporal properties of the burst in order to compare them to those of other bursts from AXPs and SGRs. The analysis methods are explained in greater detail elsewhere \citep[e.g., see][]{gkw02,gkw04,wkg+05}. The burst profile is shown in Figure~\ref{fig:profile} and its measured properties are summarized in Table~\ref{ta:burst}. The burst peak time was initially defined, using a time series binned with 1/32~s resolution, as the midpoint of the bin with the highest count rate. We then refined this value, using the event timestamps within this time bin, as the midpoint of the two events having the shortest separation. Using the burst peak time, we determined the occurrence of the burst in pulse phase.
We split our observation into four segments and phase-connected these intervals using the burst peak time as our reference epoch. We then folded our data using the resulting ephemeris and cross-correlated our folded profile with a high signal-to-noise template whose peak was centered on phase $\phi=0$, where $\phi$ is measured in cycles \citep[for a review of our timing techniques, see][]{kcs99,klc00,kgc+01,gk02}. We find that the burst occurred near the peak of \tfe's pulse profile, at $\phi = -0.078\pm0.016$. The burst rise time was determined by a maximum likelihood fit to the unbinned data using a piecewise function having a linear rise and an exponential decay. The burst rise time, $t_r$, was defined as the interval between the point at which the linear component reached the background level and the burst peak (\S~\ref{sec:flux and fluence} discusses how the background was estimated). The burst duration, $\mathrm{T}_{90}$, is the interval between when 5\% and 95\% of the total 2--20~keV burst fluence was received. As we will note in \S~\ref{sec:flux and fluence}, the burst did not fade away before the end of our observation. Thus we could only place a lower limit of 699~s on the burst duration, which is the time from the burst's peak to the end of our observation. This very long tail can be seen in the burst profile in Figure~\ref{fig:profile}, which shows a significant excess from the burst's peak to the end of our observation. \subsubsection{Burst Spectral Evolution} Significant spectral evolution has been noted for the first burst discovered from this source \citep{gkw02}, as well as for bursts from AXP XTE~J1810--197\ \citep{wkg+05} and bursts from SGRs \citep{isw+01, lwg+03}. Motivated by these observations, we extracted spectra over different intervals within the burst's duration. We increased the integration time of the spectra as we moved further away from the burst peak to maintain adequate signal-to-noise. A background spectrum was extracted from a 1000-s long interval which ended 10~s before the burst. From each of the burst interval spectra and the background spectrum we subtracted the instrumental background as estimated with the tool \texttt{pcabackest}. Each burst interval spectrum was grouped so that there were never fewer than 20 counts per spectral bin after background subtraction. The regrouped spectra, along with their background estimators, were used as input to the X-ray spectral fitting software package \texttt{XSPEC}\footnote{http://xspec.gsfc.nasa.gov}. Response matrices were created using the \texttt{FTOOL}s \texttt{xtefilt} and \texttt{pcarsp}. All channels below 2~keV and above 30~keV were ignored, leaving 10--24 spectral bins for fitting. We fit the burst spectra to a photoelectrically absorbed blackbody model, which adequately characterized the data. In all fits, the column density was held fixed at the average value from our \cxo\ and {\it XMM-Newton} observations (see \S~\ref{sec:pep} and Table~\ref{ta:spectra}). The burst's bolometric flux, blackbody temperature and radius evolution are shown in Figure~\ref{fig:profile}. The bolometric flux decayed as a power law in time, $F=F_1(t/\mathrm{1\ s})^{\beta}$, where $F_1 = (1.84 \pm 0.36) \times 10^{-8}$~erg~cm$^{-2}$~s$^{-1}$ and $\beta = -0.82 \pm 0.05$. The blackbody temperature decayed as $kT =kT_1 -\alpha\log(t/\mathrm{1\ s})$, where $kT_1 = 6.24 \pm 0.71$~keV and $\alpha = 1.55\pm0.41$~keV. The blackbody emission radius remained relatively flat, with an average value of $R=0.10\pm0.01$~km.
The blackbody radius was calculated assuming a distance of 5~kpc to the source. We repeated the above procedure using a power-law model, which also adequately characterized the data. Our power-law spectral index time series is shown in Figure~\ref{fig:profile}, where we see that the initial spike of the burst is very hard, with the burst gradually softening as the flux decays. Fitting a logarithmic function to the power-law spectral index time series, we find $\Gamma = \Gamma_1 + \alpha\log(t/\mathrm{1\ s})$, where $\Gamma_1 = 0.30 \pm 0.18 $ and $\alpha=0.39\pm0.13$. Possible spectral features have been reported in bursts from AXPs \tfe\ \citep{gkw02} and XTE~J1810--197\ \citep{wkg+05} and in bursts from two SGRs \citep{si00,iss+02,isp03}. We searched for features by extracting spectra of different integration times as was done for the spectral evolution analysis. The spectra were background-subtracted and grouped in the exact same fashion as in the spectral evolution analysis. Energies below 2~keV and above 30~keV were ignored, leaving 13 spectral channels for fitting. Regrouped spectra, background estimators and response matrices were fed into \texttt{XSPEC}. Spectra were fit with a photoelectrically absorbed blackbody model, holding only $N_H$ fixed at the same value used in the spectral evolution analysis. The spectrum of the first 8~s of the burst was poorly fit by a continuum model because of significant residuals centered near 13~keV. The apparent line feature was most significant if the first second of the burst was excluded. A simple blackbody fit had reduced $\chi^2_{\mathrm{dof}} = 1.61$ for 11 degrees of freedom (see Figure~\ref{fig:spectra}). The probability of obtaining such a value of $\chi^2$ or larger under the hypothesis that the model is correct is low, $P(\chi^2 \ge 17.75) = 0.088$. The fit was greatly improved by the addition of a Gaussian emission line; in this case the fit had $\chi^2_{\mathrm{dof}} = 0.56$ for 8 degrees of freedom (see Figure~\ref{fig:spectra}). The probability of obtaining such a value of $\chi^2$ or larger under the hypothesis that the model is correct is $P(\chi^2 \ge 4.75) = 0.784$. The line energy was $E=13.09\pm0.25$~keV. Note that a line at a similar energy was found by \citet{gkw02} in the first burst discovered from this source. To firmly establish the significance of this feature, we performed the following Monte Carlo simulation in \texttt{XSPEC}. We generated 10000 fake spectra drawn from a simple blackbody model having the same background and exposure as our data set. We fit the simulated data to a blackbody model and to a blackbody plus emission line model and compared the $\chi^2$ difference between the two. To ensure we were sensitive to narrow lines when fitting our blackbody plus emission line model, we stepped through line energies from 2 to 30~keV in steps of 0.2~keV, refitting our spectrum with the line energy held fixed, and recorded the lowest $\chi^2$ value returned. In our simulations only 11 trials had a $\chi^2$ difference greater than or equal to the one found from our data. Thus, the probability of obtaining a spectral feature of equal significance by random chance is $\sim 0.0011$. The significance of the spectral feature reported for this source by \citet{gkw02} at this energy was $\sim 0.0008$. Since these were independent measurements, the probability of finding two spectral features at the same energy by random chance is $\sim 8.8\times 10^{-7}$; thus the emission line at $\sim$13~keV is almost certainly genuine.
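The logic of this Monte Carlo test can be illustrated with a grossly simplified stand-in for the \texttt{XSPEC}-based simulation. The sketch below draws fake spectra from a continuum-only model, fits them with and without a Gaussian line while scanning the line energy from 2 to 30~keV in 0.2~keV steps as in the text, and records the $\chi^2$ improvement. The power-law continuum, count levels, binning, fixed line width and reduced number of trials are all placeholders for the real blackbody model, PCA response and 10000 trials:
\begin{verbatim}
# Grossly simplified stand-in for the XSPEC-based Monte Carlo described
# above.  Continuum shape, count levels, binning and line width are
# placeholders.
import numpy as np
from scipy.optimize import curve_fit

E = np.linspace(2.5, 29.5, 14)                 # keV, placeholder bins

def cont(E, norm, slope):
    return norm * E ** slope

def chi2(model, counts, err):
    return np.sum(((counts - model) / err) ** 2)

rng = np.random.default_rng(0)
truth = cont(E, 500.0, -2.0)
dchi2 = []
for _ in range(1000):                          # 10000 in the text
    counts = rng.poisson(truth).astype(float)
    err = np.sqrt(np.clip(counts, 1.0, None))
    p0, _ = curve_fit(cont, E, counts, p0=[500.0, -2.0], sigma=err)
    best = chi2(cont(E, *p0), counts, err)
    for E0 in np.arange(2.0, 30.0, 0.2):       # scan the line energy
        line = lambda E, norm, slope, amp: (cont(E, norm, slope)
                + amp * np.exp(-0.5 * ((E - E0) / 0.5) ** 2))
        try:
            p1, _ = curve_fit(line, E, counts, p0=[500.0, -2.0, 0.0],
                              sigma=err)
            best = min(best, chi2(line(E, *p1), counts, err))
        except RuntimeError:
            pass
    dchi2.append(chi2(cont(E, *p0), counts, err) - best)
# Chance probability: fraction of trials whose chi^2 drop is at least
# as large as that measured from the real data.
\end{verbatim}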
\subsubsection{Burst Energetics} \label{sec:flux and fluence} In order to compare the energetics of this burst to those emitted in 2001 we measured its peak flux and fluence. The first step in this analysis was to model the background count rate. First we extracted an instrumental background for the entire observation using \texttt{pcabackest}. The tool \texttt{pcabackest} can only estimate the background rate every 16~s, so we interpolated to finer resolution by modeling the background rate as a 5th order polynomial. We then added the average non-burst count rate to this model. We estimated this value by subtracting our interpolated \texttt{pcabackest} model from our data and then measuring the average count rate over the same interval used to estimate the background in the spectral evolution analysis. The 2--20~keV peak flux was determined from the event data using a box-car integrator of width $\Delta t$. We used $\Delta t =64$~ms and $\Delta t = t_r$ \citep[for details on the flux calculation algorithm, see][]{gkw04}. At each step of the boxcar we subtracted the total number of background counts as determined by integrating our background model over the boxcar limits. To convert our flux measurements from count rates to CGS units we extracted spectra whose limits were defined by the start and stop times of the boxcars. For each flux measurement we extracted a spectrum for the region of interest, a background spectrum and a response matrix, similar to what was done for the spectral evolution analysis. Each spectrum was fit to a photoelectrically absorbed blackbody using \texttt{XSPEC} in order to measure the 2--20~keV flux in CGS units. The 2--20~keV total fluence was determined by integrating our background-subtracted time series. If the burst had emitted all of its energy during the observation, the integrated burst profile would eventually plateau. However, this is not what we observed. The integrated burst profile was still steadily rising even at the end of our observation, indicating that our observation finished before catching the end of the burst. Thus, we can only set a lower limit on the total 2--20~keV fluence; see Table~\ref{ta:burst}. To convert our fluence lower limit from counts to CGS units, the same procedure was followed as for the peak flux measurements. \subsubsection{Pulsed Flux Measurements} Magnetar candidates have been observed to be highly flux variable, which is why we regularly monitor the pulsed flux of this source \citep[see][for a detailed discussion of pulsed flux calculations for \tfe]{gk04}. The pulsed flux during the entire observation in which the burst occurred was not significantly higher than in neighboring observations. However, in some AXPs and SGRs, short time-scale ($\ll 1000$~s) abrupt changes in pulsed flux have been observed in conjunction with bursts \citep[e.g.][]{lwg+03,wkt+04,wkg+05}. Motivated by such observations we decided to search for short-term pulsed flux enhancements around the time of the burst from \tfe. We broke the observation into 10 intervals and calculated the pulsed flux for each. To prevent the burst spike from biasing our pulsed flux measurements, we removed a 4~s interval centered on the burst peak. A factor of 3.5 increase in pulsed flux can be seen in the tail of the burst (see Fig.~\ref{fig:flux}). This coupling between bursting activity and pulsed flux establishes that \tfe\ is definitely the burst source.
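The peak-rate machinery of this subsection can be summarized in a short sketch. This is a simplified stand-in: the inputs are placeholders for the \texttt{pcabackest} output and the PCA event list, and the bookkeeping of the real analysis (per-PCU rates, deadtime, spectral conversion to CGS units) is omitted:
\begin{verbatim}
# Sketch of the peak-rate measurement: interpolate the 16-s background
# estimates with a 5th-order polynomial, add the average non-burst source
# rate, then slide a boxcar of width dt over the event list.  All inputs
# are placeholders.
import numpy as np

def peak_rate(t_ev, t_bkg, rate_bkg, mean_src_rate, dt=0.064):
    """t_ev: event times (s); t_bkg, rate_bkg: coarse background model;
    mean_src_rate: average non-burst source rate (counts/s)."""
    poly = np.poly1d(np.polyfit(t_bkg, rate_bkg, 5))
    edges = np.arange(t_ev.min(), t_ev.max() + dt, dt)
    counts, _ = np.histogram(t_ev, bins=edges)
    centers = 0.5 * (edges[:-1] + edges[1:])
    net = counts / dt - (poly(centers) + mean_src_rate)
    i = int(np.argmax(net))
    return net[i], centers[i]       # peak net rate and its epoch
\end{verbatim}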
\subsection{Imaging X-ray Observations} Following the discovery of a new burst from the direction of \tfe, we triggered observations of the source with imaging X-ray telescopes. The AXP was observed once with \xmm\ on 2004 July 8 for 33 ks and twice with \cxo\ on 2004 July 10 and 15 for 29 and 28 ks, respectively. Simultaneous {\it RXTE}\ observations were performed during the \xmm\ and the first \cxo\ observation to assist in the identification of bursts. For scheduling reasons the second \cxo\ observation could not be coordinated with simultaneous {\it RXTE}\ observations. The \xmm\ data were processed using the {\it XMM-Newton} Science Analysis System\footnote{http://xmm.vilspa.esa.es/external/xmm\_sw\_cal/sas.shtml} (SAS) v6.0.0. The scripts {\tt epchain} and {\tt emchain} were run on the Observation Data Files for the PN and MOS data, respectively. For the \cxo\ data, we started from the filtered level~2 event list for all results presented here. Standard analysis threads were followed to extract filtered event lists, light curves and spectra from the processed data. See below for more details. The approximate count rate for the \xmm\ PN observation was $2.69\pm 0.01$~counts~s$^{-1}$ in the 0.5--12~keV band. The first and second \cxo\ observations had count rates of $1.344\pm 0.007$~counts~s$^{-1}$ and $1.302\pm 0.007$~counts~s$^{-1}$, respectively, in the 0.5--10~keV band. \subsubsection{Burst Search} The \xmm\ PN camera and \cxo\ ACIS detectors (S3 chip) were operated in similar modes ({\tt TIMING} for PN and {\tt CC mode} for ACIS) to optimize the time resolution (5.96~ms for PN and 2.85~ms for ACIS) in order to search for short X-ray bursts. Filtered source event lists (0.5$-$12.0 keV for PN and 0.5$-$10.0 keV for ACIS) were extracted for each observation to create light curves at three different time resolutions: 1, 10 and 100 times the nominal time resolution of the data. No bursts were found in any of the light curves. Moreover, no bursts were seen within the {\it RXTE}\ PCA data of the simultaneous observations of \tfe. Aside from the regular X-ray pulsations, the intensity of \tfe\ does not vary significantly within these observations. \subsubsection{Persistent Emission Properties} \label{sec:pep} Motivated by the sometimes dramatic changes in the persistent, pulsed X-ray emission of SGRs and AXPs following burst activity \citep[e.g.][]{wkt+04}, we investigated both the spectral and temporal properties of the X-ray emission from \tfe\ using the \xmm\ and \cxo\ data. Admittedly, only a single, relatively weak burst was observed from \tfe\ on 2004 June 29, so significant changes in these properties are not necessarily expected. However, this particular source has shown significant variability in its pulsed flux and pulsed fraction over the last several years, apparently independent of strong burst activity \citep{gk04,mts+04}, so searching for continued evolution in these properties is of interest. The {\tt TIMING} mode for PN data is not yet well calibrated for spectral analysis, so we accumulated a spectrum from the MOS1 camera, which was operated in {\tt SMALL WINDOW} mode. The source spectrum was extracted from a circular region centered on the AXP with a radius of 35$^{\prime \prime}$. A background spectrum was extracted from a circular (radius = 80$^{\prime \prime}$), source-free region on CCD~\#3, closest to the center of the field of view. The calibration files used to generate the effective area file and response matrix were downloaded on 2004 August 3.
The source spectrum was grouped to contain no fewer than 25 counts per channel and fit using \texttt{XSPEC}. We obtained a satisfactory fit to the energy spectrum (0.3$-$12.0 keV) using the standard blackbody plus power-law (BB+PL) model. Fit parameters are given in Table~\ref{ta:spectra}. For the \cxo\ observations, source spectra were extracted from a rectangular region centered on the source with a dimension along the ACIS-S3 readout direction of 16 pixels ($\sim$8$^{\prime \prime}$). Background spectra were extracted from 40 pixel wide rectangular regions on either side of the source with a gap of 7 pixels in between the source and background regions. Effective area files and response matrices were generated using CALDB v2.23. Similar to the \xmm\ spectral analysis, the \cxo\ spectra were grouped and fit to the BB+PL model. We were unable to obtain a satisfactory fit to either \cxo\ data set due to the presence of an emission line at 1.79 keV. This feature is instrumental in origin, caused by an excess of silicon fluorescence photons recorded in the CCD \citep{mskk03}. To avoid this feature and calibration uncertainties between 0.3 and 0.5 keV that give rise to large residuals in this range, we restricted our spectral fits to 0.5$-$1.67 and 1.91$-$10.0 keV. See Table~\ref{ta:spectra} for fit parameters. Using the PN data from \xmm\ and the ACIS data from \cxo, we measured the root-mean-square (rms) pulsed fraction of \tfe\ during each of the three observations for different energy bands. For the PN data, a source event list was extracted from a 5.6 pixel wide rectangular region centered on the AXP\footnote{http://wave.xray.mpe.mpg.de/xmm/cookbook/EPIC\_PN/timing/timing\_mode.html} and the times were corrected to the solar system barycenter. A background event list was extracted from two 10 pixel wide rectangles on either side of the source with a 10 pixel gap between the source and background regions. The same source and background regions used for the spectral analysis of the \cxo\ data were used here for the pulse timing analysis. The \cxo\ photon arrival times were corrected for instrumental time offsets\footnote{http://wwwastro.msfc.nasa.gov/xray/ACIS/cctime/} and to the solar system barycenter. Best-fit frequencies were measured independently for each observation and found to be consistent with the more precise {\it RXTE}\ spin ephemeris. We constructed background-subtracted folded pulse profiles for each observation for different energy ranges and measured the rms pulsed fraction following \citet{wkt+04}, using the Fourier power from the first three harmonics. The pulsed fraction increases with energy from 46.6$\pm$0.5\% in the 0.5$-$1.7~keV range to 56.8$\pm$1.0\% in the 3.0$-$7.0~keV range. The 2.0$-$10.0 keV pulsed fractions are listed in Table~\ref{ta:spectra}. Comparing our results to those of \citet{mts+04}, we find that both the flux and the pulsed fraction are intermediate between the \xmm\ observations in 2000 December and 2003 June (Figure~\ref{fig:pulsed flux}). The definition of pulsed fraction introduced by \citet{mts+04} is significantly different from ours. Therefore, we analyzed each of the archival \xmm\ data sets for this source in the same manner as described above for the 2004 data set. Similar to the behavior seen in 2004, we find that the pulsed fraction increases significantly with energy within each archival data set. The average pulsed fractions during these observations, however, are significantly different than in 2004.
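For concreteness, one common way of computing an rms pulsed fraction from the Fourier power of the first three harmonics of a folded profile is sketched below. Whether this matches the prescription of \citet{wkt+04} in every detail (e.g.\ the treatment of the noise power) is an assumption of the sketch:
\begin{verbatim}
# Sketch of an rms pulsed fraction from the Fourier power of the first
# three harmonics of a folded, background-subtracted profile.  One common
# definition; the noise-power treatment of the cited prescription is not
# reproduced here.
import numpy as np

def rms_pulsed_fraction(profile, n_harm=3):
    a = np.fft.rfft(profile) / len(profile)   # complex Fourier amplitudes
    mean = a[0].real                          # DC term = profile mean
    power = 2.0 * np.sum(np.abs(a[1:n_harm + 1]) ** 2)
    return np.sqrt(power) / mean
\end{verbatim}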
We measure rms pulsed fractions (2.0$-$10.0 keV) of 76.0 $\pm$ 2.3\% and 43.9 $\pm$ 0.4\% for the respective observations. The observation in 2000 took place well before the onset of the first pulsed flux flare \citep{gk04} and the two bursts seen near the peak of that flare \citep{gkw02}. The observation in 2003 took place during the decay of the second flare, which peaked one year earlier. We conclude that both the flux and pulsed fraction that changed so drastically during these extended flares appear to be returning to their ``nominal'' pre-flare levels. It is interesting to note that the pulsed fraction decreases during these flares, showing a possible anti-correlation with the phase-averaged flux. Thus, the relative increase in the phase-averaged flux is actually much larger than the increase in pulsed flux seen in the {\it RXTE}/PCA light curve (i.e.\ the phase-averaged flux time history may have exhibited a stronger peak). \section{Discussion} \label{sec:discussion} We have discovered the longest, most luminous and most energetic burst from \tfe\ thus far. The short-term pulsed flux enhancement at the time of the burst establishes that \tfe\ is definitely the burst source and in all likelihood was the source of the 2001 bursts as well. An interesting property of all three bursts from \tfe\ is that they occur preferentially at pulse maximum. A similar trend was found for the transient AXP XTE~J1810--197, for which four bursts occurred near pulse maximum \citep{wkg+05}. Furthermore, in a major outburst from AXP 1E~2259+586\ involving $\sim$80 bursts, \citet{gkw04} found that bursts occurred preferentially at pulse phases for which the pulsed emission was high \citep[note that the pulse profile of 1E~2259+586\ is double-peaked as opposed to the quasi-sinusoidal profiles of \tfe\ and XTE~J1810--197, see][]{gk02}. SGR bursts, on the other hand, show no correlation with pulse phase. \citet{pal02} found that hundreds of bursts from SGR~1900$+$14 were distributed uniformly in phase. However, as discussed by \citet{gkw04}, \citet{wkg+05} and below, this is not the only difference between SGR and AXP bursts. If AXP bursts do occur at specific pulse phases, then they must be associated with particularly active regions of the star. This would imply that AXPs burst much more frequently than is observed, but the bursts go unseen because they are beamed away from us. However, even if a burst is missed, it may still leave two characteristic signatures. The first is a very long tail: those observed in \tfe\ and XTE~J1810--197\ lasted several pulse cycles. The second is a short-term increase in pulsed flux like those observed in this paper, which would be an indication of a burst whose onset went unobserved. A search for ``naked tails'' or short time scale pulsed flux enhancements could in principle demonstrate the existence of missed bursts. The very long tail ($>699$~s) of the burst reported here makes it very similar to one burst observed from \tfe, some of the bursts seen in AXP 1E~2259+586, and to those observed in XTE~J1810--197. Their long durations set these bursts apart from the brief $\sim0.1$~s bursts observed in SGRs. As argued by \citet{wkg+05}, bursts from \tfe, XTE~J1810--197\ and 1E~2259+586, which have very long tails and occur close to pulse maximum, might constitute a new class of bursts unique to AXPs. Although there were two bursts with extended tails observed from SGR~1900$+$14, many of their properties differ from those of the long-duration AXP bursts.
The first extended-tail SGR burst occurred on 1998 April 29 \citep{isw+01} and the second on 2001 April 28 \citep{lwg+03}. In both cases an obvious distinction could be made between the initial ``spike'' and the extended ``tail'' component of the burst. For the AXP bursts, which have a fast rise and smooth exponential decay morphology, there is no point which clearly marks the transition between the initial spike and the extended tail. Furthermore, in both of these extended-tail SGR bursts, the majority of the energy was in the initial spike, not the tail. In fact, within $\sim 1\%$ of their total duration from the time of their peaks, $\sim 98\%$ of their total energy was released. By contrast, in the AXP bursts, virtually all the energy is in what would be considered the tail component. For the burst reported here, within $\sim 1\%$ of its total duration from the time of its peak, $< 37\%$ of its total energy was released. Also, unlike the long-duration AXP bursts, which all occurred near pulse maximum, the extended-tail SGR bursts occurred 180\degr\ apart in pulse phase, i.e., the first burst occurred near pulse maximum and the second burst near pulse minimum \citep{lwg+03}. Lastly, the long-tailed SGR bursts only occurred following high-luminosity flares: the first followed the 1998 August 27 SGR~1900$+$14 event and the second followed the 2001 April 18 event \citep{gmf+01}. No such high-luminosity flares have ever been observed in an AXP. From its earliest stages, the magnetar model put forward by \citet{td95} offered a viable burst mechanism for SGRs and AXPs. In a magnetar, the magnetic field is strong enough to crack the crust of the neutron star. The fracturing of the crust disturbs the magnetic field foot-points and releases an electron-positron-photon fireball into the magnetosphere. The fireball is trapped and suspended above the fracture site by closed field lines. The suspended fireball heats the surface, and in the initial version of this model, it was suggested that the burst duration would be comparable to the cooling time. In more recent work it has been suggested that burst durations can be extended by orders of magnitude via vertical expansion of the surface layers \citep{tlk02} or deep crustal heating \citep{let02}. The surface fracture mechanism can explain the very long durations of the bursts observed from \tfe\ and XTE~J1810--197. Furthermore, this mechanism also provides an explanation for the phase dependence of the AXP bursts, since the fracture sites are thought to be preferentially located near the magnetic poles. Hence the bursts would be associated with a particular active region on the surface, resulting in a correlation with pulse phase. \citet{lyu02} proposed another burst emission mechanism within the framework of the magnetar model. He suggested that the bursting activity of AXPs and SGRs is due to the release of magnetic energy stored in non-potential magnetic fields by reconnection-type events in the magnetosphere. In this model, bursts occur at random phases because the emission site is high in the magnetosphere. Hence we observe all bursts. This mechanism will produce harder and shorter bursts as compared to the ones due to surface fracturing. Softer and longer bursts are achieved by a combination of reconnection and a small contribution from surface cooling, as energetic reconnection events will precipitate particles which will heat the surface.
Since there is a duration-fluence correlation in both SGR \citep{gkw+01} and AXP \citep{gkw04} bursts, this model suggests that the shorter (less luminous) bursts are harder than the longer (more luminous) bursts. A hardness-fluence correlation was found for SGR bursts \citep{gkw+01}, but an anti-correlation was found for the 80 bursts from AXP 1E~2259+586\ \citep{gkw04}. It should also be noted that for \tfe\ and XTE~J1810--197\ the more energetic bursts are the hardest, although only three bursts have been observed thus far. Hence the aspects that differentiate the surface-cooling model from the reconnection model for bursts seem to be the same aspects that separate the canonical SGR bursts from the long-duration AXP bursts. In the surface-cooling model one expects longer durations, a correlation with pulse phase and a fluence-hardness anti-correlation. It is possible that both mechanisms (surface and magnetospheric) are responsible for creating AXP and SGR bursts, but that magnetospheric bursts are more common in SGRs. This is not unreasonable if we consider the twisted-magnetosphere model proposed by \citet{tlk02}. In this extension to the magnetar model, \citet{tlk02} suggested that the highly twisted internal magnetic field of a magnetar imposes stresses on the crust which in turn twist the external dipole field. The twisted external fields induce large-scale currents in the magnetosphere. The inferred dipole magnetic field strengths, luminosity and spectra of SGRs all suggest that the global ``twists'' of their magnetic fields are greater than those of the AXPs. If the external fields of SGRs are much more ``twisted'' than those of the AXPs, that would make them more susceptible to reconnection-type events in their magnetospheres. Furthermore, if SGRs have stronger magnetic fields than the AXPs, then they would be less susceptible to surface fracture events because at high field strengths the crustal motions are expected to become plastic. What could make a burst tail last longer in AXPs? One possibility is that the energy is released very deep in the crust and the energy conducts to the surface on very long time scales. In this scenario most of the heat is absorbed by the core, which could explain why AXP bursts are in general dimmer than SGR bursts. In order for this model to explain both the extended-tail SGR bursts and the long-duration AXP bursts, one can imagine a hybrid scenario in which a sudden twist occurs, which deposits energy both in the magnetosphere and deep in the crust. The energy deposited in the magnetosphere gives rise to a spike and the energy deposited deep in the crust conducts to the surface on longer time scales. The reason no spike is seen in the AXPs is that the magnetospheric component is small. This reasoning applies to AXPs \tfe\ and XTE~J1810--197, but it is not surprising that AXP 1E~2259+586\ exhibits both types of bursts because, as argued by \citet{wkt+04}, the best explanation for the bursts and contemporaneous flux, pulse profile and spin-down variability from this source was a catastrophic event that simultaneously impacted the interior and the magnetosphere of the star. The spectrum of the burst reported here is intriguing. First, although this burst is much harder given its luminosity when compared to SGR bursts (the SGRs show a luminosity-hardness anti-correlation), its spectral softening is very similar to that of the extended-tail bursts of SGR~1900$+$14 \citep{isw+01, lwg+03}.
Second, the evidence of a spectral feature at $\sim13$~keV makes this burst very similar to the first burst detected from \tfe\ \citep{gkw02}. The probability of observing an emission line at $\sim$13~keV in both bursts is exceedingly small; thus the line is almost certainly intrinsic to the source and has important implications. If it is a proton-cyclotron line, then it allows us to calculate the surface magnetic field strength, $B = {m_p c E}/{\hbar e} = 2.1\times 10^{15}\left({E}/{\mathrm{13\ keV}}\right)~\mathrm{G}$, where $m_p$ and $e$ are the proton mass and charge, respectively, and $E$ is the energy of the feature. This value for the magnetic field strength is greater than that measured from the spin-down of the source, but the spin-down is only sensitive to the dipolar component of the field and it is plausible that the field is multipolar. It should be noted that this surface magnetic field strength estimation assumes the burst was a surface cracking phenomenon. An open question is why bursts from certain magnetar candidates exhibit spectral features and others do not. 1E~2259+586\ exhibited over 80 bursts and no spectral features were seen, whereas \tfe\ showed only three bursts, with two spectral features seen. \citet{wkg+05} reported a spectral feature at approximately the same energy in a burst from AXP XTE~J1810--197\ at a higher significance level than both of the \tfe\ spectral features. We have argued that the bursts from \tfe\ are very similar to those of XTE~J1810--197. Interestingly, the two sources show other similarities. They both have sinusoidal profiles, while all other AXPs exhibit rich harmonic content in their pulse profiles \citep[see][]{gk02, ggk+02}. They have shown long-lived ($>$ months) pulsed flux variations which are not due to cooling of the crust after the impulsive injection of heat from bursts. Furthermore, their timing properties are the most reminiscent of the SGRs. The pulsed fraction decrease we have observed lends further evidence that the pulsed flux variations observed by \citet{gk04} represent a new phenomenon seen exclusively in the AXPs. The fact that the pulsed fraction decreased as the pulsed flux increased without any pulse morphology changes implies that there was a greater fractional increase in the unpulsed flux than in the pulsed flux, in agreement with what was found by \citet{mts+04}. Such a flux enhancement cannot be attributed to a particular active region. Thus we can rule out that the flux enhancements were due to the injection of heat from bursts that were beamed away from us, because in that scenario one would expect a larger fractional change in pulsed flux than in total flux. Indeed, during the burst afterglow pulsed flux enhancement in SGR~1900$+$14, \citet{lwg+03} found an increase in both the pulsed flux and the pulsed fraction. \section{Conclusions} \label{sec:conclusions} We reported on the discovery of the latest burst from the direction of AXP \tfe. This burst was discovered as part of our long-term monitoring campaign of AXPs with {\it RXTE}. Contemporaneously with the burst we discovered a pulsed-flux enhancement which unambiguously identified \tfe\ as the burst's origin. The clear identification of \tfe\ as the burster in this case argues that it was indeed the emitter of the two bursts discovered from the direction of this source in 2001, as already inferred by \citet{gkw02}.
All three bursts from \tfe\ can only be explained within the context of the magnetar model; however, many of their properties differentiate them from canonical SGR bursts. This and the first burst discovered from this source had very long tails, $>699$~s and $\sim$51~s respectively, as opposed to the $\sim$0.1~s duration SGR bursts. Two ks-long SGR bursts have been reported, but we argued that they were a very different phenomenon \citep{isw+01, lwg+03}. Specifically, the extended-tail SGR bursts had very energetic initial spikes, and the long tails were argued to be the afterglow of this initial injection of energy. However, in the AXP bursts no such spikes are present; in fact, most of the energy is in what would be considered the tail. All three bursts from \tfe\ occurred near pulse maximum, as opposed to the SGR bursts, which are uniformly distributed in pulse phase. All of the bursts discovered from AXP XTE~J1810--197\ and a handful of the bursts from AXP 1E~2259+586\ share many properties with the \tfe\ bursts and, as argued by \citet{wkg+05}, long-tailed bursts (with no energetic spikes) which occur near pulse maxima might comprise a new burst class thus far unique to AXPs. The spectral evolution of this burst was very similar to that of the extended-tail SGR bursts, with the trend going from hard to soft. However, at one part of the burst's tail there was an unusual spectral feature at $\sim$13~keV; a similar feature was discovered in the first burst from \tfe\ and in a very high signal-to-noise burst from XTE~J1810--197\ \citep{gkw02,wkg+05}. If features such as these are confirmed, they may provide direct estimates of the neutron star's magnetic field, especially if harmonic features can be positively identified. The differences between the long- and short-duration bursts might be due to separate emission mechanisms. Two burst mechanisms have been proposed within the magnetar model: surface fracture and magnetospheric reconnection \citep{td95,lyu02}. We argued that this burst is more likely a surface fracture event, which agrees with the conclusion reached by \citet{wkg+05} for the long-tailed bursts from XTE~J1810--197. If in fact the two classes of bursts are due to emission mechanisms operating in two distinct regions (near the surface and in the upper magnetosphere), then AXP bursts provide opportunities to probe the physics of these separate regions of a magnetar. \acknowledgments We are very grateful to Maxim Lyutikov for valuable comments, suggestions and useful discussion. This work is supported by NSERC Discovery Grant 228738-03, NSERC Steacie Supplement 268264-03, a Canada Foundation for Innovation New Opportunities Grant, FQRNT Team and Centre Grants, the Canadian Institute for Advanced Research, SAO grant GO4-5162X and NASA grant NNG05GA54G. V.M.K. is a Canada Research Chair and Steacie Fellow. This research has made use of data obtained through the High Energy Astrophysics Science Archive Research Center Online Service, provided by the NASA/Goddard Space Flight Center.
\section{Introduction} \ Coherent states and their applications in theoretical and technological fields have been the subject of many studies \cite{Art01-01}, \cite{Art01-02}, \cite{Art01-03}, \cite{Art01-04}, \cite{Art01-05}, \cite{Art01-06}, \cite{Art01-07}, \cite{Art01-08}, \cite{Art01-09}, \cite{Art01-10}, \cite{Art01-11}, \cite{Art01-12}, \cite{Art01-13}, \cite{Art01-14}, \cite{Art01-15}, \cite{Art01-16}, \cite{Art01-17}, \cite{Art01-18}, \cite{Art01-19} and \cite{Art01-20}.\\ Coherent states play an important role in quantum physics. Among their important properties is the fact that the mean value of the position operator in these states perfectly reproduces the classical behavior in the case of the harmonic oscillator. These states also saturate the Heisenberg inequality. Many generalizations have been proposed for the notion of coherent states for systems which are more complex than the harmonic oscillator. The path taken by supersymmetric quantum mechanics is a very simple one. When a Hamiltonian can be written as a product of an operator and its adjoint (plus a constant), one is basically in the situation of the harmonic oscillator with a creation and a destruction operator. The coherent states in this context are defined as the eigenvectors of the destruction operator. It has been shown that these states saturate a generalized uncertainty relation which is not the Heisenberg one \cite{Art01-01}.\\ For some potentials which play an important role in physics and chemistry (for example, the Morse potential), the SUSY coherent states have been computed using a clever trick which bypasses the resolution of the Riccati equation for the determination of the superpotential \cite{Art01-01}. But when it comes to the study of the mean value of the position or the momentum operator, one is confronted by the fact that these coherent states are not normalizable. One may try an approach using wave packets, but the analysis becomes cumbersome. The path taken here is the following. For the harmonic oscillator, the coherent states are normalizable. The superpotential is a one degree polynomial. To get an insight into the subject, we study an ad hoc system whose superpotential is polynomial. For simplicity and to ensure normalizability, we take it to be of third degree. The physical potential is then a sixth degree polynomial whose classical trajectories can be computed. The coherent states can be obtained in closed form. We then proceed to the calculation of the mean value of the position operator for these states. Comparison is then made with the classical trajectories.\\ The paper is organized as follows. The second section is a quick reminder of the formalism of SUSY coherent states that we need. The third section deals with the calculation of the mean value of the position operator in general. We apply it to a simple toy model in the fourth section. The treatment of the harmonic oscillator is put in the Appendix and shows that the analysis captures what is known by other methods. \section{A Quick Reminder of Coherent States and SUSYQM} \ There are many definitions of coherent states which are not exactly equivalent.\\ Let us first consider the case of the harmonic oscillator \cite{Art01-27}. One has the following properties : \begin{itemize} \item The map $\mathbb{C} \owns z \longrightarrow \lvert z \rangle \in L^2 (\mathbb{R})$ is continuous. \item $\lvert z \rangle$ is an eigenvector of the annihilation operator : $a \lvert z \rangle = z \lvert z \rangle$.
\item The coherent states family resolves the identity $\frac{1}{\pi} \ \int_{\mathbb{C}} \lvert z \rangle \langle z \rvert d^2 z = \mathbf{1}$. \item The coherent states saturate the Heisenberg inequality : $\Delta q \ \Delta p = \frac{1}{2}$. \item The coherent states family is temporally stable. \item The mean value (or ``lower symbol'') of the Hamiltonian mimics the classical energy-action relation : $\check{H}(z) = \langle z \lvert \mathbf{\hat{H}} \rvert z \rangle = \omega \left( \lvert z \rvert^2 + \frac{1}{2} \right)$. \item The coherent states family is the orbit of the ground state under the action of the Weyl-Heisenberg displacement operator : $\lvert z \rangle = e^{ z a^\dag - \overline{z} a } \lvert 0 \rangle \equiv D(z) \lvert 0\rangle$. \item The coherent states provide a straightforward quantization scheme : Classical state $z \longrightarrow \lvert z \rangle \langle z \rvert$ Quantum state. \item The mean value of the position operator in these states reproduces the classical trajectories. \end{itemize} Let us now consider a system with an infinite set of discrete eigenenergies whose eigenstates resolve the identity: \begin{eqnarray} \begin{split} \label{eqa27} \hat{H} \lvert \psi_n \rangle &= E_n \lvert \psi_n \rangle,\\ \langle \psi_m \lvert \psi_n \rangle &= \delta_{m,n},\\ \sum_{m=1}^{+\infty} \lvert \psi_m \rangle \langle \psi_m \rvert &= 1. \end{split} \end{eqnarray} One introduces the raising and lowering operators by the relations \begin{eqnarray} \begin{split} \label{eqa73} A^- \lvert \psi_n \rangle &= \sqrt{k(n)} \lvert \psi_{n-1} \rangle,\\ A^+ \lvert \psi_n \rangle &= \sqrt{k(n+1)} \lvert \psi_{n+1} \rangle,\\ \rho(n)& = \prod_{i=1}^n k(i), \ \rho(0) = 1. \end{split} \end{eqnarray} For the harmonic oscillator $k(i) = i$. \\ The coherent states can be defined as the eigenvectors of the lowering operator \cite{Art01-31}. \begin{equation} \label{eqa31} A^- \lvert z, \alpha \rangle = z \lvert z, \alpha \rangle. \end{equation} It is possible to show that the time evolution sends a coherent state to another coherent state, with different parameters.\\ All coherent states can be obtained by acting on the ground state with the displacement operator.\\ The definition adopted by Klauder \cite{Art01-16} is that the coherent states are given by \begin{equation} \label{eqa72} \lvert \psi(z) \rangle = \frac{1}{\sqrt{{\cal{N}}(\lvert z \rvert^2)}} \sum_{n \in I} \frac{z^n}{\sqrt{\rho(n)}} \lvert \psi_n \rangle, \end{equation} where $\cal{N}$ is the normalization factor.\\ The general squeezed coherent states for a quantum system with an infinite discrete energy spectrum were defined in \cite{Art01-24} by the relation \begin{equation} \label{eqa76} ( A^- + \gamma A^+ ) \psi(z, \gamma) = z \psi(z, \gamma), \end{equation} \hspace{60mm}with $z, \gamma \in \mathbb{C}$.\\ For a system whose discrete spectrum is finite (for example, the Morse potential) another generalization was studied \cite{Art01-24}.\\ It resembles \eqref{eqa72} but now the sum runs over a finite number of indices.\\ Let us now turn to SUSYQM. Consider a system described by a potential $V_0$ whose ground state $u^{(0)}$, with energy $\varepsilon$, satisfies the Schr\"odinger equation \begin{equation} \label{eqa64} -\frac{1}{2} u^{(0)''} + V_0 u^{(0)} = \varepsilon u^{(0)}.
\end{equation} Introduce the function \begin{equation} \label{eqa63} \alpha_1 = \frac{u^{(0)'}}{u^{(0)}} \end{equation} and the new potential \begin{equation} \label{eqa61} V_1 = V_0 - \alpha_1'. \end{equation} Looking at the Hamiltonians \begin{equation} \label{eqa57} H_0 = - \frac{1}{2} \frac{d^2}{dx^2} + V_0(x), \ H_1 = - \frac{1}{2} \frac{d^2}{dx^2} + V_1(x), \end{equation} one sees that they can be factorized in the following way \begin{eqnarray} \begin{split} \label{eqa65} H_0 &=A_1 A_1^+ + \varepsilon,\\ H_1 & = A_1^+ A_1 + \varepsilon, \end{split} \end{eqnarray} where the operators involved are given by the formulas \begin{equation} \label{eqa59} A_1 = \frac{1}{\sqrt{2}} \bigg\lbrack \frac{d}{dx} + \alpha_1(x) \bigg\rbrack, \ A_1^+ = \frac{1}{\sqrt{2}} \bigg\lbrack - \frac{d}{dx} + \alpha_1(x) \bigg\rbrack. \end{equation} One has the intertwining relations \begin{equation} \label{eqa58} H_1 A_1^+ = A_1^+ H_0, \ H_0 A_1 = A_1 H_1, \end{equation} from which one finds the spectrum of $H_1$ (knowing that of $H_0$) and the link between their eigenstates. So much for factorization and intertwining. The introduction of SUSYQM can then proceed as follows. One introduces the operators \begin{equation} \label{eqa69} Q = \begin{pmatrix} 0 & 0 \\ A_1 & 0 \end{pmatrix}, \ Q^+ = \begin{pmatrix} 0 & A_1^+ \\ 0 & 0 \end{pmatrix} , \end{equation} and sees that the Hamiltonian takes the form \begin{eqnarray} \begin{split} \label{eqa71} H_{SUSY} &= \big \lbrace Q, Q^+ \big\rbrace \\ & = \begin{pmatrix} H_1 - \varepsilon & 0\\ 0 & H_0 - \varepsilon \end{pmatrix}. \end{split} \end{eqnarray} Introducing \begin{eqnarray} \begin{split} \label{eqa70} Q_1 = \frac{1}{\sqrt{2}} \left( Q^+ + Q \right),\\ Q_2 = \frac{1}{i \sqrt{2}} \left( Q^+ - Q \right), \end{split} \end{eqnarray} one has \begin{eqnarray} \begin{split} \label{eqa68} \lbrack Q_i , H_{SUSY}\rbrack &= 0,\\ \big \lbrace Q_i , Q_j \big\rbrace &= \delta_{i j} H_{SUSY}, \ i, j =1,2. \end{split} \end{eqnarray} \section{Our Approach} \ We are studying a one dimensional system with a rescaled dimensionless coordinate $q$. Suppose one can write the physical potential $V(q)$ of such a system in terms of a function $x(q)$ by the relation \begin{equation} \label{eq01} V(q)-E_0=\frac{1}{2}\Big\lbrack x^2(q)+\frac{dx(q)}{dq} \Big\rbrack. \end{equation} Then one can factorize the Hamiltonian \begin{equation} \label{eq02} \hat{H}=\hat{A^\dag}\hat{A}+E_0, \end{equation} where the operators $\hat{A}$ and $\hat{A^\dag}$ have the form \begin{equation} \label{eq03} \hat{A}=\frac{1}{\sqrt{2}} \Big\lbrack \frac{d}{dq}-x(q) \Big\rbrack ;\ \hat{A^\dag}=\frac{1}{\sqrt{2}}\Big\lbrack -\frac{d}{dq}-x(q) \Big\rbrack; \end{equation} so that \begin{equation} \label{eq03a} \Big\lbrack \hat{A} , \hat{A^\dag} \Big\rbrack = - \frac{dx(q)}{dq}. \end{equation} The function $x(q)$ is called the superpotential. Eqs.\eqref{eq03} and \eqref{eq03a} are reminiscent of the harmonic oscillator. Using that similarity, the coherent states are defined as the eigenfunctions of the ``annihilation operator'' $\hat{A}$. Using Eq.\eqref{eq03} one is led to a separable differential equation. The solution reads \begin{equation} \label{eq04} \psi_H(q, \alpha)=N \exp{\Big\lbrack \sqrt{2}\alpha q+\int_0^q x(\xi) \, d\xi \Big\rbrack}. \end{equation} In Eq.\eqref{eq04}, $\alpha$ is a complex variable which characterizes the coherent states.
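As a quick sanity check, the eigenvector property $\hat{A}\,\psi_H = \alpha\,\psi_H$ of the state in Eq.\eqref{eq04} can be verified symbolically, here with the linear superpotential $x(q)=-q$ of the harmonic oscillator taken as an example:
\begin{verbatim}
# Symbolic check that the state of Eq. (eq04) is an eigenvector of
# A = (d/dq - x(q))/sqrt(2), for the example superpotential x(q) = -q.
import sympy as sp

q, xi = sp.symbols('q xi', real=True)
alpha = sp.Symbol('alpha')
x = -q                                        # superpotential (example)
psi = sp.exp(sp.sqrt(2)*alpha*q
             + sp.integrate(x.subs(q, xi), (xi, 0, q)))
A_psi = (sp.diff(psi, q) - x*psi) / sp.sqrt(2)
print(sp.simplify(A_psi / psi))               # prints: alpha
\end{verbatim}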
When trying to implement the SUSY treatment for a system whose physical potential is known, the difficult part is the resolution of the Riccati equation given in Eq.\eqref{eq01}.\\ For the harmonic oscillator, one introduces the rescaled variables \begin{equation} \label{eq05} Q = \sqrt{\frac{\hbar}{m\omega}}\ q \hspace{5mm} ; \hspace{5mm} \ P =\sqrt{m \omega\hbar} \ p, \end{equation} and the rescaled potential \begin{equation} \label{eq06} U (Q)=\hbar\omega \ V (q). \end{equation} The superpotential is given by \begin{equation} \label{eq07} x(q)=-q, \end{equation} so that the coherent state takes the form \begin{equation} \label{eq08} \psi_H(q, \alpha)=N \exp{( \sqrt{2} \alpha q-\frac{1}{2}q^2 )}, \end{equation} which is normalizable.\\ For the Morse potential, the superpotential is given by \cite{Art01-01} \begin{equation} \label{eq09} x(q)=\frac{- s + \exp{(-q \hspace{1mm} \sqrt{2\chi_e}})}{\sqrt{2 \chi_e}} + \sqrt{\frac{\chi_e}{2}}, \end{equation} where the parameters $s$ and $\chi_e$ are related to the Morse potential's energy levels: \begin{equation} \label{eq10} E_n = s \left( n+\frac{1}{2}\right)-\chi_e \left( n+\frac{1}{2} \right)^2. \end{equation} The coherent states then take the form \begin{equation} \label{eq11} \psi_H(q, \alpha)= \exp{\Big\lbrack -\frac{1}{2\chi_e} \exp{(-q \hspace{1mm} \sqrt{2\chi_e})}-\frac{s-\chi_e}{\sqrt{2\chi_e}}q+\sqrt{2}\alpha q\Big\rbrack }. \end{equation} Clearly, such a function is not square integrable and the mean values cannot be computed. The aim of this paper is to construct a superpotential such that the corresponding coherent states are normalizable and the computation of the mean values not too complicated. The quantity we are interested in is the mean value \begin{equation} \label{eq12} \langle \hat{q}\rangle =\frac{\langle \psi_{S ,t} \lvert \hat{q}\rvert \psi_{S ,t} \rangle }{\langle \psi_{S, t} \lvert \psi_{S, t} \rangle }. \end{equation} One has to realize that the states given in Eq.\eqref{eq04} were obtained in the Heisenberg picture : the wave function does not depend on time. To pass to the Schr\"odinger picture one has to use the evolution operator \begin{equation} \label{eq13} \lvert \psi_{S,t} \rangle = \exp {( -\frac{i}{\hbar}\hat{\mathbf{H}}t )} \lvert \psi_H\rangle, \end{equation} where the Hamiltonian takes the form \begin{equation} \label{eq14} \hat{\mathbf{H}}=\hbar \omega \Big\lbrack \frac{\hat{p}^2}{2} + \hat{V}(q) \Big\rbrack. \end{equation} The factorization of the Hamiltonian and the evolution operator can be used to obtain the wave function in the Schr\"odinger picture as a power series \begin{equation} \label{eq15} \lvert \psi_{S,t}\rangle = \exp(- \frac{i}{\hbar} E_0 t ) \sum_{k=0}^{\infty} (-i)^k \frac{(\omega t)^k}{k!}(\hat{A^\dag}\hat{A})^k \lvert \psi_H \rangle. \end{equation} The mean value then reads \begin{equation} \label{eq16} \langle \psi_{S ,t} \lvert \hat{q} \rvert \psi_{S ,t}\rangle = \sum_{m=0}^{\infty} \sum_{n=0}^{\infty} \frac{i^{m+n}(-1)^{n}}{m! \, n!} \, (\omega t)^{m+n} \, \langle \psi_H \lvert (\hat{A^\dag}\hat{A})^m \, \hat{q} \, (\hat{A^\dag}\hat{A})^n \rvert \psi_H \rangle. \end{equation} The relation \begin{equation} \label{eq17} \hat{A^\dag}=-\hat{A}-\sqrt{2}x(q) \end{equation} is obtained by adding the two relations given in Eq.\eqref{eq03}.\\ Since the wave function we are studying is an eigenvector of the ``annihilation operator'', we find \begin{equation} \label{eq18} \hat{A^\dag}\hat{A}\lvert \psi_H \rangle = \alpha \Big\lbrack -\alpha-\sqrt{2}x(q)\Big\rbrack \lvert \psi_H \rangle.
\end{equation} Subtracting the relations in \eqref{eq03}, we similarly find \begin{equation} \label{eq19} \frac{\partial}{\partial q} \lvert \psi_H \rangle = \Big\lbrack \sqrt{2}\alpha + x(q) \Big\rbrack \lvert \psi_H \rangle. \end{equation} This allows us to obtain the second order contribution to the wave function \begin{equation} \label{eq20} (\hat{A^\dag}\hat{A})^2 \lvert \psi_H \rangle = \Big\lbrack \alpha \frac{\sqrt{2}}{2} \frac{d^2x(q)}{dq^2} + \left( 2\alpha^2+ \sqrt{2} \alpha x(q) \right) \frac{dx(q)}{dq} + \left( \sqrt{2}\alpha x(q)+\alpha^2 \right)^2 \Big\rbrack \lvert \psi_H\rangle. \end{equation} This suggests the introduction of special functions $f_n$ such that \begin{equation} \label{eq21} (\hat{A^\dag}\hat{A})^n \lvert \psi_H \rangle = f_n(q,\alpha) \lvert \psi_H \rangle . \end{equation} From the previous considerations, one derives the recurrence formula \begin{equation} \label{eq22} f_{n+1}=-\frac{1}{2} \frac{\partial^2f_n}{\partial q^2} - \Big\lbrack \sqrt{2} \alpha + x(q) \Big\rbrack \frac{\partial f_n}{\partial q} - \Big\lbrack \sqrt{2} \alpha x(q) + \alpha^2 \Big\rbrack f_n. \end{equation} Note that \begin{equation} \label{eq23} f_0(q,\alpha)=1. \end{equation} Working in the position representation, the time-dependent wave function \begin{equation} \label{eq24} \psi_S (q, t, \alpha)=\exp{(-\frac{i}{\hbar} E_0 t)}\sum_{m=0}^{\infty} \frac{(-i\omega t)^m}{m!}\Big\lbrack (\hat{A^\dag}\hat{A})^m \psi_H(q,\alpha)\Big\rbrack \end{equation} takes the form \begin{equation} \label{eq25} \psi_S (q, t, \alpha)=\exp{(-\frac{i}{\hbar} E_0 t)}\sum_{m=0}^{\infty} \frac{(-i\omega t)^m}{m!}f_m(q,\alpha)\psi_H(q,\alpha). \end{equation} The position mean value is then given by an infinite double sum \begin{equation} \label{eq26} _S\langle \psi \lvert \hat{q} \rvert \psi\rangle_S = \sum_{m=0}^{\infty} \sum_{n=0}^{\infty} \frac{i^{m+n }(-1)^n}{m!n!} C_{m, n} \ (\omega t)^{m+n} \end{equation} where the coefficients $C_{m, n}$ are the integrals \begin{equation} \label{eq27} C_{m, n}= \int_{-\infty}^{+\infty} dq \, q \, f^*_m (q,\alpha) f_n (q,\alpha) \psi_H^* (q,\alpha) \psi_H (q,\alpha)\ . \end{equation} The double summation can be rewritten as a power series in the time parameter \begin{equation} \label{eq28} _S\langle \psi \lvert \hat{q}\rvert \psi\rangle_S = \sum_{l=0}^{\infty} (\omega t)^l \Omega_l, \end{equation} with \begin{equation} \label{eq29} \Omega_l=i^l (-1)^l\sum_{m=0}^{l} \frac{(-1)^m}{m!(l-m)!}C_{m, l-m}. \end{equation} Explicitly, we can write \begin{eqnarray} \begin{split} \label{eqa02} \Omega_0 &= C_{0, 0}, \\ \Omega_1 &= -i \left( C_{0,1} - C_{1,0} \right), \\ \Omega_2 &= \frac{1}{2!} \left( -C_{0,2} + 2 C_{1,1} - C_{2,0} \right), \\ \Omega_3 &= i \lbrack \ \frac{1}{2!} \left( C_{2,1} - C_{1,2} \right) + \frac{1}{3!} \left( C_{0,3} - C_{3,0} \right) \ \rbrack, \\ \Omega_4 &= \frac{1}{4!} \left( C_{0,4} + C_{4,0} \right) - \frac{1}{3!} \left( C_{1,3} + C_{3,1} \right) + \frac{1}{2! \ 2!} \ C_{2,2},\\ &\cdots \end{split} \end{eqnarray} Eq.\eqref{eq28} is written up to the overall factor $_S \langle \psi \lvert \psi \rangle_S$, which is a time-independent constant.\\ One can consider that Eq.\eqref{eq28} and Eq.\eqref{eq29} give the answer to our question about the mean value of the position operator in this context. For any practical case, one has to compute the integrals of Eq.\eqref{eq27} and perform the appropriate summation.\\ At this point one has to point out some technical difficulties.
The first one is that the recursion relations of Eq.\eqref{eq22} can quickly lead to large formulas even for simple superpotentials. The second one is that it is not always possible to have an analytical expression for the integrals appearing in Eq.\eqref{eq27}. Third, one has to be careful about the order at which one can truncate the series in Eq.\eqref{eq28} to obtain a reliable estimate.\\ To test our approach, we first used it on the harmonic oscillator. The results are good and reproduce some important characteristics of the coherent states as known in the literature. The solution to the classical equations of motion is given by \begin{equation} \label{eqq01} q_{class}(t) = C_1 \cos{\omega t}+ C_2 \sin{\omega t}, \end{equation} where $ C_1$ and $C_2$ are constants.\\ This can be written as a power series \begin{equation} \label{eqq02} q(t)=A_0 + A_1 \omega t + A_2 (\omega t)^2 + \cdots \end{equation} where one has \begin{equation} \label{eqq03} \frac{A_2}{A_0}=-\frac{1}{2}, \ \frac{A_3}{A_1}=-\frac{1}{6}, \cdots \end{equation} We can write the ``quantum trajectory'' as \begin{equation} \label{eqq04} \langle \psi_{S,t} \lvert \hat{q} \rvert \psi_{S,t} \rangle = \sum_{l=0}^{+\infty} \Omega_l . (\omega t)^l. \end{equation} It can be shown that the quantum trajectory verifies the same relations, i.e. \begin{equation} \label{eqqq02} \frac{\Omega_2}{\Omega_0}=-\frac{1}{2}, \ \frac{\Omega_3}{\Omega_1}=-\frac{1}{6}, \cdots \end{equation} To keep our presentation light, we have put this treatment in the Appendix. \section{A Toy Model} \ The generalization of coherent states considered here was introduced in \cite{Art01-01}. For many potentials used in theoretical chemistry, the corresponding states are not normalizable and this leads to technical difficulties when one is interested in mean values. We want to restrict ourselves to systems with square integrable generalized coherent states. In the context of quantum supersymmetry, the most important ingredient is the superpotential. For the harmonic oscillator, the superpotential is linear in the position. The next nontrivial case is a polynomial superpotential. A second degree superpotential is readily seen to lead to a non-normalizable generalized coherent state. This leads us to study a toy model whose superpotential is given by \begin{equation} \label{eq30} x(q)=x_1 q -\frac{1}{3} x_1^2 q^3, \end{equation} with \begin{equation} \label{eq31} x_1 < 0, \end{equation} \hspace{50mm}where $x_1$ is a free parameter.\\ The corresponding physical potential is a sixth degree polynomial (see Eq.\eqref{eq01}) \begin{equation} \label{eq32} U(Q)= \hbar\omega ( U_0 + U_2 Q^2+ U_4 Q^4+ U_6 Q^6). \end{equation} Its coefficients are related to those of the rescaled superpotential by \begin{equation} \label{eq33} U_0 = 0; \ U_2 = 0; \ U_4 = -\left(\frac{m \omega}{\hbar}\right)^2 \frac{x_1^3}{3}; \ U_6 = \left(\frac{m \omega}{\hbar}\right)^3 \frac{x_1^4}{18}. \end{equation} It has to be noted that if one begins with the potential, the characteristic frequency is given by \begin{equation} \label{eq34} \hbar \omega=\frac{U^3_4}{U^2_6}. \end{equation} This potential has the form shown in Fig.~1.
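The structure of this potential is easy to verify symbolically: inserting the superpotential of Eq.\eqref{eq30} into Eq.\eqref{eq01} gives a purely quartic-plus-sextic polynomial in the rescaled coordinate, the $q^2$ terms canceling and the constant $x_1/2$ being absorbed in the choice of $E_0$. A minimal check:
\begin{verbatim}
# Symbolic check that the cubic superpotential of Eq. (eq30), inserted
# in Eq. (eq01), yields a quartic-plus-sextic potential in the rescaled
# coordinate (the q^2 terms cancel; the constant x1/2 is absorbed in E_0).
import sympy as sp

q, x1 = sp.symbols('q x1', real=True)
x = x1*q - sp.Rational(1, 3)*x1**2*q**3
V = sp.expand((x**2 + sp.diff(x, q)) / 2)     # V(q) - E_0
print(sp.collect(V, q))
# -> x1/2 - x1**3*q**4/3 + x1**4*q**6/18
\end{verbatim}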
\begin{figure}[ht] \centering \includegraphics[scale=0.65]{imarticle01.jpg} \caption{{\footnotesize{The potential of the toy model for the values $U_4=1$ and $U_6=1$.}}} \end{figure} Classically, all the trajectories are bounded and periodic, with the period \begin{equation} \label{eq35} T_{class}=4\sqrt{\frac{m}{2}} \int_0^{Q_+} \frac{1}{\sqrt{E-U(Q)}} dQ\ , \end{equation} which depends on the energy of the system because $Q_+$ is such that $E - U(Q_+)=0$.\\ We now analyze the mean value of the position operator for the corresponding coherent states and compare it with the classical trajectories. From Eq.\eqref{eq22} one easily sees that the functions $f_n$ will be polynomial in the variable $q$. This will greatly simplify the computations. We thus introduce the coefficients $\tilde{f}_{n,k} (\alpha)$ by \begin{equation} \label{eq36} f_n (q,\alpha)=\sum_{k=0}^{3n} \tilde{f}_{n,k}(\alpha) q^k \end{equation} with \begin{equation} \label{eq37} \tilde{f}_{0,0}=\tilde{f}_{0,0}^\star=1. \end{equation} The coefficients we have introduced obey the following recursion relations, which naturally come from Eq.\eqref{eq22}: \begin{eqnarray} \begin{split} \label{eq39} \tilde{f}_{n+1,0}&=- \tilde{f}_{n,2} - \sqrt{2} \alpha \tilde{f}_{n,1} - \alpha^2 \tilde{f}_{n,0}; \\ \tilde{f}_{n+1,1}&=-3 \tilde{f}_{n,3} -2 \sqrt{2} \alpha \tilde{f}_{n,2} - (x_1 + \alpha^2) \tilde{f}_{n,1} - \sqrt{2} \alpha x_1\tilde{f}_{n,0} ;\\ \tilde{f}_{n+1,2}&=-6 \tilde{f}_{n,4} -3 \sqrt{2} \alpha \tilde{f}_{n,3} - (2x_1 + \alpha^2) \tilde{f}_{n,2} - \sqrt{2} \alpha x_1 \tilde{f}_{n,1} ;\\ \tilde{f}_{n+1,l}&=-\frac{1}{2} (l+1)(l+2) \tilde{f}_{n,l+2} - \sqrt{2} \alpha (l+1) \tilde{f}_{n,l+1} - (x_1 l + \alpha^2) \tilde{f}_{n,l} - \sqrt{2} \alpha x_1 \tilde{f}_{n,l-1}\\ &+ \frac{1}{3} x_1^2 (l-2)\tilde{f}_{n, l-2}+\frac{\sqrt{2}}{3}\alpha x_1^2 \tilde{f}_{n, l-3}\ (3 \le l \le 3n-2); \\ \tilde{f}_{n+1,3n-1}&= -3n \sqrt{2} \alpha \tilde{f}_{n,3n} -[(3n-1)x_1+ \alpha^2] \tilde{f}_{n,3n-1} - \sqrt{2} \alpha x_1 \tilde{f}_{n,3n-2}\\ &+ \frac{1}{3} x_1^2 (3n-3)\tilde{f}_{n, 3n-3}+\frac{\sqrt{2}}{3}\alpha x_1^2 \tilde{f}_{n, 3n-4}; \\ \tilde{f}_{n+1,3n}&= -(3n x_1+ \alpha^2) \tilde{f}_{n,3n}- \sqrt{2} \alpha x_1 \tilde{f}_{n,3n-1}+ \frac{1}{3} x_1^2 (3n-2)\tilde{f}_{n, 3n-2}+\frac{\sqrt{2}}{3}\alpha x_1^2 \tilde{f}_{n, 3n-3};\\ \tilde{f}_{n+1,3n+1}&= - \sqrt{2} \alpha x_1 \tilde{f}_{n,3n}+ \frac{1}{3} x_1^2 (3n-1)\tilde{f}_{n, 3n-1}+\frac{\sqrt{2}}{3}\alpha x_1^2 \tilde{f}_{n, 3n-2};\\ \tilde{f}_{n+1,3n+2}&= n x_1^2 \tilde{f}_{n,3n} +\frac{\sqrt{2}}{3}\alpha x_1^2 \tilde{f}_{n, 3n-1};\\ \tilde{f}_{n+1,3n+3}&= \frac{\sqrt{2}}{3}\alpha x_1^2 \tilde{f}_{n, 3n}. \end{split} \end{eqnarray} We now have to compute \begin{equation} \label{eq38} C_{m,n}=\sum_{k=0}^{3m}\sum_{p=0}^{3n} \int_{-\infty}^{+\infty} dq \, q^{k+p+1} \tilde{f^\star}_{m,k}(\alpha) \tilde{f}_{n,p}(\alpha) \exp{(2\beta q+x_1 q^2-\frac{1}{6}x_1^2 q^4)}\ , \end{equation} where we have set $\beta = \sqrt{2}\, \mathrm{Re}(\alpha)$, the exponential being simply $\psi_H^\star \psi_H$. From Eq.\eqref{eq38}, we have \begin{equation} \label{eq40} C_{0,0}= \int_{-\infty}^{+\infty} dq \, q\ \exp{(2\beta q+x_1 q^2-\frac{1}{6}x_1^2 q^4)}\ . \end{equation} The integrals in Eq.\eqref{eq38} have the generic form \begin{equation} \label{eq41} J_n = \int_{-\infty}^{+\infty} dq \, q^n\ \exp{(2\beta q+x_1 q^2-\frac{1}{6}x_1^2 q^4)}\ . \end{equation} At this point, there are two ways to tackle the computation. The first one relies on the observation that one only needs to compute the integral $J_0$. The second one uses recursion relations.\\ We begin with the first approach.
Its main interest lies in the fact that it leads to analytical expressions. Its main limitation is that it works only for large values of the real part of the parameter $\alpha$ describing the coherent state.\\ In short, from Eq.\eqref{eq41} it can be shown that \begin{equation} \label{eq43} J_n = \frac{1}{2^n} \frac{d^n}{d\beta^n} J_0. \end{equation} For convenience, we introduce the function \begin{equation} \label{eq50} g(q)=2\beta q+x_1 q^2-\frac{1}{6}x_1^2 q^4. \end{equation} Our integral then becomes \begin{equation} \label{eqa07} J_0 = \int_{-\infty}^{\infty} dq \ \exp{[ g(q) ]}. \end{equation} To evaluate $J_0$, we shall use the saddle point approximation. This is justified by the fact that the integrand decays very rapidly, owing to the negative quartic term in the exponential. The extrema of the integrand satisfy the equation \begin{equation} \label{eq44} q^3-\frac{3}{x_1}q-\frac{3\beta}{x_1^2}=0. \end{equation} This equation is a particular case of the following \begin{equation} \label{eq45} q^3+a_1q^2+a_2q+a_3=0. \end{equation} The solution to such an equation can be obtained using Cardano's formula. One introduces the intermediate quantities \begin{equation} \label{eq46} Q=\frac{-1}{x_1}; R=\frac{3\beta}{2x_1^2}; \end{equation} \begin{equation} \label{eq47} D = Q^3+R^2; S=\sqrt[3]{R+\sqrt{D}};\ T=\sqrt[3]{R-\sqrt{D}}. \end{equation} The only real extremum (actually a maximum) is found at \begin{equation} \label{eq48} q_0= S+T-\frac{1}{3}a_1. \end{equation} One finally arrives at the following expression \begin{equation} \label{eq49} q_0=\sqrt[3]{\frac{3\beta}{2x_1^2}+\sqrt{\frac{9\beta^2}{4x_1^4}-\frac{1}{x_1^3}}}+\sqrt[3]{\frac{3\beta}{2x_1^2}-\sqrt{\frac{9\beta^2}{4x_1^4}-\frac{1}{x_1^3}}}. \end{equation} Let us first expand the function $g$, given by \eqref{eq50}, near its maximum \begin{equation} \label{eqa10} g(q) = g(q_0) + \frac{1}{2!} g''(q_0) ( q - q_0)^2 + \frac{1}{3!} g'''(q_0) ( q - q_0)^3 + \frac{1}{4!} g^{(4)}(q_0) ( q - q_0)^4 + \cdots \end{equation} Introducing the centered and rescaled variable $y$ by \begin{equation} \label{eqa11} q = q_0 + \frac{y}{\sqrt{- g''(q_0)}} \end{equation} one obtains \begin{equation} \label{eqa14} g(q) = g(q_0) - \frac{1}{2!} y^2 + A y^3 + B y^4 + \cdots \end{equation} where the parameters $A$ and $B$ are given by \begin{equation} \label{eqa16a} A = \frac{1}{3!} \ \frac{g^{(3)}(q_0)}{\lbrack - g''(q_0) \rbrack^{\frac{3}{2}}}, \ B = \frac{1}{4!} \ \frac{g^{(4)}(q_0)}{\lbrack - g''(q_0) \rbrack^2}. \end{equation} This leads to a sum of gamma functions \begin{equation} \label{eqa17} J_0 \simeq \frac{\exp[g(q_0)]}{\sqrt{-g''(q_0)}} \ \int_{-\infty}^{+\infty} dy \exp{\left(-\frac{1}{2}y^2\right)} \ \exp{\left(Ay^3+By^4 + \cdots \right)} \end{equation} From this, we shall derive the domain of validity of our approach. To have an asymptotic series for the quantity under investigation, we need $A$ and $B$ to be negligible. A plot of these functions shows this to be true for large values of the parameter $\beta$. \begin{figure}[ht] \centering \includegraphics[scale=0.65]{imarticle04.jpg} \caption{{\footnotesize{Plot of the functions $A$ and $B$ for $x_1=-1$.}}} \end{figure} We use this to simplify our formulas.
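Before doing so, the quality of the leading (Gaussian, $A=B=0$) saddle point estimate can be checked numerically against direct quadrature. The sketch below works relative to the peak so that the integrand stays representable; the parameter values are merely illustrative:
\begin{verbatim}
# Numerical sanity check of the leading (A = B = 0) saddle-point estimate
# of J_0 against direct quadrature.  Parameter values are illustrative.
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

x1, beta = -1.0, 50.0
g   = lambda q: 2*beta*q + x1*q**2 - x1**2*q**4/6
gp  = lambda q: 2*beta + 2*x1*q - 2*x1**2*q**3/3
q0  = brentq(gp, 0.0, 1.0e3)          # the single real maximum of g
gpp = 2*x1 - 2*x1**2*q0**2            # g''(q0) < 0 at the maximum
gauss = np.sqrt(2*np.pi/(-gpp))       # saddle-point value of the ratio
exact, _ = quad(lambda q: np.exp(g(q) - g(q0)), q0 - 10.0, q0 + 10.0)
print(gauss, exact)                   # agree to better than a percent here
# In both cases  log J_0 = g(q0) + log(printed value).
\end{verbatim}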
For large values of $\beta$, the maximum of the function $g$ occurs at \begin{equation} \label{eqa15} q_0 \simeq \left( \frac{3\beta}{x_1^2} \right)^{\frac{1}{3}}, \end{equation} (which comes from Eq.\eqref{eq49}) because in that limit \begin{equation} \label{eqa16b} A \sim \frac{1}{\beta^{\frac{1}{3}}}, \ B \sim \frac{-4 x_1^2}{\beta^{\frac{4}{3}}}. \end{equation} One then finds the dominant contribution (in the limit $\beta \longrightarrow \infty$) to be given by \begin{equation} \label{eqa21} J_0 \simeq \sqrt{\pi} \exp{\left( z_0 \beta^{\frac{2}{3}}+ z_1 \beta^{\frac{4}{3}} \right)} \left( z_2 + z_3 \beta^{\frac{2}{3}} \right)^{-\frac{1}{2}} \end{equation} where the coefficients are given by \begin{equation} \label{eqa22} z_0 = -x_1 \left( \frac{3\beta}{x_1^2} \right)^{\frac{2}{3}}, \ z_1 = \frac{x_1^2}{2} \left( \frac{3\beta}{x_1^2} \right)^{\frac{4}{3}}, \ z_2 = -x_1, \ z_3 = x_1^2 \left( \frac{3\beta}{x_1^2} \right)^{\frac{2}{3}}. \end{equation} For the rest of our treatment, it is simpler to write this result directly in terms of $q_0$: \begin{equation} \label{eqa23} J_0 = \sqrt{\pi} \left( -x_1 + x_1^2 q_0^2 \right)^{-\frac{1}{2}}\exp{(-x_1 q_0^2 + \frac{1}{2} x_1^2 q_0^4)}. \end{equation} One can now use the relation between $J_n$ and $J_0$. For example, for the first two, one gets \begin{eqnarray} \begin{split} \label{eqa25} J_1 = \frac{\sqrt{\pi}}{2} q_0^{-1}& \left( -x_1 + x_1^2 q_0^2 \right)^{-\frac{3}{2}} \left( 1 - 4 x_1 q_0^2 + 2 x_1^2 q_0^4 \right) \exp{(-x_1 q_0^2 + \frac{1}{2} x_1^2 q_0^4)}, \\ J_2 = \frac{\sqrt{\pi}}{4} x_1^{-1} q_0^{-4}& \left( -x_1 + x_1^2 q_0^2 \right)^{-\frac{5}{2}}\\ &\times ( 1 +2 x_1 q_0^2 -10 x_1^2 q_0^4 + 22 x_1^3 q_0^6 -16 x_1^4 q_0^8 + 4 x_1^5 q_0^{10} ) \exp{(-x_1 q_0^2 + \frac{1}{2} x_1^2 q_0^4)}. \end{split} \end{eqnarray} It should be noted that only the dominant term in the polynomial expression is relevant. This is justified by the fact that the contributions coming from the coefficients $A$ and $B$ in Eq.\eqref{eqa17} are ignored when computing the dominant term. This approach, which gives analytical formulas, will be used later in the computation of the time scale at which the classical solution begins to separate from the mean value of the position operator for the coherent states studied here.\\ The second approach uses integration by parts. One has \begin{equation} \label{eqa25a} J_n = \int_{-\infty}^{+\infty} dq \ q^n \ \exp{\lbrack g(q) \rbrack}. \end{equation} If one introduces $u = q^n \ \exp{(x_1 q^2 - \frac{1}{6} x_1^2 q^4)}$ and $dv = \exp{(2\beta \ q)}dq$, one gets \begin{eqnarray} \begin{split} \label{eqa26} J_3 &= \frac{3}{x_1} J_1 + \frac{3\beta}{x_1^2} J_0 , \\ J_{n+3} &= \frac{3n}{2 \ x_1^2} J_{n-1}+ \frac{3\beta}{x_1^2} J_n + \frac{3}{x_1} J_{n+1} \ ; n \geq 1. \end{split} \end{eqnarray} These relations are exact. They apply to small as well as to large values of $\beta$. They can be used in the following way. For a given value of $\beta$, one computes numerically $J_0, J_1$ and $J_2$. All the others are then obtained by the preceding recursion formulas.\\ Finally, we use the mean value of the position operator given in Eq.\eqref{eq28}. We first write explicitly the relation contained in Eq.\eqref{eq29} \begin{equation} \label{eqa03} C_{m,n} = \sum _{k=0}^{3m} \sum _{p=0}^{3n} \tilde{f}^\star_{m,k} \ \tilde{f}_{n,p} \ J_{k+p+1}.
\end{equation} The expressions of the $C_{m, n}$ (Eq.\eqref{eqa03}) then lead to \begin{eqnarray} \begin{split} \label{eqa04} \Omega_0 &= J_1, \\ \Omega_1 = i \ & \Big\lbrack \left( - \tilde{f}_{1,0} + \tilde{f}^\star_{1,0} \right) J_1 + \left( - \tilde{f}_{1,1} + \tilde{f}^\star_{1,1} \right) J_2 \Big\rbrack, \\ \Omega_2 = - \frac{1}{2} & \ \Big\lbrace \ \left( \tilde{f}_{2,0} + \tilde{f}^\star_{2,0} - 2\tilde{f}^\star_{1,0} \tilde{f}_{1,0} \right) J_1 \\ &+ \Big\lbrack \tilde{f}_{2,1} + \tilde{f}^\star_{2,1} - 2 \left( \tilde{f}^\star_{1,0} \tilde{f}_{1,1} + \tilde{f}_{1,0} \tilde{f}^\star_{1,1} \right) \Big\rbrack J_2 + \ \left( \tilde{f}_{2,2} + \tilde{f}^\star_{2,2} - 2\tilde{f}^\star_{1,1} \tilde{f}_{1,1} \right ) J_3 \ \Big\rbrace, \\ \Omega_3 = \frac{i}{6} & \ \Big\lbrace \ \Big\lbrack \left( \tilde{f}_{3,0} - \tilde{f}^\star_{3,0} \right) + 3 \left( - \tilde{f}^\star_{1,0} \tilde{f}_{2,0} + \tilde{f}_{1,0} \tilde{f}^\star_{2,0} \right) \Big\rbrack J_1 \\ &+ \Big\lbrack \left( \tilde{f}_{3,1} - \tilde{f}^\star_{3,1}\right) + 3 \left( -\tilde{f}^\star_{1,0} \tilde{f}_{2,1} - \tilde{f}_{2,0} \tilde{f}^\star_{1,1} + \tilde{f}_{1,0}\tilde{f}^\star_{2,1} + \tilde{f}_{1,1} \tilde{f}^\star_{2,0} \right) \Big\rbrack J_2 \\ &+ \ \Big\lbrack \left( \tilde{f}_{3,2} - \tilde{f}^\star_{3,2} \right) + 3 \left( -\tilde{f}^\star_{1,0} \tilde{f}_{2,2} - \tilde{f}_{2,1} \tilde{f}^\star_{1,1} + \tilde{f}_{1,0}\tilde{f}^\star_{2,2} + \tilde{f}_{1,1} \tilde{f}^\star_{2,1} \right) \Big\rbrack J_3 \\ & + \Big\lbrack \left( \tilde{f}_{3,3} - \tilde{f}^\star_{3,3} \right) + 3 \left( - \tilde{f}^\star_{1,1} \tilde{f}_{2,2} + \tilde{f}_{1,1} \tilde{f}^\star_{2,2} \right) \Big\rbrack J_4 \Big\rbrace,\\ &\cdots \end{split} \end{eqnarray} On the other hand, the classical trajectories are analytical functions of time. Rather than relying on their periodic character, we shall consider the equation of motion \begin{equation} \label{eq64} q''(t)-\mu q^3(t)-\sigma q^5(t)=0, \end{equation} \hspace{40mm}with $ \mu=-\frac{4}{3} \omega^2 x_1^3$ and $ \sigma= \frac{1}{3} \omega^2 x_1^4 $.\\ One sees that a power series solution of the form \begin{equation} \label{eq65} q_{class}(t)=\sum_{i=0}^{\infty} A_i\, (\omega t)^i \end{equation} leads to the following relations among the coefficients \begin{eqnarray} \begin{split} \label{eq66} A_2 & = \frac{1}{6}(A^5_0 - 4A^3_0); \\ A_3 & = \frac{1}{18}( 5A_1 A^4_0 - 12A_1 A^2_0); \\ A_4 & = \frac{1}{216}( 5A_0^9 - 32A^7_0 + 48A^5_0 + 60A^2_1 A^3_0 - 72 A^2_1 A_0); \\ A_5 & = \frac{1}{1080}(85A_1 A^8_0 - 432A_1 A^6_0 + 432 A_1 A^4_0 + 180 A^3_1 A^2_0 - 72 A^3_1);\\ \cdots \end{split} \end{eqnarray} The trajectories will be identical if the equations Eq.\eqref{eq66} are still satisfied when the constants $A_i$ are replaced by the coefficients $\Omega_i$.
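In practice, the relations \eqref{eq66} are easy to generate to any order by solving Eq.\eqref{eq64} as a power series. The following minimal sketch (an illustration, written in the rescaled time $s=\omega t$; with $x_1=1$ it reproduces the coefficients listed in \eqref{eq66}) makes the recursion explicit:

\begin{verbatim}
import numpy as np
from numpy.polynomial import polynomial as P

def classical_coeffs(A0, A1, x1=1.0, order=8):
    # coefficients A_i of q(s) = sum_i A_i s^i, s = w t, solving Eq. (64):
    # d^2 q/ds^2 = -(4/3) x1^3 q^3 + (1/3) x1^4 q^5
    A = np.zeros(order + 1)
    A[0], A[1] = A0, A1
    for i in range(order - 1):
        q = A[: i + 1]              # a degree-i truncation suffices here
        c3 = P.polypow(q, 3)[i]     # i-th coefficient of q^3
        c5 = P.polypow(q, 5)[i]     # i-th coefficient of q^5
        A[i + 2] = (-(4/3) * x1**3 * c3
                    + (1/3) * x1**4 * c5) / ((i + 1) * (i + 2))
    return A

# e.g. classical_coeffs(A0, 0.0)[2] equals (A0**5 - 4*A0**3)/6, as in Eq. (66)
\end{verbatim}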
For our case, demanding that the quantum mean value stay close to the classical trajectory, \begin{equation} \label{eqz01} \Big \lvert \frac{q_{cl}(t) - q_{moy} (t)}{q_{cl} (t)} \Big \rvert \approx \varepsilon \ll 1 \end{equation} leads to the time-scale bound \begin{equation} \label{eqz02} \lvert \omega t \rvert \leq \varepsilon^{\frac{1}{2}} \Big \lvert \frac{x_1^2}{f(x_1, \beta)} \Big \rvert^{\frac{1}{2}} \end{equation} where \begin{eqnarray} \begin{split} \label{eqz03} f(x_1, \beta) = &6x_1 \Big\lbrack -4 \gamma^2 \left( 3x_1 \beta^4 \right)^{\frac{1}{3}} - \left( 9 x_1^2 \beta^2 \right)^{\frac{1}{3}} \left( \beta^2 + \gamma^2 \right) - 2 \beta^2 \gamma^2 + x_1 \left( \beta^2 - \gamma^2 \right) \Big\rbrack\\ & + \pi^2 \exp{\Big\lbrack 4 \left( -x_1 q_0^2 + \frac{1}{2} x_1^2 q_0^4 \right) \Big\rbrack} - 4 \pi x_1^2 \exp{\Big\lbrack 2 \left( -x_1 q_0^2 + \frac{1}{2} x_1^2 q_0^4 \right) \Big\rbrack}, \end{split} \end{eqnarray} or, written in terms of the coefficients $\Omega_i$, \begin{equation} \label{eqz04} \lvert \omega t \rvert \leq \varepsilon^{\frac{1}{2}} \Big \lvert \frac{6 \Omega_0}{ \Omega_0^5 - 4 \Omega_0^3 - 6 \Omega_2} \Big \rvert^{\frac{1}{2}} \end{equation} These bounds hold for times very small compared to the intrinsic period of the system. \\ \textbf{\Large{Conclusions}} \\ In this paper, we devised a general method for the computation of the quantum trajectories of generalized coherent states in the supersymmetric context. Our approach was successfully applied to the harmonic oscillator. We then applied it to one of the simplest superpotentials whose coherent states are normalizable. We found for this specific case the timescale after which the classical and quantum trajectories begin to differ significantly. Although technical difficulties arise, our approach can be used to analyze the product of the uncertainties in the physical position and momentum operators and see how it evolves in view of the absolute bound given by the Heisenberg inequality. \\ Some mathematical issues have not been addressed here. One of them is the convergence of the series obtained for the position operator mean value. This is a very difficult point since we do not have an explicit formula for the coefficients $\Omega_l$. But in principle, since the problem is well posed, the results should be meaningful.\\ The method we followed here gives the position as an analytic series in time. One has to sum a finite number of terms to obtain an approximation. Such a finite sum is polynomial. This explains why after some time the graph goes to infinity; it simply means we are out of the domain where the approximation is valid.\\ The SUSY structure has been extensively used (see Eq.\eqref{eq17}, Eq.\eqref{eq18}, Eq.\eqref{eq19}). This culminated in the recurrence formula of Eq.\eqref{eq22}, which gives the $n$th contribution to the wave function. The superpotential $x(q)$ appears many times because it is the result of the commutation relations between the operators $A$ and $A^\dag$ (Eq.\eqref{eq03a}). The coefficients $C_{m,n}$ are not easy to find analytically, even for simple superpotentials like the one we studied here. We devised recipes which can tackle this successfully. This was the case of the saddle point approximation, whose domain of validity was explicitly given. \\ It should be emphasized that the coherent states studied here were not constructed as superpositions of the quantum energy eigenstates, so the normalizability issues raised for such superpositions do not arise.
The state is normalizable in the Heisenberg picture; since the evolution operator which relates it to the Schr\"odinger picture is unitary, its time-evolved equivalent has the same norm. Let us finish with some considerations concerning formal remarks which can be misleading. It has been argued, in the case of coherent states defined as infinite superpositions of energy eigenstates, that in most cases the wave function so obtained is only formal, i.e., it does not converge when considering, for example, imaginary times. First, we have no reason to turn to imaginary times in this work. Secondly, the definition of coherent states adopted here is a priori different, so that there is no reason the claim made above, if true, should apply here. To finish, our work can be seen as a particular illustration of the Ehrenfest theorem, i.e., in the most general case, the classical and quantum trajectories are not the same. \\ \vspace{5mm} \textbf{\Large{Appendix : The Harmonic Oscillator}} \\ We give here the results obtained by our treatment when applied to the harmonic oscillator. The functions $f_n$ are polynomial \begin{equation} \label{eq69} f_n(q, \alpha)=\sum_{k=0}^n \tilde{f}_{n,k}(\alpha) q^k; \tilde{f}_{0,0}= \tilde{f}_{0,0}^\star=1. \end{equation} The coefficients we need are given by the following integrals \begin{eqnarray} \begin{split} \label{eq70} C_{m,n}&=\sum_{k=0}^{m}\sum_{p=0}^{n} \int_{-\infty}^{+\infty} dq. q^{k+p+1} \tilde{f}^\star_{m,k}(\alpha) \tilde{f}_{n,p}(\alpha) \psi_H^\star (q,\alpha) \psi_H (q,\alpha)\ ; \\ C_{m,n}&=\sum_{k=0}^{m}\sum_{p=0}^{n} \int_{-\infty}^{+\infty} dq. q^{k+p+1} \tilde{f}^\star_{m,k}(\alpha) \tilde{f}_{n,p}(\alpha) \exp{(2\beta q-q^2)}\ \end{split} \end{eqnarray} where, by definition, the coefficient $\beta$ is given by \begin{equation} \label{eq71} 2\beta=\sqrt{2}(\alpha+\alpha^\star). \end{equation} We have fewer recursion relations \begin{eqnarray} \begin{split} \label{eq72} \tilde{f}_{n+1,0}&=- \tilde{f}_{n,2} - \sqrt{2} \alpha \tilde{f}_{n,1} - \alpha^2 \tilde{f}_{n,0}; \\ \tilde{f}_{n+1,l}&=-\frac{1}{2} (l+1)(l+2) \tilde{f}_{n,l+2} - \sqrt{2} \alpha (l+1) \tilde{f}_{n,l+1} + ( l - \alpha^2) \tilde{f}_{n,l} + \sqrt{2} \alpha \tilde{f}_{n,l-1};\\ \tilde{f}_{n+1,n-1}&=- \sqrt{2} \alpha n \tilde{f}_{n,n} +(n-1-\alpha^2) \tilde{f}_{n,n-1} + \sqrt{2} \alpha \tilde{f}_{n,n-2}; \\ \tilde{f}_{n+1,n}&=(n-\alpha^2)\tilde{f}_{n,n} + \sqrt{2} \alpha \tilde{f}_{n,n-1}; \\ \tilde{f}_{n+1,n+1}&= \sqrt{2} \alpha \tilde{f}_{n,n}. \end{split} \end{eqnarray} Our coefficients $C_{m,n}$ are given by integrals of the product of an exponential and a power of the variable $q$ (see Eq.\eqref{eq70}). Actually, the only thing one needs is \begin{equation} \label{eq73} J_0 =\int_{-\infty}^{+\infty} dq. \exp{(-q^2+2\beta q)}\, \end{equation} because \begin{equation} \label{eq74} \int_{-\infty}^{+\infty} dq. q^{k+p+1}\exp{(2\beta q-q^2)}\ =\sqrt{\pi}\ \frac{1}{2^{k+p+1}}\ \frac{\partial^{k+p+1}}{\partial \beta^{k+p+1}} \Big\lbrack \exp{(\beta^2)} \Big\rbrack. \end{equation} The coefficients can now be recast in the form \begin{equation} \label{eq75} C_{m,n}=\sqrt{\pi} \exp{(\beta^2)}\sum_{k=0}^{m}\sum_{p=0}^{n} \tilde{f}^\star_{m,k}(\alpha)\tilde{f}_{n,p}(\alpha)\ \frac{1}{2^{k+p+1}}\ P_{k+p+1}(\beta). \end{equation} The polynomials $P_s$ are defined by the property \begin{equation} \label{eq76} \frac{d^s}{d\beta^s}\exp{(\beta^2)}=\exp{(\beta^2)}\ P_s(\beta).
\end{equation} One readily finds that they obey the recursion relations \begin{eqnarray} \label{eq77} P_{s+1}(\beta)&=&2\beta P_s(\beta) + \frac{d}{d\beta}P_s(\beta); \\ \nonumber P_0(\beta)&=&1. \end{eqnarray} Let us now compare the quantum and the classical trajectories for the harmonic oscillator using our approach.\\ For the quantum behavior, on the other hand, one finds \begin{eqnarray} \begin{split} \label{eq80} \Omega_0 & = \sqrt{\pi} \ \beta \ \exp{(\beta^2)},\\ \Omega_1 &= i \sqrt{\frac{\pi}{2}} \ \beta \ \exp{(\beta^2)},\\ \Omega_2 &=-\frac{1}{2} \sqrt{\pi}\ \exp{(\beta^2)},\\ \Omega_3 &=-\frac{i}{6} \sqrt{\frac{\pi}{2}} \ \beta \ \exp{(\beta^2)},\\ \Omega_4 &=\frac{1}{24} \sqrt{\pi}\ \exp{(\beta^2)},\\ &\cdots \end{split} \end{eqnarray} The following ratios \begin{eqnarray} \begin{split} \label{eq85} &\frac{\Omega_2}{\Omega_0}=-\frac{1}{2},\ \frac{\Omega_4}{\Omega_0}=\frac{1}{24},\ \frac{\Omega_6}{\Omega_0}=-\frac{1}{720},\ \frac{\Omega_8}{\Omega_0}=\frac{1}{40 320},\ \frac{\Omega_{10}}{\Omega_0}=-\frac{1}{3 628 800},\ \cdots \\ &\frac{\Omega_3}{\Omega_1}=-\frac{1}{6},\ \frac{\Omega_5}{\Omega_1}=\frac{1}{120},\ \frac{\Omega_7}{\Omega_1}=-\frac{1}{5 040},\ \frac{\Omega_9}{\Omega_1}=\frac{1}{362 880},\ \cdots \end{split} \end{eqnarray} lead us to conclude that the behavior of the classical trajectory given in Eq.\eqref{eqq01} is recovered, at least at lowest order. One can go to higher orders and verify that this still works.\\ \begin{figure}[ht] \centering \includegraphics[scale=0.62]{imarticle02d.jpg} \caption{{\footnotesize{Plot of $q(t)$ for the Harmonic Oscillator, for the value $\alpha=10$, an approximation to the 8th order.}}} \end{figure} The same calculation, done in Section 3 for our toy model, yields the plot of $q(t)$ shown below. Figure 3 shows a kind of oscillation over a small interval of time.\\ We were able to plot the function $q(t)$ for the parameters $x_1=-1$ and $\beta=\frac{4}{3}$. But this was not sufficient to completely determine the coherent state's parameter $\alpha$; the only information was $\mathrm{Re}(\alpha)=\frac{2\sqrt{2}}{3}$ and $\mathrm{Im}(\alpha)\leq0$. The ``kind of oscillations'' appeared for the smallest values of $\mathrm{Im}(\alpha)$ and the smallest times, as shown in Figure 3. \begin{figure}[ht] \centering \includegraphics[scale=0.60]{imarticle03.jpg} \caption{{\footnotesize{Plot of $q(t)$ for the Toy Model for the values $\alpha=\frac{100\sqrt{2}}{3}-\frac{i}{10000}$ and $x_1=-1$. The calculation was done to the 4th order.}}} \end{figure}
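The polynomials $P_s$ are also cheap to generate in practice. As an illustrative check of the recursion \eqref{eq77} against the defining property \eqref{eq76} (a sketch assuming the sympy library, not part of the original derivation):

\begin{verbatim}
import sympy as sp

b = sp.symbols('beta')

def P_poly(s):
    # P_s defined by d^s/dbeta^s exp(beta^2) = exp(beta^2) P_s(beta),
    # built from Eq. (77): P_{s+1} = 2 beta P_s + P_s'
    p = sp.Integer(1)                     # P_0 = 1
    for _ in range(s):
        p = sp.expand(2 * b * p + sp.diff(p, b))
    return p

# check against direct differentiation, e.g. for s = 4:
assert sp.simplify(sp.diff(sp.exp(b**2), b, 4) / sp.exp(b**2) - P_poly(4)) == 0
# P_poly(4) == 16*beta**4 + 48*beta**2 + 12
\end{verbatim}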
\section{Introduction}\label{introduction} Decision trees are a fundamental machine learning tool. They partition the feature (input) space into regions of response homogeneity, such that the response (output) value associated with any point in a given partition can be predicted from the average of its neighbors. The classification and regression tree (CART) algorithm of \cite{breiman_classification_1984} is a common recipe for building trees; it grows greedily through a series of partitions on features, each of which maximizes reduction in some measure of impurity at the current tree leaves (terminal nodes; i.e., the implied input space partitioning). The development of random forests (RF) by \cite{breiman_random_2001}, which predict through the average of many CART trees fit to bootstrap data resamples, is an archetype for the successful strategy of tree ensemble learning. For prediction problems with training sets that are large relative to the number of inputs, properly trained ensembles of trees can predict out-of-the-box as well as any carefully tuned, application-specific alternative. This article makes three contributions to the understanding and application of decision tree ensembles (or, \textit{forests}). \textit{Bayesian forest:} A nonparametric Bayesian (npB) point-of-view allows interpretation of forests as a sample from a posterior over trees. Imagine CART applied to a data generating process (DGP) with finite support: the tree greedily splits support to minimize impurity of the partitioned response distributions (terminating at some minimum-leaf-probability threshold). We present a nonparametric Bayesian model for DGPs based on multinomial draws from (large) finite support, and derive the Bayesian forest (BF) algorithm for sampling from the distribution of CART trees implied by the posterior over DGPs. Random forests are an approximation to this BF exact posterior sampler, and we show in examples that BFs provide a small but reliable gain in predictive performance over RFs. \textit{Posterior tree variability:} Based upon this npB framework, we derive results on the stability of CART over different DGP realizations. We find that, conditional on the data allocated to a given node on the sample CART tree, the probability that the next split for a posterior DGP realization matches the observed full-sample CART split is \begin{equation}\label{treestable} \mathrm{p}\left(\text{split matches sample CART}\right) \gtrsim 1 - \frac{p}{\sqrt{n}} e^{-n}, \end{equation} where $p$ is the number of possible split locations and $n$ the number of observations on the node. Even if $p$ grows with $n$, the result indicates that partitioning can be stable conditional on the data being split. This conditioning is key: CART's well-known instability is due to its recursive nature, such that a single split different from sample CART at some node removes any expectation of similarity below that node. However, for large samples, (\ref{treestable}) implies that we will see little variation at the top hierarchy of trees in a forest. We illustrate such stability in our examples.
\textit{Empirical Bayesian forests:} the npB forest interpretation and tree-stability results lead us to propose empirical Bayesian forests (EBF) as an algorithm for building approximate BFs on massive distributed datasets. Traditional empirical Bayesian analysis fixes parameters in high levels of a hierarchical model at their marginal posterior mode, and quantifies uncertainty for the rest of the model conditional upon these fixed estimates. EBFs work the same way: we fit a single shallow CART \textit{trunk} to the sample data, and then sample a BF ensemble of \textit{branches} at each terminal node of this trunk. The initial CART trunk thus maps observations to their branch, and each branch BF is fit in parallel without any communication with the other branches. With little posterior variability about the trunk structure, an EBF sample should look similar to the (much more costly, or even infeasible) full BF sample. In a number of experiments, we compare EBFs to the common distributed-computing strategy of fitting forests to data subsamples and find that the EBFs lead to a large improvement in predictive performance. This type of strategy is key to efficient machine learning with Big Data: focus the `Big' on the pieces of models that are most difficult to learn. Bayesian forests are introduced in Section 2 along with a survey of Bayesian tree models, Section 3 investigates tree stability in theory and practice, and Section 4 presents the empirical Bayesian forest framework. Throughout, we use publicly available data on home prices in California to illustrate our ideas. We also provide a variety of other data analyses to benchmark performance, and close with description of how EBF algorithms are being built and perform in large-scale machine learning at eBay.com. \section{Bayesian forests}\label{bayesian-forests} Informally, write $dgp$ to represent the stochastic process defined over a set of possible DGPs. A Bayesian analogue to classical `distribution-free' nonparametric statistical analysis \citep[e.g.,][]{hollander_nonparametric_1999} has two components: \begin{enumerate} \item set a nonparametric statistic $\mathcal{T}(dgp)$ that is of interest in your application regardless of the true DGP, \item and build a flexible model for the DGP, so that the posterior distribution on $\mathcal{T}(dgp)$ can be derived from the posterior distribution on possible DGPs. \end{enumerate} In the context of this article, $\mathcal{T}(dgp)$ refers to a CART tree. Indeed, trees are useful precisely because they are good predictors regardless of the underlying data distribution -- they do not rely upon distributional assumptions to share information across training observations. Our DGP model, detailed below, leads to a posterior for $dgp$ that is represented through random weighting of observed support. A Bayesian forest contains CART fits corresponding to each draw of support weights, and the BF ensemble prediction is an approximate posterior mean. \begin{figure*} \includegraphics[width=\textwidth]{graphs/mcycle} \caption{\label{mcycle} Motorcycle data illustration. The lines show CART fit to the data sample, CART fit to a sample with weights drawn IID from an exponential (point size proportional to weight), and the BF predictor, which averages over 100 weighted CART fits.} \end{figure*} \subsection{Nonparametric model for the DGP}\label{dgpmodel} We employ a Dirichlet-multinomial sampling model in nonparametric Bayesian analysis. The approach dates back to \citet{ferguson_bayesian_1973}.
\citet{chamberlain_nonparametric_2003} provide an overview in the context of econometric problems. \citet{rubin_bayesian_1981} proposed the Bayesian bootstrap as an algorithm for sampling from versions of the implied posterior, and it has since become closely associated with this model. Use $\mathbf{z}_i = \{\mathbf{x}_i,y_i\}$ to denote the features and response for observation $i$. We suppose that data are drawn \emph{independently} from a finite set of $L$ possible values, \begin{equation}\label{dgpeq} dgp = \mathrm{p}(\mathbf{z}) = \sum_{l=1}^L \omega_l \mathds{1}_{[\mathbf{z} = \boldsymbol{\zeta}_l]} \end{equation} where $\omega_l\geq0~\forall l$ and $\sum_l \omega_l = 1$. Thus the generating process for observation $i$ draws $l_i$ from a multinomial with probability $\omega_{l_i}$, and this indexes one of the $L$ support points. Since $L$ can be arbitrarily large, this so-far implies no restrictive assumptions beyond that of independence. The conjugate prior for $\boldsymbol{\omega}$ is a Dirichlet distribution, written $\mathrm{Dir}(\boldsymbol{\omega}; \boldsymbol{\nu}) \propto \prod_{l=1}^L\omega_l^{\nu_l-1}$. We will parametrize the prior with a single concentration parameter $\boldsymbol{\nu} = a >0$, such that $\mathbb{E}[\omega_l] = a/La = 1/L$ and $\mathrm{var}(\omega_l) = (L-1)/[L^2(La+1)]$. Suppose you have the observed sample $\mathbf{Z} = [\mathbf{z}_1 \cdots \mathbf{z}_n]'$. For convenience, we allow $\boldsymbol{\zeta}_l=\boldsymbol{\zeta}_k$ for $l \neq k$ in the case of repeated values. Write $l_1 \dots l_n = 1 \dots n$ so that $\mathbf{z}_i = \boldsymbol{\zeta}_i$ and $\mathbf{Z} = [\boldsymbol{\zeta}_1 \cdots \boldsymbol{\zeta}_n]'$. Then the posterior distribution for $\boldsymbol{\omega}$ has $\nu_i = a+1$ for $i\leq n$ and $\nu_l = a$ for $l>n$, so that \begin{equation}\label{omegapost} \mathrm{p}(\boldsymbol{\omega}) \propto \prod_{i=1}^n \omega_i^{a} \prod_{l=n+1}^L \omega_l^{a-1}. \end{equation} This, in turn, defines our posterior for the data generating process through our sampling model in (\ref{dgpeq}). There are many possible strategies for specification of $a$ and $\zeta_l$ for $l>n$.\footnote{The unobserved $\zeta_l$ act as data we imagine we might have seen, to smooth the posterior away from the data we have actually observed. See \citet{poirier_bayesian_2011} for discussion.} The non-informative prior that arises as $a\rightarrow 0$ is a convenient default: in this limit, $\omega_l = 0$ with probability one for $l>n$.\footnote{For $l>n$ the posterior has $\mathbb{E}[\omega_l]=0$ with variance $\mathrm{var}(\omega_l) = \lim_{a \to 0} a[n+a(L-1)]/[(n+La)^2(n+La+1)] = 0$.} We apply this limiting prior throughout, such that our posterior for the data generating process is a multinomial draw from the observed data points, with a uniform $\mathrm{Dir}(\boldsymbol{1})$ distribution on the $\boldsymbol{\omega} = [\omega_1 \dots \omega_n]'$ sampling probabilities. We will also find it convenient to parametrize un-normalized $\boldsymbol{\omega}$ via IID exponential random variables: $\boldsymbol{\theta} = [\theta_1 \dots \theta_n]$, where $\theta_i \stackrel{ind}{\sim} \mathrm{Exp}(1)$ in the posterior and $\omega_i = \theta_i/|\boldsymbol{\theta}|$ with $|\boldsymbol{\theta}| = \sum_i \theta_i$. \subsection{CART as posterior functional} Conditional upon $\boldsymbol{\theta}$, the population tree $\mathcal{T}(dgp)$ is defined through a weighted-sample CART fit.
In particular, given data $\mathbf{Z}^\eta = \{\mathbf{X}^\eta,\mathbf{y}^\eta\}$ in node $\eta$, sort through all dimensions of all observations in $\mathbf{Z}^\eta$ to find the split that minimizes the average of some $\boldsymbol{\omega}$-weighted impurity metric across the two new child nodes. For example, in the case of regression trees, the impurity to minimize is the weighted squared error \begin{equation} \mathcal{I}(\mathbf{y}^\eta) = \sum_{i\in \eta} \theta_i (y_i - \mu_\eta )^2 \end{equation} with $\mu_\eta = \sum_{i\in \eta} \theta_i y_i/|\boldsymbol{\theta}^\eta|$. For our classification trees, we minimize Gini impurity. The split candidates are restricted to satisfy a minimum leaf probability, such that every node in the tree must have $|\boldsymbol{\theta}^\eta|$ greater than some threshold.\footnote{In practice this can be replaced with a threshold on the minimum number of observations at each leaf.} This procedure is repeated on every currently-terminal tree node until it is no longer possible to split while satisfying the minimum probability threshold. To simplify notation, we refer to the resulting CART tree as $\mathcal{T}(\boldsymbol{\theta})$. \subsection{Posterior sampling}\label{posterior-sampling} Following \cite{rubin_bayesian_1981}, we can sample from the posterior on $\mathcal{T}(\boldsymbol{\theta})$ via the Bayesian bootstrap in Algorithm 1. \begin{algorithm}[ht] \caption{Bayesian Forest} \label{alg:bf} \begin{algorithmic} \FOR{$b=1$ {\bfseries to} $B$} \STATE draw $\boldsymbol{\theta}^b \stackrel{iid}{\sim} \mathrm{Exp}(\mathbf{1})$ \STATE run weighted-sample CART to get $\mathcal{T}_b = \mathcal{T}(\boldsymbol{\theta}^b)$ \ENDFOR \end{algorithmic} \end{algorithm} We've implemented BF through simple adjustment of the \texttt{ensemble} module of python's \texttt{scikit-learn} \citep{scikit-learn}.\footnote{Replace variable \texttt{sample\_counts} in \texttt{forest.py} to be drawn exponential rather than multinomial when bootstrapping.} As a quick illustration, Figure \ref{mcycle} shows three fits for the conditional mean acceleration of a motorcycle helmet after impact in a crash: sample CART $\mathcal{T}(\mathbf{1})$, a single draw of $\mathcal{T}(\boldsymbol{\theta})$, and the BF average prediction \citep[data are from the MASS R package,][]{mass}. Note that the Bayesian forest differs from Breiman's random forest only in that the weights are drawn from an exponential (or Dirichlet, when normalized) distribution rather than a Poisson (or multinomial) distribution. To the extent that RF sampling provides a coarse approximation to the BF samples, the former is a convenient approximation. Moreover, we will find little difference in predictive performance between BFs and RFs, so that one should feel free to use readily available RF software while still relying on the ideas of this paper for intuition and interpretation. \subsection{Bayesian tree-as-parameter models}\label{bayesian-tree-as-parameter-models} Other Bayesian analyses of DTs treat the tree as a parameter which governs the DGP, rather than a functional thereof, and thus place some set of restrictions on the distributional relationship between inputs and outputs. The original Bayesian tree model is the Bayesian CART (BCART) of \citet{chipman_bayesian_1998}. BCART defines a likelihood where response values for observations allocated (via $\mathbf{x}$) to the same leaf node are IID from some parametric family (e.g., for regression trees, observations in each leaf are IID Gaussian with shared variance and mean).
The BCART authors also propose linear regression leaves, while the Treed Gaussian Processes of \citet{gramacy_bayesian_2008} use Gaussian process regression at each leaf. The models are fit via Markov chain Monte Carlo (above examples) or sequential Monte Carlo \citep{taddy_dynamic_2011} algorithms that draw from the posterior by proposing small changes to the tree (e.g.,~grow or prune).\footnote{The Bayesian bootstrap is also a potential sampling tool in this tree-as-parameter setting. See \citet{clyde_bagging_2001} for details on the technique and its relation to model averaging.} Monte Carlo tree sampling uses the same incremental moves as those employed in CART. Unfortunately, this means that the samplers tend to get stuck in locally-optimal regions of tree-space. Bayesian Additive Regression Trees \citep[BART;][]{chipman_bart:_2010} replace a single tree with the sum of many small trees. An input vector is allocated to a leaf in each tree, and the corresponding response distribution has mean equal to the average of each leaf node value and variance shared across observations. Original BART has Gaussian errors, and extensions include Gaussian mixtures. Since BART only samples short trees, it is fast and mixes well. Easy sampling comes at a potentially steep price: the assumption of homoskedastic additive errors.\footnote{For classification, this is manifest through a probit link.} Despite this restriction, empirical studies have repeatedly shown BART to outperform alternative flexible prediction rules. Many response variables (especially after, say, log transformation) have the property that they are well fit by flexible regression with homoskedastic errors. Whenever the model assumptions in BART are close enough to true, it should outperform methods which do not make those assumptions. In contrast, the npB interpretation of forests (BF or RF) makes it clear that they are suited to applications where the response distribution defies parametric representation, such that CART fit is the most useful DGP summary available. We often encounter this situation in application. For example, internet transaction data combines discrete point masses with an incredibly fat right tail \citep[e.g., see][]{taddy_heterogeneous_2014}. In academia it is common to transform such data before analysis, but businesses wish to target the response on the scale measured (e.g., clicks or dollars) and need to build a predictor that does well on that scale. \begin{figure} ~\includegraphics[width=0.45\textwidth]{graphs/fried} \vskip -.25cm \caption{\label{friedpic}Friedman experiment predictive RMSE over 100 runs.} \end{figure} \subsection{Friedman example}\label{friedman-example} \begin{figure*} ~~~\includegraphics[width=0.325\textwidth]{graphs/ca_hist} ~~~~~~~ \includegraphics[width=0.6\textwidth]{graphs/ca_rmse} \caption{\label{calihist} California housing data. The response sample is on the left, and the right panel shows predictive RMSE across 10-fold CV.} \end{figure*} A common simulation experiment in evaluating flexible predictors is based around the \citet{friedman_multivariate_1991} function, \begin{align}\label{friedfun} y &= f(\mathbf{x}) + \varepsilon \\ &= 10\mathrm{sin}(\pi x_1 x_2) + 20 (x_3-0.5)^2 + 10x_4 + 5x_5 + \varepsilon,\notag \end{align} where $\varepsilon \sim \mathrm{N}(0,1)$ and $x_j \sim \mathrm{U}(0,1)$. For our experiment, we follow previous authors by including as features for training the spurious $x_6 \dots x_{p}$.
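For concreteness, the simulation itself takes only a few lines (a sketch of the setup; here we take $p=10$, a choice of ours for illustration, so that $x_6,\dots,x_{10}$ are spurious):

\begin{verbatim}
import numpy as np

def friedman_sample(n, p=10, seed=0):
    # draw (X, y, f) from the Friedman function: x_j ~ U(0,1), eps ~ N(0,1);
    # only x_1, ..., x_5 enter the true mean f(x)
    rng = np.random.default_rng(seed)
    X = rng.uniform(size=(n, p))
    f = (10 * np.sin(np.pi * X[:, 0] * X[:, 1])
         + 20 * (X[:, 2] - 0.5)**2 + 10 * X[:, 3] + 5 * X[:, 4])
    y = f + rng.normal(size=n)
    return X, y, f

X_tr, y_tr, _ = friedman_sample(100, seed=1)
X_te, _, f_te = friedman_sample(1000, seed=2)   # RMSE is measured against f(x)
\end{verbatim}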
Each regression model is fit to 100 random draws from (\ref{friedfun}) and tested at 1000 new $\mathbf{x}$ locations. Root mean square error (RMSE) is calculated between predicted and true $f(\mathbf{x})$. Results over 100 repeats are shown in Figure \ref{friedpic}.\footnote{In this and the next example, CART-based algorithms had minimum-leaf-samples set at 3 and the ensembles contain 100 trees. BART and BCART run at their \texttt{bayestree} and \texttt{tgp} R package defaults, except that BART draws only 200 trees after a burn-in of 100 MCMC iterations. This is done to get comparable compute times; for these settings BART requires around 75 seconds per fold with the California housing data, compared to BF's 10 seconds when run in serial, and 3 seconds running on 4 cores.} As forecast, the only model which assumes the true homoskedastic error structure, BART, well outperforms all others. The two forests, BF and RF, are both a large improvement over DT. The BF average RMSE is only about 1\% better than the RF's, since Bayesian and classical bootstrapping differ little in practice. BCART does very poorly: worse than a single DT. We hypothesize that this is due to the notoriously poor mixing of the BCART MCMC, such that this fit is neither finding a posterior mean (as it is intended to) nor optimizing to a local posterior mode (as DT does). We also include the extremely randomized trees (ET) of \citet{geurts_extremely_2006}, which are similar to RFs except that (a) instead of optimizing greedy splits, a few candidate splits are chosen randomly and the best is used and (b) the full unweighted data sample is used to fit each tree. ET slightly outperforms both BF and RF; in our experience this happens in small datasets where the restriction of population support to observed support (as assumed in our npB analysis) is invalid and the forest posteriors are over-fit. \subsection{California housing data}\label{california-housing-data} As a more realistic example, we consider the California housing data of \citet{calidata} consisting of median home price along with eight features (location, income, housing stock) for 20640 census blocks. Since prices tend to vary with covariates on a multiplicative scale, standard analyses take log median price as the response of interest. Instead, we will model the conditional expectation of dollar median price. This is relevant to applications where prediction of raw transaction data is of primary interest. The marginal response distribution for median home price is shown in the left panel of Figure \ref{calihist}.\footnote{Values appear to have been capped at \$500k.} Figure \ref{calihist} shows results from a 10-fold cross-validation (CV) experiment, with details in Table \ref{calitab}. Except for DT and BCART, which still perform worse than all others\footnote{We've left off BCART's average RMSE of \$82k, 70\% WTB.}, results are reversed from those for the small-sample and homoskedastic Friedman data. Both RF and BF do much better than BART and its restrictive error model. BF again offers a small gain over RF. Since the larger sample size makes observed support a better approximation to population support, the forests outperform ET. The EBF (empirical Bayesian forest) and SSF (sub-sample forest) predictors are based on distributed computing algorithms that we introduce later in Section 4. At this point we note only that the massively scalable EBF is amongst the top performers; the next section helps explain why.
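For reference, Algorithm 1 requires only a few lines on top of standard CART software. The following minimal regression sketch (assuming scikit-learn's \texttt{DecisionTreeRegressor}; our actual implementation instead patches \texttt{forest.py} as noted above) conveys the idea:

\begin{verbatim}
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def bayesian_forest(X, y, B=100, min_samples_leaf=3, seed=0):
    # Algorithm 1: each tree is CART fit under iid Exp(1) observation
    # weights (equivalently, Dirichlet weights up to normalization)
    rng = np.random.default_rng(seed)
    trees = []
    for _ in range(B):
        theta = rng.exponential(1.0, size=len(y))
        tree = DecisionTreeRegressor(min_samples_leaf=min_samples_leaf)
        trees.append(tree.fit(X, y, sample_weight=theta))
    return trees

def bf_predict(trees, X):
    # approximate posterior mean: average over the sampled trees
    return np.mean([t.predict(X) for t in trees], axis=0)
\end{verbatim}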
\begin{table} {\footnotesize \begin{tabular}{r|c c c c c c c} &BF & RF & EBF & ET & SSF & BART & DT \\ \cline{2-8}\rule{0pt}{3ex} RMSE & 48.2 & 48.5 & 49.4 & 52.5 & 53.1 & 54.8 & 65.6 \\ \%WTB & 0.0& 0.5& 2.4& 8.7&10.0&13.4&35. \end{tabular}} \caption{\label{calitab} Average RMSE in \$10k and \% worse-than-best for the California housing data 10-fold CV experiment.} \end{table} \section{Understanding the posterior over trees}\label{treeuncertainty} Theory on decision trees is sparse. The original CART book of \citet{breiman_classification_1984} provides consistency results; they show that any partitioner that is able to learn enough to split the DGP support into very low probability leaves will be able to reproduce the conditional response distribution of that DGP. However, this result says nothing about the structure of the underlying trees, nor does it say anything about the ability of a tree to predict when there is not enough data to finely partition the DGP. Others have focused on the frequentist properties of individual split decisions. In the CHAID work of \citet{kass_exploratory_1980}, splits are based upon $\chi^2$ tests at each leaf node. \citet{loh_regression_2002} and \citet{hothorn_unbiased_2006} are generalizations, both of which combat multiple testing and other biases inherent in tree-building through a sequence of hypothesis tests. However, such contributions provide little intuition in the setting where we are not working from a no-split null hypothesis distribution. Despite this lack of theory, it is generally thought that there is large uncertainty (sampling or posterior) about the structure of decision trees. For example, \citet{geurts_investigation_2000} present extensive simulation of tree uncertainty and find that the location and order of splits is often no better than random (this motivates work by the same authors on extremely randomized trees). The intuition behind such randomness is clear: the probability of a tree having any given branch structure is the product of conditional probabilities for each successive split. After enough steps, any specific tree approaches probability zero. However, it is possible that elements of the tree structure are stable. For example, in the context of boosting, \citet{appel_quickly_2013} argue that the conditionally optimal split locations for internal nodes can be learned from subsets of the full data allocated to each node. They use this to propose a faster boosting algorithm. In this section we make a related claim: in large samples, there is little posterior variation for the top of the tree. We make the point first in theory, then through empirical demonstration. \subsection{Probability of the sample CART tree}\label{probability-of-the-sample-cart-tree} We focus on regression trees for this derivation, wherein node impurity is measured as the sum of squared errors. Consider a simplified setup with each $x_j \in \{0,1\}$ a binary random variable (possibly created through discretization of a continuous input). We'll investigate here the probability that the impurity-minimizing split on a given node is the same for a given realization of posterior DGP weights as it is for the unweighted data sample. Suppose $\{\mathbf{z}_1 \ldots \mathbf{z}_n\}$ are the data to be partitioned at some tree node, with $\mathbf{z}_i = [y_i, x_{i1}, \dots, x_{ip}]'$. Say that $\texttt{f}_j=\{i:x_{ij}=0\}$ and $\texttt{t}_j=\{i:x_{ij}=1\}$ are the partitions implied by splitting on a given $x_j$.
The resulting impurity is \begin{equation} \sigma^2_j(\boldsymbol{\theta}) = \frac{1}{n}\sum_i \theta_i \left[y_i - \mu_j(x_{ij})\right]^2, \end{equation} $\mu_j(0) = \sum_{i \in \texttt{f}_j}y_i \,\theta_i/\left|\boldsymbol{\theta}_{\texttt{f}_j}\right|$, $\mu_j(1) = \sum_{i \in \texttt{t}_j}y_i \,\theta_i/\left|\boldsymbol{\theta}_{\texttt{t}_j}\right|$. We could use the Bayesian bootstrap to simulate the posterior for $\sigma_j$ implied by the exponential posterior on $\boldsymbol{\theta}$, but an analytic expression is not available. Instead, we follow the approach used in \citet{lancaster_note_2003}, \citet{poirier_bayesian_2011}, and \citet{taddy_heterogeneous_2014}: derive a first-order Taylor approximation to the function $\sigma_j$ and describe the posterior for that related functional. In particular, the $1\times n$ gradient of $\sigma_j$ with respect to $\boldsymbol{\theta}$ is \begin{align} \nabla \sigma^2_j & = \nabla \frac{1}{n}\left[\sum_i \theta_iy_i^2 - \frac{1}{\left|\boldsymbol{\theta}_{\texttt{f}}\right|}\left(\mathbf{y}_{\texttt{f}}'\boldsymbol{\theta}_{\texttt{f}}\right)^2 - \frac{1}{\left|\boldsymbol{\theta}_{\texttt{t}}\right|}\left(\mathbf{y}_{\texttt{t}}'\boldsymbol{\theta}_{\texttt{t}}\right)^2\right] \end{align} which has elements $ \nabla_{i} \sigma^2_j = \left(y_i - \mu_j(x_{ij})\right)^2/n $. The Taylor approximation is then \begin{align} \sigma^2_j \approx \tilde\sigma^2_j &= \sigma^2_j(\boldsymbol{1}) + \nabla \sigma^2_j\big |_{\boldsymbol{\theta}=\mathbf{1}} (\boldsymbol{\theta} - \boldsymbol{1}) \notag\\ &= \frac{1}{n}\sum_i \theta_i \left(y_i - \bar{y}_j(x_{ij})\right)^2 \end{align} with $\bar y_j(0) = \frac{1}{n_{\texttt{f}_j}}\sum_{i \in \texttt{f}_j}y_i$ and $\bar y_j(1) = \frac{1}{n_{\texttt{t}_j}}\sum_{i \in \texttt{t}_j}y_i$ the observed response averages in each partition. Suppose that $\sigma_1(\boldsymbol{1}) < \sigma_j(\boldsymbol{1})~\forall j > 1$, so that variable `1' is that selected for splitting based on the unweighted data sample. Then we can quantify variability about this selection by looking at differences in approximate impurity, \begin{align} \Delta_j(\boldsymbol{\theta}) & = \tilde\sigma_1^2 - \tilde\sigma_j^2 \\ & = \frac{1}{n}\sum_i \theta_i \big[ \bar{y}_1^2(x_{i1})-\bar{y}^2_j(x_{ij}) - \notag\\ &~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2y_i\left(\bar{y}_1(x_{i1})-\bar{y}_j(x_{ij})\right)\big] \notag \\ & \equiv \frac{1}{n}\sum_i \theta_i d_{ji}. \notag \end{align} Say $\mathbf{d}_j = [d_{j1}\ldots d_{jn}]'$ is the vector of squared error differences. Then the total difference has mean $\mathbb{E}\Delta_j = \bar{d}_j$ and variance $\mathrm{var}\Delta_j = \mathbf{d}_j'\mathbf{d}_j/n^2$. Since $\Delta_j$ is the mean of independent Exponential random variables with known means and variances, the central limit theorem applies and it converges in distribution to a Gaussian: \begin{equation} \sqrt{n}(\Delta_j(\boldsymbol{\theta}) -\bar{d}_j)\rightsquigarrow \mathrm{N}(0,~\mathbf{d}_j'\mathbf{d}_j/n ).
\end{equation} The weighted-sample impurity-minimizing split matches that for the unweighted sample if and only if all $\Delta_j$ are negative, which occurs with probability (note $\bar d_j < 0$) \begin{align} \mathrm{p}\left(\Delta_j< 0 :~ j = 2\dots p \right) & \geq 1 - \sum_{j=2}^p \mathrm{p}( \Delta_j > 0 ) \\ & \rightsquigarrow 1 - \sum_{j=2}^p \Phi\left( -\frac{\sqrt{n}\left|\bar{d}_j\right|}{\sqrt{\mathbf{d}_j'\mathbf{d}_j/n}}\right)\notag \\ & \geq 1 - \frac{1}{\sqrt{2\pi}} \sum_{j=2}^p \frac{\exp(-n z^2_j/2)}{z_j\sqrt{n}} \notag \end{align} where $z_j = \left|\bar{d}_j\right|\left(\mathbf{d}_j'\mathbf{d}_j/n\right)^{-\frac{1}{2}}$ is the sample mean increase in impurity over the sample standard deviation of impurity. This ratio is bounded in probability, so that the exponential bound goes to zero very quickly with $n$. In particular, ignoring variation in $z_j$ across variables, we arrive at the approximate lower bound on the probability of the sample split \[ \mathrm{p}\left(\text{posterior split matches sample split}\right) \gtrsim 1 - \frac{p}{\sqrt{n}} e^{-n}. \] Even allowing for $p \approx n\times d$, with $d$ some underlying continuous variable dimension and $p$ the input dimension after discretization of these variables, the probability of the observed split goes to one at order $\mathcal{O}(n^{-1})$ if $d < e^{n}/n^{3/2}$. Given this, why is there any uncertainty at all about trees? The answer is recursion: each split is conditionally stable given the sample at the current node, but the probability of any specific sequence of splits is roughly (we don't actually have independence) the product of individual node split probabilities. This will get small as we move deeper down the tree, and given one split different from the sample CART the rest of the tree will grow arbitrarily far from this modal structure. In addition, the sample size going into our probability bounds is shrinking exponentially with each partition, whereas the dimension of eligible split variables is reduced only by one at each level. Regardless of overall tree variability, we can take from this section an expectation that for large samples the high-level tree structure varies little across posterior DGP realizations. The next section shows this to be the case in practice. \begin{figure*} ~~~ \includegraphics[width=0.6\textwidth]{graphs/ca_trunk} ~~~~~~ \includegraphics[width=0.31\textwidth]{graphs/ca_splits} \caption{\label{calistable} California housing data. The left shows the CART fit on unweighted data with a minimum of 3500 samples-per-leaf. The right panel shows the distribution of first and second split locations -- always on median income -- across 100 draws from the BF posterior. } \end{figure*} \subsection{Trunk stability in California}\label{stability-in-california} We'll illustrate the stability of high-level tree structure on our California housing data. Consider a `trunk' DT fit to the unweighted data with no fewer than 3500 census blocks (out of 20640 total) in each leaf partition. This leads to the tree on the left of Figure \ref{calistable} with five terminal nodes. It splits on the obvious variable of median income, and also manages to divide California into its north and south regions (34.455 degrees north is just north of Santa Barbara). To investigate tree uncertainty, we can apply the BF algorithm and repeatedly fit similar CART trees to randomly-weighted data. Running a 100-tree BF, we find that the sample tree occurs 62\% of the time.
The second most common tree, occurring 28\% of the time, differs only in that it splits on median income again instead of on housing median age. Thus 90\% of the posterior weight is on trees that split on income twice, and then latitude. Moreover, a striking 100\% of trees have their first two splits on median income. From this, we can produce the plot in the right panel of Figure \ref{calistable} showing the locations of these first two splits: each split-location posterior is tightly concentrated around the corresponding sample CART split. \section{Empirical Bayesian forests} Empirical Bayes (EB) is an established framework with a successful track record in fast approximate Bayesian inference; see, e.g., \citet{efron_large-scale_2010} for an overview. In parametric models, EB can be interpreted as fixing at their marginal posterior maximum the parameters at higher levels of a hierarchical model. For example, in the simple setting of many group means (the average student test score for each of many schools) shrunk to a global center (the outcome for an imaginary `average school'), an EB procedure will shrink each group mean toward the overall sample average (each school towards the all-student average). \citet{kass_approximate_1989} investigate such procedures and show that, under fairly general conditions, the EB conditional posterior for each group mean quickly approaches the fully Bayesian unconditional posterior as the sample size grows. \textit{This occurs because there is little uncertainty about the global mean.} CART trees are not a parametric model, but they are hierarchical and admit an interpretation similar to those studied in \citet{kass_approximate_1989}. The data which reaches any given interior node is a function of the partitioning implied by nodes shallower in the tree. Moreover, due to the greedy algorithm through which CART grows, a given shallow trunk is unaffected by changes to the tree structure below. Finally, Section 3 demonstrated that, like high levels in a parametric hierarchical model, there is relatively little uncertainty about high levels of the tree. An empirical Bayesian forest (EBF) takes advantage of this structure by fixing the highest levels in the hierarchy -- the earliest CART splits -- at a single estimate of high posterior probability.\footnote{These earliest splits -- requiring impurity search over large observation sets -- are also the most expensive to optimize.} We fit a single shallow CART \textit{trunk} to the unweighted sample data, and sample a BF ensemble of \textit{branches} at each terminal node of this trunk. The initial CART trunk maps observations to their branch, and each branch BF deals with a dataset of manageable size and can be fit in parallel without any communication with the other branches. That is, EBF replaces the BF posterior sample of trees with a conditional posterior sample that takes the pre-fit trunk as fixed; a minimal sketch of the recipe is given below. Since the trunk has relatively low variance for large samples (precisely the setting where such distributed computing is desirable), the EBF should provide predictions similar to those of the full BF at a fraction of the cost. In contrast, consider the `sub-sample forest' (SSF) algorithm, which replaces the full data with random sub-samples of manageable size. Independent forests are fit to each sub-sample and predictions are averaged across all forests.
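To make the contrast concrete, here is the promised sketch of the EBF recipe (a single-machine illustration reusing the hypothetical \texttt{bayesian\_forest} and \texttt{bf\_predict} functions from the earlier sketch; in a real deployment each branch fit would run on its own machine):

\begin{verbatim}
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def fit_ebf(X, y, n_trunk_leaves=5, B=100):
    # EBF: fix a shallow CART trunk, then fit an independent BF
    # on the data routed to each trunk leaf (the parallel step)
    trunk = DecisionTreeRegressor(max_leaf_nodes=n_trunk_leaves).fit(X, y)
    leaf = trunk.apply(X)
    branches = {l: bayesian_forest(X[leaf == l], y[leaf == l], B=B)
                for l in np.unique(leaf)}
    return trunk, branches

def ebf_predict(trunk, branches, X):
    leaf = trunk.apply(X)               # the trunk routes rows to branches
    yhat = np.empty(len(X))
    for l, forest in branches.items():
        idx = leaf == l
        if idx.any():
            yhat[idx] = bf_predict(forest, X[idx])
    return yhat
\end{verbatim}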
SSF is a commonly applied strategy \citep[e.g., see mention, but not recommendation, of it in][]{panda_planet:_2009}, but it implies using only partial data for learning deep tree structure. Although the tree trunks are stable, the full tree is highly uncertain and learning such structure is precisely where you want to use a full Big Data sample. \subsection{Out-of-sample experiments}\label{oos-experiment} In the California housing experiment of Figure \ref{calihist} and Table \ref{calitab}, EBF\footnote{EBFs use five node trunks in this Section. The SSFs are fit on data split into five equally sized subsets.} predictions are only 2\% worse than those from the full BF. In contrast, SSF predictions are 10\% worse. We consider two additional prediction problems. The first example, taken from \cite{CorCer09}, involves prediction of an `expert' quality ranking on the scale of 0-10 for wine based upon 11 continuous attributes (physiochemical properties of the wine) plus wine color (red or white). There are 4898 observations. Results are in Figure \ref{wine}: EBF is only 1\% worse than the full BF, while SSF is 12\% worse. The second example is from the Nielsen Consumer Panel data, available for academic research through the Kilts Center at Chicago Booth, and our sample contains 73,128 purchases of light beer in a number of US markets during 2004. The response of interest is brand choice, of a possible five major light beer labels: Budweiser, Miller, Coors, Natural, and Busch. Each purchase is associated with a household that is codified through 16 standard demographic categorical variables (maximum age, total income, main occupation, etc). Applying classification forests and DTs (based on Gini impurity) leads to the results in Figure \ref{beer}: EBF is only 4.4\% worse than the BF, while SSF is 38\% worse. \begin{figure} \begin{minipage}{0.5\linewidth} \includegraphics[width=\textwidth]{graphs/wine} \end{minipage} ~~ \begin{minipage}{0.4\linewidth} {\footnotesize \begin{tabular}{c c | l} $\overline{\text{RMSE}}$ & \% WTB & \\ \cline{1-2}\rule{0pt}{3ex} 0.5905 & 0.0 & BF \\ 0.5953 & 0.8 & EBF \\ 0.6607 & 11.9 & SSF \\ 0.7648 & 29.5 & DT \\ \end{tabular}} \end{minipage} \caption{\label{wine}Wine data: 10-fold OOS prediction experiment; Root Mean Square Error and \% Worse than Best for the mean RMSE. } \end{figure} \begin{figure} \begin{minipage}{0.5\linewidth} \includegraphics[width=\textwidth]{graphs/beer} \end{minipage} ~~ \begin{minipage}{0.4\linewidth} {\footnotesize \begin{tabular}{c c | l} $\overline{\text{MCR}}$ & \% WTB & \\ \cline{1-2}\rule{0pt}{3ex} 0.4341 & 0.0 & BF \\ 0.4531 & 4.4 & EBF \\ 0.5989 & 38.0 & SSF \\ 0.6979 & 60.8 & DT \\ \end{tabular}} \end{minipage} \caption{\label{beer} Beer data: 10-fold OOS prediction experiment; Misclassification Rate and \% Worse than Best for the average MCR. } \end{figure} \subsection{Choosing the trunk depth} From the distributed computing perspective which motivates this work, you should fix the trunk only as deep as you must. In our full scale applications (next section), this means setting branch size so that the working memory required to fit a forest at each branch is slightly less than the memory available on each machine. This leaves open questions, e.g., on the trade-off between the number of trees in the forest and the depth of the trunk. We don't yet have rigorous answers, but the npB framework can help in further investigation.
In the interim, Table \ref{msltab} shows the effect on OOS error of doubling and halving the minimum leaf size for the EBFs in our three examples. \begin{table}[h] {\footnotesize \begin{tabular}{c | c c c | c c c | c c c} & \multicolumn{3}{l|}{CA housing} & \multicolumn{3}{l|}{Wine} &\multicolumn{3}{l}{Beer} \\ \cline{2-10} \rule{0pt}{3ex} \!\!\!\!\!\!MLS $10^3$ & 6 & 3 & 1.5 & 2 & 1 & 0.5 & 20 & 10 & 5\\ \!\!\!\!\!\!\% WTB & \!1.6 & \!2.4 & \!4.3 & \!0.3 & \!0.8 & \!2.2 & \!1.0 & \!4.4 & \!7.6 \end{tabular}} \caption{\label{msltab}\% Worse than Best on OOS error for different minimum leaf sizes (MLS), specified in thousands of observations.} \end{table} \section{Scaling for Big Data} To close, we note that this work is motivated by the need for reliable forest fitting algorithms that can be deployed on millions or hundreds of millions of observations, as encountered when analyzing internet commerce data. A number of additional engineering details are required for such deployment, but the basic approach is straightforward: an initial trunk is fit (possibly to a data subset)\footnote{Note that the original CART trunk can itself be fit in a distributed fashion, e.g., using the \texttt{MLlib} library for Apache Spark.}, and this trunk acts as a sorting function to map observations to separate locations corresponding to each branch. Forests are then fit at each location (machine) for each branch. Preliminary work at eBay.com applies EBFs for prediction of `Bad Buyer Experiences' (e.g. complaints, returns, or shipping problems) on the site. Training on a relatively small sample of 12 million transactions, the EBF algorithm using 32 branch chunks is able to provide a 1.3\% drop in misclassification over the SSF alternatives. This amounts to more than 20,000 extra detected BBE occurrences over the short sample window, potentially giving eBay the opportunity to try to stop them before they occur. \section*{Acknowledgments} Taddy is also a scientific consultant at eBay and a Neubauer family faculty fellow at the University of Chicago.
\section{Introduction} The interpretation of Quantum Mechanics (QM) remains a subject of intense investigation and heated debate, as well as the source of puzzlement and intellectual discomfort for many. To deepen our understanding of QM, various ontological models which posit some type of underlying physical ``reality'' to QM have been proposed. An illuminating discussion on ontological models of QM can be found in Harrigan and Spekkens \cite{Harrigan:2010}, in which the models are classified into two broad categories: $\psi$-ontic, and $\psi$-epistemic. $\psi$-ontic refers to models in which different wave-functions $\psi$ correspond to different physical states of the system itself, while $\psi$-epistemic refers to models in which different $\psi$'s correspond to different states of our (limited) knowledge about the system. Both types of models have their pros and cons, and are often unable to completely reproduce the predictions of canonical QM. Nevertheless, scrutinizing where they succeed and where they fail will help us identify how or whether the models can be improved, clarify the limitations of our ``classical'' thinking, and guide us to think more ``quantum-ly'' about QM. In this note, we will look at a simple $\psi$-epistemic ontological model proposed by Spekkens in Ref.~\cite{Spekkens:2007}. The most elementary version of the model succeeds in reproducing the predictions of canonical QM of spin restricted to the six states of $\ket{0}$ (spin up), $\ket{1}$ (spin down), $\ket{\pm}=\frac{1}{\sqrt{2}}(\ket{0}\pm\ket{1})$, and $\ket{\!\pm\! i}=\frac{1}{\sqrt{2}}(\ket{0}\pm i\ket{1})$. Furthermore, the two-``spin'' system in Spekkens' model possesses the analog of entanglement. However, one aspect of QM which is missing from Spekkens' model is \textit{linearity}. Linearity, or the superposition principle, is what leads to the various mysteries of canonical QM, so it is a property one would like to impart on any model of QM. We argue that this can be done for Spekkens' model by mapping it to a QM defined over the finite field $\mathbb{F}_5$ \cite{GQM:2013,GQM-Spin:2013,Chang:2013,Chang:2014,QFun:2014}. \section{Spekkens' Toy Model} We begin with a quick review of Spekkens' toy model \cite{Spekkens:2007}. \subsection{Ontic \& Epistemic States} Spekkens' \textit{knowledge balance principle} states that for a system which requires $2N$ bits of information to completely specify, only $N$ bits of information may be known about the system at any time. The state of the system is called the \textit{ontic state}, while the state of our (limited) knowledge about the system is called the \textit{epistemic state}. The most elementary system onto which Spekkens' knowledge balance principle could be applied would be one that requires $2$ bits of information to completely specify. Such an elementary system would have $2^2 = 4$ ontic states which can be represented graphically as \begin{equation} \begin{ytableau} *(white) 1 & *(white) 2 & *(white) 3 & *(white) 4 \end{ytableau} \end{equation} The elementary system inhabits one and only one of these four ontic states at any time. The epistemic states each represent 1 bit of knowledge about this system, which would only allow us to narrow down the possible ontic states to two: $a\vee b$ where $a,b\in\{1,2,3,4\}$ and $a\neq b$.
$a\vee b$ and $b\vee a$ represent the same knowledge, so we will use the convention that $a<b$ for all epistemic states $a\vee b$.\footnote{% The symbol $\vee$ represents an OR.} Therefore, an elementary system possesses six epistemic states and these can be represented by shading the boxes where the actual ontic state may be: \begin{eqnarray} 1\vee 2 & \quad & \begin{ytableau} *(green) & *(green) & *(white) & *(white) \end{ytableau} \;=\; \begin{ytableau} *(white) \bullet & *(white) & *(white) & *(white) \end{ytableau} \vee \begin{ytableau} *(white) & *(white) \bullet & *(white) & *(white) \end{ytableau} \cr 3\vee 4 & \quad & \begin{ytableau} *(white) & *(white) & *(green) & *(green) \end{ytableau} \;=\; \begin{ytableau} *(white) & *(white) & *(white) \bullet & *(white) \end{ytableau} \vee \begin{ytableau} *(white) & *(white) & *(white) & *(white) \bullet \end{ytableau} \cr 1\vee 3 & \quad & \begin{ytableau} *(green) & *(white) & *(green) & *(white) \end{ytableau} \;=\; \begin{ytableau} *(white) \bullet & *(white) & *(white) & *(white) \end{ytableau} \vee \begin{ytableau} *(white) & *(white) & *(white) \bullet & *(white) \end{ytableau} \cr 2\vee 4 & \quad & \begin{ytableau} *(white) & *(green) & *(white) & *(green) \end{ytableau} \;=\; \begin{ytableau} *(white) & *(white) \bullet & *(white) & *(white) \end{ytableau} \vee \begin{ytableau} *(white) & *(white) & *(white) & *(white) \bullet \end{ytableau} \cr 2\vee 3 & \quad & \begin{ytableau} *(white) & *(green) & *(green) & *(white) \end{ytableau} \;=\; \begin{ytableau} *(white) & *(white) \bullet & *(white) & *(white) \end{ytableau} \vee \begin{ytableau} *(white) & *(white) & *(white) \bullet & *(white) \end{ytableau} \cr 1\vee 4 & \quad & \begin{ytableau} *(green) & *(white) & *(white) & *(green) \end{ytableau} \;=\; \begin{ytableau} *(white) \bullet & *(white) & *(white) & *(white) \end{ytableau} \vee \begin{ytableau} *(white) & *(white) & *(white) & *(white) \bullet \end{ytableau} \end{eqnarray} For each epistemic state $a\vee b$, we assume that the probability that the actual ontic state is $a$ or $b$ is $\frac{1}{2}$ each. The subset of ontic states $\{a,b\}$ that appear in the epistemic state $a\vee b$ is called the \textit{ontic support} of the epistemic state. When the ontic supports of two epistemic states are disjoint, the two epistemic states are said to be \textit{disjoint}. $1\vee 2$ and $3\vee 4$, $1\vee 3$ and $2\vee 4$, $2\vee 3$ and $1\vee 4$ are disjoint pairs of epistemic states. We will also call one member of these disjoint pairs the \textit{disjoint} of the other. \subsection{Measurements, Observables, \& Eigenstates} The only allowed \textit{measurements} in Spekkens' model are yes-no inquiries on whether one of the $a\vee b$ statements is true or not. For instance, if the system is in ontic state 1, a measurement of $1\vee 2$ would result in ``yes,'' while a measurement of $3\vee 4$ would result in ``no.'' It is clear that a ``yes'' answer for ``$1\vee 2$?'' is equivalent to a ``no'' answer for ``$3\vee 4$?'', yielding the same information on whether the system is in state $1\vee 2$ or $3\vee 4$. Each measurement provides information on whether the system is in one epistemic state or its disjoint. Therefore, \textit{observables} can be defined as pairs of disjoint epistemic states, which are $X=\{ 1\vee 3, 2\vee 4 \}$, $Y=\{ 2\vee 3, 1\vee 4 \}$, and $Z=\{ 1\vee 2, 3\vee 4 \}$. 
The three observables can be represented graphically as \begin{eqnarray} X & \;:\quad & \begin{ytableau} *(white) + & *(white) - & *(white) + & *(white) - \end{ytableau} \cr Y & \;:\quad & \begin{ytableau} *(white) - & *(white) + & *(white) + & *(white) - \end{ytableau} \cr Z & \;:\quad & \begin{ytableau} *(white) + & *(white) + & *(white) - & *(white) - \end{ytableau} \end{eqnarray} where we denote the two possible outcomes of a measurement of each observable for a given underlying ontic state with $+1$ and $-1$. The measurement of $Z$, for instance, on the epistemic state $1\vee 2$ always yields $+1$, while on the epistemic state $3\vee 4$ always yields $-1$. On the other hand, the measurement of $X$ on the epistemic state $1\vee 2$, which is in ontic state 1 with probability $\frac{1}{2}$ and in ontic state 2 with probability $\frac{1}{2}$, will yield $\pm 1$ with probability $\frac{1}{2}$ each. Let us call the epistemic states comprising each observable the \textit{eigenstates} of the observable. For instance, $1\vee 3$ and $2\vee 4$ are the eigenstates of $X$. A measurement of $X$ on $1\vee 3$ will always yield $+1$ while a measurement of $X$ on $2\vee 4$ will always yield $-1$. \subsection{Ontic State Update/Epistemic State Collapse} The knowledge balance principle demands that repeated measurements cannot be allowed to narrow down the possible underlying ontic state. The ontic state must therefore be perturbed by each measurement and update accordingly. For instance, a $+1$ result on the measurement of $X$ tells us that the system is in ontic state 1 or 3 with $\frac{1}{2}$ probability each. If the actual ontic state were 1, and did not update as a result of the measurement of $X$, a followup measurement of $Z$ would yield $+1$ with probability 1 and allow us to deduce what the ontic state was. Therefore, the measurement of $X$ must knock the ontic state 1 into ontic state 3 with $\frac{1}{2}$ probability so that while another measurement of $X$ will yield $+1$ as before, a measurement of $Z$ (or $Y$) will yield $\pm 1$ with probability $\frac{1}{2}$ each. A measurement will also update the epistemic state. A measurement of $X$ yielding $+1$ will update the epistemic state to $1\vee 3$ while a $-1$ outcome will update it to $2\vee 4$. In other words, a measurement of an observable will \textit{collapse} the epistemic state to one of the observable's eigenstates. \subsection{Correspondence to Qubit/Spin States} Spekkens' six epistemic states introduced above correspond nicely to qubit/spin states as \begin{eqnarray} \{ 1\vee 2, 3\vee 4 \} & \quad\Leftrightarrow\quad & \{ |\,0\,\rangle , |\,1\,\rangle \}\;,\cr \{ 1\vee 3, 2\vee 4 \} & \quad\Leftrightarrow\quad & \{ |+\rangle , |-\rangle \}\;,\cr \{ 2\vee 3, 1\vee 4 \} & \quad\Leftrightarrow\quad & \{ |\!+\!i\,\rangle, |\!-\!i\,\rangle \}\;, \end{eqnarray} where $|\,0\,\rangle$ (spin up) and $|\,1\,\rangle$ (spin down) are the computational basis states of the qubit (eigenstates of $\hat{Z}=\sigma_z$) while \begin{equation} |\pm\rangle \,=\, \dfrac{1}{\sqrt{2}}\Bigl(|\,0\,\rangle \pm |\,1\,\rangle \Bigr)\;,\qquad |\!\pm\! i\,\rangle \,=\, \dfrac{1}{\sqrt{2}}\Bigl(|\,0\,\rangle \pm i\,|\,1\,\rangle \Bigr)\;. \label{Qsuperpositions} \end{equation} The states $|\pm\rangle$ and $|\!\pm\! i\,\rangle$ are respectively the eigenstates of $\hat{X}=\sigma_x$ and $\hat{Y}=\sigma_y$ with eigenvalues $\pm 1$. 
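To make the measurement and update rules concrete, the following minimal simulation sketch (our own illustrative construction; the function and variable names are invented for this note and are not part of Ref.~\cite{Spekkens:2007}) encodes the three observables, applies the knowledge-balance-preserving disturbance, and reproduces the expected outcome statistics:
\begin{verbatim}
import random

# Outcome (+1/-1) of each observable for each ontic state 1..4,
# transcribing the +/- diagrams above.
OBSERVABLES = {
    "X": {1: +1, 2: -1, 3: +1, 4: -1},
    "Y": {1: -1, 2: +1, 3: +1, 4: -1},
    "Z": {1: +1, 2: +1, 3: -1, 4: -1},
}

def measure(obs, ontic):
    """Measure obs on the given ontic state; return (outcome, new ontic,
    collapsed epistemic state).  The ontic state is re-randomized within
    the eigenstate selected by the outcome, as the knowledge balance
    principle requires."""
    outcome = OBSERVABLES[obs][ontic]
    eigenstate = sorted(s for s, v in OBSERVABLES[obs].items() if v == outcome)
    return outcome, random.choice(eigenstate), tuple(eigenstate)

# Prepare the epistemic state 1 v 2 (the analog of |0>) and measure X.
counts = {+1: 0, -1: 0}
for _ in range(10000):
    ontic = random.choice([1, 2])       # ontic state consistent with 1 v 2
    outcome, ontic, _ = measure("X", ontic)
    counts[outcome] += 1
print(counts)   # roughly 50/50, as for sigma_x measured on |0>
\end{verbatim}
A follow-up measurement of $Z$ after the $X$ measurement likewise yields $\pm 1$ with probability $\frac{1}{2}$ each, precisely because of the disturbance step.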
Given this correspondence, it is straightforward to show that the measurement outcomes of $X$, $Y$, $Z$ on Spekkens' six epistemic states correspond exactly to the measurement outcomes of $\hat{X}$, $\hat{Y}$, $\hat{Z}$ on the six qubit/spin states. In other words, Spekkens' toy model in its most elementary form succeeds in reproducing the predictions of a spin-$\frac{1}{2}$ quantum mechanical system, albeit restricted to the six states listed here. \subsection{Entanglement of two ``Qubits/Spins''} A pair of elementary systems, each with $2^2=4$ ontic states, will have $2^4=16$ ontic states. A complete specification of the ontic state will require 4 bits of information, whereas the knowledge balance principle demands that only 2 bits of information may be known about the system at any time. Furthermore, the knowledge balance principle continues to apply to each of the 2-bit subsystems. This restricts the epistemic states of the elementary-system pair to the types depicted graphically in Figure~\ref{product_and_entangled_states}: there are $6^2=36$ product states, and $4!=24$ entangled states. The entangled states must have one shaded box in each row and each column. Each measurement of either system 1 or system 2 will update the ontic and epistemic states as in the one-system case. Any measurement on an entangled state will collapse it to a product state. \begin{figure}[t] \begin{center} \includegraphics[height=3cm]{ProductAndEntangled.png} \caption{Examples of product and entangled states in Spekkens' model. } \label{product_and_entangled_states} \end{center} \end{figure} \subsection{Linearity} In Ref.~\cite{Spekkens:2007}, Spekkens defines four types of ``coherent superpositions'' as \begin{equation} \begin{array}{l} (a\vee b) +_1 (c\vee d) \;=\; a \vee c\;,\cr (a\vee b) +_2 (c\vee d) \;=\; b \vee d\;,\cr (a\vee b) +_3 (c\vee d) \;=\; b \vee c\;,\cr (a\vee b) +_4 (c\vee d) \;=\; a \vee d\;. \end{array} \label{FOILsums0} \end{equation} If applied to $(1\vee 2)$ and $(3\vee 4)$, we have \begin{equation} \begin{array}{l} (1\vee 2) +_1 (3\vee 4) \;=\; 1 \vee 3\;,\cr (1\vee 2) +_2 (3\vee 4) \;=\; 2 \vee 4\;,\cr (1\vee 2) +_3 (3\vee 4) \;=\; 2 \vee 3\;,\cr (1\vee 2) +_4 (3\vee 4) \;=\; 1 \vee 4\;, \end{array} \label{FOILsums1} \end{equation} in exact parallel to Eq.~\eqref{Qsuperpositions}. Thus, these four different sums are analogous to addition but with different phases multiplying the second state of the sum. However, note that these sums are only defined for pairs of disjoint epistemic states. The sum of $1\vee 2$ and $1\vee 3$, for instance, is not defined. Moreover, the sums are dependent on the ordering of the ontic-state labels. One could try to identify $a\vee b$ and $b\vee a$ with the same qubit/spin state but with different phases, but it can be shown that such an identification would not work. Since, in canonical QM, one cannot define arbitrary sums of the six qubit/spin states $\ket{0}$, $\ket{1}$, $|\pm\rangle$, and $|\!\pm\! i\,\rangle$ without going outside this set either, one could argue that the inability to define sums of arbitrary pairs of epistemic states in Spekkens' model is not really a problem. Nevertheless, the lack of linearity suggests that an important aspect of QM has not been completely captured within the model. \section{Finite Field Quantum Mechanics} The missing linearity of Spekkens' toy model can be supplied by mapping it to a QM defined over the finite field $\mathbb{F}_5$. 
There are multiple ways to define QM on finite fields as discussed in Refs.~\cite{GQM:2013,GQM-Spin:2013,Chang:2013,Chang:2014,QFun:2014}. The formalism presented here is closest to Refs.~\cite{GQM:2013,GQM-Spin:2013}. The finite field $\mathbb{F}_5 = \{\underline{0},\pm\underline{1},\pm\underline{2}\}$ consists of five elements with addition and multiplication defined modulo 5. We underline elements of $\mathbb{F}_5$ to distinguish them from elements of $\mathbb{Z}$. Note that $\underline{2}^2 = (-\underline{2})^2 = -\underline{1}$, that is: \begin{equation} \pm\underline{2} \;=\; \pm\sqrt{-\underline{1}}\;. \end{equation} In analogy to the qubit/spin states in canonical QM, which are vectors in the complex projective space $CP^1 = P^1(\mathbb{C})$ (the Bloch sphere), the ``qubit/spin'' states in $\mathbb{F}_5$ QM are vectors in the $\mathbb{F}_5$ projective space $P^1(\mathbb{F}_5)$. This is basically the 2D vector space $\mathbb{F}_5^2$ with vectors differing by overall non-zero multiplicative constants identified as representing the same state. Consequently, in $P^1(\mathbb{F}_5)$ there are $(5^2-1)/4=6$ inequivalent states and these can be taken to be \begin{equation} \ket{a} = \left[\begin{array}{r} \underline{1} \\ \underline{0} \end{array}\right],\; \ket{b} = \left[\begin{array}{r} \underline{0} \\ \underline{1} \end{array}\right],\; \ket{c} = \left[\begin{array}{r} \underline{1} \\ \underline{1} \end{array}\right],\; \ket{d} = \left[\begin{array}{r} \underline{1} \\ -\underline{1} \end{array}\right],\; \ket{e} = \left[\begin{array}{r} \underline{1} \\ \underline{2} \end{array}\right],\; \ket{f} = \left[\begin{array}{r} \underline{1} \\ -\underline{2} \end{array}\right]. \end{equation} The inequivalent dual-vectors can be taken to be \begin{eqnarray} \bra{a} & = & \bigl[\;\underline{1}\;\;\underline{0} \;\bigr]\;,\quad \bra{b} \;=\; \bigl[\;\underline{0}\;\;\underline{1} \;\bigr]\;,\quad \bra{c} \;=\; \bigl[\;-\underline{2}\;-\!\underline{2} \;\bigr]\;,\cr \bra{d} & = & \bigl[\;-\underline{2}\;\;\underline{2} \;\bigr]\;,\quad \bra{e} \;=\; \bigl[\;-\underline{2}\;-\!\underline{1} \;\bigr]\;,\quad \bra{f} \;=\; \bigl[\;-\underline{2}\;\;\underline{1} \;\bigr]\;. 
\end{eqnarray} The actions of these dual-vectors on the vectors are: \smallskip \begin{center} \begin{tabular}{|c||cc|cc|cc|} \hline & $\quad\ket{a}\quad$ & $\quad\ket{b}\quad$ & $\quad\ket{c}\quad$ & $\quad\ket{d}\quad$ & $\quad\ket{e}\quad$ & $\quad\ket{f}\quad$ \\ \hline $\;\;\bra{a}\;\;$ & $\phantom{-}\underline{1}$ & $\phantom{-}\underline{0}$ & $\phantom{-}\underline{1}$ & $\phantom{-}\underline{1}$ & $\phantom{-}\underline{1}$ & $\phantom{-}\underline{1}$ \\ $\bra{b}$ & $\phantom{-}\underline{0}$ & $\phantom{-}\underline{1}$ & $\phantom{-}\underline{1}$ & $-\underline{1}$ & $\phantom{-}\underline{2}$ & $-\underline{2}$ \\ \hline $\bra{c}$ & $-\underline{2}$ & $-\underline{2}$ & $\phantom{-}\underline{1}$ & $\phantom{-}\underline{0}$ & $-\underline{1}$ & $\phantom{-}\underline{2}$ \\ $\bra{d}$ & $-\underline{2}$ & $\phantom{-}\underline{2}$ & $\phantom{-}\underline{0}$ & $\phantom{-}\underline{1}$ & $\phantom{-}\underline{2}$ & $-\underline{1}$ \\ \hline $\bra{e}$ & $-\underline{2}$ & $-\underline{1}$ & $\phantom{-}\underline{2}$ & $-\underline{1}$ & $\phantom{-}\underline{1}$ & $\phantom{-}\underline{0}$ \\ $\bra{f}$ & $-\underline{2}$ & $\phantom{-}\underline{1}$ & $-\underline{1}$ & $\phantom{-}\underline{2}$ & $\phantom{-}\underline{0}$ & $\phantom{-}\underline{1}$ \\ \hline \end{tabular} \end{center} Observables are defined by pairs of dual-vectors: \begin{equation} \underline{X} = \{ \bra{c}, \bra{d}\} \;,\qquad \underline{Y} = \{ \bra{e}, \bra{f}\} \;,\qquad \underline{Z} = \{ \bra{a}, \bra{b}\} \;, \end{equation} where the first (second) dual-vector of the pair corresponds to the outcome $+1$ ($-1$). The probability of obtaining outcome $+1$ ($-1$) when observable $O=\{\bra{x},\bra{y}\}$ is measured on state $\ket{\psi}$ is defined as \begin{equation} P(+1,O,\psi) \,=\, \dfrac{|\braket{x}{\psi}|^2}{|\braket{x}{\psi}|^2+|\braket{y}{\psi}|^2}\;,\qquad P(-1,O,\psi) \,=\, \dfrac{|\braket{y}{\psi}|^2}{|\braket{x}{\psi}|^2+|\braket{y}{\psi}|^2}\;, \end{equation} where the absolute value of $\underline{k}\in\mathbb{F}_5$ is defined to be $|\underline{k}|=0$ if $\underline{k}=\underline{0}$, and $|\underline{k}|=1$ otherwise. It is straightforward to demonstrate that the predictions of $\mathbb{F}_5$ QM agree with those of canonical QM and Spekkens' toy model via the correspondence \begin{equation} \begin{array}{lll} \{ 1\vee 2, 3\vee 4 \} & \Leftrightarrow\;\; \{ |\,0\,\rangle , |\,1\,\rangle \} & \Leftrightarrow\;\; \{ \ket{a},\ket{b} \} \;,\\ \{ 1\vee 3, 2\vee 4 \} & \Leftrightarrow\;\; \{ |+\rangle , |-\rangle \} & \Leftrightarrow\;\; \{ \ket{c},\ket{d} \} \;,\\ \{ 2\vee 3, 1\vee 4 \} & \Leftrightarrow\;\; \{ |\!+\!i\,\rangle, |\!-\!i\,\rangle \} & \Leftrightarrow\;\; \{ \ket{e},\ket{f} \} \;. \end{array} \label{SpekkensF5correspondence} \end{equation} 
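As a quick sanity check of this agreement, the short sketch below (our own, with ad hoc names; it is not meant to define the formalism) implements the brackets and the probability rule over $\mathbb{F}_5$ and reproduces the table above:
\begin{verbatim}
import itertools

P = 5  # the prime defining F_5

KETS = {"a": (1, 0), "b": (0, 1), "c": (1, 1),
        "d": (1, -1), "e": (1, 2), "f": (1, -2)}
BRAS = {"a": (1, 0), "b": (0, 1), "c": (-2, -2),
        "d": (-2, 2), "e": (-2, -1), "f": (-2, 1)}
OBS  = {"X": ("c", "d"), "Y": ("e", "f"), "Z": ("a", "b")}

def bracket(x, psi):
    """<x|psi> mod 5 (no conjugation; the dual-vectors are fixed by hand)."""
    return (BRAS[x][0] * KETS[psi][0] + BRAS[x][1] * KETS[psi][1]) % P

def prob_plus(obs, psi):
    """P(+1) using the rule above, with |k| = 0 iff k = 0 and 1 otherwise."""
    x, y = OBS[obs]
    ax = 0 if bracket(x, psi) == 0 else 1
    ay = 0 if bracket(y, psi) == 0 else 1
    return ax / (ax + ay)

for obs, psi in itertools.product("XYZ", "abcdef"):
    print(obs, psi, prob_plus(obs, psi))
# e.g. prob_plus("Z", "c") == 0.5, matching a Z measurement on the
# Spekkens state 1 v 3, while prob_plus("X", "c") == 1.
\end{verbatim}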
However, in $\mathbb{F}_5$ QM, the sums of arbitrary pairs of states are well defined. In addition to \begin{eqnarray} \ket{c} & = & \ket{a} + \ket{b}\;,\cr \ket{d} & = & \ket{a} - \ket{b}\;,\cr \ket{e} & = & \ket{a} + \underline{2}\,\ket{b} \;=\; \ket{a} + \sqrt{-\underline{1}}\,\ket{b}\;,\cr \ket{f} & = & \ket{a} - \underline{2}\,\ket{b} \;=\; \ket{a} - \sqrt{-\underline{1}}\,\ket{b}\;, \end{eqnarray} we also have, for instance: \begin{eqnarray} \ket{a} + \ket{c} & = & \underline{2}\ket{f}\;,\cr \ket{a} - \ket{c} & = & -\ket{b}\;,\cr \ket{a} + \underline{2}\,\ket{c} & = & -\underline{2}\ket{d}\;,\cr \ket{a} - \underline{2}\,\ket{c} & = & -\ket{e}\;, \end{eqnarray} which corresponds to \begin{eqnarray} (1\vee 2)+_1(1\vee 3) & = & (1\vee 4)\;,\cr (1\vee 2)+_2(1\vee 3) & = & (3\vee 4)\;,\cr (1\vee 2)+_3(1\vee 3) & = & (2\vee 4)\;,\cr (1\vee 2)+_4(1\vee 3) & = & (2\vee 3)\;. \end{eqnarray} In this fashion, one can introduce sums of non-disjoint epistemic states into Spekkens' toy model via the correspondence Eq.~\eqref{SpekkensF5correspondence}. \begin{figure}[t] \begin{center} \includegraphics[height=4.5cm]{DisjointSums.png} \caption{Possible ``coherent superpositions'' of disjoint epistemic states.} \label{DisjointSums} \end{center} \end{figure} \section{Entanglement in $\mathbb{F}_5$ QM and in Spekkens' Toy Model} Since $\mathbb{F}_5$ QM is equipped with a vector space, the two-qubit/spin system state space is obtained by tensoring the one-qubit/spin system state space. This gives us $P^3(\mathbb{F}_5)$, namely $\mathbb{F}_5^4$ with vectors differing by overall non-zero multiplicative constants identified. The number of inequivalent states in this space is $(5^4-1)/4=156$, of which $6^2=36$ are product states and $156-36=120$ are entangled. All 120 entangled states can be expressed as superpositions of two product states without a common factor. There is clearly a mismatch in the number of entangled states between $\mathbb{F}_5$ QM and Spekkens' model. Thus, imposing linearity on Spekkens' model seems to require many more entangled epistemic states to be included in the model than were originally allowed. So which ones are they? And can we prevent the inclusion of such states from violating the knowledge balance principle, perhaps by modifying the ontic-state update rule upon measurement? Before pursuing these questions further, let us first identify which of the 120 $\mathbb{F}_5$ entangled states correspond to the 24 Spekkens' entangled states. Do all Spekkens' entangled states have an $\mathbb{F}_5$ analog? Considering how ``coherent superpositions'' were originally defined for the elementary system, one guess would be that the Spekkens' entangled states would result from sums of disjoint product states as depicted in Figure~\ref{DisjointSums}. Indeed, for the disjoint pair $\{\ket{a}\otimes\ket{a},\ket{b}\otimes\ket{b}\}$ one finds \begin{equation} \begin{array}{lll} \ket{a}\otimes\ket{a}+\ket{b}\otimes\ket{b} & = \; -\underline{2}\,(\ket{c}\otimes\ket{c}+\ket{d}\otimes\ket{d}) & = \; -\underline{2}\,(\ket{e}\otimes\ket{f}+\ket{f}\otimes\ket{e}) \;,\\ \ket{a}\otimes\ket{a}-\ket{b}\otimes\ket{b} & = \; -\underline{2}\,(\ket{c}\otimes\ket{d}+\ket{d}\otimes\ket{c}) & = \; -\underline{2}\,(\ket{e}\otimes\ket{e}+\ket{f}\otimes\ket{f}) \;,\\ \ket{a}\otimes\ket{a}+\underline{2}\ket{b}\otimes\ket{b} & = \; -\underline{2}\,(\ket{c}\otimes\ket{e}+\ket{d}\otimes\ket{f}) & = \; -\underline{2}\,(\ket{e}\otimes\ket{c}+\ket{f}\otimes\ket{d}) \;,\\ \ket{a}\otimes\ket{a}-\underline{2}\ket{b}\otimes\ket{b} & = \; -\underline{2}\,(\ket{c}\otimes\ket{f}+\ket{d}\otimes\ket{e}) & = \; -\underline{2}\,(\ket{f}\otimes\ket{c}+\ket{e}\otimes\ket{d}) \;. 
\end{array} \label{aabbsuper} \end{equation} \begin{figure}[t] \begin{center} \includegraphics[height=4.2cm]{aapm2bb.png} \caption{Spekkens' entangled states and the corresponding $\mathbb{F}_5$ entangled states.} \label{aapm2bb} \end{center} \end{figure} \noindent Comparing the result of various measurements on these states with those of Spekkens' states, one finds that $\ket{a}\otimes\ket{a}\pm\underline{2}\,\ket{b}\otimes\ket{b}$ have Spekkens' analogs, cf. Figure~\ref{aapm2bb}, but $\ket{a}\otimes\ket{a}\pm \ket{b}\otimes\ket{b}$ do not. For instance, the Spekkens' state \begin{equation} \begin{ytableau} *(green) & *(white) & *(white) & *(white) \\ *(white) & *(green) & *(white) & *(white) \\ *(white) & *(white) & *(green) & *(white) \\ *(white) & *(white) & *(white) & *(green) \\ \end{ytableau} \end{equation} will collapse to the product state $\ket{a}\otimes\ket{a}$ or $\ket{b}\otimes\ket{b}$ if $Z$ is measured on system 1 or 2, $\ket{c}\otimes\ket{c}$ or $\ket{d}\otimes\ket{d}$ if $X$ is measured on system 1 or 2, and $\ket{e}\otimes\ket{e}$ or $\ket{f}\otimes\ket{f}$ if $Y$ is measured on system 1 or 2. However, as we have demonstrated in Eq.~\eqref{aabbsuper}, no superposition of $\ket{a}\otimes\ket{a}$ and $\ket{b}\otimes\ket{b}$ is simultaneously a superposition of $\ket{c}\otimes\ket{c}$ and $\ket{d}\otimes\ket{d}$ and a superposition of $\ket{e}\otimes\ket{e}$ and $\ket{f}\otimes\ket{f}$. Indeed, of the $4!=24$ entangled states of Spekkens' model, 12 do not have $\mathbb{F}_5$ analogs that give the same predictions, cf. Figure~\ref{entangled_states}. \begin{figure}[b] \begin{center} \includegraphics[width=12cm]{Entangled_Spekkens.png} \caption{The $4!=24$ entangled states of the Spekkens' model. The 12 states above the dashed line do not have $\mathbb{F}_5$ analogs.} \label{entangled_states} \end{center} \end{figure} The reason for this mismatch can be traced to the fact that the correspondence of Eq.~\eqref{SpekkensF5correspondence} does not respect the symmetries of the respective models. Spekkens' toy model is invariant under any relabelling of the four ontic states. In other words, it has an $S_4$ symmetry. On the other hand, the qubit/spin states of both canonical QM and $\mathbb{F}_5$ QM transform under 2D projective representations of $SO(3)$. Though $S_4$ can be embedded into $SO(3)$ \cite{Klein}, Eq.~\eqref{SpekkensF5correspondence} demands that all the even permutations of $S_4$ (the $A_4$ subgroup) be mapped to rotations, but all the odd permutations be mapped to reflections, which would involve a complex conjugation. In fact, this mismatch was already evident in Ref.~\cite{Spekkens:2007} where it was noted that the coherent superpositions of Eq.~\eqref{FOILsums0} do not produce the expected results for sums of $1\vee 3$ and $2\vee 4$. Thus, we have identified a rather serious problem if one wishes to utilize Spekkens' model as a model of qubits/spin. While the linearity requirement demanding more entangled epistemic states to be added to the mix is interesting in itself, the symmetry mismatch suggests we could be comparing apples and oranges. One possible way out may be to embed the $S_4$ permutations into 4D projective representations of $SO(3)$ in canonical and $\mathbb{F}_5$ QM. This will allow the odd permutations to also be represented as rotations, but it will also double the number of states available, leading to a different set of complications. These and other questions will be addressed in more detail in a forthcoming paper. 
\ack Takeuchi would like to thank the Institute of Advanced Studies at Nanyang Technological University, Singapore, for their hospitality during the QM90 Conference (23-26 January 2017), and Sixia Yu for bringing Spekkens' model to his attention. We would also like to thank Chen Sun and Matthew Henriques for their respective contributions to this project. \section*{References}
\section{Introduction} \label{sec:intro} Dust attenuation is an important factor in interpreting observations of galaxies. This is particularly true for high-redshift galaxies, for which our primary observational constraints are confined to the restframe ultraviolet where the effect of dust extinction is particularly strong. Our ability to account for dust affects the accuracy of derived galaxy properties such as the implied star formation rates, ionization parameters, and gas-phase metallicities. Complicating the situation further, studies have uncovered evidence that the dust law in galaxies is not universal, neither amongst members of the Local Group \citep[e.g.,][]{pei1992,gordon2003} nor at higher redshifts \citep[$z\sim0.5-3.0$;][]{kriek2013,salmon2016}. Correcting an observed quantity for the effects of dust requires understanding both the amount of dust along the line-of-sight and the amount of extinction that dust causes as a function of wavelength, i.e., the dust extinction law, which is related to the dust composition and distribution of grain sizes. When considering an entire galaxy, it is often more accurate to use the term ``attenuation'' to describe the effects of dust \citep[e.g.,][]{calzetti2001,salim2020}. Whereas dust extinction is the removal of light from the line-of-sight due to absorption and scattering by dust, dust attenuation encompasses the contributions of absorption plus scattering both out of and back into the line-of-sight. In practice, it is non-trivial to determine the dust attenuation for distant galaxies. Typically, the ratio of two nebular lines with a known intrinsic ratio is used to estimate the amount of dust attenuation, which is then used to scale a dust attenuation law (the curve representing dust attenuation versus wavelength) of an assumed shape. The popular Balmer decrement method, for example, involves the observed flux ratio of the H$\alpha$$\lambda6563$\AA\ and H$\beta$$\lambda4861$\AA\ lines, while ratios of longer wavelength hydrogen lines in the Paschen and Brackett series have also been explored for sightlines to nearby star-forming regions \citep{puxley1994,petersen1997,greve2010}. Commonly used empirical attenuation curves include the prescriptions of \citet{cardelli1989}, based on measurements of individual stars in the Milky Way, and \citet{calzetti2000}, derived using a sample of local starburst galaxies. \citet{charlot2000} showed that a power-law dust attenuation law in which the optical depth $\tau_{\lambda}$ is proportional to $\lambda^{-n}$ is successful at reproducing observations of starburst galaxies, with the exponent $n$ being sensitive to whether the radiation is emitted from the diffuse interstellar medium ($n\sim0.7$) versus from stars embedded in their birth clouds ($n\sim1.3$). This flexible attenuation law has been implemented in spectral energy distribution fitting codes \citep[e.g., the Code Investigating GALaxy Emission, CIGALE;][]{boquien2019}. Under physically plausible assumptions about nebular conditions (electron density, $n_{e}$, and temperature, $T_{e}$), combining constraints on H$\alpha$\ and H$\beta$\ with an additional nebular hydrogen line at a wide wavelength separation from the other two, e.g., a Paschen series line in the near-infrared, can provide separate measurements of the dust attenuation at two different wavelengths and in principle, therefore, constrain both the normalization and the slope of the dust attenuation law within an individual galaxy. 
While conceptually straightforward, the approach has not seen widespread use. This is likely due to the fact that Paschen series lines are in the rest-frame near-infrared and are intrinsically weaker than the strong Balmer lines in the rest-frame optical. As a result, Paschen emission line flux measurements have been published for only a handful of galaxies. For example, HST/NICMOS was used to obtain Paschen-$\alpha$ (Pa$\alpha$; $\lambda=1.875$ $\mu$m) imaging of local galaxies \citep{alonsoherrero2006,calzetti2007,kennicutt2007}, although these data covered only the central 1\arcmin\ of each galaxy. The full disks of two nearby galaxies were imaged more recently in Paschen-$\beta$ (Pa$\beta$; $\lambda=1.2822$ $\mu$m) using HST/WFC3 \citep{kessler2020}. Further afield, a recent study combined ground-based H$\alpha$\ and H$\beta$\ line measurements with Pa$\beta$\ constraints obtained with the HST/WFC3 grism for eleven galaxies at $z\approx0.2$ \citep{cleri2020}. Some of these galaxies showed extreme ratios of Pa$\beta$\ to H$\alpha$, raising a range of possibilities such as varying covering fractions, underestimated slit loss corrections, or unusual dust laws. At higher redshifts ($z>2$), Pa$\alpha$\ measurements have been reported for a handful of individual lensed galaxies \citep{papovich2009, finkelstein2011, shipley2016}. In this paper, we explore what constraints on three nebular lines (specifically H$\alpha$, H$\beta$, and Pa$\beta$) can tell us about the range of dust laws in the sample of eleven $z\approx0.2$ galaxies \citep{cleri2020}. In Section~\ref{sec:sample}, we introduce the observed galaxy sample, and the emission line fluxes and galaxy morphology measurements that will be used in our analysis. We explore the implied constraints on dust law slopes and normalizations for the galaxy sample using an analytic approach in Section~\ref{sec:analytic}. In Section~\ref{sec:grid}, we estimate the implied dust law parameters after folding in observational uncertainties, and then in Section~\ref{sec:influences}, we explore whether additional astrophysical or observational effects could be biasing the observed line ratios. We discuss the implications of these results for the utility and reliability of the three-line method in Section~\ref{sec:discussion}, and we conclude in Section~\ref{sec:conclusions}. For consistency with \citet{cleri2020}, we assume a WMAP9 cosmology ($\Omega_{M}$=0.287, $\Omega_{\Lambda}$=0.713, $h$=0.693); the angular scale at $z\approx0.2$ is 3.3~kpc/\arcsec. We also assume the intrinsic emission line ratios appropriate for Case~B recombination, i.e., H$\alpha$/H$\beta$=2.86 and Pa$\beta$/H$\alpha$=1/17.6 ($T=10^4$ K, $n_{e}=10^4$ cm$^{-3}$), values that are relatively insensitive to density and gas temperature \citep{ost89}. Unless otherwise specified, throughout this paper we use $\tau_{V}$ to refer to the nebular attenuation, which is typically a factor of $\sim2$ higher than the attenuation of the stellar continuum \citep[e.g.,][]{calzetti1997}. \section{The Target Sample} \label{sec:sample} \citet{cleri2020} presented a sample of eleven galaxies with flux measurements for all three lines -- Pa$\beta$, H$\alpha$, and H$\beta$. 
The disk-integrated Pa$\beta$\ flux measurements were derived as part of the CANDELS Ly$\alpha$ Emission at Reionization (CLEAR) survey \citep[HST GO Cycle 23, PI: Papovich;][]{simons2020}, an HST/WFC3 G102 slitless grism survey of 12 pointings in the GOODS-N and GOODS-S extragalactic fields \citep{giav04}, with existing multiband HST imaging from the Cosmic Assembly Near-Infrared Deep Extragalactic Legacy Survey \citep[CANDELS;][]{grogin11,koekemoer11} and HST/WFC3 G141 grism coverage thanks to the 3D-HST Survey \citep{momcheva15,brammer2012}. The Balmer line measurements were obtained via ground-based spectroscopy from Keck/DEIMOS using a 1\arcsec-wide slit as part of the Team Keck Redshift Survey of the GOODS-N field \citep{wirth2004}. DEIMOS does not correct for atmospheric dispersion; we note, however, that the typical spatial offset between the H$\beta$\ and H$\alpha$\ wavelengths is estimated to be $\sim$0.5\arcsec\ (less than the slitwidth) for airmasses $\sec(z)<2$.\footnote{https://www2.keck.hawaii.edu/inst/newsletters/Vol4/Volume\_4.html} Since these ground-based data were not flux-calibrated, line fluxes were derived by estimating the equivalent widths of the lines, interpolating broadband photometric magnitudes of the entire galaxy to compute the absolute magnitude in the continuum, and computing the corresponding total galaxy line fluxes \citep{weiner2007}. This approach was used to account on average for both throughput and slit losses, under the assumption that the equivalent width is constant across the galaxy. No corrections were made for Balmer stellar absorption, since at the spectral resolution of DEIMOS, the narrower Balmer emission lines can be distinguished from the broader underlying absorption profile. For our analysis, we collected the galaxy redshifts and Pa$\beta$, H$\alpha$, and H$\beta$\ line fluxes from \citet{cleri2020}, as well as the morphological measurements (effective radius $r_{eff}$, S\'ersic $n$, axis ratio, position angle) for each galaxy from \citet{vanderwel2012}. \section{Analytic Approach} \label{sec:analytic} With two line ratio measurements (H$\alpha$/H$\beta$\ and Pa$\beta$/H$\alpha$) and two parameters (slope and normalization), it is possible to constrain a galaxy's nebular attenuation law analytically. We start by assuming a uniform dust screen with no slit losses as our baseline scenario. Similar to what is done elsewhere \citep{charlot2000}, we parameterize the dust attenuation law using the optical depth as a function of wavelength $\tau_{\lambda}$, scaled to the V-band (5500\AA) optical depth $\tau_{V}$: \begin{equation} \tau_{\lambda} = \tau_{V} (\lambda/5500\mathrm{\AA})^{-n} \end{equation} where $n$ is the dust law slope, and the normalization, $\tau_{V}$, can be expressed as $A_{V} = 1.086 \tau_{V}$ in magnitudes. The observed fluxes of the emission lines can then be written as: \begin{equation} \label{eq:pab} F_{i} = L_{i} e^{-\tau_{V}(\lambda_{i}/5500\mathrm{\AA})^{-n}} \times \frac{1}{4 \pi d_{L}^2} \end{equation} where $i=$[Pa$\beta$, H$\alpha$, H$\beta$], and $d_{L}$ is the luminosity distance at the redshift of the galaxy. 
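To make this parameterization concrete, the following sketch (our own illustration, not code from any of the cited works; SciPy's \texttt{brentq} root-finder plays the role of the Newton-Raphson iteration described below) implements the attenuated line ratios and their inversion for $(n,\tau_{V})$:
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

LAM = {"PaB": 12822.0, "Ha": 6563.0, "Hb": 4861.0}       # Angstroms
INTR = {("Ha", "Hb"): 2.86, ("PaB", "Ha"): 1.0 / 17.6}   # Case B ratios

def u(line, n):
    """(lambda/5500A)^(-n), the wavelength factor in the optical depth."""
    return (LAM[line] / 5500.0) ** (-n)

def observed_ratio(l1, l2, tau_v, n):
    """Attenuated flux ratio F1/F2 given the intrinsic ratio L1/L2."""
    return INTR[(l1, l2)] * np.exp(-tau_v * (u(l1, n) - u(l2, n)))

def solve_dust_law(r_hahb, r_pabha):
    """Invert the two observed ratios for (n, tau_V); assumes a root for
    n exists in (0, 30), i.e. a 'physical' solution."""
    lhs = np.log(r_hahb / INTR[("Ha", "Hb")]) / \
          np.log(r_pabha / INTR[("PaB", "Ha")])
    f = lambda n: ((LAM["Hb"] / LAM["Ha"]) ** (-n) - 1.0) / \
                  (1.0 - (LAM["PaB"] / LAM["Ha"]) ** (-n)) - lhs
    n = brentq(f, 1e-3, 30.0)
    tau_v = -np.log(r_pabha / INTR[("PaB", "Ha")]) / (u("PaB", n) - u("Ha", n))
    return n, tau_v

# Round trip: generate ratios for (n, tau_V) = (1.3, 1.0), then recover them.
r1 = observed_ratio("Ha", "Hb", 1.0, 1.3)
r2 = observed_ratio("PaB", "Ha", 1.0, 1.3)
print(solve_dust_law(r1, r2))   # approximately (1.3, 1.0)
\end{verbatim}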
The relevant line ratios are therefore: \begin{equation} \label{eq:pabharatio} \frac{F_{Pa\beta}/F_{H\alpha}}{L_{Pa\beta}/L_{H\alpha}} = e^{-\tau_{V}[(\lambda_{Pa\beta}/5500\mathrm{\AA})^{-n} - (\lambda_{H\alpha}/5500\mathrm{\AA})^{-n}]} \end{equation} \begin{equation} \label{eq:hahbratio} \frac{F_{H\alpha}/F_{H\beta}}{L_{H\alpha}/L_{H\beta}} = e^{-\tau_{V}[(\lambda_{H\alpha}/5500\mathrm{\AA})^{-n} - (\lambda_{H\beta}/5500\mathrm{\AA})^{-n}]} \end{equation} Combining these two equations and eliminating $\tau_{V}$, we derive a relationship between the observable line ratios, the known intrinsic line ratios, and the dust law slope $n$: \begin{equation} \label{eq:finalratio} \frac{\ln[(F_{H\alpha}/F_{H\beta})/(L_{H\alpha}/L_{H\beta})]}{\ln[(F_{Pa\beta}/F_{H\alpha})/(L_{Pa\beta}/L_{H\alpha})]} =\frac{ (\lambda_{H\beta}/\lambda_{H\alpha})^{-n} - 1}{1 - (\lambda_{Pa\beta}/\lambda_{H\alpha})^{-n}} \end{equation} Solving Equation~\ref{eq:finalratio} using Newton-Raphson iteration, we obtain $n$; plugging the result back into Equations~\ref{eq:pabharatio}-\ref{eq:hahbratio}, we can solve for $\tau_{V}$. As a demonstration, we apply this approach first to one of the galaxies in the CLEAR sample (GN3 34368). The measurement of the Balmer decrement yields a curve within the slope-normalization parameter space. The Pa$\beta$/H$\alpha$\ ratio corresponds to a second curve, and the point where the two curves cross represents the value of slope and normalization implied by the nebular lines from that galaxy. This is shown in Figure~\ref{fig:fiduciallineratios} (left panel). When we apply this approach to the rest of the CLEAR sample, we find that a total of four galaxies yield physically plausible, albeit varied, dust law slope and normalization values. In Figure~\ref{fig:fiduciallineratios} (left panel), we show the results for these four galaxies. For comparison, we also plot the dust law slopes proposed by \citet{charlot2000}, with $n=0.7$ representing the diffuse interstellar medium (ISM) and $n=1.3$ representing stellar birth clouds. Hereafter, we refer to the range $n=0.7-1.3$ as ``standard'' dust laws. Under our baseline scenario, the results in Figure~\ref{fig:fiduciallineratios} (left panel) seem to imply a wide range of dust laws for this small subset of galaxies. In the rest of the galaxy sample, however, this analytic approach yields negative dust law slopes and/or normalizations. In Figure~\ref{fig:fiduciallineratios} (right panel), we plot the observed emission line ratios for the entire sample, along with predictions for standard dust law slopes and a range of normalizations. Four galaxies have line ratios that are consistent with standard dust laws to within $1\sigma$, but a comparable number (five galaxies) have significant Pa$\beta$/H$\alpha$\ excesses. While this could in part be due to the fact that this sample was Pa$\beta$-selected, we note that four of these cases are more than 3$\sigma$ away from the expectations for standard dust laws. Finally, two galaxies are 1-3$\sigma$ ``outliers,'' with observed line ratios below the intrinsic Case~B values. \section{Accounting for Observational Uncertainties} \label{sec:grid} Assuming our baseline scenario, we can then determine the best estimate for the dust law slope and normalization after accounting for observational uncertainties. To do this, we employed a posterior-sampling approach, equivalent to a Bayesian inference method with uniform priors. We generated a model grid varying the slope and normalization of the dust attenuation law. 
Assuming the intrinsic ratios of H$\alpha$/H$\beta$\ and Pa$\beta$/H$\alpha$\ appropriate for Case~B recombination, we then calculated the expected observed ratios for each combination of dust law slope and normalization. We assumed wide uniform priors on both the slope and normalization ($0<n<30$, $0<\tau_{V}<30$). For each galaxy, we then used the observed line ratios and observational errors to compute $\chi^{2}$, and found the maximum likelihood solution implied by the grid. The likelihood contours are shown for each galaxy in Figure~\ref{fig:gridplot}, and the implied slope and normalization corresponding to the maximum likelihood (minimum $\chi^{2}$) are indicated. Out of the sample, one galaxy (GN2 19221) has a well-constrained dust law slope ($n\sim1.3$) and normalization ($\tau_{V}\sim1.0$). Three additional galaxies (GN1 37683, GN2 18157, GN3 34368) have solutions consistent with standard dust law slopes (within the contour containing 68.3\% of the total likelihood, given the assumed priors) and far from the assumed prior boundaries. These are the same four galaxies that yielded acceptable solutions using the simple analytic approach (Section~\ref{sec:analytic}). However, six galaxies show results at or near the boundary of our uniform prior ranges. In three of these cases (GN2 15610, GN3 33511, GN4 24611), the 68.3\% confidence intervals overlap standard dust law slopes, but for three galaxies (GN3 34456, GN3 34157, GN3 35455), one of which (GN3 34456) has the most extreme Pa$\beta$/H$\alpha$\ ratio of the sample, the confidence intervals are skewed towards shallower or steeper slopes. As before, for one galaxy (GN5 33249), the smallest in the sample, no acceptable solution is found. If we make the assumption that this sample of galaxies represents eleven independent realizations of the same underlying physical conditions, we can repeat the analysis using a composite galaxy defined by the error-weighted mean and uncertainty of the emission line ratios for the full sample. The result, shown in the lower right panel of Figure~\ref{fig:gridplot}, implies that this galaxy sample prefers somewhat shallower-than-standard dust law slopes. This is consistent with the take-away from Figure~\ref{fig:fiduciallineratios} (right panel), where a fair fraction of the sample showed enhanced Pa$\beta$/H$\alpha$\ ratios above the standard dust law curves, i.e., in the direction of lower $n$. Thus, we are left with the impression that the observed emission line ratios imply a diversity of rather extreme and in some cases unphysical dust law slopes across this sample of galaxies. \section{Influence of Astrophysical versus Observational Effects} \label{sec:influences} It is possible that the baseline scenario is not accurate and that the unusual observed line ratios driving these results reflect sub-unity covering fraction of the dusty ISM or ground-based Balmer line measurements that are either over- or undercorrected for slit losses. In the rest of this section, we explore these two possibilities to see how well we can explain the line ratios from the observed sample. \subsection{Covering Fractions} \label{sec:coveringfractions} One explanation for the unusual observed line ratios in certain galaxies is different covering fractions of dusty ISM \citep{cleri2020}, such that some regions are completely opaque to the Balmer lines whereas others are much less so. 
In this scenario, the disk-integrated Paschen emission comes from the entire galaxy, but the Balmer lines are detected predominantly from the subset of sightlines with low dust. To explore this effect, we make the following adjustments to the analytic calculation. We assume that lines-of-sight within the galaxy are either dusty or dust-free. All Paschen and Balmer line flux from the dust-free lines-of-sight is observed, whereas line flux from the dusty portion is attenuated according to the dust law. We define the covering fraction $f_\mathrm{cov}$ to be the fraction of the galaxy covered by dusty ISM. Thus, $f_\mathrm{cov}=0.0$ implies that none of the galaxy experiences dust attenuation, whereas $f_\mathrm{cov}=1.0$ means that the entire galaxy experiences dust attenuation given by $\tau_{\lambda}$, at which point the situation reduces to the baseline scenario in Section~\ref{sec:analytic}. We write down the observed line flux ($F_{\lambda}$) as a function of the true line luminosity ($L_{\lambda}$): \begin{equation} \label{eq:fcovequation} F_{\lambda} = [(1-f_\mathrm{cov}) L_{\lambda} + f_\mathrm{cov} L_{\lambda} e^{-\tau_{\lambda}}] \times \frac{1}{4 \pi d_{L}^2} \end{equation} where $\lambda$ corresponds to the wavelengths of Pa$\beta$, H$\alpha$, and H$\beta$, respectively. The corresponding line ratios are therefore: \begin{equation} \label{eq:fcovratioPabHa} \frac{F_{Pa\beta}/F_{H\alpha}}{L_{Pa\beta}/L_{H\alpha}} = \frac{(1-f_\mathrm{cov}) + f_\mathrm{cov} e^{-\tau_{V}(\lambda_{Pa\beta}/5500\mathrm{\AA})^{-n}}}{(1-f_\mathrm{cov}) + f_\mathrm{cov} e^{-\tau_{V} (\lambda_{H\alpha}/5500\mathrm{\AA})^{-n}}} \end{equation} \begin{equation} \label{eq:fcovratioHaHb} \frac{F_{H\alpha}/F_{H\beta}}{L_{H\alpha}/L_{H\beta}} = \frac{(1-f_\mathrm{cov}) + f_\mathrm{cov} e^{-\tau_{V}(\lambda_{H\alpha}/5500\mathrm{\AA})^{-n}}}{(1-f_\mathrm{cov}) + f_\mathrm{cov} e^{-\tau_{V} (\lambda_{H\beta}/5500\mathrm{\AA})^{-n}}} \end{equation} Thus far, we have not imposed any restrictions on the analysis, other than that the dust law slope and normalization must be non-negative. In practice, however, large attenuation of the Balmer lines would imply very large, and potentially implausible, star formation rates. The measured star formation rates from \citet{cleri2020} are all $\lesssim$2 M$_{\odot}$ yr$^{-1}$ based on the observed ultraviolet and infrared luminosities. Therefore, we impose a conservative upper limit on the implied star formation rate of 10 M$_{\odot}$ yr$^{-1}$. We solve Equations~\ref{eq:fcovratioPabHa}-\ref{eq:fcovratioHaHb} and determine the range of covering fractions for which such ``acceptable'' solutions exist (i.e., a non-negative dust law normalization and slope, and an implied dust-corrected star formation rate of $<$10 M$_{\odot}$ yr$^{-1}$). Table~\ref{tab:coveringfraction} lists the derived minimum and maximum covering fractions for the galaxy sample. For two galaxies, there is no acceptable solution. This is not surprising, as these two galaxies are the known outliers, with observed line ratios in the ``unphysical'' regime (Figure~\ref{fig:fiduciallineratios}, right panel). In Figure~\ref{fig:coveringfraction}, we show the range of covering fractions that produce acceptable results for each of the other nine galaxies. Allowing for a sub-unity covering fraction can explain the line ratios for the rest of the sample, with the lowest allowed covering fractions being 64-99\% and the highest allowed covering fractions being 97-100\%. 
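To see how a partial dust screen reshapes the two ratios, a brief continuation of the earlier sketch (again our own, with illustrative parameter values) evaluates the covering-fraction expressions above:
\begin{verbatim}
import numpy as np

LAM = {"PaB": 12822.0, "Ha": 6563.0, "Hb": 4861.0}

def transmitted(line, tau_v, n, f_cov):
    """Mean transmitted fraction for a partial dust screen."""
    tau = tau_v * (LAM[line] / 5500.0) ** (-n)
    return (1.0 - f_cov) + f_cov * np.exp(-tau)

def enhancement(l1, l2, tau_v, n, f_cov):
    """(F1/F2)/(L1/L2): the observed ratio relative to Case B."""
    return transmitted(l1, tau_v, n, f_cov) / transmitted(l2, tau_v, n, f_cov)

# With f_cov = 0.95 and n = 1.3, the Balmer decrement saturates and turns
# over at much lower tau_V than the PaB/Ha ratio does:
for tau_v in (1.0, 5.0, 20.0):
    print(tau_v,
          enhancement("Ha", "Hb", tau_v, 1.3, 0.95),
          enhancement("PaB", "Ha", tau_v, 1.3, 0.95))
\end{verbatim}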
However, in each galaxy, a lower covering fraction implies a steeper dust law slope and higher normalization. Since substantially sub-unity covering fractions would imply very extreme dust laws, they are not the preferred explanation for the line ratios in this sample. For standard dust law slopes, high but in some cases sub-unity covering fractions ($\gtrsim$97\%) are required by our analysis. \subsection{Slit Loss Correction Underestimates} \label{sec:slitlosses} On the other hand, observational biases may be driving the unusual line ratios within the sample. While the Pa$\beta$\ measurements represent disk-integrated values, the H$\alpha$\ and H$\beta$\ fluxes were derived using a spectroscopic slit and were then corrected for both throughput and slit losses as described in Section~\ref{sec:sample}. However, this correction assumed that there was no difference in equivalent width between regions inside and outside of the slit. While this assumption was reasonable on average given a typical galaxy size of $r_{eff} < 0.95$\arcsec\ \citep{weiner2007}, it may be violated in galaxies that are large compared to the size of the slit or with line emission that is offset or more extended than the continuum. The resulting variable equivalent width could lead to larger slit losses for line emission than for continuum. In that case, the assumption of constant equivalent width across the galaxy would result in an underestimate of the emission line slit loss correction for the Balmer line fluxes and, as a result, a measured Pa$\beta$/H$\alpha$\ enhancement. Figure~\ref{fig:fiduciallineratios} (right panel) shows that at least half of the galaxy sample has a Pa$\beta$/H$\alpha$\ enhancement that cannot be explained by acceptable dust laws; it therefore seems plausible that the assumption of constant equivalent width versus position may not be valid for some of these galaxies. To assess the relative importance of slit losses, we empirically estimated the effective slit loss correction that was applied previously (as described in Section~\ref{sec:sample}) using the known size and morphology of the galaxy \citep{vanderwel2012} and the width of the spectroscopic slit employed in the ground-based observations \citep[1\arcsec;][]{wirth2004}. One complication is that while the ground-based observations used a Keck/DEIMOS multislit mask at a particular position angle, the individual slitlets were milled at an orientation within $\pm$30 degrees of that angle. Although the stated goal was to align the slit with the major axis of the galaxy, when possible, the resulting position angle of the Keck/DEIMOS slitlets relative to each galaxy's major axis is not specified. We therefore computed the slit losses for both the best (and more likely) case, with the slitlet fully aligned with the galaxy major axis resulting in minimal slit losses, and the worst case, with the slitlet perpendicular to the galaxy major axis resulting in maximal slit losses. In all cases, we assumed the galaxy was centered in the slitlet. While these empirical slit loss corrections could be underestimated, for example, in galaxies that depart from a single Sersic profile due to strong dust lanes, they nonetheless provide an indication of which galaxies will be most susceptible to errors in the slit loss correction. The empirically estimated slit losses, along with the tabulated $r_{eff}$ values \citep{vanderwel2012}, are given in Table~\ref{tab:empiricalslitlosses}. 
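For a rough sense of scale, the toy estimate below (our own simplification: a circular Gaussian light profile with half-light radius $r_{eff}$, centered in an infinitely long slit, rather than the S\'ersic models actually used) shows how quickly slit losses grow with galaxy size:
\begin{verbatim}
from math import erf, sqrt, log

def gaussian_slit_loss(r_eff, slit_width=1.0):
    """Light fraction falling outside a slit (arcsec), for a circular
    Gaussian whose half-light radius is r_eff (sigma = r_eff/1.177)."""
    sigma = r_eff / sqrt(2.0 * log(2.0))
    captured = erf(0.5 * slit_width / (sigma * sqrt(2.0)))
    return 1.0 - captured

for r_eff in (0.2, 0.5, 1.0, 1.5):
    print(r_eff, round(gaussian_slit_loss(r_eff), 2))
# ~0.0 at r_eff = 0.2", rising to ~0.7 at r_eff = 1.5"
\end{verbatim}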
In the best case, the slit losses ranged from 0\% to 49\%, while in the worst case they ranged from 2\% to 68\%. Some galaxies are small compared to the slit (at least as measured by continuum $r_{eff}$), resulting in minimal empirical slit loss corrections. Interestingly, both outlier galaxies had essentially no slit losses due to their small effective radii ($r_{eff}\sim0.2$\arcsec). On the other hand, five of the galaxies are large compared to the slit, resulting in large ($>$50\%, in the worst case) empirical slit loss corrections. This suggests that variable equivalent width or line emission that is more extended than or offset from the continuum could plausibly have led to an underestimate in the slit loss correction in these cases. To further explore the possibility of over- or underestimated slit loss corrections in this sample, we make the following adjustments to the analytic calculation. We use $f_\mathrm{slit}$ as a factor that accounts for any additional uncorrected slit losses, with a range from -0.99 to 0.99. Physically, a negative value of $f_\mathrm{slit}$ corresponds to the situation where the emission line equivalent width is smaller outside the slit than inside, resulting in a previous overcorrection, whereas a positive $f_\mathrm{slit}$ means the line equivalent width is larger outside the slit, leading the previous correction to be underestimated. We then write down the tabulated line flux ($F_{\lambda}$, including the nominal slit loss correction) as a function of the true line luminosity ($L_{\lambda}$, fully slit-loss-corrected) for the two Balmer lines: \begin{equation} \label{eq:haslit} F_{H\alpha} = L_{H\alpha} e^{-\tau_{V}(\lambda_{H\alpha}/5500\mathrm{\AA})^{-n}} \times \frac{(1-f_\mathrm{slit})}{4 \pi d_{L}^2} \end{equation} \begin{equation} \label{eq:hbslit} F_{H\beta} = L_{H\beta} e^{-\tau_{V}(\lambda_{H\beta}/5500\mathrm{\AA})^{-n}} \times \frac{(1-f_\mathrm{slit})}{4 \pi d_{L}^2} \end{equation} We continue to use Equation~\ref{eq:pab} for Pa$\beta$, as the disk-integrated slitless grism measurements are unaffected by slit losses. The corresponding Pa$\beta$/H$\alpha$\ line ratio is therefore: \begin{equation} \label{eq:pabharatioslit} \frac{F_{Pa\beta}/F_{H\alpha}}{L_{Pa\beta}/L_{H\alpha}} = \frac{e^{-\tau_{V}[(\lambda_{Pa\beta}/5500\mathrm{\AA})^{-n} - (\lambda_{H\alpha}/5500\mathrm{\AA})^{-n}]}}{(1-f_\mathrm{slit})} \end{equation} The H$\alpha$/H$\beta$\ ratio is still given by Equation~\ref{eq:hahbratio} as the slit losses are assumed to be the same and therefore cancel out for the two Balmer lines. We then calculate what range of $f_\mathrm{slit}$ values would be required in order to produce an acceptable solution. As before, we require non-negative $n$ and $\tau_{V}$, and impose a limit on the implied star formation rate of 10 M$_{\odot}$ yr$^{-1}$. The results are shown in Figure~\ref{fig:slitlosses} and in Table~\ref{tab:slitlossunderestimates}, along with the estimated stellar masses from \citet{skelton14}. One of the outlier galaxies (GN5 33249) returned unphysical results as before; this is not surprising as the H$\alpha$/H$\beta$\ ratio for this galaxy is below the Case~B intrinsic value, so there is no slit loss correction that can bring it into the physical regime. The other outlier galaxy (GN3 35455) can be explained but only in the case of a high slit loss correction overestimate ($f_\mathrm{slit}<-0.71$), which is implausible given its small physical size ($r_{eff}\sim0.22$\arcsec). 
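For intuition about this inversion: at fixed slope $n$, the Balmer decrement alone fixes $\tau_{V}$ (the slit factor cancels), and the Pa$\beta$/H$\alpha$\ ratio then fixes $f_\mathrm{slit}$. A compact sketch (our own; the input ratios below are hypothetical, not values from Table~\ref{tab:slitlossunderestimates}) is:
\begin{verbatim}
import numpy as np

LAM = {"PaB": 12822.0, "Ha": 6563.0, "Hb": 4861.0}
R_HAHB_0, R_PABHA_0 = 2.86, 1.0 / 17.6    # intrinsic Case B ratios

def u(line, n):
    return (LAM[line] / 5500.0) ** (-n)

def implied_fslit(r_hahb, r_pabha, n):
    """Additional slit-loss factor implied by the two ratios at fixed n."""
    # Balmer decrement -> tau_V (slit losses cancel between Ha and Hb):
    tau_v = -np.log(r_hahb / R_HAHB_0) / (u("Ha", n) - u("Hb", n))
    # PaB/Ha ratio -> f_slit, inverting the slit-loss factor:
    dust_term = np.exp(-tau_v * (u("PaB", n) - u("Ha", n)))
    return 1.0 - dust_term / (r_pabha / R_PABHA_0)

# Hypothetical galaxy with a PaB/Ha excess, evaluated at n = 1.3:
print(implied_fslit(r_hahb=4.0, r_pabha=0.15, n=1.3))   # ~0.43
\end{verbatim}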
The remaining nine galaxies show a range of additional slit loss corrections. In three cases, allowable $f_\mathrm{slit}$ values can be either positive or negative, whereas in six cases only $f_\mathrm{slit} > 0$ (a slit loss correction underestimate) is permitted. Correcting for such slit loss underestimates generically reduces the Pa$\beta$/H$\alpha$\ flux ratio towards the intrinsic Case~B value without modifying the Balmer decrement, thereby changing the best-fit slope and normalization. Thus, correcting for slit losses that may have been moderately over- or underestimated previously may bring most, although not all, of the galaxy sample into agreement with standard dust laws. \section{Discussion} \label{sec:discussion} Using an analytic approach, we showed that four of the eleven galaxies are consistent with a simple power-law dust attenuation law. However, the required slopes ($n$) ranged from much flatter than expected for the diffuse ISM ($n=0.7$) to much steeper than expected for stellar birth clouds ($n=1.3$). A closer look revealed that the remaining seven galaxies have unusual line ratios that are far from the predicted locus using standard dust law slopes. While the current sample of galaxies may be biased towards higher Pa$\beta$/H$\alpha$\ and lower H$\alpha$/H$\beta$\ ratios due to the fact that it was selected by requiring detections in Pa$\beta$, H$\alpha$, and H$\beta$\ \citep{cleri2020}, it nonetheless provides a useful opportunity to explore the physical interpretation of these more extreme line ratios. \subsection{Astrophysical and Observational Considerations} \citet{cleri2020} discussed the possibility that these unusual line ratios are the result of sub-unity covering fractions of dusty ISM. This might occur, for example, if a galaxy has a prominent dust lane \citep{cleri2020} or a collimated, dusty outflow \citep[e.g.,][]{walter2002}. In this case, relatively dust-free lines-of-sight would contribute Paschen-to-Balmer line ratios close to their intrinsic Case~B values, but dusty lines-of-sight would show highly attenuated Balmer relative to Paschen emission, creating an enhancement of the overall observed Paschen-to-Balmer line ratio. Our analysis has suggested that sub-unity covering fractions could plausibly explain the extreme Pa$\beta$/H$\alpha$\ line ratios in at least some of the galaxy sample. To further explore how this works, in Figure~\ref{fig:hookfigure}, we show the resultant line ratios for four different covering fractions $f_{cov}$, along with the observed line ratios for the galaxy sample. In each panel, we assume a different dust attenuation law slope $n$, ranging from shallower ($n=0.5$) to steeper ($n=2.0$) than the standard range ($n=0.7-1.3$). The resultant curves correspond to a range of dust law normalizations ($0<\tau_{V}<30$), with the direction of increasing $\tau_{V}$ indicated with an arrow in the bottom righthand panel. With a unity covering fraction, the line ratios increase with normalization, as higher dust column leads to progressively greater attenuation of the shorter wavelength H$\alpha$\ and H$\beta$\ emission lines. For slightly sub-unity covering fractions ($f_{cov}\gtrsim0.9$), however, the curves form loops in the line ratio diagram. This comes about because at sufficient dust columns, the few lines-of-sight that are dust-free start to dominate the observed flux, causing the observed line ratios to converge back towards the intrinsic Case~B values. 
The dust law normalization where this occurs, however, depends on the line ratio, with the shorter wavelength H$\alpha$/H$\beta$\ ratio beginning to return to intrinsic values at lower dust columns than the longer wavelength Pa$\beta$/H$\alpha$\ ratio. This can be seen in Figure~\ref{fig:lineratiosVtauv}, where for non-unity covering fractions, the line ratios initially increase as a function of dust law normalization, but then decrease again past a critical value, with the Pa$\beta$/H$\alpha$\ ratio turning over at higher normalizations than the H$\alpha$/H$\beta$\ ratio. We note that this effect occurs only for very high but non-unity covering fractions; at $f_{cov}\lesssim0.9$, the larger fraction of dust-free lines-of-sight causes the observed line ratios to remain close to their intrinsic Case~B values. Thus, sub-unity covering fractions do seem to be a plausible explanation for most of the unusual line ratios observed in this sample. We note that a high or even unity covering fraction is favored for the galaxy with the most extreme Pa$\beta$/H$\alpha$\ ratio (GN3 34456), which shows a clear dust lane in the existing GOODS-N and CANDELS imaging. Future measurements with more than three emission lines will allow for independent constraints on the covering fraction of dusty ISM in galaxies. A second possibility was that underestimates in ground-based slit loss corrections suppressed the measured Balmer line emission, which, when compared with space-based full-galaxy Pa$\beta$\ measurements, resulted in an enhancement of the observed Pa$\beta$/H$\alpha$\ line ratio. In exploring this possibility further, we found that a range of additional slit loss corrections ($f_\mathrm{slit}$) are allowed by the data. In particular, for three out of the eleven galaxies the allowed range straddled $f_\mathrm{slit}=0$ and standard dust law slopes, suggesting that the applied slit loss corrections were likely correct. For six of the galaxies, however, additional slit loss corrections of at least 6-93\%, depending on the galaxy, were required in order for the line ratios to be consistent with acceptable dust law parameters. Since the nominal slit loss correction described in Section~\ref{sec:sample} assumed a constant emission line equivalent width across the galaxy (both inside and outside the slit), deviations from this assumption would lead to errors in the resulting flux ratios. This would be the case if the line emission is more or less extended than the continuum emission from the galaxy. Indications from the 3D-HST survey are that this is the case at $z\sim1$, with H$\alpha$\ emission being somewhat more extended than the $R$-band continuum \citep{nelson2012,nelson2013}. In addition, recent results from the CLEAR survey suggest that the relative compactness of the line and continuum emission may correlate with galaxy mass (J. Matharu et al. 2021, in prep.). Specifically, for galaxies less massive than $\mathrm{log}(M_{*}/M_{\odot}) \sim 9.2$ (8/11 galaxies in the current sample), the H$\alpha$\ emission appears to be comparable to or more compact on average than the continuum, which would lead to a slit loss overestimate (negative $f_\mathrm{slit}$). The applicability of this result to our galaxy sample is unclear, for while this subset includes the three galaxies with negative allowed $f_\mathrm{slit}$, it also includes two galaxies for which only large positive $f_\mathrm{slit}$ values are allowed. 
On the other hand, three galaxies in the sample are more massive than $\mathrm{log}(M_{*}/M_{\odot}) \sim 9.2$, where there is evidence that H$\alpha$\ emission will on average be more extended than the continuum. In this case, one would expect the slit loss correction to be underestimated (positive $f_\mathrm{slit}$) leading to enhanced Pa$\beta$/H$\alpha$\ ratios. Indeed, all three of these more massive galaxies require large positive $f_\mathrm{slit}$ values, and intriguingly, two of them (GN3 34456, GN3 34157) represent the highest two Pa$\beta$/H$\alpha$\ ratios in the sample. Thus, while it is difficult to draw strong conclusions, it appears that variations in equivalent width across the galaxy, and the resulting underestimated slit loss corrections, could explain some of the most enhanced Pa$\beta$/H$\alpha$\ line ratios. Clearly, it would be best to circumvent these issues by removing the need for slit loss corrections altogether. This could be done either by using disk-integrated Balmer line fluxes from narrowband imaging (requiring a correction for [\hbox{{\rm N}\kern 0.1em{\sc ii}}]\ in the case of H$\alpha$), integral field spectroscopy (IFS), or slitless grism observations, or by obtaining all three line measurements with a consistent spectroscopic slit from space or with appropriate correction for atmospheric dispersion. In addition, high signal-to-noise constraints on all three lines will be important for deriving meaningful constraints on the dust law normalization and slope. From the existing data on this sample, one galaxy (GN2 19221) is in that position, where the effects of sub-unity covering fraction are likely negligible, the slit losses are plausibly well-accounted for, and the observational uncertainties are small enough that the galaxy is constrained to have $n\sim1.3$ and $\tau_{V}\sim1$. This galaxy therefore provides a promising example of what will be possible in the near future. \subsection{Galaxy Morphologies} It is natural to wonder whether the morphologies across the sample can explain some of the variation in recovered dust law parameters. To explore this, Figure~\ref{fig:galimages} replicates the Pa$\beta$/H$\alpha$\ vs. H$\alpha$/H$\beta$\ line ratio plot, but with each galaxy indicated using its composite CANDELS $iYH$-band imaging. However, no clear trends emerge. The four galaxies most consistent with standard dust law slopes (GN2 19221, GN1 37683, GN2 18157, GN3 34368) show a range of morphologies with examples of knotty star-forming disks (GN2 19221, GN3 34368), one edge-on galaxy (GN2 18157), and one that resembles a ``chain'' galaxy (GN1 37683). The effective radii for this subset vary widely ($r_{eff}=0.3-1.20\arcsec$; see Table 2). For the six galaxies at or near prior boundaries and the one with no solution, the morphologies also span a broad range, including knotty star-forming disks (GN2 15610, GN4 24611, GN3 34157), two that are very compact (GN5 33249, GN3 35455), and two edge-on systems (GN3 33511, GN3 34456), with the latter showing a particularly prominent dust lane. The effective radii span a similarly broad range ($r_{eff}=0.2-1.49\arcsec$). Thus, galaxy morphology is not obviously correlated with the dust law constraints within this sample. \subsection{Non-universal Dust Laws} In the existing analysis, we found that some galaxies are in agreement with standard dust laws, within the broad confidence intervals. 
In other cases, the solutions are skewed to steeper or shallower dust law slopes, and when treating the entire sample as a single composite galaxy, we found a preference for shallower dust law slopes. Thus, taking the results from our relatively small sample of galaxies at face value, we see tentative evidence for a variety of dust laws in galaxies at $z\sim0.2$. This would not be entirely surprising. Even in the local universe we see ample indications of variations, with the Milky Way dust attenuation curve differing substantially in shape from those of the neighboring Large and Small Magellanic Clouds \citep[LMC, SMC; e.g.,][]{pei1992,gordon2003}. At higher redshift ($z\sim0.5-3$), SED-fitting approaches have presented evidence for the non-universality of the dust law in galaxies \citep{kriek2013,salmon2016}. This is perhaps to be expected, as the dust extinction law will reflect the dust composition and the grain size distribution, which will in turn reflect the dust production processes in the galaxy. Moreover, the geometry of the ISM can lead to different emergent dust attenuation curves in the optical and near-infrared even if the intrinsic extinction curves are the same, implying that attenuation curves are not uniquely determined even if the stellar mass, star formation rate, and metallicity are known \citep{narayanan2018}. We have shown that constraining dust laws using more than two emission lines can, in principle, provide a nearly model-independent, complementary way to study the intrinsic variability in dust attenuation law parameters. In doing so, we have used the assumption of a power-law dust law. This was a simplification but was useful for exploring the three-line method along with various astrophysical and observational challenges. However, in reality the dust law will have a more complicated shape and potentially additional features, e.g., the 2175\AA\ bump. It is possible that some of the other galaxies in the sample could be better explained by a different dust law. It would therefore be beneficial to further investigate the detailed shape of the nebular dust law once more emission line constraints are available for a larger sample of galaxies. \subsection{Leveraging the Three-Line Approach} This work motivates a number of additional investigations to leverage the potential power of multiple emission lines to put constraints on the shape of the dust law in galaxies. In the local universe, Pa$\beta$\ measurements from HST already exist for a few galaxies \citep[e.g., NGC5194 and NGC6946;][]{kessler2020}. Narrowband imaging or, better, IFS observations of these galaxies could be used to study the dust law in individual apertures across the galaxy disks, while disk-integrated measurements could be compared with those derived at higher redshifts. Even more exciting is the prospect of using the James Webb Space Telescope (JWST) to follow up order-of-magnitude larger samples of intermediate to high-redshift galaxies ($z\approx0.3-2$), where at least three nebular lines can be observed using identical NIRSpec Microshutter Array (MSA) slitlets, the NIRSpec IFU, or the NIRISS and NIRCam grisms, removing the issue of differential slit losses between key lines. Using deeper near-infrared constraints on Paschen lines from JWST and a proper treatment of upper limits, we can obtain more representative and largely model-independent constraints on the slope and normalization of the dust law of galaxies at the peak of cosmic star formation. 
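As a concrete illustration of the three-line method, the following minimal sketch inverts a pair of observed ratios for ($n$, $\tau_V$) under the same simple power-law foreground-screen assumption used above; it is a plausible reconstruction for illustration, not necessarily the exact analytic equations applied earlier, and the intrinsic ratios and input values are representative only:

\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

LAM = {"Hb": 4861.0, "Ha": 6563.0, "PaB": 12820.0}  # rest wavelengths [A]
R_INT = {"HaHb": 2.86, "PaBHa": 0.057}              # approximate Case B ratios

def x(lam, n):
    # Wavelength dependence of a power-law screen: tau = tau_V * x(lambda, n)
    return (lam / 5500.0) ** -n

def solve_dust_law(ha_hb_obs, pab_ha_obs):
    g1 = np.log(ha_hb_obs / R_INT["HaHb"])    # = tau_V * (x_Hb - x_Ha)
    g2 = np.log(pab_ha_obs / R_INT["PaBHa"])  # = tau_V * (x_Ha - x_PaB)
    # tau_V cancels in g1/g2, leaving a one-dimensional root find for n
    def ratio_gap(n):
        return ((x(LAM["Hb"], n) - x(LAM["Ha"], n))
                / (x(LAM["Ha"], n) - x(LAM["PaB"], n)) - g1 / g2)
    n = brentq(ratio_gap, 0.05, 5.0)  # assumes the root lies in this bracket
    tau_V = g1 / (x(LAM["Hb"], n) - x(LAM["Ha"], n))
    return n, tau_V

# Hypothetical dust-enhanced ratios, for illustration only
n, tau_V = solve_dust_law(ha_hb_obs=4.0, pab_ha_obs=0.08)
print(f"n = {n:.2f}, tau_V = {tau_V:.2f}")
\end{verbatim}

For these illustrative inputs the inversion returns $n\approx1.7$ and $\tau_V\approx0.7$, showing how two independent ratios jointly pin down both the slope and the normalization when the measurement uncertainties are small.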
\section{Conclusions} \label{sec:conclusions} In this paper, we explored both the power of and practical challenges associated with using three hydrogen emission lines to constrain not just the normalization but also the slope of the dust attenuation law. Using a sample of eleven galaxies with existing H$\alpha$, H$\beta$, and Pa$\beta$\ measurements, and accounting for observational uncertainties, we showed that the dust law slope and normalization of one galaxy are well-constrained, an indication of what will be possible in future work. Eight of the other ten galaxies can be explained using a simple power-law dust attenuation curve given current observational uncertainties, although some prefer dust law slopes that are significantly steeper or shallower than expected from theoretical arguments. This may suggest that low-mass, low-redshift galaxies have a diversity of dust laws, reminiscent of the claims of varying dust laws in the higher redshift Universe. However, we then explored other astrophysical or observational biases that may influence the derived results. Using an analytic approach, we showed that high but sub-unity ($>$97\%) covering fractions of dusty ISM can explain some of the unusual ratios seen in this sample of galaxies, but that differential slit losses between key lines may also be contributing to the results. This work therefore emphasizes how the three-line approach can provide important complementary constraints on the shape of the dust attenuation law, but that a naive implementation can overlook potentially serious systematic uncertainties. With deeper measurements and a consistent observational set-up across all emission lines, we can hope to constrain the slope and normalization of the dust attenuation law, and potentially, with additional recombination line constraints, the dust covering fraction in individual galaxies. This in turn will improve measurements of key galaxy properties including gas-phase metallicities, ionization parameters, and instantaneous star formation rates. \section{acknowledgments} The authors would like to thank Ben Weiner, Jasleen Matharu, and Sydney Lower for helpful discussions, and the anonymous referee for useful suggestions that substantially improved the quality of this paper. Figures 1-6 were created using color-blind-friendly IDL color tables developed by Paul Tol (https://personal.sron.nl/~pault/). MKMP acknowledges support from NSF grant AAG-1813016, and KMF acknowledges support from NSF grant AAG-2006550. NJC and JRT acknowledge support from NSF grant CAREER-1945546 and NASA grant JWST-ERS-01345. This work is based on data obtained for the CLEAR program (GO-14227), the 3D-HST Treasury Program (GO 12177 and 12328), and the CANDELS Multi-Cycle Treasury Program by the NASA/ESA Hubble Space Telescope, which is operated by the Association of Universities for Research in Astronomy, Incorporated, under NASA contract NAS5-26555. \input{tab1.tex} \input{tab2.tex} \input{tab3.tex} \clearpage \begin{figure} \center \includegraphics[angle=0,width=7in]{f1-eps-converted-to.pdf} \caption{(Left) Dust law slopes and normalizations implied by the analytic approach for galaxies with acceptable values (non-negative $n$ and $\tau_{V}$ with SFR$<$10 M$_{\odot}$ yr$^{-1}$). For one galaxy (GN3 34368; star), we show the family of solutions from the individual Pa$\beta$/H$\alpha$ (red dotted line) and H$\alpha$/H$\beta$ (orange dashed line) ratios separately, with the joint solution where the two curves cross. 
Standard dust law slopes expected for diffuse ISM ($n=0.7$) and stellar birth clouds ($n=1.3$) are shown \citep[light blue dashed lines;][]{charlot2000}. (Right) Observed emission line ratios for the eleven galaxies in the \citet{cleri2020} sample (blue filled symbols). The expected line ratios for standard dust law slopes (orange; $n=0.7-1.3$) and a range of normalizations (red; $\tau_{V}=0.1-10$) are shown, with the arrows indicating the direction of increasing values. The intrinsic line ratios, assuming Case~B recombination, are shown with dashed black lines; two galaxies are $\sim$1-3$\sigma$ outliers, with measurements lying below these values, i.e., in the ``unphysical'' regime (grey shaded region). The four galaxies that are well-explained by a simple power-law dust attenuation law are shown in both panels (filled dark blue symbols), with the example galaxy indicated (GN3 34368; star). Individual galaxy ID numbers are shown in both panels. } \label{fig:fiduciallineratios} \end{figure} \begin{figure} \center \includegraphics[angle=0,width=5.5in]{f4-eps-converted-to.pdf} \caption{Best-fit dust law slope ($n$) and normalization ($\tau_{V}$) accounting for observational uncertainties for each galaxy, and in the lower right panel, for a composite generated by computing the error-weighted mean and uncertainty of the full sample. Contours contain 68.3\%, 95.4\%, and 99.7\% of the total likelihood given the assumed priors, and the corresponding location of the minimum $\chi^2$ is shown (filled red circles). Standard dust law slopes expected for diffuse ISM ($n=0.7$) and stellar birth clouds ($n=1.3$) are shown \citep[light blue dashed lines;][]{charlot2000}. One galaxy (GN2 19221) is well-constrained with the existing data. Most of the sample overlaps standard dust law slopes, given the current observational uncertainties, although some individual galaxies (and the full-sample composite) are more consistent with steeper or shallower dust laws. } \label{fig:gridplot} \end{figure} \begin{figure} \center \includegraphics[angle=0,width=5.5in]{f2-eps-converted-to.pdf} \caption{Range of covering fractions ($f_\mathrm{cov}$, indicated with the color bar) that produce acceptable results from the analytic approach (Equations~\ref{eq:fcovratioPabHa}-\ref{eq:fcovratioHaHb}). Dotted lines connect the minimum (open circle) and maximum (filled circle) covering fraction allowed for each galaxy. Standard dust law slopes expected for diffuse ISM ($n=0.7$) and stellar birth clouds ($n=1.3$) are shown \citep[light blue dashed lines;][]{charlot2000}. While allowing for $f_\mathrm{cov}<1$ permits an acceptable solution for 9/11 galaxies in the sample, substantially sub-unity covering fractions imply very extreme dust law slopes ($n>1.3$). For standard dust law slopes, high but in some cases sub-unity covering fractions ($0.97<f_{cov}<1$) are indicated. } \label{fig:coveringfraction} \end{figure} \begin{figure} \center \includegraphics[angle=0,width=5.5in]{f3-eps-converted-to.pdf} \caption{(Left) Range of allowed additional slit loss corrections ($f_\mathrm{slit}$, indicated with the color bar) that produce acceptable results, i.e., non-negative $n$ and $\tau_{V}$ with SFR$<$10 M$_{\odot}$ yr$^{-1}$, from the analytic approach (Equations~\ref{eq:hahbratio} and \ref{eq:pabharatioslit}). 
Dashed and dotted lines connect the minimum (open) and maximum (filled) $f_\mathrm{slit}$ values for each galaxy, where dashed lines indicate negative $f_\mathrm{slit}$ values (a previous overcorrection) and dotted lines indicate positive $f_\mathrm{slit}$ values (a previous undercorrection). Standard dust law slopes expected for diffuse ISM ($n=0.7$) and stellar birth clouds ($n=1.3$) are shown \citep[light blue dashed lines;][]{charlot2000}. Moderate slit loss over- or undercorrection may be present in the tabulated Balmer fluxes, assuming that standard dust laws apply. } \label{fig:slitlosses} \end{figure} \begin{figure} \center \includegraphics[angle=0,width=5.5in]{f5-eps-converted-to.pdf} \caption{Effect of covering fraction ($f_{cov}$) on the Pa$\beta$/H$\alpha$\ versus H$\alpha$/H$\beta$\ line ratios (orange/red lines), for a different value of the dust attenuation law slope ($n$) in each panel. The direction of increasing dust attenuation law normalization ($\tau_{V}$) is indicated with an arrow in the lower right ($n=2.0$) panel. For comparison, the observed emission line ratios for the eleven galaxies in the \citet{cleri2020} sample are shown as in Figure~\ref{fig:fiduciallineratios} (blue filled symbols). The intrinsic line ratios, assuming Case~B recombination, are shown with dashed black lines, with the ``unphysical'' regime indicated (grey shaded region). Slightly sub-unity covering fractions ($f_{cov}\gtrsim0.9$) lead to extreme line ratios, similar to what is seen in the galaxy sample. However, for substantially sub-unity covering fractions ($f_{cov}\lesssim0.9$) the emission line ratios return to the Case~B intrinsic values due to the higher fraction of unattenuated sightlines. } \label{fig:hookfigure} \end{figure} \begin{figure} \center \includegraphics[angle=0,width=5.5in]{f6-eps-converted-to.pdf} \caption{Normalized line ratios as a function of dust attenuation law normalization ($\tau_{V}$), for different values of covering fraction ($f_{cov}$). Pa$\beta$/H$\alpha$\ and H$\alpha$/H$\beta$\ ratios are shown as solid and dashed lines, respectively. For slightly sub-unity covering fractions ($f_{cov}\gtrsim0.9$), both line ratios increase with increasing dust attenuation up to a certain maximum value, after which they decline back to their intrinsic values. However, the shorter wavelength pair, H$\alpha$/H$\beta$, reaches its peak ratio at lower dust attenuation than Pa$\beta$/H$\alpha$. Thus, slightly sub-unity covering fractions combined with moderate to high dust attenuation can lead to unusually high Pa$\beta$/H$\alpha$\ ratios, as seen in the observed sample. } \label{fig:lineratiosVtauv} \end{figure} \begin{figure} \center \includegraphics[angle=0,width=5.5in]{f7.pdf} \caption{Observed Pa$\beta$/H$\alpha$\ and H$\alpha$/H$\beta$\ ratios for the eleven galaxies in the \citet{cleri2020} sample, as shown in Figure~\ref{fig:fiduciallineratios} (right) but here indicated with $iYH$-band composite images. The intrinsic Case~B recombination line ratios are shown as dotted black lines. The sample includes a diverse range of galaxy morphologies but no obvious trends with line ratios. The highest Pa$\beta$/H$\alpha$\ ratio in the sample, GN3 34456, shows a significant dust lane through its center. } \label{fig:galimages} \end{figure}
\section{Introduction} \label{sect:Introduction} Recent studies of the interstellar medium have revealed a number of variations on scales smaller than 1\,pc, indicating the existence of small-scale structure. These have been reported in the form of temporal variations and differences in column densities over small spatial scales. Interferometric data of neutral hydrogen in 1976 showed variations in \ion{H}{i} over small scales \citep{Dieter}. Since then a number of studies of small-scale structure in \ion{H}{i} have been performed, either through time-variable 21-cm absorption lines against pulsars \citep[e.g.,][]{Clifton88,Frail94}, sampling the gas on scales of tens to hundreds of AU, or through interferometric observations with the Very Long Baseline Array (VLBA) \citep[e.g.,][]{Faison:Goss}, finding optical depth variations on about the same scales. Optical spectra at high spectral resolution have been taken of stars in globular clusters, providing a grid of very close sight-lines (angular scales of only a few arcsec) to detect structure in interstellar gas on scales down to 0.01\,pc. In these studies column density variations have been found mainly in Ca\,{\sc ii} and Na\,{\sc i} lines, and in differential reddening \citep[e.g.][]{Cohen,Bates:1990,Bates:1991,Bates:1995,Meyer:1999,Andrews}. The smallest projected scales have been probed using nearby binaries and multiple star systems with separations $<20$\,arcsec \citep[e.g.,][]{Meyer:1990,Watson:Meyer,Lauroesch:Meyer2000}. When measurements on small-scale structure were repeated, temporal variations were found in several cases. Examples have been presented in the above-cited investigations by, e.g., \citet{Lauroesch:Meyer2000} and \citet{Lauroesch:2003}. In some other cases, where a proper-motion star was the target, the repeated observations have revealed the spatial variations in the gas on scales smaller than the ones provided by multiple stars and binaries, as the lines of sight sample different parts of the same cloud at different times \citep{Welty2001,Welty2007}. \clearpage \begin{figure*} \centering \includegraphics[scale=0.65]{3454f1.ps} \caption{H$\alpha$ image of six background stars (star symbols) in their LMC environment with their Dec (in degrees) and RA (in hours) coordinates. Note that the luminous gas around the stars is warm ionised gas of the LMC, which is not relevant for our study. (This figure was adapted from \citet{Atlas}.)} \label{fig:3454f1} \end{figure*} \begin{table*}[h] \begin{center} \caption{Information about the FUSE data as retrieved from the MAST archive. 
} \label{tab:Stars} \begin{tabular}{l l l l l l l l l} \hline\hline Star & RA$^{\mathrm{a}}$ & Dec$^{\mathrm{a}}$ & Spectral$^{\mathrm{a}}$ & $V$ & date & programme & exp.time & aperture\\ & (J2000) & (J2000) & type & [mag] & & & [ks] & \\ \hline \\ Sk\,-67$^\circ$111 & 05 26 48.00 & -67 29 33.0 & O6 Iafpe & 12.57 & 2002-03-20 & C155 & 7.98 & LWRS \\ & & & & & 2002-03-20 & C155 & 5.58 & LWRS \\ & & & & & 2002-03-19 & C155 & 7.91 & LWRS \\ & & & & & 2002-03-19 & C155 & 6.83 & LWRS \\ & & & & & 2002-03-20 & C155 & 4.52 & LWRS \\ & & & & & 2002-03-19 & C155 & 4.35 & LWRS \\ LH\,54-425 & 05 26 24.25 & -67 30 17.2 & O3IIIf+O5 & 13.08 & 2008-05-07 & F321 & 9.42 & LWRS \\ & & & & & 2008-05-07 & F321 & 23.20 & LWRS \\ & & & & & 2008-05-07 & F321 & 17.01 & LWRS \\ & & & & & 2008-05-07 & F321 & 19.79 & LWRS \\ Sk\,-67$^\circ$107 & 05 26 24.00 & -67 30 00.0 & B0 & 12.50 & 2000-02-09 & A111 & 11.18 & MDRS \\ Sk\,-67$^\circ$106 & 05 26 13.76 & -67 30 15.1 & B0 & 11.78 & 2000-02-09 & A111 & 11.27 & MDRS \\ Sk\,-67$^\circ$104 & 05 26 04.00 & -67 29 56.5 & O8 I & 11.44 & 1999-12-17 & P103 & 5.09 & LWRS \\ Sk\,-67$^\circ$101 & 05 25 56.36 & -67 30 28.7 & O8 II & 12.63 & 1999-12-15 & P117 & 1.26 & LWRS \\ & & & & & 1999-12-20 & P117 & 6.25 & LWRS \\ & & & & & 2000-09-29 & P117 & 4.38 & LWRS \\ \hline \end{tabular} \begin{list}{}{} \item[$^{\mathrm{a}}$]The coordinates and spectral types of the background LMC stars are from \citet{Atlas}. \end{list} \end{center} \end{table*} \clearpage Despite the number of detections, the nature of these small-scale structures and their ubiquity are still a subject of study. It has not always been clear whether they reflect the variation in H\,{\sc i} column density, or whether they are caused by changes in the physical conditions of the gas over small scales. However, several models have been suggested for an ISM containing fine-scale structure, such as filamentary structures \citep{Heiles}, fractal geometries driven by turbulence \citep{Elmegreen1997}, or a separate population of cold self-gravitating clouds \citep{Walker:Wardle,Draine,Wardle:Walker}. In absorption-line spectroscopy most cases of temporal or spatial small-scale structure are observed in high-resolution optical spectra, which, however, often provide little information on the physical conditions in the gas. Using UV spectra, valuable information can be gained about the physical properties of the gas. From fine structure levels of \ion{C}{i}, \citet{Welty2007} discovered that the observed variations in Na\,{\sc i} and Ca\,{\sc ii} lines towards HD\,219188 reflect the variation in the density and ionisation in the gas, and do not trace high-density clumps. A different approach was used by \citet{Richter:2003b,Richter:2003a} who found, using the many absorption lines of molecular hydrogen in the Lyman and Werner bands, that the molecular gas in their studied lines of sight resides in compact filaments, which possibly correspond to the tiny-scale atomic structures in the diffuse ISM \citep{Richter:2003b}. The Large Magellanic Cloud (LMC) has many bright stars with small angular separation on the sky, and provides a good background for studying the gas in the disc and the halo of the Milky Way at small scales. Here we have chosen the six LMC stars Sk\,-67$^\circ$111, LH\,54-425, Sk\,-67$^\circ$107, Sk\,-67$^\circ$106, Sk\,-67$^\circ$104, and Sk\,-67$^\circ$101, with small angular separations, to study the structure of that foreground gas. 
We refer the interested reader to \citet{Atlas} for an atlas of FUSE spectra of the LMC sight-lines. This paper is organised as follows: In Sect.\,\ref{sect:Data} we describe the data used in this work. In Sect.\,\ref{sect:Handling of spectral data} we explain the LMC sight-line and the methods used to gain information from the spectra. In Sect.\,\ref{sect:H2} and Sect.\,\ref{sect:Metal} we discuss the results for molecular hydrogen and metal absorption, respectively. In Sect.\,\ref{sect:Density} we derive a density relation based on H$_2$ and O\,{\sc i} to investigate the density variations within the sight-lines, and combine that with the derived average density from the C\,{\sc i} excitation to understand the properties of the gas. We discuss the results and interpretations in Sect.\,\ref{sect:summary}. \section{Data} \label{sect:Data} We used far ultraviolet (FUV) absorption line data from the Far Ultraviolet Spectroscopic Explorer (FUSE) with relatively high S/N to analyse the spectral structure in six lines of sight with small angular separations. For four of our sight-lines high-resolution Space Telescope Imaging Spectrograph (STIS) observations were also available. In total we cover the wavelength range 905-1730\,{\AA}, where we can find most electronic transitions of many atomic species and molecular hydrogen. \begin{table}[tbh] \begin{center} \caption{S/N per resolution element measured within a few {\AA}ngstr{\"o}m around the listed wavelength.} \label{tab:SN} \begin{tabular}{l c c c c c c} \hline\hline Star & LiF:1A & 2A & 1B & 2B & SiC & STIS \\ & \hspace{-10mm}$\lambda$[{\AA}]= 1047 & 1148 & 1148 & 1047 & 980 & 1605 \\ \hline Sk\,-67$^\circ$111 & 97 & 74 & 66 & 71 & 55 & - \\ LH\,54-425 & 88 & 68 & 60 & 58 & 44 & - \\ Sk\,-67$^\circ$107 & 42 & 41 & 35 & 26 & 17 & 17 \\ Sk\,-67$^\circ$106 & 62 & 55 & 48 & 30 & 16 & 17 \\ Sk\,-67$^\circ$104 & 57 & 50 & 42 & 35 & 31 & 21 \\ Sk\,-67$^\circ$101 & 44 & 52 & 40 & 36 & 20 & 16 \\ \hline \end{tabular} \end{center} \end{table} The FUSE instrument consists of four co-aligned telescopes and two micro-channel plate detectors; the optics of two of the telescope channels are coated with Al+LiF, those of the other two with SiC, each coating reaching its maximum efficiency in a different part of the spectral range. The spectral data have a resolution of about 20 km\,s$^{-1}$\,(FWHM), and cover the wavelength range 905-1187\,{\AA}, of which 905-1100\,{\AA} are covered by the SiC channels and 1000-1187\,{\AA} by the Al+LiF channels. Each detector is divided into two segments, A and B. For detailed information about the instrument and observations see \citet{Moos} and \citet{Sahnow}. In order of right ascension our target stars are: Sk\,-67$^\circ$111, LH\,54-425, Sk\,-67$^\circ$107, Sk\,-67$^\circ$106, Sk\,-67$^\circ$104, and Sk\,-67$^\circ$101\ (see Fig.\,\ref{fig:3454f1}). These are all bright stars of spectral types O and B, lying on an almost straight line at constant declination, and spread over $5\arcmin$ in RA. Because the stars are separated mainly in their RA, we have considered the projected separations in that coordinate. The smallest separation, however, is between LH\,54-425\ and Sk\,-67$^\circ$107, which have almost the same RA; when comparing these two sight-lines we therefore have to take into account the true separation of $17\farcs2$ (essentially in declination). We retrieved the already reduced data from the MAST\footnote[1]{Multi-Mission Archive at Space Telescope: http://archive.stsci.edu/} archive and co-added them separately for each of the four co-aligned telescopes. 
For the information about the retrieved data as well as the properties of the stars, see Table\,\ref{tab:Stars}. The data were all reduced with {\sc calfuse} v3.2.1, except for LH\,54-425\ and Sk\,-67$^\circ$106, which were reduced with {\sc calfuse} v3.2.2. The typical S/N of the data varies for different detector channels and sight-lines (see Table\,\ref{tab:SN} for more information on the S/N of the FUSE and the STIS data). The data from the SiC channels have in general lower S/N, while the data quality is significantly higher for all the FUSE detectors for the Sk\,-67$^\circ$111\ and LH\,54-425\ sight-lines. Due to the different wavelength zero points in the calibrated data, the spectra of different stars and different cameras might shift within $\pm20$\,km\,s$^{-1}$. This shift, however, is constant over the whole spectral range and does not affect the line identification. The STIS observations were part of a programme for studying the N\,51\,D superbubble (Wakker et al., in preparation). Only the stars Sk\,-67$^\circ$107, Sk\,-67$^\circ$106, Sk\,-67$^\circ$104, and Sk\,-67$^\circ$101\ were observed in that programme. STIS provides a better resolution of $6.8$\,km\,s$^{-1}$, and covers the wavelength range longward of the FUSE spectrum from about 1150 to 1730\,{\AA}. \begin{figure*}[t] \centering \includegraphics[scale=0.93]{3454f2.ps} \caption{O\,{\sc i} absorption at 936.63 (from FUSE:SiC), 1039.23 (from FUSE:LiF), and 1302.17\,{\AA} (from STIS), together with the continuum fit, shown for all six stars (names given at right). The solid line is the continuum fit to which the spectrum has been normalised, the dashed lines show the upper and lower continuum choice. The bars at the top mark the known absorption component velocities for the O\,{\sc i} lines (see Sect.\,\ref{subsect:LMCline}).} \label{fig:3454f2} \end{figure*} \section{Handling of spectral data} \label{sect:Handling of spectral data} \subsection{LMC sight-line} \label{subsect:LMCline} The line of sight towards the LMC is rather complicated, because it contains several gas components. Knowing the absorption lines in the ISM with their transition wavelengths we are able to identify the different elements, and distinguish the different gas components through their radial velocities. We followed the comprehensive description given by, e.g., \citet{Savage:deBoer}. On a line of sight there is gas in the solar vicinity, normally seen around $v_{\rm rad} \simeq 0$ km\,s$^{-1}$\ (all velocities given are LSR velocities). Next is the intermediate velocity gas around $\simeq +60$ km\,s$^{-1}$\ on essentially all LMC lines of sight; this gas is likely a foreground intermediate-velocity cloud (IVC). Most lines of sight also show the presence of a high-velocity cloud (HVC) at $v_{\rm rad} \simeq +110$ km\,s$^{-1}$. The properties of the IVC and HVC will be discussed in Nasoudi-Shoar et al. (in preparation). Finally, the LMC gas itself shows its presence in the velocity range between +180 and +300 km\,s$^{-1}$. In this paper we analyse the Milky Way disc component at the velocity $v_{\rm rad}=16\pm 0.5$ km\,s$^{-1}$, where the various rotational levels of molecular hydrogen have relatively strong absorption and allow for a detailed study of the properties of the gas. The absolute wavelength calibration of the STIS data is much more reliable; we therefore base our radial velocities on these data. 
This reveals an additional weak component at $v_{\rm rad}=-23\pm 1.8$ km\,s$^{-1}$, which we describe only briefly, while concentrating on the analysis of the strong disc component. We refer to the absorptions with similar radial velocity as parts of the same cloud. \subsection{Column-density measurements} The continua of the spectra consist of the spectral structure of the background stars, with broad lines and P-Cygni profiles. Before measuring the equivalent widths we estimated a continuum level by eye, and normalised the spectra to unity with the programme {\sc spectralyzor} \citep{Ole}. In this way we eliminated the large scale variation of the continuum as well as the possible variations on scales less than 1{\AA}. The absorption lines from those parts of the spectra where the continuum placement is highly uncertain were considered with special caution or were excluded from the final analysis. For each velocity component the equivalent width of the absorption was measured either by integrating the pixel-to-pixel area up to the continuum level, or by Gaussian fits with {\sc midas alice} (only for unsaturated lines). Owing to the large number of spectral lines within the FUSE wavelength range and the wide velocity absorption range for each transition, there is considerable blending in the interstellar lines. For the absorption lines that needed deblending, a multi-Gaussian fit was used, except for the clearly saturated lines, where the former method was more suitable. For the final results we only took those lines into consideration where we found no blending or which could easily be decomposed by a multi-Gaussian fit. For the \ion{O}{i} line at 1302\,{\AA}, however, which is heavily saturated and also blended with the likewise saturated component at $-23$\,km\,s$^{-1}$, we used a different approach. We used profile fitting in addition to the curve-of-growth ({\sc CoG}) technique, with the derived column density and $b$-value from the other \ion{O}{i} lines (based on the {\sc CoG}\ method) given as the initial guess, to separate the equivalent widths of the two components. The dominant source of error in equivalent-width measurements is the placement of the continuum. We therefore estimated the errors based on the photon noise and the global shape of the continuum. The errors due to the continuum fit were measured by manually adopting a highest/lowest continuum within a stretch of a few {\AA}ngstr{\"o}m around each absorption line. The minimum errors in this way depend on the local S/N and correspond to the change of the continuum around $\pm 1\sigma$ noise level. Due to the complex shape of the stellar continuum, this way of choosing the continuum by eye is more reliable than automatic continuum fitting. Examples are shown in Fig.\,\ref{fig:3454f2} for some of the \ion{O}{i} lines. We selected primarily absorption lines of H$_2$ to study the fine structure of the gas. This allows us to derive the physical properties of the gas. Furthermore, H$_2$ exists in the coolest portions of the interstellar clouds and thus will produce the narrowest absorption lines. Moreover, in the FUV range many absorption lines from the same lower electronic level are available, thus allowing the easy determination of the respective {\sc CoG}\ (see, e.g., \cite{deBoer98}, \cite{Richter98}, \cite{Richter00}). We also included available metal lines in the study, which are the lines of \ion{C}{i}, \ion{N}{i}, O\,{\sc i}, Al\,{\sc ii}, Si\,{\sc ii}, P\,{\sc ii}, S\,{\sc iii}, Ar\,{\sc i}, and \ion{Fe}{ii}. 
For the four sight-lines with available STIS observations, the lines of \ion{C}{i*}, \ion{C}{i**}, Mg\,{\sc ii}, Si\,{\sc iv}, S\,{\sc ii}, Mn\,{\sc ii}, and Ni\,{\sc ii} were additionally detected. A standard {\sc CoG}\ method was used to estimate the column density $N$ and the Doppler parameter $b$ for each species. The theoretical {\sc CoG}s were constructed for a range of $b$-values in steps of 1\,km\,s$^{-1}$, based on the damping constant of the strongest measured transition in each species. The column densities were determined by finding the best representative {\sc CoG}\ for the set of measured equivalent widths and their known $\log(f\lambda)$. The $f$-values used are taken from the list of \cite{Abgrall:a,Abgrall:b} for H$_2$, and from \citet{Morton} for other species. The final column densities were derived through a polynomial regression and are presented in Table~\ref{tab:N} together with the $1\sigma$ errors\footnote[2]{Based on 68$\%$ of the data points having a residual within $1\sigma$ from the corresponding {\sc CoG}\ coefficient.}. Thus the uncertainties in the column densities are based on the statistical errors in equivalent-width measurements and the uncertainties in the $b$-values. \section{Molecular hydrogen in Galactic disc gas} \label{sect:H2} \begin{figure}[] \centering \includegraphics[scale=0.45]{3454f3.ps} \caption{Logarithmic column densities of H$_2$ and O\,{\sc i} towards Sk\,-67$^\circ$111, LH\,54-425, Sk\,-67$^\circ$107, Sk\,-67$^\circ$106, Sk\,-67$^\circ$104\, and Sk\,-67$^\circ$101\ (from left to right; for the identification see the numbers in the lower part of the plot), plotted against the angular separation of each star with respect to Sk\,-67$^\circ$111\ (only in RA). The data for all sight-lines are presented as filled symbols (circles for H$_2$, and triangles for \ion{O}{i}), except for LH\,54-425, where they are given as empty symbols. The O\,{\sc i} data were slightly shifted to the left for better visualisation. Data for LH\,54-425\ were shifted slightly more to the left for clarity. At the top of the plot the projected linear separations in pc between each pair of stars are marked for assumed distances of $d=100$\,pc and $d=1$\,kpc, respectively. The increase of the H$_2$ column density from Sk\,-67$^\circ$111\ to Sk\,-67$^\circ$101\ is not seen in the O\,{\sc i} column density.} \label{fig:3454f3} \end{figure} We determined the column densities of molecular hydrogen for the different rotational states $J=0-4$. These are given in logarithmic values in Table\,\ref{tab:N} together with the corresponding $b$-values. The $b$-values for various rotational states of molecular hydrogen are estimated separately in each {\sc CoG}\ analysis and vary over the range of 2-6\,km\,s$^{-1}$\ for the low $J$ for all sight-lines. Thus, the various $J$-levels might differ slightly in their $b$-values, because they may sample different physical regions of the Galactic disc gas. In our FUSE spectra we did not detect any level higher than $J=4$ for any of our sight-lines. Moreover, the $J=4$ detection was not always certain. We therefore estimated a 1\,$\sigma$ equivalent width $EW_{1\sigma}$ based on the local noise fluctuation and the FUSE spectral resolution: $EW_{1\sigma}=1.06\times FWHM_{\rm inst}\times \sigma_{\rm noise}$. 
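Both the theoretical {\sc CoG}s and this detection limit are straightforward to evaluate numerically. The following minimal sketch computes the equivalent width of a single Gaussian-broadened line for a given ($N$, $b$), i.e. one point on a theoretical {\sc CoG}; damping wings are neglected, and the $f$-value is an illustrative number for the \ion{O}{i} $\lambda$1039 transition rather than the adopted one:

\begin{verbatim}
import numpy as np

def equivalent_width_mA(logN, b_kms, f_osc, lam_A):
    # Central optical depth of a Gaussian line (damping neglected):
    # tau0 = 1.497e-15 * N[cm^-2] * f * lambda[A] / b[km/s]
    tau0 = 1.497e-15 * 10**logN * f_osc * lam_A / b_kms
    v = np.linspace(-25 * b_kms, 25 * b_kms, 5001)  # velocity grid [km/s]
    tau = tau0 * np.exp(-(v / b_kms) ** 2)
    W_v = np.trapz(1.0 - np.exp(-tau), v)           # EW in velocity units
    return W_v / 2.998e5 * lam_A * 1e3              # convert to mA

# On the flat part of the CoG the equivalent width is sensitive to b,
# which is why N and b can be constrained together:
for b in (6.0, 7.0, 8.0):
    print(b, round(equivalent_width_mA(17.3, b, 0.0092, 1039.23), 1))
\end{verbatim}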
For sight-lines with no measured absorption above $3\sigma$, we used an $EW_{3\sigma}$ based on the strongest unblended transition in a fit to the linear part of the {\sc CoG}\ for estimating an upper limit for the column densities\footnote[3]{The fit includes all possible $b>2$\,km\,s$^{-1}$.}. In order to visualise the various regions of the foreground cloud we are looking at, these column densities are plotted in Fig.\,\ref{fig:3454f3} against the angular separation of the background stars. Assuming that the gas exists in the disc at distances up to 1\,kpc (Lockman et al. 1986), we have marked in that figure the projected separations between our sight-lines. The H$_2$ absorption shows a clear decline in column density towards Sk\,-67$^\circ$111, with a significant drop towards LH\,54-425. \subsection{Note on the errors} There are different sources of error contributing to our measured column densities. We included the statistical errors due mainly to the continuum placement and the local S/N in the equivalent-width measurements. However, the different detectors seem to have different responses, sometimes causing slightly lower or higher equivalent widths of the same absorption. This in itself should be included in our errors in the column densities, but combining the measurements from different detectors might influence the fit to the {\sc CoG}, because they include different transitions in a wide range on the {\sc CoG}. For the $J=1$ lines, a crucial role is played by one line at 1108\,{\AA}. If that line is for some reason an outlier on the {\sc CoG}, $N(J=1)$ could be overestimated for the Sk\,-67$^\circ$101\ to Sk\,-67$^\circ$107\ sight-lines. On the other hand, a direct comparison of the absorption profiles clearly shows that the components towards LH\,54-425\ are much weaker than on the other sight-lines, particularly for $J=0$ and $J=1$ (see Fig.\,\ref{fig:3454f4} for some examples). Furthermore, it is also obvious that the $J=0$ lines towards Sk\,-67$^\circ$111\ are much weaker than the $J=1$ lines in comparison to the other sight-lines. Thus, the differences in the $N(J=0)$/$N(J=1)$ ratios between the different sight-lines are real. The large errors in the derived column densities for some of the $N(J=0)$ and $N(J=1)$ are mostly due to a degeneracy of the $N$-$b$ combinations, rather than covering the whole range of possible column densities; i.e., given that our best fit represents the ``true'' $N$-$b$ combination, the errors should be much smaller than the indicated errors. \begin{figure*}[tbp] \centering \includegraphics[scale=0.93]{3454f4.ps} \caption{Sample of H$_2$ absorption lines from different excited levels shown for the different sight-lines. The sight-lines are labelled on the right, from top to bottom: Sk\,-67$^\circ$111, LH\,54-425, Sk\,-67$^\circ$107, Sk\,-67$^\circ$106, Sk\,-67$^\circ$104, and Sk\,-67$^\circ$101. The spectra are normalised to unity in all cases. The corresponding transitions are labelled on the upper plots of Sk\,-67$^\circ$111; the same portion of the spectrum is plotted in each panel for the other sight-lines. 1\,{\AA} corresponds to 270 km\,s$^{-1}$.} \label{fig:3454f4} \end{figure*} \subsection{Physical properties of the gas from H$_2$} The various rotational levels of molecular hydrogen reveal the physical state of the gas. While lower $J$-levels are mostly populated by collisional excitations, higher $J$-levels are excited by photons through UV-pumping \citep{Spitzer+Zweibel}. 
The population of the lowest levels can be fitted to a Boltzmann distribution, resulting in $T_{\rm exc}$ for the gas; the population of higher levels can be fitted in a similar way, leading to an ``equivalent UV-pumping temperature'', $T_{\rm UV-pump}$. In Fig.\,\ref{fig:3454f5} we plotted the column density of H$_2$ in level $J$, divided by the statistical weight, $g_J$, against the rotational excitation energy, $E_J$. For most of our sight-lines the rotational excitation can be represented by a two-component fit. A Boltzmann distribution can be fitted to the three lower levels $J=0$, $J=1$, and $J=2$ to find the excitation temperature $T_{0,2}$ of the gas. We get the equivalent UV-pumping temperature $T_{3,4}$ through a fit to the $J=3$ and $J=4$ levels. The Boltzmann excitation temperature ranges from 65\,K towards Sk\,-67$^\circ$101\ and Sk\,-67$^\circ$104, to about 80\,K in the directions of Sk\,-67$^\circ$106\ and Sk\,-67$^\circ$107. This is in the typical temperature range found for the cold neutral medium (CNM). Towards LH\,54-425\ and Sk\,-67$^\circ$111, on the other hand, a fit to the $J=0-3$ levels would give a $T_{0,3}$ similar to $T_{0,1}$ and $T_{0,2}$, indicating that the $J=2$ level is already excited by UV-pumping. Given that the derived column densities for $J=4$ are upper limits, the gas in these two lines of sight may be fully thermalised, and a single Boltzmann fit through all points might be sufficient. In this case we have derived the excitation temperature $T_{0,3}\sim 200$\,K. \begin{figure*}[tbp] \centering \includegraphics[scale=0.9]{3454f5.ps} \caption{Rotational excitation of H$_2$ in foreground Galactic disc gas. For each sight-line the column density [cm$^{-2}$] of H$_2$ in level $J$ is divided by the statistical weight $g_J$ and plotted on a logarithmic scale against the excitation energy $E_J$. The corresponding Boltzmann temperatures are given with the fitted lines. This temperature is represented by a straight line through the three lowest rotational states $J=0$, 1, and 2. The higher $J$-levels indicate the level of UV-pumping. For Sk\,-67$^\circ$111\ and LH\,54-425, $T_{0,3}$ is determined instead, because the higher $J$-levels as well as the lower ones may be populated by UV-pumping. The values within parentheses are the upper/lower limits for the temperature based on the extremes of the error bars (see Sect.\,\ref{sect:H2}.1).} \label{fig:3454f5} \end{figure*} Furthermore, the temperature plot confirms the estimated column densities, as the extremes of the errors indicated by the {\sc CoG}\ fit would lead to unrealistic ratios of the column densities over the statistical weights between the different $J$-levels. This steep drop of $N(J)$ from $J=0$ to $J=2$ is also generally seen in the ISM for sight-lines with $\log N(J=0)>17$ \citep{Spitzer:Cochran}. Molecular hydrogen in the interstellar medium forms in cores of clouds where the gas is self-shielded from the ultraviolet radiation. The two-component fit to four of our sight-lines with low $T_{\rm exc}$ can be interpreted as a core-envelope structure of the partly molecular clouds, where the inner parts are self-shielded from the UV radiation. Detecting less H$_2$ towards Sk\,-67$^\circ$111\ and LH\,54-425, together with the higher $T_{\rm exc}$ in these two directions, can be understood as looking towards the edge of the H$_2$ patch in those sight-lines, where the H$_2$ self-shielding is less efficient. 
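The level-population fits described above reduce to straight-line fits of $\ln(N_J/g_J)$ against $E_J$, with slope $-1/(kT_{\rm exc})$. A minimal sketch, using approximate H$_2$ level energies and, as input, illustrative column densities close to the Sk\,-67$^\circ$101\ entries of Table~\ref{tab:N}:

\begin{verbatim}
import numpy as np

K_B = 0.69504  # Boltzmann constant [cm^-1 / K]
E_J = np.array([0.0, 118.5, 354.35, 705.54])  # approx. level energies [cm^-1]
g_J = np.array([1.0, 9.0, 5.0, 21.0])         # (2J+1)(2I+1), ortho/para H2
logN_J = np.array([17.9, 17.7, 15.1, 14.8])   # illustrative log N(J)

def T_exc(levels):
    # Straight-line fit of ln(N_J/g_J) versus E_J over the chosen levels
    y = logN_J[levels] * np.log(10.0) - np.log(g_J[levels])
    slope = np.polyfit(E_J[levels], y, 1)[0]
    return -1.0 / (slope * K_B)

print(f"T_0,2 ~ {T_exc([0, 1, 2]):.0f} K")  # close to the ~65 K quoted above
\end{verbatim}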
However, Sk\,-67$^\circ$107, with a true separation of only $17.2\arcsec$ from LH\,54-425, shows a significantly higher column density of H$_2$ and a much lower temperature. Assuming that the disc gas exists at a distance between 100\,pc and 1\,kpc, the projected linear separation between LH\,54-425\ and Sk\,-67$^\circ$107\ corresponds to a spatial scale of 0.01 to 0.08\,pc (essentially only in declination). This suggests that the observed H$_2$ patch has a dense core and a steep transition to the edge. \section{Metal absorption and abundances} \label{sect:Metal} We determined the column densities for the species \ion{C}{i}, \ion{N}{i}, O\,{\sc i}, Al\,{\sc ii}, Si\,{\sc ii}, P\,{\sc ii}, S\,{\sc iii}, Ar\,{\sc i}, and \ion{Fe}{ii} from the FUSE spectra. In addition, the column densities of \ion{C}{i*}, \ion{C}{i**}, Mg\,{\sc ii}, Si\,{\sc iv}, S\,{\sc ii}, Mn\,{\sc ii}, and Ni\,{\sc ii} were obtained for Sk\,-67$^\circ$101, Sk\,-67$^\circ$104, Sk\,-67$^\circ$106, and Sk\,-67$^\circ$107\ using STIS data. These metal column densities are presented in Table\,\ref{tab:N}. For a list of absorption lines used for the column density determination, see Table\,\ref{tab:lines}. Some of the species in the FUSE spectral range have additional transitions in STIS spectra, which made a more accurate determination of their column densities possible. This, together with the location of the data points on the {\sc CoG}, results in uncertainties in the column densities that are smaller along some sight-lines compared to others. The better resolution of the STIS data allows us to separate the velocity component at $\sim -20$\,km\,s$^{-1}$\ that otherwise appears as an unresolved substructure in, e.g., \ion{P}{ii} and O\,{\sc i} absorption in the FUSE spectra. This component is strongest in the metal lines of O\,{\sc i}, Mg\,{\sc ii}, and Si\,{\sc ii}. We do not consider this component any further, as it may be part of an infalling cloud, which is not connected to the disc gas for which we study the small-scale structure in this paper. Only for strong saturated lines does this component affect our measurements, as we mentioned in Sect.\,\ref{sect:Handling of spectral data}. Within our spectral range \ion{Fe}{ii} appears in a number of transitions with a wide range in $f$-values, making an accurate fit to the {\sc CoG}\ possible. Some of the other species, on the other hand, have only a few transitions in our spectra within a small $\log f\lambda$ range, and therefore can be fit to a wide range of $b$-values. Owing to the lack of more information about the Doppler widths of these species, we adopted the $b$-value of \ion{Fe}{ii} for each line of sight, and we determined the column densities of other ions from the \ion{Fe}{ii}-{\sc CoG}. There are some exceptions discussed below for \ion{O}{i}, \ion{N}{i}, and \ion{Ar}{i} (and \ion{C}{i}, discussed in the next section). For \ion{O}{i} we find a number of transitions within the FUSE spectral range, and two additional lines within the STIS spectral range (see Table\,\ref{tab:lines}). The derived Doppler parameters of \ion{O}{i} are very similar to those of \ion{Fe}{ii}. This can be expected because turbulent broadening dominates thermal broadening in the local interstellar medium (LISM). We have nevertheless derived them separately. With both saturated and very weak \ion{O}{i} lines, we are able to make a relatively accurate fit to the {\sc CoG}\ (see Fig.\,\ref{fig:3454f6}) and do not need to rely on the accuracy of the \ion{Fe}{ii}-{\sc CoG}. 
Adopting the \ion{Fe}{ii}-{\sc CoG}\ would furthermore lead to unsatisfactory fits in most cases. We plotted in Fig.\,\ref{fig:3454f3} the measured O\,{\sc i} column densities together with those of H$_2$. The $N\rm (\ion{O}{i})$ stays constant within the errors between the lines of sight, except for the decrease towards LH\,54-425. \ion{N}{i} is also detected in a number of transitions. These lines also yield $b=7$\,km\,s$^{-1}$, even though they are mainly on the flat part of the {\sc CoG}, which makes the determination of the {\sc CoG}\ less certain. \ion{N}{i} probably originates from the same physical region of the neutral gas as \ion{O}{i}. We therefore adopted the \ion{O}{i}-{\sc CoG}\ to determine the column densities of \ion{N}{i}. Assuming that \ion{Ar}{i} likewise exists mainly together with \ion{O}{i} in the neutral gas, we fitted the two detected \ion{Ar}{i} absorptions to the \ion{O}{i}-{\sc CoG}\ as well. \ion{Si}{ii} has only one moderately strong line in the FUSE spectra (see Table\,\ref{tab:lines}) and some very strong absorptions in STIS. The absorption at 1020.7\,{\AA} is an outlier, however, when adopting the \ion{Fe}{ii}-{\sc CoG}, because it appears too low on the {\sc CoG}. Hence the reduction of the $N(\ion{Si}{ii})$ towards Sk\,-67$^\circ$111\ and LH\,54-425, compared to the rest of the sight-lines, is mainly due to the lack of further available data, rather than reflecting the differences in the equivalent width of the 1020.7\,{\AA} line. We therefore represent the $N(\ion{Si}{ii})$ towards Sk\,-67$^\circ$111\ and LH\,54-425\ as lower limits in Table\,\ref{tab:N}. It is possible that this problem occurs because of the sampling of different gas components. This would appear more strongly in the stronger absorptions compared to the weaker ones, which would explain why the 1020.7\,{\AA} line appears too low on the {\sc CoG}, compared to the much stronger lines of the STIS spectra. The metal column densities show mostly insignificant variations between our lines of sight, but are generally lowest towards LH\,54-425. Below we investigate the variations in the form of abundance ratios. Due to the lack of information about H\,{\sc i} column densities with a spatial resolution that corresponds to our particular sight-lines, we derive the abundances as the ratio $\rm [X/O]=log(X^{i}/O)-log(X/O)_{\odot}$, with the solar abundances taken from \citet{Asplund}, and the assumption that the detected $X^{i}$ is the dominant ion of the element $X$. O\,{\sc i} is an ideal tracer for H\,{\sc i} since, unlike iron and silicon, it is only weakly depleted into dust grains; it has the same ionisation potential as hydrogen, and the two atoms are coupled by a strong charge-exchange reaction in the neutral ISM. Some of the derived metal abundances are plotted in Fig.\,\ref{fig:3454f7} against the angular separation of the sight-lines. \begin{figure*}[th] \centering \includegraphics[scale=0.88]{3454f6.ps} \caption{Sample of {\sc CoG}s of \ion{O}{i} for the six sight-lines (the star name is given on top of each panel). Each plot shows {\sc CoG}s for $b=1,3,6,7,8$\,km\,s$^{-1}$. The solid line is the resulting {\sc CoG}\ from the best fit ($b=7$\,km\,s$^{-1}$), and the dotted lines ($b=6$ and $b=8$\,km\,s$^{-1}$) correspond to approximately $1\sigma$ errors.} \label{fig:3454f6} \end{figure*} \ion{N}{i}, with an ionisation potential of 14.5\,eV, acts as another tracer for the neutral gas. 
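Since each abundance is a simple column-density ratio referred to the solar scale, the computation is compact. A minimal sketch, using an illustrative solar value $\log({\rm N/O})_\odot\approx-0.86$ (the adopted \citet{Asplund} value may differ slightly) and column densities similar to two entries of Table~\ref{tab:N}:

\begin{verbatim}
# [X/O] = log10(N_X / N_OI) - log10(X/O)_sun, here for X = N I
LOG_NO_SUN = -0.86  # illustrative solar log(N/O); adopted value may differ

def x_over_o(logN_X, logN_OI, log_ratio_sun):
    return (logN_X - logN_OI) - log_ratio_sun

print(round(x_over_o(16.4, 17.3, LOG_NO_SUN), 2))  # ~0.0: close to solar
print(round(x_over_o(16.3, 17.7, LOG_NO_SUN), 2))  # ~-0.5: clearly sub-solar
\end{verbatim}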
Our derived [\ion{N}{i}/\ion{O}{i}] abundances are in general close to solar, except towards Sk\,-67$^\circ$106\ and Sk\,-67$^\circ$104, where they are somewhat lower. The ionisation balance of \ion{N}{i} is more sensitive to local conditions of the ISM, as \ion{N}{i} is not as strongly coupled to \ion{H}{i} as \ion{O}{i} \citep{Jenkins2000,Moos2002}. We can therefore expect [\ion{N}{i}/\ion{H}{i}] to vary on small spatial scales, while [\ion{O}{i}/\ion{H}{i}] remains constant. Thus, the variation of [\ion{N}{i}/\ion{O}{i}] could suggest a lower shielding towards the sight-lines with observed lower [\ion{N}{i}/\ion{O}{i}]. \cite{Jenkins2000} have modelled the deficiency of local \ion{N}{i} with decreasing \ion{H}{i} column density. It is thus possible that the variation of [\ion{N}{i}/\ion{O}{i}] between our sight-lines is caused by actual variations in the density. The \ion{N}{i} absorptions are, however, not easily measured due to the multiplet structure of the transitions, which can lead to high uncertainties in the \ion{N}{i} column densities. Fe, Mn, and Ni are generally depleted into dust grains in the Galactic disc, which can explain their low relative abundances (Fig.\,\ref{fig:3454f7}). The underabundance of Ar can, however, be explained by the large photo-ionisation cross section of \ion{Ar}{i} \citep{Sofia}. The slightly higher [Ar/O] towards LH\,54-425\ and Sk\,-67$^\circ$111\ would then suggest a higher shielding in these sight-lines, compared to the others, provided that the derived column densities are accurate. [\ion{Si}{ii}/\ion{O}{i}], on the other hand, is close to solar in the lines of sight Sk\,-67$^\circ$107\ to Sk\,-67$^\circ$101\ (with available weak and strong absorption lines). As \ion{Si}{ii} is the dominant ion in both the warm neutral medium (WNM) and the warm ionised medium (WIM), these ratios might indicate that part of the \ion{Si}{ii} absorption originates from the ionised gas. If silicon is partly depleted into dust grains, these ratios could be even higher. Furthermore, the detection of \ion{Si}{iii} (not included in Table\,\ref{tab:lines} due to heavy saturation) and \ion{Si}{iv} strongly suggests the presence of gas that is partially or predominantly ionised. While the metal absorption shows no change in the ionisation structure in the gas on the lines of sight Sk\,-67$^\circ$107\ to Sk\,-67$^\circ$101, the lines of sight to Sk\,-67$^\circ$111\ and LH\,54-425\ distinguish themselves from the other sight-lines. Every line of sight samples a mix of CNM, WNM, and WIM, of which the fraction of ionised gas appears to be higher in the lines of sight Sk\,-67$^\circ$107\ to Sk\,-67$^\circ$101. It would have been favourable if the \ion{H}{i} column densities had been known from 21\,cm data; such high spatial resolution data, however, do not exist. The Galactic All Sky Survey, GASS \citep{McClure}, has a resolution of only 16\arcmin, which does not help for our investigation. ATCA interferometric 21\,cm data on the other hand are only available for velocities $v > 30$ km\,s$^{-1}$. Yet the GASS 21\,cm profiles do show a slight trend over 30\arcmin. The high velocity resolution of the GASS 21\,cm data reveals a two-component profile at velocities of $+4$ and $-4$ km\,s$^{-1}$. The intensities of the two components increase/decrease in opposite east/west directions, overall favouring stronger \ion{H}{i} emission towards lower RA. These velocity components are not resolved in our FUSE or STIS spectra. 
Considering the changes in the N(\ion{O}{i}), however, it is likely that our pencil beam spectra could be going through the cloud boundary, where the column density summed over the two components is less to e.g., LH\,54-425, compared to the other sight-lines. \clearpage \begin{landscape} \begin{table} \caption{Column densities $\log N$ ($N$ in cm$^{-2}$) of Galactic foreground gas, 1\,$\sigma$ errors, and Doppler parameter $b$\,[km\,s$^{-1}$] measured from the {\sc CoG}.$^{\mathrm{a}}$} \label{tab:N} \centering \begin{tabular}{l lll lll lll lll lll lll} \hline\hline & \multicolumn{2}{c}{Sk\,-67$^\circ$111} & & \multicolumn{2}{c}{LH\,54-425} & & \multicolumn{2}{c}{Sk\,-67$^\circ$107} & & \multicolumn{2}{c}{Sk\,-67$^\circ$106} & & \multicolumn{2}{c}{Sk\,-67$^\circ$104} & & \multicolumn{2}{c}{Sk\,-67$^\circ$101} & \\ & \multicolumn{2}{c}{(FUSE)} & & \multicolumn{2}{c}{(FUSE)} & & \multicolumn{2}{c}{(FUSE+STIS)} & & \multicolumn{2}{c}{(FUSE+STIS)} & & \multicolumn{2}{c}{(FUSE+STIS)} & & \multicolumn{2}{c}{(FUSE+STIS)} & \\ Species & $b$ & $\log N$ & & $b$ & $\log N$ & & $b$ & $\log N$ & & $b$ & $\log N$ & & $b$ & $\log N$ & & $b$ & $\log N$ & \\ \hline & & & & & & & & & & & & & & & & & \\ H$_2$$^{\mathrm{F}}$ & & & & & & & & & & & & & & & & & \\ $J=0$ & $4^{+1}_{-3}$ & $15.3^{+1.6}_{-0.1}$ & & $2^{+1}_{-1}$ & $15.4^{+1.1}_{-0.1}$ & & $3^{+4}_{-1}$ & $17.3^{+0.1}_{-1.7}$$^{\mathrm{c}}$ & & $3^{+8}_{-1}$ & $17.8^{+0.1}_{-2.1}$ & & $2^{+1}_{-1}$ & $17.9^{+0.2}_{-0.2}$$^{\mathrm{c}}$ & & $4^{+9}_{-1}$ & $17.9^{+0.1}_{-2.0}$ \\ $J=1$ & $6^{+1}_{-4}$ & $15.6^{+1.8}_{-0.1}$ & & $4^{+1}_{-2}$ & $15.6^{+1.3}_{-0.2}$ & & $3^{+3}_{-1}$ & $17.3^{+0.3}_{-1.5}$ & & $3^{+1}_{-1}$ & $17.5^{+0.1}_{-0.1}$ & & $2^{+9}_{-1}$ & $17.7^{+0.1}_{-2.3}$ & & $2^{+1}_{-1}$ & $17.7^{+0.1}_{-0.2}$ \\ $J=2$ & $4^{+1}_{-3}$ & $15.0^{+1.9}_{-0.1}$ & & $5^{+3}_{-2}$ & $14.5^{+0.2}_{-0.1}$ & & $4^{+2}_{-3}$ & $15.0^{+1.9}_{-0.2}$ & & $4^{+1}_{-2}$ & $15.5^{+1.5}_{-0.3}$ & & $5^{+1}_{-4}$ & $15.0^{+2.1}_{-0.1}$ & & $4^{+1}_{-3}$ & $15.1^{+2.0}_{-0.1}$ \\ $J=3$ & $8^{+6}_{-2}$ & $14.5^{+0.1}_{-0.1}$ & & $6^{+4}_{-3}$ & $14.3^{+0.2}_{-0.1}$ & & $5^{+9}_{-3}$ & $14.5^{+0.6}_{-0.1}$ & & $4^{+1}_{-3}$ & $15.1^{+1.9}_{-0.2}$ & & $4^{+1}_{-1}$ & $14.9^{+0.1}_{-0.1}$ & & $5^{+4}_{-1}$ & $14.8^{+0.2}_{-0.1}$ \\ $J=4$ & $>2$ & $<13.6$ & & $>2$ & $<13.6$ & & $>2$ & $<13.9$ & & $5^{+9}_{-3}$ & $14.1^{+0.1}_{-0.1}$ & & $4^{+1}_{-2}$ & $13.8^{+0.2}_{-0.1}$ & & $2^{+12}_{-1}$ & $13.8^{+0.2}_{-0.1}$ \\ Total & & $15.9^{+1.7}_{-0.1}$ & & & $15.8^{+1.2}_{-0.1}$ & & & $17.6^{+0.3}_{-1.5}$ & & & $18.0^{+0.2}_{-0.6}$ & & & $18.1^{+0.2}_{-0.4}$ & & & $18.1^{+0.1}_{-0.6}$ \\ \hline & & & & & & & & & & & & & & & & & \\ C\,{\sc i}$^{\mathrm{FS}}$ & $3^{+1}_{-1}$ & $14.0^{+0.1}_{-0.1}$ & & $3^{+1}_{-1}$ & $13.8^{+0.1}_{-0.1}$ & & $3^{+1}_{-1}$ & $13.8^{+0.3}_{-0.1}$ & & $3^{+1}_{-1}$ & $13.9^{+0.3}_{-0.1}$ & & $3^{+1}_{-1}$ & $14.0^{+0.1}_{-0.1}$ & & $3^{+1}_{-1}$ & $14.2^{+0.1}_{-0.1}$ \\ C\,{\sc i*}$^{\mathrm{FS}}$ & & $<13.8$ & & & $<13.8$ & & & $13.5^{+0.2}_{-0.1}$ & & & $13.5^{+0.3}_{-0.1}$ & & & $13.6^{+0.2}_{-0.1}$ & & & $13.6^{+0.2}_{-0.1}$ \\ C\,{\sc i**}$^{\mathrm{FS}}$ & & $<13.8$ & & & $<12.9$ & & & $<12.8$ & & & $<12.9$ & & & $<12.8$ & & & $<12.8$ \\ O\,{\sc i}$^{\mathrm{FS}}$ & $7^{+1}_{-1}$ & $17.3^{+0.4}_{-0.1}$ & & $7^{+1}_{-1}$ & $16.9^{+0.2}_{-0.1}$ & & $7^{+1}_{-1}$ & $17.6^{+0.1}_{-0.1}$ & & $7^{+1}_{-1}$ & $17.7^{+0.1}_{-0.5}$ & & $7^{+1}_{-1}$ & $17.6^{+0.2}_{-0.3}$ & & $7^{+1}_{-1}$ & $17.7^{+0.3}_{-0.3}$ \\ N\,{\sc 
i}$^{\mathrm{FS}}$ & & $16.4^{+0.1}_{-0.1}$ & & & $16.2^{+0.4}_{-0.1}$ & & & $16.8^{+0.1}_{-0.7}$ & & & $16.3^{+0.4}_{-0.4}$ & & & $16.4^{+0.1}_{-0.3}$ & & & $16.8^{+0.4}_{-0.7}$ \\ Ar\,{\sc i}$^{\mathrm{F}}$ & & $14.8^{+0.6}_{-0.1}$ & & & $14.3^{+0.2}_{-0.1}$ & & & $14.8^{+0.8}_{-0.3}$ & & & $14.9^{+0.8}_{-0.3}$ & & & $14.8^{+1.0}_{-0.3}$ & & & $14.8^{+0.9}_{-0.3}$ \\ \hline \\ \ion{Fe}{ii}$^{\mathrm{F}}$ & $8^{+1}_{-1}$ & $14.8^{+0.1}_{-0.1}$ & & $7^{+1}_{-1}$ & $14.6^{+0.1}_{-0.1}$ & & $8^{+1}_{-1}$ & $14.8^{+0.1}_{-0.1}$ & & $8^{+1}_{-1}$ & $14.8^{+0.1}_{-0.1}$ & & $9^{+1}_{-1}$ & $14.8^{+0.1}_{-0.1}$ & & $8^{+1}_{-1}$ & $14.8^{+0.1}_{-0.1}$ \\ Mg\,{\sc ii}$^{\mathrm{S}}$ & & & & & & & & $15.7^{+0.1}_{-0.1}$ & & & $15.9^{+0.1}_{-0.1}$ & & & $15.8^{+0.1}_{-0.1}$ & & & $15.8^{+0.1}_{-0.1}$ \\ Al\,{\sc ii}$^{\mathrm{FS}}$ & & $<14.5$ & & & $<14.8$ & & & $15.0^{+0.2}_{-0.9}$ & & & $15.0^{+0.3}_{-0.9}$ & & & $14.2^{+0.8}_{-0.7}$ & & & $14.8^{+0.3}_{-0.8}$ \\ Si\,{\sc ii}$^{\mathrm{FS}}$ & & $>15.1$$^{\mathrm{b}}$ & & & $>14.9$$^{\mathrm{b}}$ & & & $16.5^{+0.3}_{-0.6}$$^{\mathrm{c}}$ & & & $16.5^{+0.3}_{-0.6}$$^{\mathrm{c}}$ & & & $16.3^{+0.2}_{-0.6}$ & & & $16.4^{+0.1}_{-0.7}$$^{\mathrm{c}}$ \\ Si\,{\sc iv}$^{\mathrm{S}}$ & & & & & & & & $13.0^{+0.1}_{-0.1}$ & & & $13.1^{+0.1}_{-0.1}$ & & & $13.1^{+0.1}_{-0.1}$ & & & $13.3^{+0.1}_{-0.2}$ \\ P\,{\sc ii}$^{\mathrm{FS}}$ & & $13.8^{+0.2}_{-0.2}$ & & & $13.5^{+0.1}_{-0.2}$ & & & $13.8^{+0.2}_{-0.1}$ & & & $13.8^{+0.2}_{-0.2}$ & & & $13.6^{+0.2}_{-0.1}$ & & & $13.7^{+0.3}_{-0.1}$ \\ S\,{\sc ii}$^{\mathrm{S}}$ & & & & & & & & $15.5^{+0.4}_{-0.2}$ & & & $15.5^{+0.5}_{-0.1}$ & & & $15.4^{+0.4}_{-0.1}$ & & & $15.5^{+0.5}_{-0.1}$ \\ S\,{\sc iii}$^{\mathrm{FS}}$ & & $14.4^{+0.2}_{-0.1}$$^{\mathrm{b}}$ & & & $14.4^{+0.2}_{-0.2}$$^{\mathrm{b}}$ & & & $14.7^{+0.2}_{-0.1}$ & & & $14.6^{+0.2}_{-0.1}$ & & & $14.5^{+0.1}_{-0.1}$ & & & $14.7^{+0.2}_{-0.1}$ \\ Mn\,{\sc ii}$^{\mathrm{S}}$ & & & & & & & & $13.0^{+0.1}_{-0.1}$$^{\mathrm{b}}$ & & & $13.0^{+0.1}_{-0.1}$$^{\mathrm{b}}$ & & & $13.0^{+0.2}_{-0.1}$$^{\mathrm{b}}$ & & & $13.1^{+0.1}_{-0.1}$$^{\mathrm{b}}$ \\ Ni\,{\sc ii}$^{\mathrm{S}}$ & & & & & & & & $13.5^{+0.1}_{-0.1}$ & & & $13.6^{+0.1}_{-0.1}$ & & & $13.5^{+0.1}_{-0.1}$ & & & $13.7^{+0.1}_{-0.1}$ \\ \hline \end{tabular} \begin{list}{}{} \item[$^{\mathrm{a}}$] Most of the metal column densities are based on the adopted $b$ from \ion{Fe}{ii} in each line of sight. \ion{N}{i}, and \ion{Ar}{i} are derived from the {\sc CoG}\ of \ion{O}{i}, and the fine-structure levels of carbon from the {\sc CoG}\ of \ion{C}{i}. \item[$^{\mathrm{b}}$] Based on one absorption line \item[$^{\mathrm{c}}$] The best fit includes outliers (less than 32$\%$ of the data points). For the H$_2$ data, these outliers are all the absorptions from FUSE:SiC (see Sect.\,4.1). For the \ion{Si}{ii} data, the outlier is the weak line at 1020.7 {\AA} (see Sect.\,5). \item[$^{\mathrm{F}}$] Based on FUSE data only \item[$^{\mathrm{S}}$] Based on STIS data only \item[$^{\mathrm{FS}}$] Based on both FUSE and STIS for Sk\,-67$^\circ$107\ to Sk\,-67$^\circ$101\ \end{list} \end{table} \end{landscape} \addtocounter{table}{1} \begin{figure}[tbp] \centering \includegraphics[scale=0.43]{3454f7.ps} \caption{Metal abundances plotted against the angular separation. The abundances are expressed as the ratio $\rm [X/O]=log(X^{i}/O)-log(X/O)_{\odot}$, assuming that the $X^{i}$ is the dominant ion of $X$. 
$X^{i}$ is given on the right side of the figure.} \label{fig:3454f7} \end{figure} \section{Gas densities} \label{sect:Density} The available data allow us to derive the gas density in two ways. One possibility is based on the H$_2$/O abundance and models for the self-shielding of the gas. The other uses the level of collisional excitation of C\,{\sc i} as derived from the absorption lines. In the latter case the result is limited not only by the accuracy of the column densities of the fine-structure levels of neutral carbon, but also by the derived $T_{\rm exc}$ from H$_2$. \subsection{Density variations derived from H$_2$} \label{subsect:DensityH2} The column density variations observed along our sight-lines for the various ions and H$_2$ suggest a change in the physical conditions in the foreground gas on relatively small spatial scales. For the metal ions, the observed column densities depend on three physical parameters: the volume density of the respective element in the gas, the fractional abundance of the observed ionisation state of that element (reflecting the ionisation conditions in the gas), and the thickness of the absorbing gas layer. These three quantities may vary between adjacent sight-lines, so that the ion column densities and their variations alone tell us very little about the actual (total) particle-density variations $\Delta n_{\rm H}$ in the gas. Yet it is these density variations that we wish to know in order to study the density structure of the ISM at small scales. As we show below, it is possible to reconstruct the particle density distribution in the gas by {\it combining} our ion column-density measurements with the H$_2$ measurements in our data sample, and assuming that the molecular abundance is governed by an H$_2$ formation-dissociation equilibrium (FDE). In FDE, the neutral to molecular hydrogen column density ratio is given by \begin{equation} \frac{N({\rm H\,I})}{N({\rm H}_2)} = \frac{\langle k \rangle \,\beta}{R\,n_{\rm H}} \ \ \ , \end{equation} \noindent where $\langle k \rangle \approx 0.11$ is the probability that the molecule is dissociated after photo absorption, $\beta$ is the photo-absorption rate per second within the cloud, and $R$ is the H$_2$ formation coefficient on dust grains in units of cm$^{3}$\,s$^{-1}$. Unfortunately, the local H\,{\sc i} column density cannot be obtained directly from our absorption measurements because the damped H\,{\sc i} Ly\,$\beta$ absorption in our FUSE data represents a blended composite of all absorption components in the Milky Way and LMC along the line of sight. However, as mentioned in the previous section, neutral oxygen has the same ionisation potential as neutral hydrogen, and both atoms are coupled by a strong charge-exchange reaction in the neutral ISM, so that O\,{\sc i} serves as an ideal tracer for H\,{\sc i}. If we now solve the previous equation for $n_{\rm H}$ and assume that $N$(H\,{\sc i}$)=z_{\rm O}\, N$(O\,{\sc i}) with $z_{\rm O}$ as the local hydrogen-to-oxygen ratio, we obtain \begin{equation} n_{\rm H} = \frac{N({\rm H}_2)}{N({\rm O\,I})}\, \frac{\langle k \rangle \,\beta}{z_{\rm O}\,R} \ \ \ . \end{equation} For interstellar clouds that are optically thick in H$_2$ (i.e., $\log N$(H$_2) \gg 14$), H$_2$ line self-shielding has to be taken into account. The self-shielding reduces the photo absorption rate in the cloud interior and depends on the total H$_2$ column density in the cloud.
\citet{Draine:Bertoldi} find that the H$_2$ self-shielding can be characterised by the relation $\beta=S\,\beta_0$, where $S=(N_{\rm H_2}/10^{14}$cm$^{-2})^{-0.75}<1$ is the self-shielding factor and $\beta_0$ is the photo absorption rate at the edge of the cloud (which is directly related to the intensity of the ambient UV radiation field). Because all our sight-lines pass with small separation through the same local interstellar gas cloud, the respective parameters $\beta_0$, $R$, and $z_{\rm O}$ should be identical along these sight-lines. The observed quantities $N$(H$_2$) and $N$(O\,{\sc i}) instead vary from sight-line to sight-line due to the local density variations $\Delta n_{\rm H}$ and different self-shielding factors in the gas. If we now consider Eq.\,(2) and assume $\beta=S(N($H$_2))\,\beta_0$, for two adjacent lines of sight (LOS1 and LOS2) through the same cloud, we obtain for the local {\it density ratio} $\xi$ between LOS1 and LOS2 \begin{equation} \xi = \frac{n_{\rm H,LOS1}}{n_{\rm H,LOS2}} = \frac{N({\rm \ion{O}{i}})_{\rm LOS2}}{N({\rm \ion{O}{i}})_{\rm LOS1}}\, \left(\frac{ N({\rm H}_2)_{\rm LOS1}}{N({\rm H}_2)_{\rm LOS2}} \right)^{0.25} . \end{equation} A small rise in $n_{\rm H}$ only slightly increases the H$_2$ grain formation rate $Rn_{\rm H}$, but it substantially reduces the photo dissociation rate $\langle k \rangle \beta_0 S$ due to the increased H$_2$ self-shielding, so that in the end $\Delta N({\rm H}_2) \propto \Delta n_{\rm H}^{\,4}$ while $\Delta N(\ion{O}{i}) \propto \Delta n_{\rm H}$. Therefore, the H$_2$/O\,{\sc i} column density ratio represents a very sensitive tracer for density variations in the gas. We now use Eq.\,(3) to study the density variations in the local gas towards our six lines of sight. In Fig.\,\ref{fig:3454f8} we plot the density relative to Sk\,-67$^\circ$111\ together with the derived $T_{\rm exc}$ versus the separations in RA coordinates. While the excitation temperature suggests a cold core in the sight-lines Sk\,-67$^\circ$101, Sk\,-67$^\circ$104, Sk\,-67$^\circ$106, and Sk\,-67$^\circ$107, the total mean density there appears to be lower by a factor of almost 2 compared to LH\,54-425. While the column density variations of the different excitation levels of H$_2$ suggest that we are observing a high density core of the H$_2$ between the sight-lines Sk\,-67$^\circ$101\ to Sk\,-67$^\circ$106, the relative abundances together with the derived total density variation point to a higher mean density towards LH\,54-425. This might be due to a smaller total absorber pathlength in the direction of LH\,54-425, possibly including a slightly more confined region whose mean density averaged over the pathlength is higher than for the other sight-lines, or due to the higher fraction of neutral gas towards LH\,54-425, on which we base $\xi$. If we evaluate the density $n_{\rm H}$ directly from Eq.\,(2) by using the typical values observed for $R=3\times 10^{-17} \,\rm {cm^{3}}\,\rm {s^{-1}}$ and $\beta_0=5\times 10^{-10} \,\rm {s^{-1}}$ \citep{S78}, and the total neutral hydrogen column density $N(\ion{H}{i})$ derived from $N(\ion{O}{i})$ assuming solar oxygen abundance, we obtain $n_{\rm H}\simeq 2 \,\rm {cm^{-3}}$ on average. This represents the lower limit required for the H$_2$ to exist in the FDE. The observed $N(\ion{O}{i})$ represents an indirect measure of the total $N(\ion{H}{i})$ along a line of sight.
It is likely that only a fraction of this \ion{H}{i} is available for the H$_2$ formation, and that the H$_2$ self-shielding is less efficient than assumed because of a complex geometry of the H$_2$ structures. \begin{figure}[tbp] \centering \includegraphics[scale=0.43]{3454f8.ps} \caption{Density ratio, $\xi$, and the excitation temperature, $T_{\rm exc}$, with angular separation from Sk\,-67$^\circ$111\ (for separation in pc see Fig.\,\ref{fig:3454f3}). The parameter $\xi$ (filled triangles) is calculated according to Eq.\,(3) based on Sk\,-67$^\circ$111\ (hence $\xi=1$ at separation 0). $T_{\rm exc}$ (filled circles) is taken from Fig.\,\ref{fig:3454f5}. The sight-lines are marked with the last three digits of the star name. While $T_{\rm exc}$ suggests the existence of a confined core with molecular hydrogen in the sight-lines Sk\,-67$^\circ$101\ to Sk\,-67$^\circ$107, $\xi$ appears to be significantly higher towards LH\,54-425.} \label{fig:3454f8} \end{figure} \subsection{Density from C\,{\sc i} excitation} The ground electronic state of C\,{\sc i} is split into three fine-structure levels. The two upper levels C\,{\sc i*} and C\,{\sc i**} are populated through collisional excitation and some UV-pumping \citep{deBoerCI}. We can therefore use the fine-structure levels of neutral carbon together with the excitation temperature derived from H$_2$ to derive the density $n_{\rm H}$ of the gas. Using the population of the fine-structure levels as a function of $n_{\rm H}$ and $T$, calculated by \cite{deBoerCI}, we find densities in the range $\sim 40-60$ cm$^{-3}$ along the lines of sight Sk\,-67$^\circ$101\ to Sk\,-67$^\circ$106, and a higher density of $\sim 80$ cm$^{-3}$ towards Sk\,-67$^\circ$107\ (see Table\,\ref{tab:summary}). The densities derived in this way depend on the accuracy of the temperature as well as of the C\,{\sc i} column densities. For LH\,54-425\ and Sk\,-67$^\circ$111\ we are only able to derive an upper limit for C\,{\sc i*}, and hence upper limits of $<126$ and $<65$ cm$^{-3}$, respectively, for $n_{\rm H}$ in these sight-lines. Note that these upper limits are not comparable. While the upper limit of C\,{\sc i*} towards LH\,54-425\ was based on an absorption with $f$-value similar to the ones used towards Sk\,-67$^\circ$107\ to Sk\,-67$^\circ$101, the C\,{\sc i*} upper limit for the Sk\,-67$^\circ$111\ sight-line was based on a weaker line, since the same spectral region was occupied by an absorption of a different velocity component. The column densities of C\,{\sc i} also allow us to calculate the ionisation balance, assuming that the abundance ratio of C to O is solar. One thus has \begin{equation} \frac{N(\ion{C}{ii})}{N(\ion{C}{i})} = \frac{\Gamma}{\alpha(T)\ n_e} \ \ \ , \end{equation} and with $N$(C)=(C/O)$_{\odot}\cdot N$(O) one obtains \begin{equation} \frac{N(\ion{C}{i})}{N(\ion{O}{i})} = \frac{{\rm (C/O)_{\odot}}\ \alpha(T)\ n_e}{\Gamma} \ \ \ . \end{equation} Using (C/O)$_{\odot}\simeq$0.6 \citep{Asplund}, $\alpha(T\simeq 70\,\rm {K}) \simeq 12 \cdot 10^{-12}$ cm$^3$\,s$^{-1}$ \citep{Pequignot} and $\Gamma({\rm C})=310 \cdot 10^{-12}$ s$^{-1}$ \citep{deBoer73}, one arrives at \begin{equation} \frac{N(\ion{C}{i})}{N(\ion{O}{i})} = 0.025\ n_e \ \ \ . \end{equation} Note that for $T\simeq 210$\,K, $\alpha$ would be a factor of 1.7 smaller, and for strong shielding of UV radiation $\Gamma$ would be up to a factor of 2 smaller (see \cite{deBoer73}).
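For transparency, the minimal sketch below shows the arithmetic of Eqs.\,(2), (3), and (6) applied to illustrative column densities read off Table\,\ref{tab:N}; the constants are those quoted above, $z_{\rm O}$ assumes a solar oxygen abundance, and the snippet is an illustration rather than our actual analysis code.
\begin{verbatim}
# Sketch of Eqs. (2), (3), and (6); the log N values (N in cm^-2) are
# illustrative numbers read off the column-density table.
K_DISS = 0.11            # <k>: dissociation probability per photo absorption
BETA_0 = 5e-10           # beta_0: photo-absorption rate at cloud edge [1/s]
R_FORM = 3e-17           # R: H2 grain-formation coefficient [cm^3/s]
Z_O    = 1.0 / 4.57e-4   # hydrogen-to-oxygen ratio for solar abundance

def n_h_fde(log_n_h2, log_n_oi):
    """Eq. (2) with self-shielding beta = S*beta_0, S = (N(H2)/1e14)^-0.75."""
    n_h2, n_oi = 10.0**log_n_h2, 10.0**log_n_oi
    s = (n_h2 / 1e14) ** -0.75
    return (n_h2 / n_oi) * K_DISS * s * BETA_0 / (Z_O * R_FORM)

def xi(log_h2_1, log_oi_1, log_h2_2, log_oi_2):
    """Eq. (3): density ratio n_H(LOS1) / n_H(LOS2)."""
    return 10.0**(log_oi_2 - log_oi_1) * 10.0**(0.25 * (log_h2_1 - log_h2_2))

def n_e(log_n_ci, log_n_oi, alpha_scale=1.0):
    """Eq. (6); alpha_scale = 1/1.7 for the warmer (T ~ 210 K) sight-lines."""
    return 10.0**(log_n_ci - log_n_oi) / (0.025 * alpha_scale)

print(n_h_fde(18.1, 17.7))         # Sk-67 101: ~1.8 cm^-3, the FDE lower limit
print(xi(15.8, 16.9, 15.9, 17.3))  # LH 54-425 relative to Sk-67 111: ~2.4
print(n_e(14.2, 17.7))             # Sk-67 101: ~0.013 cm^-3
\end{verbatim}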
The column density ratios of C\,{\sc i} to O\,{\sc i} lead to the values of $n_e$ given in Table\,\ref{tab:summary}. We used the value of $\alpha$ for $T=70$\,K for the sight-lines Sk\,-67$^\circ$107\ to Sk\,-67$^\circ$101, and the factor of 1.7 smaller $\alpha$ for the other two sight-lines. These values, in combination with $n$(C\,{\sc i}), indicate a larger ionisation fraction towards Sk\,-67$^\circ$111\ and LH\,54-425. \section{Conclusions} \label{sect:summary} We have presented UV absorption line measurements of Galactic H$_2$, \ion{C}{i}, \ion{N}{i}, O\,{\sc i}, Al\,{\sc ii}, Si\,{\sc ii}, P\,{\sc ii}, S\,{\sc iii}, Ar\,{\sc i}, and \ion{Fe}{ii} towards the six LMC stars Sk\,-67$^\circ$111, LH\,54-425, Sk\,-67$^\circ$107, Sk\,-67$^\circ$106, Sk\,-67$^\circ$104, and Sk\,-67$^\circ$101, and analysed the properties of the Galactic disc gas in these lines of sight. Our sight-lines were chosen within $5\arcmin$ in RA, with an almost constant Dec, to allow us to investigate the small-scale structure of the interstellar gas within $<1.5$\,pc (assuming that the foreground gas lies at a distance of $<1$\,kpc), with the smallest separation corresponding to $<0.08$\,pc. For four of the sight-lines, Sk\,-67$^\circ$101, Sk\,-67$^\circ$104, Sk\,-67$^\circ$106, and Sk\,-67$^\circ$107, we also analysed STIS spectral data, from which we have further determined the column densities of \ion{C}{i*}, \ion{C}{i**}, Mg\,{\sc ii}, Si\,{\sc iv}, S\,{\sc ii}, Mn\,{\sc ii}, and Ni\,{\sc ii}. The STIS data provide additional absorption lines for some of the species already available in the FUSE spectra, allowing a more reliable determination of the column densities. In the spectral range of FUSE we can measure the different excitation levels of H$_2$, which provide valuable information about the fine structure of the gas, since H$_2$ arises in the coolest regions of the interstellar clouds, and enable us to derive the physical properties of the gas. The H$_2$ absorptions show considerable variation between our sight-lines. The lines of sight to Sk\,-67$^\circ$101\ and Sk\,-67$^\circ$104\ contain a significantly higher amount of H$_2$, with column densities about 2\,dex higher than towards Sk\,-67$^\circ$111. The rotational excitation of H$_2$ towards Sk\,-67$^\circ$101\ to Sk\,-67$^\circ$107\ agrees with a core-envelope structure of the H$_2$ gas with an average temperature of $T_{\rm exc}\sim70$\,K, typical of the CNM. This, and the low Doppler parameter of the lower $J$-levels, indicates that the molecular hydrogen probably arises in confined dense regions in these sight-lines, where it is self-shielded from UV radiation. The gas towards LH\,54-425\ and Sk\,-67$^\circ$111, on the other hand, appears to be fully thermalised, with $T_{\rm exc}\sim200$\,K. This could suggest that within the $<5\arcmin$ field we sample the core of a molecular hydrogen cloud and reach its edge between Sk\,-67$^\circ$107\ and Sk\,-67$^\circ$111. After including the LH\,54-425\ sight-line in our study, however, the spatial variation is not a smooth change from Sk\,-67$^\circ$101\ to Sk\,-67$^\circ$111\ (Fig.\,\ref{fig:3454f8}), and density fluctuations on scales smaller than the 5\arcmin\ extent cannot be excluded.
In order to trace these variations back to actual density variations and changes in the physical properties of the gas, we have derived the densities partly from the formation-dissociation equilibrium of molecular hydrogen, using the measured H$_2$ and O\,{\sc i} column densities, and partly from the excitation of the fine-structure levels of neutral carbon. For the latter we have also used the derived $T_{\rm exc}$(H$_2$), assuming that C\,{\sc i} and H$_2$ exist in the same physical region. The density derived in this way is $n_{\rm H} \simeq 40-80 \,\rm cm^{-3}$ towards Sk\,-67$^\circ$101\ to Sk\,-67$^\circ$107, for which we have relatively accurate measurements of the column densities of C\,{\sc i} and C\,{\sc i*}, as well as of $T_{\rm exc}$. This is well above the value $n_{\rm H} \simeq 2 \,\rm cm^{-3}$ derived from the formation-dissociation equilibrium of H$_2$. This is most likely because the hydrogen available for molecular formation is less than the total Galactic $N(\ion{H}{i})$ derived from N(\ion{O}{i}) along a line of sight. Moreover, the H$_2$ self-shielding could be overestimated (and thus $n_{\rm H}$ underestimated) because of a complex absorber geometry. Thus that value for $n_{\rm H}$ gives a minimum density required for the existence of the H$_2$. On all sight-lines $N(\rm {H_2}) \ll N(\ion{H}{i})$. If we assume that $N(\ion{H}{i})_{\rm tot}$ is proportional to $n_{\rm H}$ (from \ion{C}{i}) in the same way as for the inner region of a molecular cloud (meaning $[n_{\rm H}/N(\ion{H}{i})]_{\rm tot}=[n_{\rm H}/N(\ion{H}{i})]_{\rm mol}$), we can derive from Eq.\,(2) the column density of \ion{H}{i} that coexists with H$_2$ in FDE. In this way we find that this is 18\% of the total $N(\ion{H}{i})$ (averaged over the sight-lines Sk\,-67$^\circ$101\ to Sk\,-67$^\circ$107). It is of course likely that the partly molecular core is more dense because it is confined, and that $n_{\rm H}$ is not linearly related to $N(\ion{H}{i})$ as assumed above. Furthermore, $n_{\rm H}$ as derived from \ion{C}{i} is the average density of the molecular part of the gas, where \ion{C}{i} exists, and is again not related to all the \ion{H}{i} gas along a line of sight as assumed above. Thus, $N(\ion{H}{i})_{\rm tot}$ together with $n_{\rm H}=40-80\,\rm cm^{-3}$ gives only an upper limit for the pathlength of the partly molecular cloud, $D_{\rm {mol}}=0.5-1.8$\,pc (included in Table\,\ref{tab:summary}). These derived pathlengths are of the same size as (or smaller than) the lateral extent of our sight-lines ($<1.5$\,pc). This upper limit for $D_{\rm {mol}}$ also agrees with the higher density ratio for the LH\,54-425\ sight-line derived in Sect.\,\ref{subsect:DensityH2}, which otherwise is difficult to explain if we consider this sight-line to sample the edge of the same cloud whose core lies on the Sk\,-67$^\circ$101\ to Sk\,-67$^\circ$107\ sight-lines. Based on $\xi$ and an average density of $n_{\rm H}=60\,\rm cm^{-3}$, $D_{\rm {mol}}$ in the direction of LH\,54-425\ and Sk\,-67$^\circ$111\ is estimated to be 0.1 and 0.6\,pc, respectively. Given these small sizes, the H$_2$ patches observed on these six lines of sight are not necessarily connected. The small pathlength is further important for the shielding of the molecular gas, given the small amount of \ion{H}{i} that is possibly available in FDE with H$_2$.
We thus conclude that the H$_2$ observed along these six lines of sight exists in rather small cloudlets with an upper size of $D_{\rm {mol}}=0.5-1.8$\,pc, and possibly even less, $<0.1$\,pc, as implied by the LH\,54-425\ sight-line. Part of the absorbing gas is possibly located at a distance closer than the 1\,kpc assumed thus far, resulting in an even smaller spatial scale. This agrees with the sub-pc structure of molecular clouds previously detected by, e.g., \cite{Pan}, \cite{Lauroesch:Meyer2000}, \cite{Richter:2003b,Richter:2003a}, and \cite{Marggraf}. On a smaller scale of 5 to 104 AU, however, \cite{Boisse2009} found no variation in the column density of the H$_2$, observed over a period of almost five years towards the Galactic high-velocity star HD\,34078, despite variations in the CH column densities found over the same period. They concluded that while the variation in CH and CH$^+$ is due to the chemical structure of a gas that is in interaction with the star, the H$_2$, located in a quiescent gas unaffected by the star, remains homogeneously distributed. Hence no small-scale density structure was proposed for the quiescent gas on this scale. Our spatial resolution is limited to the smallest separation of 17\arcsec; we therefore have no further information about the AU-scale structure of the molecular gas in these lines of sight. The absorber pathlength through the disc has previously been found to be divided into smaller cloudlets of \ion{H}{i} absorbers \citep{Welty}, which are not necessarily physically connected to the cloud with the observed H$_2$. While the clumpy nature of H$_2$ is verified by different studies, the Galactic \ion{H}{i} gas, when sampled over a line of sight, normally does not show variations on small scales, since such effects statistically cancel out and smooth the total observed \ion{H}{i} (or \ion{O}{i} in our case). If such cloudlets exist in the Galactic disc component, their LSR velocities should be similar, and they would therefore not be resolved in our spectra. Our derived density ratio $\xi$, as well as the relative abundances, suggests that the number of these absorbers might differ along the sight-lines, being lower towards LH\,54-425\ and Sk\,-67$^\circ$111. This interpretation also agrees with the higher electron density derived for these lines of sight. We summarised in Sect.\,\ref{sect:Introduction} the detection of small-scale structure by different independent methods. This paper presents yet another approach to identify the structure of the gas on small scales. FUV and UV spectral data, with the large range of transitions particularly of H$_2$ and \ion{O}{i} together with the fine-structure levels of \ion{C}{i}, provide a sensitive approach to study the physical properties of the gas on small scales. However, our spatial resolution is limited by the separation of our background sources, and does not allow us to directly detect the fine structure that might exist on even smaller scales. Our findings are consistent with the small-scale structure found earlier in the interstellar medium, and suggest inhomogeneity in the interstellar Galactic gas down to, and possibly below, scales of 0.1\,pc. \begin{acknowledgements} We thank Ole Marggraf for his careful proofreading of this paper. SNS was supported through grant BO779/30 by the German Research Foundation (DFG). \end{acknowledgements} \bibliographystyle{aa}
\section{Introduction} The current infrastructure of the national grid is aging and has been built mainly to support a one-way power flow. It is a centralized system in which energy is generated in power plants and transmitted over long distances to consumption sites, where consumers are largely passive. See Figure 1(a) \citep{ElecGenerationFigure} for an illustration. The long-distance transmission not only creates power losses but also fails to integrate demand and supply in real time, which causes overproduction. For example, during winter 2017, Germany experienced unexpected negative electricity prices, producing too much when there was not enough demand (over the weekends). As a consequence, producers had to pay to be able to ``sell'' their product. Thus, people have been looking for alternatives to mitigate these problems. \begin{figure}[h] \begin{subfigure}[t]{0.4\textwidth} \hspace{-0.5em} \includegraphics[width=80mm,scale=0.9]{centralgeneration.png} \end{subfigure}\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\ \begin{subfigure}[t]{0.4\textwidth} \hspace{-1em} \includegraphics[width=80mm,scale=0.9]{distributedgeneration.png} \end{subfigure} \caption{\footnotesize (a) Central generation, one-way power flow; \,\,\,\,\,\,\ (b) Distributed generation \& storage, two-way power flow.} \label{fig:generation} \end{figure} Meanwhile, we are moving to a more distributed, and arguably more complex, network, in which energy generation will no longer come only from power plants. This distributed network involves houses with solar photovoltaic (PV) panels, electric vehicle (EV) charging stations (which can act as storage), locally produced wind power, and other renewable energy technologies. Individuals are no longer just consumers but actively take part in the production of their energy and collectively feed electricity into the grid. They are the so-called \emph{prosumers}, both consumers and producers of energy. That is to say, the large companies will no longer be the only ones with a stake in the energy space, but the public as well. This new movement creates a range of opportunities to retrofit the centralized way of managing today's grid, one of which is the ability to react in real time to the intermittent generation and the volatility of the national grid in the current wholesale market. A microgrid is formed by agglomerating small-scale prosumers; they constitute a local energy market and trade energy within their community. A microgrid can help reflect the real-time prices of energy and facilitate a sustainable and reliable way of locally balancing generation and consumption. Local energy markets can be viewed as a rising form of the \emph{sharing economy}, alongside the well-developed house sharing and transportation sharing. See Figure 1(b) \citep{ElecGenerationFigure} for an illustration. Such an initiative can help reduce the latency of managing congestion and distribution faults \citep{Jimeno2011}, and its decentralized structure can improve resilience against cyber attacks. Overall, it helps communities become more self-sustainable, both ecologically and economically. Smart grid hardware, such as smart meters, has been a significant catalyst in moving to more autonomous networks.
Smart meters can handle more complex information: they can record customer-generated energy profiles, optimize and adapt to consumption patterns, use these patterns to optimize the price of energy, and control the parameters of household appliances. In the meantime, these systems need a secure, reliable, and transparent mechanism that manages the markets, supports the local energy balancing needs, and provides a user-friendly application for people to use. New control frameworks are therefore needed to fulfill these requirements. Blockchain technologies make it possible to build an autonomous, secure, reliable, and transparent market. A blockchain is formed by a series of blocks which contain a record of transactions. Each block references the previous block via its hash value, thereby creating a chain of transactional records that can be traced back to the genesis block; the most trusted and reliable chain is the longest. The blockchain is not provided by a single server but is a distributed transactional database with globally distributed nodes linked by gossip protocols. Blockchain protocols are designed to possess a number of important features, including \emph{decentralization}, \emph{immutability}, and \emph{pseudo-anonymity}, and therefore provide a level of security, trustlessness, and privacy. {\bf Problem Statement.} To realize autonomous peer-to-peer energy trading within microgrids, one must demonstrate that the use of blockchain technologies to build a peer-to-peer energy trading system is technically feasible and reliable. More importantly, it is desirable to explore the associated costs and limitations. {\bf Contributions.} We {\bf develop two frameworks} that aggregate local demand and supply by auction mechanisms, buy and sell energy on behalf of prosumers by implementing predefined agent logic, and enable autonomous transactions through smart contract functions. The {\bf evaluation} is undertaken predominantly from an economic point of view. We compute the aggregated costs households would incur if they always paid the national grid whenever they lack energy or do not generate enough from their PV systems. These data are compared to the results obtained from the two separate auction mechanisms. These {\bf cost analyses} provide an insight into whether blockchain technologies are effective in integrating the components necessary to constitute a micro-market setup. Last, we reflect on the assumptions made and the challenges yet to be addressed, and point out directions for future work in these emerging applications. {\bf Organization.} This paper is organized as follows. Section 2 introduces the integral components of the system, exhibits the design of the two frameworks, and presents experimental parameters and some implementation details. Section 3 presents a high-level comparison of the key characteristics of the two frameworks and the simulation results, including some cost analyses. We conclude with a reflection on the design and a discussion of future directions in Section 4. \subsection{Related Work} {\bf Blockchain Technology Application in Microgrids.} Since the emergence of blockchain technologies, there has been strong interest in exploiting their applications in the energy sector, with the hope of reducing costs, enhancing security, enabling automation, and improving efficiency. This adaptation requires the integration of agent-based modeling, market mechanisms, microgrid infrastructures, and more.
To the best of our knowledge, there are more attempts in this direction in industry than there are developed prototypes documented in the literature. \cite{PowerSystems2019} proposed a cooperative model in which electricity producers and consumers form coalitions and collectively negotiate the electricity price. The model included a transaction settlement system such that the contracts are recorded in a blockchain protocol. Their simulation results revealed the connection between the number of prosumers, the average negotiation time, and the number of final contracts. \cite{2017EI2} proposed a peer-to-peer electricity trading mechanism that enables smart contract transactions. In their mechanism, a single seller aggregates local producers' supply and auctions off the electricity to individual consumers. The authors ran experiments with 100 prosumers, considering their satisfaction index, and explored the relation between the demand/generation ratio and the satisfaction indices. Mengelkamp et al. \citep{MENGELKAMP2018870} contributed to the literature on a conceptual level by presenting a blockchain-based microgrid energy market and proposing a framework which contains seven market components. \cite{2018BigComp} investigated the pros and cons of several blockchain protocols and concluded that Ethereum smart contracts were the most appropriate for building a blockchain-based electricity trading platform in a microgrid. {\bf P2P Energy Trading Platforms.} There is a large body of literature on microgrids' benchmarks, benefits, and trials \citep{Mariam2013,RenewableReviews2016}. Here, we list a few projects whose outcome was a P2P energy trading platform. \cite{PowerLedger} is an energy trading platform that allows for decentralized selling and buying of renewable energy. The platform has a dual-token ecosystem operating on two blockchain layers, POWR and Sparkz, and provides consumers with access to a variety of energy markets around the globe. TransActive Grid is a joint venture between the startup LO3 Energy and the blockchain tech company ConsenSys. They launched a proof-of-concept project, the Brooklyn Microgrid, which aimed at enabling local transactions based on blockchain technologies. They did not have an Initial Coin Offering and expected to profit from transaction fees. To the best of our knowledge, they were oscillating between developing their own cryptocurrency to fuel their smart contracts and employing Ethereum directly. There are other P2P energy trading platforms such as Grid Singularity, NRGcoin, SolarCoin, Piclo, and Vandebron. {\bf Auction Mechanisms.} The continuous double auction (CDA) is very common and has become one of the dominant auction models for trading financial products. The Nasdaq Stock Market and the New York Stock Exchange, for example, implement variants of the CDA. With the flourishing of electronic commerce, the CDA has also been adopted by many online marketplaces, such as LabX and Dallas Gold and Silver Exchange. Since \citet{DBLP:conf/ijcai/DasHKT01}, researchers have developed software-based agent mechanisms to automate double auctions for stock trading with or without human interaction. More recent research on bidding in continuous double auctions and its truthfulness can be seen in \citep{DBLP:conf/ijcai/ChowdhuryKT018, DBLP:conf/aaai/Segal-HaleviHA18}.
In a uniform-priced double-sided auction \citep{DBLP:journals/sj/MaityR10,Sinha2008}, a single price at the end of each time interval is determined by the intersection of the aggregated demand and supply curves derived within the epoch. \section{Framework Design and Implementation} \subsection{Components of the Model} {\bf The Rationale of Adopting a Blockchain Protocol.} The most influential consensus mechanisms are Proof-of-Work (e.g., Bitcoin \citep{bitcoin2009}, Ethereum \citep{ethereum2014}, and other Altcoins) and Proof-of-Stake (e.g., Algorand \citep{DBLP:journals/iacr/GiladHMVZ17}, Ouroboros \citep{DBLP:conf/crypto/KiayiasRDO17}, Ouroboros Praos \citep{DBLP:conf/eurocrypt/DavidGKR18}). Since finding a nonce to validate transactions and create new blocks is computationally intensive, PoW is often criticized for wasting electricity. PoS is therefore preferred for environmental reasons, but it still awaits validation at scale. For example, Ethereum was the first to call for a switch from PoW to PoS and proposed the Slasher and Casper protocols, and it even set deadlines for such a switch; however, up until now, it still uses a PoW protocol. Depending on the range of nodes that can write and read the data on a chain, blockchain protocols can be classified as permissionless/public blockchains and permissioned/private blockchains. A public blockchain (e.g., Bitcoin and Ethereum) offers a high level of decentralization, security, and immutability. A permissioned blockchain (e.g., Hyperledger) offers a higher level of efficiency, but it is similar to a classical distributed system, which is challenged by fault detection, fault tolerance, and fault resilience. The three leading contenders that enable smart contracts are Ethereum, Hyperledger Fabric, and NEO. Hyperledger Fabric offers an enterprise-grade framework; however, only a small and fragmented portion of its tools is open access, and implementing a complex, integrated P2P energy trading platform would require considerable inference about and debugging of these tools. NEO provides fewer resources and less documentation than Ethereum. Ethereum can execute relatively complex business logic in a secure network. Its smart contract is a piece of code that can contain data, its own balance of Ether, and functions that are triggered when the conditions/terms within it are fulfilled. Smart contracts enable a much wider range of applications to be built on top of the Ethereum network, where they can interact with each other. Therefore, Ethereum was the chosen blockchain technology. \textbf{Ethereum Blockchain Costs.} In Ethereum, any transaction that changes state in the blockchain database needs to be mined. Transactions that change state incur a cost, which is paid in \emph{gas}, the unit of computation in the Ethereum blockchain. Each operation (e.g., adding, multiplying) has a pre-defined cost in gas \citep{ethereum2019}. Therefore, the overall gas necessary to complete a transaction amounts to the number of operations it needs to perform up to its completion. The total cost of a transaction is given by: \begin{equation} \hspace{10em} \textit{Cost of Transaction} = \textit{Gas Limit} \times \textit{Gas Price} \end{equation} The \emph{gas limit} is a parameter defined in any transaction; it sets the maximum amount of gas that the user is willing to pay for said transaction. Since the simulation is run on a local network using Ganache-cli, we set the gas price to $2\times 10^{10}$ Wei, where Wei, equivalent to $10^{-18}$ Ether, is the smallest unit of Ether. One Ether is taken to cost 250 dollars for the entirety of the analysis; although the volatility of cryptocurrencies is beyond the scope of this work, all these parameters can be customized in the simulation file. Lastly, each transaction between user accounts carries a set base fee of 21,000 gas.
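To make these numbers concrete, the short sketch below evaluates Eq.\,(1) for a plain transfer between two user accounts under the simulation parameters just stated; it is purely illustrative and not part of the platform code.
\begin{verbatim}
# Illustrative evaluation of Eq. (1) with the simulation defaults.
WEI_PER_ETHER = 10**18
GAS_PRICE_WEI = 2 * 10**10   # gas price on the local Ganache-cli network
BASE_TX_GAS   = 21_000       # base fee of a plain inter-account transaction
USD_PER_ETHER = 250          # assumed Ether price

def tx_cost_usd(gas_used, gas_price_wei=GAS_PRICE_WEI):
    """Cost of transaction = gas used x gas price, converted to USD."""
    return gas_used * gas_price_wei / WEI_PER_ETHER * USD_PER_ETHER

print(tx_cost_usd(BASE_TX_GAS))  # -> 0.105 USD for a plain transfer
\end{verbatim}
A plain transfer thus costs roughly 0.11 dollars at these rates; contract calls that execute more operations consume correspondingly more gas.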
{\bf Market Mechanism.} The market mechanism defines the conditions and rules under which energy is traded. The two mainstream market mechanisms identified in the literature, the continuous double auction \citep{MENGELKAMP2018870} and the uniform-price double-sided auction \citep{DBLP:journals/sj/MaityR10,Sinha2008}, are investigated in this work. The continuous double auction is a mechanism often used in financial market exchanges. There are two sides: the buyer (bid) side and the seller (ask) side. Each participant bids their value and quantity of the desired good; if any bid surpasses the lowest-priced ask, the two are matched and trade. The uniform-priced double-sided auction \citep{DBLP:journals/sj/MaityR10} is more common in day-ahead energy markets. These markets buy and sell energy for defined time intervals of the following day. This process reduces volatility in energy prices and provides reliability of energy availability for the buyer; it also allows producers to better schedule their generation capacity. The auction works in discrete intervals, in our case set to one hour, during which all bidders and sellers submit offers. At the end of this hour, a single price is defined by the intersection of the aggregated demand and supply curves. Trade occurs between all the participants at the equilibrium price, and the market is reset for the next hour. \textbf{Dataset Acquisition.} The dataset is one of the essential components to validate our design. We acquired a dataset through the Pecan Street website, a research network with data on over a thousand homes, of which 250 were households with installed solar systems \citep{pecan}. This dataset comprises houses mainly located in Austin, Texas, USA, providing plenty of solar coverage, with information available about their energy profile, load, demand, how much was required from the grid, and which utilities constituted the total demand. {\bf Formulate the Price of Energy.} The data was preprocessed using Python to isolate the households into separate CSV files and to calculate their respective \emph{levelised cost of energy} (LCOE), which is the sum of the system cost over a year (considering the system life cycle to be 25 years) over the sum of the generation that the system produced, giving a \$/kWh metric. \begin{equation} \hspace{10em} \textit{LCOE} = \frac{\textit{Sum of the system cost over a year}}{\textit{Sum of generation that system produced}} \end{equation} The cost of the system was derived by taking the national average price per kWh of installed solar capacity, applied to the different tiers of installed capacity, e.g., 0-4 kWp, 4-10 kWp, and 10-50 kWp. Additionally, battery prices were included in the derivation of the LCOE, by introducing the cost of an 8 kWh battery (Powervault Lithium-ion G200) for every system with a capacity between 2 and 4 kWp, and of one or two Tesla Powerwall batteries for systems with a capacity of 4 to 6 kWp or of 6 kWp and above, respectively.
At the end of this process, each household ended up with two individual LCOE values: one for a system without a battery and one for a system with a battery. Finally, a large biomass producer was introduced, as it is a controllable renewable energy system that can provide a more stable supply than solar, with LCOE prices ranging from 5 to 12 cents per kWh to emulate the different types of biomass used. This provides the microgrid with self-sustainability by increasing its generation capability. The price of energy is set so that it benefits both the producer and the consumer: the energy is priced above the seller's base cost, but a maximum price is set for the buyer that does not exceed the national grid price. If any energy were sold at a price higher than the grid price, there would be no incentive for consumers to use the microgrid market. Hence, these two constraints were taken into consideration, and a normal distribution was attributed to each agent. The minimum price, i.e., the LCOE of each agent (base electricity cost), was set at two standard deviations below the mean, and the national grid price at two standard deviations above the mean, resulting in a \emph{truncated normal distribution}. \subsection{Two Frameworks} In this section, we present two frameworks. The first takes full advantage of the blockchain technologies; the second is less decentralized, as it requires a component to be off-chain in order to alleviate the computational work on the blockchain. We describe how data and information flow in the systems and how the application interacts with the blockchain. \textbf{Framework 1 (Decentralized, Continuous Double Auction).} The first framework (Figure 2(a)) uses the full functionality of the blockchain. It is a server that households can connect to, a database with immutable information that people can access and write to, as well as a platform for the exchange of value. The bottom of the diagram indicates that each household, holding a smart meter, connects and relays information in real time to its representative agent in the microgrid application. Access to this connection is restricted to the smart meter of the respective household owner, and it provides all the information the agent needs in order to make rational decisions on the owner's behalf. If this is the first time an agent connects to the blockchain, the agent requests the creation of a household contract from the household factory through the Web3 interface, which enables agents to communicate with the blockchain. The household factory creates the household contracts on the agents' behalf and attributes their Ethereum addresses, respectively. This ensures consistency of the smart contract code, i.e., no malicious code can be inserted by any participant if all contracts have to be created by this factory. Moreover, the factory creates contracts with the decentralized exchange address automatically configured, and it is a useful point of access to request statistics about every participant's readable information. Household contracts then have a direct restricted channel to their agent, which provides and updates all the necessary information, thereby serving as a repository of information. They are then allowed to submit bids/asks, delete them from the exchange, and make calls to the exchange when triggered by the agent's logic. The decentralized exchange takes all the bids and asks and orders the bids in descending and the asks in ascending order of price.
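A minimal sketch of this ordering-and-matching step is given below. The deployed logic lives in the exchange smart contract (Solidity); the Python rendering, the field names, and the convention of trading at the ask price are illustrative assumptions.
\begin{verbatim}
def match_orders(bids, asks):
    """bids/asks: lists of (price_per_kWh, quantity_kWh, account)."""
    bids = sorted(bids, key=lambda o: -o[0])   # best (highest) bid first
    asks = sorted(asks, key=lambda o: o[0])    # best (lowest) ask first
    trades = []
    # Match while the best bid meets or exceeds the best ask.
    while bids and asks and bids[0][0] >= asks[0][0]:
        (pb, qb, buyer), (pa, qa, seller) = bids[0], asks[0]
        qty = min(qb, qa)
        trades.append((buyer, seller, qty, pa))  # trade at the ask price
        bids[0], asks[0] = (pb, qb - qty, buyer), (pa, qa - qty, seller)
        if bids[0][1] == 0: bids.pop(0)          # drop filled orders
        if asks[0][1] == 0: asks.pop(0)
    return trades

# Example: one bid crosses two asks.
print(match_orders([(9.0, 5.0, 'house_A')],
                   [(7.0, 3.0, 'house_B'), (8.5, 4.0, 'house_C')]))
# -> [('house_A', 'house_B', 3.0, 7.0), ('house_A', 'house_C', 2.0, 8.5)]
\end{verbatim}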
In this way, the exchange automates the matching of offers and relays the appropriate transaction commands between household contracts. All these trade records are stored in the blockchain. The accountability of each entity's balance is taken care of by the consensus mechanism. \begin{figure}[h] \begin{subfigure}[t]{0.4\textwidth} \hspace{-1em} \includegraphics[width=85mm,scale=1]{Framework1.png} \end{subfigure}\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\ \begin{subfigure}[t]{0.4\textwidth} \hspace{-1em} \includegraphics[width=85mm,scale=1]{Framework2.png} \end{subfigure} \caption{\,\,\,\,\,\,\,\,\,\ (a) Framework 1; \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\ (b) Framework 2.} \label{fig:Framework} \end{figure} \textbf{Framework 2 (Semi-decentralized, Uniform-price Double-sided Auction).} The second framework is shown in Figure 2(b). Compared to the first framework, it implements a uniform-price double-sided auction instead of a continuous double auction. More importantly, the household contracts and the factory are removed, eliminating an intermediary layer of transactions that gets information onto the exchange. The decentralized exchange no longer automates the matching of offers coming from the agents. Instead, the smart contract is used to store the information relayed from all the different participants of the microgrid. At the end of each epoch, the application collects the offers from the decentralized exchange and calculates the regression curves of the supply and demand sides in order to find their intersection, the uniform price. The matching algorithm is then initialized, which relays the respective matchings with their respective prices to each agent. The algorithm triggers their payment functionality, thereby transacting the necessary amount of Ether from one account to the other through the Web3 interface. \textbf{Agent Bidding Strategy Logic.} The agent bidding strategy logic is shown in Figure \ref{fig:agentlogic}. The first decision node from the top checks whether an agent has excess energy or is short of energy. There then follow several battery thresholds at which the agent decides to sell/buy energy or to charge/discharge the batteries. This process results in a behavior in which energy is sold when prices are high and bought when prices are low. Furthermore, the agent has a level of ``forecasting'' capability, in which it checks the energy that will be needed for the following 10 hours (configurable). This attempts to emulate the actual forecasting done in energy markets in order to predict how much demand and supply individuals will need in the upcoming day. Forecasting provides a more reliable level of energy supply, as suppliers can be aware of the amount of energy that will be required at different times of day, and control systems can prepare accordingly. Preprocessed data is prepared by acquiring weather data from the National Renewable Energy Laboratory (NREL) within the same time frame and location \cite{nrel}. This ``forecasting'' function permits agents to reduce the number of transactions they have to make per day, by filling their batteries with the energy they will need for that time interval. Otherwise, they would have to bid almost every hour, which is inconvenient and would result in a system that is very costly in transaction fees alone.
Finally, there is a safety-net mechanism in which the agents buy energy from the national grid if the batteries fall under 20\% of their total capacity. \begin{figure*} \includegraphics[width=160mm]{Picture1.png} \caption{Diagram showing the logic behind the agents' strategic bidding and selling.} \label{fig:agentlogic} \end{figure*} {\bf Implementation.} For the implementation, a wide range of technologies is used, including Solc, Mocha, React.js, Next.js, Ganache-cli, Metamask, and Web3. Ganache-cli is an Ethereum development tool that simulates full client behavior by running a personal blockchain on a local machine. It is essential for making the testing faster and easier, as it can run transactions very quickly compared to the online Ethereum test networks. Web3 provides an interface for communication between the application and the Ganache provider, i.e., sending transactions, creating contracts, and requesting blockchain account information. The initial step is the creation of adequate smart contracts, named household contracts, that enable the storage of information and add bidding functionality to the households. These then communicate with a smart contract that serves the function of a decentralized exchange market. The smart contracts are tested using the Mocha library to check that they work as desired and to learn about their limitations, i.e., memory limitations, computational cost optimization, and the functionality of transactions between contracts and between Ethereum accounts. Once the smart contracts are sufficiently refined, a simplified UI application is developed using React.js and Next.js. We program the market simulation in JavaScript, as the underlying UI application, to match the stack of technologies used to code the front-end of the application. We start by importing the respective CSV files containing each household's hourly load and demand into separate class instances (data structures) called agents. Each of these agent instances represents a household, in which all its data (e.g., amount of battery charge, load, demand, Ether balance, historical bids/asks) is stored, along with functions representing the functionality that the agent is able to perform. These are used to interact either with its own data structures or with the smart contracts. The agents are coded to take independent actions based on the most attractive conditions in the market; no coordination between the agents is built into the system. At every epoch, each agent is given the chance to act according to these sets of actions. At the end of the simulation, the data is collected from the separate agent data structures and written to a CSV file to be examined by a Python script (for better visualization and mathematical tools). The analysis of the results is undertaken by calculating the aggregated costs of all the households if they were always to pay the national grid when they lack energy or do not generate enough from their PV systems. This data is then compared to the results obtained from the two frameworks. The analysis is done predominantly from an economic point of view. These results are discussed to assess whether using microgrids is economically beneficial for both consumers and producers, as well as to provide an insight into whether blockchain technologies are effective in integrating the components necessary to constitute a micro-market setup.
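For illustration, the condensed sketch below renders one decision step of the agent logic of Fig.\,\ref{fig:agentlogic} in Python (the simulation itself is written in JavaScript). The 20\% safety net and the 10-hour look-ahead come from the description above; the 50\% and 80\% battery thresholds, the argument names, and the returned action labels are illustrative assumptions.
\begin{verbatim}
def decide(generation, load, battery_charge, battery_capacity,
           market_price, ask_price, bid_price, forecast_need):
    """Return one (action, amount_kWh) pair for the current epoch."""
    surplus = generation - load                     # kWh this epoch
    charge = battery_charge / battery_capacity
    if charge < 0.20:                               # safety net: use the grid
        return ('buy_from_grid', forecast_need)
    if surplus > 0:
        if charge >= 0.80 or market_price >= ask_price:
            return ('submit_ask', surplus)          # sell when prices are high
        return ('charge_battery', surplus)
    if charge <= 0.50 and market_price <= bid_price:
        # Buy enough to cover the forecast need for the next 10 hours.
        return ('submit_bid', forecast_need)
    return ('discharge_battery', -surplus)          # cover the deficit locally

# Example: a deficit hour, cheap market energy, half-full battery.
print(decide(generation=0.2, load=1.0, battery_charge=4.0,
             battery_capacity=8.0, market_price=4.5,
             ask_price=8.0, bid_price=5.0, forecast_need=6.5))
# -> ('submit_bid', 6.5)
\end{verbatim}
Here the ask and bid prices would be drawn from the agent's truncated normal price distribution described above.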
\section{Evaluation} \textbf{A high-level comparison of the key characteristics of the two frameworks.} A comparison of the two frameworks is given in Table \ref{tab:keycharacteristics}. At a high level, Framework 1 uses the storage, computation, and exchange-of-value functions of the blockchain, whereas Framework 2 does the computation externally. The latter takes advantage of the fact that off-chain processing is quicker and free of gas costs, hence vastly reducing the number of transactions used within its operation. However, this computation has to be done on a centralized server, which is prone to attacks, as it is less secure than the data on a blockchain; nevertheless, one could assume that current centralized servers offer a comparable level of security. This off-chain step is necessary because the computation required to calculate the uniform price was too heavy to implement in a smart contract. In contrast, Framework 1 is more secure, as all the information is within the blockchain, yet slower due to the larger number of transactions it has to execute. In addition, it can only run the continuous double auction. Framework 1 is, therefore, more decentralized, as every participant pays its share of the computation of the decentralized exchange when placing a bid/ask. \begin{table*}[] \centering \begin{tabular}{|l|l|} \hline Framework 1 & Framework 2 \\ \hline \begin{tabular}[c]{@{}l@{}}Computation and matching are done within\\ the blockchain through smart contracts\end{tabular} & Matching is done externally to the blockchain \\ \hline Uses more transactions = more gas expenditure & Uses fewer transactions = less gas expenditure \\ \hline \begin{tabular}[c]{@{}l@{}}Less flexible - cannot implement heavy \\ computation procedures\end{tabular} & \begin{tabular}[c]{@{}l@{}}More flexible - can implement more complex \\ procedures (e.g., other auction mechanisms)\end{tabular} \\ \hline Slower system & Faster system \\ \hline More secure & Less secure \\ \hline \end{tabular} \caption{Key characteristics of the two frameworks} \label{tab:keycharacteristics} \end{table*} \smallskip Next, we present a number of simulation results. We ran the two frameworks with real-world data over weekly, monthly, and six-month periods. Fig. 4(a) and Fig. 4(b) show the simulation results of Frameworks 1 and 2 over one week, respectively. At times when there was more supply than demand in the microgrid, there was no buying and selling activity in the market. In this case, the prices of electricity were left unchanged from the previous value, as the flat lines show; this is merely to allow better visualization of the prices. \begin{figure}[h] \begin{subfigure}[t]{0.4\textwidth} \hspace{-1em} \includegraphics[width=85mm,scale=1]{F1week.png} \end{subfigure}\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\ \begin{subfigure}[t]{0.4\textwidth} \hspace{-1em} \includegraphics[width=85mm,scale=1]{F2week.png} \end{subfigure} \caption{\footnotesize (a) Framework 1, continuous double auction; \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\ (b) Framework 2, uniform-price, double-sided auction.} \end{figure} {\bf Electricity Prices in the Microgrid.} Fig. 4(a) and Fig. 4(b) show that the continuous double auction results in a higher average price of around 8.0 cents per kWh. In contrast, the uniform-price auction results in an average of 4.4 cents per kWh, as well as exhibiting a more volatile range of prices. In hindsight, this is to some extent because the mechanism used in the latter does not follow individual rationality: the intersection of the fitted demand and supply curves may be found at prices lower than the producers' costs (averaging 7 cents per kWh), which results in a negative utility.
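To see how this can happen, the toy sketch below mimics the Framework 2 clearing step: linear curves are fitted to the aggregated demand and supply offers, and their intersection is taken as the uniform price. All offers are invented for illustration, and the simple least-squares fit stands in for the regression used in the application.
\begin{verbatim}
import numpy as np

bids = [(9.0, 3.0), (8.0, 4.0), (5.0, 6.0)]  # (price in c/kWh, kWh), buyers
asks = [(4.0, 5.0), (7.0, 4.0), (7.5, 4.0)]  # sellers, priced at their LCOE

def aggregate(offers, descending):
    """Cumulative (quantity, price) points of the demand/supply curve."""
    offers = sorted(offers, key=lambda o: -o[0] if descending else o[0])
    qty, pts = 0.0, []
    for price, q in offers:
        qty += q
        pts.append((qty, price))
    return np.array(pts)

demand = aggregate(bids, descending=True)
supply = aggregate(asks, descending=False)

# Fit price = slope * quantity + intercept on each side, then intersect.
d_slope, d_icpt = np.polyfit(demand[:, 0], demand[:, 1], 1)
s_slope, s_icpt = np.polyfit(supply[:, 0], supply[:, 1], 1)
q_star = (s_icpt - d_icpt) / (d_slope - s_slope)
p_star = d_slope * q_star + d_icpt
print(round(p_star, 2))  # -> 6.49, below the 7.0 and 7.5 cent asks
\end{verbatim}
With every matched participant trading at the uniform price, the sellers whose costs lie above 6.49 cents per kWh in this toy example would realise a negative utility, which is exactly the effect discussed above.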
The continuous double auction, in contrast, follows individual rationality: consumers pay less than they would pay to the national grid, and producers sell at a higher price than their LCOE value, resulting in a beneficial situation for both parties. Furthermore, the uniform-price auction shows a larger amount of electricity volume being traded. This characteristic is due to the fact that, in the continuous double auction, some orders do not get filled for not having a high enough price relative to the sell side. Perhaps introducing more producers into the market would add a stronger supply side, thereby pushing the intersection of the curves to a higher price, which would lead to more reasonable prices. Simulations over longer periods were done and are consistent with this observation. Removing the zero-activity periods, the average price is 8.1 cents per kWh in Framework 1 and 4.2 cents per kWh in Framework 2 in a six-month simulation. {\bf Blockchain Framework Average Costs vs. National Grid Average Cost.} Figs. 5(a) and 5(b) compare the costs of using the microgrid P2P trading platforms with the cost of using the national grid alone. The average total trading cost per household comes to around 0.74 dollars per day, whereas the average cost of using the grid per household is 2.18 dollars a day. These results remain consistent with the longer-period simulations, which show that national grid costs are around three times those of the blockchain P2P trading platform. \begin{figure}[h] \begin{subfigure}[t]{0.4\textwidth} \hspace{-1em} \includegraphics[width=85mm,scale=1]{F1vsNGrid.png} \end{subfigure}\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\ \begin{subfigure}[t]{0.4\textwidth} \hspace{-1em} \includegraphics[width=85mm,scale=1]{F2vsNGrid.png} \end{subfigure} \caption{\footnotesize (a) Average costs of Framework 1 vs national grid average cost; \,\,\ (b) Average costs of Framework 2 vs national grid average cost.} \end{figure} {\bf Battery Charge Percentages.} Figures 6(a) and 6(b) show the average battery charge percentages in the two frameworks, respectively. We can observe a periodic pattern in the uniform-price double auction at times when the battery charge reaches 50\%, as this triggers the Agents Logic to purchase enough energy to cover the agent's needs for the following 10 hours. This also coincides with the supply going up during daylight, implying that the Agents Logic correctly manages the agents' needs on a day-to-day basis: it regulates its purchases so that the energy storage covers demand until the early hours of the following day, when the sun is up again and it knows it will be able to recharge its battery. Similar patterns can be observed in the continuous double auction mechanism, where the battery charge does not tend to drop much below 50\%. The difference in traded volume between the two mechanisms is reflected in the differences between these two figures: the spikes of battery charge in Figure \ref{fig:battery1} tend to be less abrupt, as some of the bids were not filled in the order book due to their lower price.
\begin{figure}[h] \begin{subfigure}[t]{0.4\textwidth} \hspace{-1em} \includegraphics[width=85mm,scale=1]{battery1.jpg} \label{fig:battery1} \end{subfigure}\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\ \begin{subfigure}[t]{0.4\textwidth} \hspace{-1em} \includegraphics[width=85mm,scale=1]{battery2.jpg} \end{subfigure} \caption{\footnotesize (a) Average Percentage of Battery Charge in Framework 1; \,\,\,\,\,\,\,\,\,\,\,\,\ (b) Average Percentage of Battery Charge in Framework 2.} \end{figure} \section{Reflection and Future Work} This section comprises a critical reflection on the implemented framework designs and the results presented, along with some observations about the applicability of blockchain technologies to the microgrid use case. Some possibilities for further work are also discussed. {\bf Reflection.} Two market mechanisms have been implemented with real-world data, showing quite different results. The uniform-price auction has been shown to result in a larger traded volume but at lower prices. Some assumptions were made in calculating the prices of solar energy systems. Since the focus of this paper is the overall architecture design, for simplicity, it does not yet take into consideration transmission costs, distribution costs, and battery charging/discharging inefficiencies. More fundamentally, there are still many challenges facing blockchain technologies. For example, smart contract limitations prevented the introduction of more complex logic. Also, for this specific scenario in a closed testing environment, Ethereum's transaction throughput was not a concern, as this market would be trading every hour; therefore, the amount of data to process was not a problem. As Ethereum has 15-second block intervals, there are plenty of blocks in which to insert all the necessary transactions within an hourly increment. However, this could become more of a problem as trading intervals shorten (e.g., to 1 or 5 minutes). In that case, not all transactions would be processed in time for the time interval to finalize, leaving the network congested and the market with incomplete information. Innovative solutions to some of these problems are under development but will probably take some time to mature. An example of this is the IOTA project. It is a 'ledger of things' built on a tangle framework, which is computationally lighter, has low energy consumption, is highly scalable, and has zero transaction fees. Indeed, this technology is promising, but it is still a largely academic and untested framework, which does not have the same level of reliability as the existing blockchain protocols. Lately, the IOTA Foundation has been working on an additional smart contract layer on top of its existing tangle framework, which could open new doors to a more effective, low-cost alternative solution to the problems that blockchains face today. It will be interesting to see how these technologies unfold in the near future and whether they can fulfill the very ambitious goals they have set for themselves. {\bf Future Work.} The transaction costs and efficiency of the system may be improved by optimizing the parameters in the Agents Logic algorithm, for example, the time and price at which an order is placed, and the charge levels at which the battery is charged and discharged.
It is essentially a minimization problem that takes into account a larger number of parameters and market conditions, along with constraints related to the household owner's components, such as the battery's depth of discharge and the inefficiencies associated with it. Further testing would require more extensive simulations and a consistent framework across both market mechanisms in order to reach a solid conclusion about which one is most applicable. One may also attempt to construct alternative frameworks in which agents communicate directly with the decentralized exchange smart contract that uses the continuous double auction (chosen for its preferable price range and low computational cost). A caveat is that the smart contract would not be able to automate the transaction from one party to another when it finds a match. This is due to the inability of a contract to trigger transactions from one user account to another, as it only has the capability to do so with contract addresses. Alternatively, a contract could be created for each successful match/trade that would contain a balance sufficient to fill the desired amount being bought. This way, when a match is found, the market would automatically trigger the smart contract to exchange value between the two participants and return the remaining balance. Of course, any further work would have to test whether the creation of such per-trade contracts would be cheaper or more costly than the current approach. One may also want to consider agents' possible strategic behavior when designing a framework. Bidding strategy is an important problem in auction mechanisms, and many strategies for the continuous double auction have been developed. Over the last decades, with the emergence of electronic markets \citep{Friedman1992}, there has been considerable emphasis on strategies for software trading agents, such as the Zero-Intelligence (ZI) strategy \citep{GodeSunder1993}, the Zero-Intelligence Plus (ZIP) strategy \citep{Cliff97minimal-intelligenceagents}, and the GD strategy \citep{Gjerstad2001PriceFI}. \section{Conclusion} The decentralized models, if fully developed, can potentially minimize power transmission dissipation, reduce the probability of cascade failure thanks to their inherent islanding capability, and reduce the load on the grid by a significant ratio. The autonomous microgrid and the centralized national grid should complement each other, and contribute to a greener and more economical energy market. \setlength{\parskip}{1em} \noindent {\bf Acknowledgements:} Jie Zhang was supported by a Leverhulme Trust Research Project Grant (2021 -- 2024). \bibliographystyle{cas-model2-names}
\section{Introduction} High transverse momentum partons and their hadronic fragments are powerful and valuable probes of the high energy density matter created at the Relativistic Heavy Ion Collider (RHIC) and the Large Hadron Collider (LHC). These energetic partons are created in the early stage of the collisions and therefore provide access to the space-time history of the transient hot and dense nuclear medium - the quark-gluon plasma (QGP) created in the heavy ion reactions. Specifically, as they propagate through the QGP, they interact with the medium and lose energy, a phenomenon known as ``jet quenching''~\cite{Gyulassy:1993hr,Baier:1996sk,Zakharov:1997uu,Wiedemann:2000za,Gyulassy:2000er,Wang:2001ifa,Arnold:2002ja,Ovanesyan:2011xy}. Experimentally, there has been tremendous progress in recent years~\cite{d'Enterria:2009am} in establishing the jet quenching phenomenon from different observables, such as the suppression of single hadron production~\cite{Adcox:2001jp,Adler:2002xw,Aamodt:2010jd}, dihadron correlations~\cite{Adler:2002tq,Adare:2010ry}, $\gamma$-hadron correlations~\cite{Adare:2009vd, Abelev:2009gu}, and ultimately the alteration of inclusive jet~\cite{Salur:2009vz} and dijet production~\cite{Aad:2010bu,Chatrchyan:2011sx}. Theoretically, many perturbative QCD-based models of jet quenching have been developed and used to extract the medium properties~\cite{Vitev:2009rd}, especially the QGP parton number and energy densities. They are able to successfully describe most of the experimental data and the measured nuclear modification factor $R_{AA}$ of single hadron production in particular. To study jet quenching and parton energy loss in more detail and thus constrain the medium properties better, more sophisticated observables have been proposed to improve our understanding of inelastic parton interactions in the QGP. One such example is $\gamma$-triggered hadron production and correlations~\cite{Wang:1996yh}: one studies jet quenching by measuring the $p_T$ distribution of charged hadrons in the opposite direction of a trigger direct photon. Since the photon does not interact with the dense medium, its energy approximately reflects that of the initial away-side parton before energy loss up to corrections at next-to-leading order in $\alpha_s$. One can therefore study the effective medium modification of the jet fragmentation function. Earlier studies on the $\gamma$-triggered light hadron correlations~\cite{Zhang:2009rn,Qin:2009bk,Renk:2009ur} seem to be roughly compatible with the experimental data. Although jet quenching has been successful in describing most of the experimental findings, there are still remaining puzzles. Resolving these puzzles will eventually uncover the exact underlying mechanism for the suppression of leading particles and jets. One well-known difficulty is related to the fact that the $c$ and $b$-quark parton-level energy loss in the QGP has not been sufficient in the past to explain the large suppression of non-photonic $e^{+}+e^{-}$ measured at RHIC~\cite{Dokshitzer:2001zm,Wicks:2005gt}. These non-photonic electrons come from the semileptonic decays of $D$ and $B$ mesons. In the framework of perturbative QCD, collisional dissociation of heavy-mesons in the QGP has been suggested as an additional suppression mechanism~\cite{Adil:2006ra,Sharma:2009hn}. To distinguish with confidence between theoretical models one hopes to have a direct measurement of the heavy meson cross sections in both p+p and A+A collisions. 
Such direct measurements are finally becoming available from both RHIC and the LHC experiments. The main goal of this paper is to investigate new experimental observables, such as $\gamma$-triggered hadron production, and make simultaneous predictions for both light and heavy meson final states. Naturally, we will first focus on the standard parton energy loss mechanism in the QGP. More specifically, we present detailed studies of both $\gamma$-triggered away-side light hadron and heavy meson spectra in heavy ion collisions. Within the same energy loss formalism - the Gyulassy-Levai-Vitev (GLV) approach - we study how the QGP medium affects the $\gamma$-triggered light and heavy meson effective fragmentation functions. We find that the nuclear modification factor $I_{AA}$ behaves very differently depending on the parent parton mass: $I_{AA}$ is considerably suppressed for light hadrons ($I_{AA} < 1$), whereas it can be suppressed ($I_{AA} < 1$) or enhanced ($I_{AA} > 1$) for heavy mesons, depending on the kinematic region, due to the different shape of the heavy quark fragmentation functions. Furthermore, we find that the nuclear modification $I_{AA}$ is flatter in the $B$-meson case compared to that of $D$-mesons due to the smaller energy loss for $b$-quarks in the non-asymptotic (finite energy) case. Thus, a comparative study of $\gamma$-triggered light and heavy meson correlations can be a very useful probe of the physics that underlies the experimentally established jet quenching. The rest of our paper is organized as follows: in Sec.~II we present the relevant formalism for both photon-tagged light hadron and heavy meson production. In Sec.~III we present our phenomenological studies. We first compare our calculation to the experimental data on photon+light hadron correlations. Then, we make predictions for photon-triggered light and heavy meson production relevant to both RHIC and LHC experiments and discuss the differences in the observed nuclear modification. We conclude our paper in Sec.~IV. \section{Photon-tagged light and heavy meson production} In this section we present the relevant formalism for photon-triggered light and heavy meson production in nucleon-nucleon collisions. These formulas will then be used in the phenomenological studies in the next section. \subsection{Photon-tagged light hadron production} Within the framework of the collinear perturbative QCD factorization approach, the lowest order (LO) differential cross section for back-to-back photon-light-hadron production can be obtained from the partonic processes $q\bar{q}\to \gamma g$ and $qg\to \gamma q$, and is given by~\cite{Owens:1986mp} \begin{eqnarray} \frac{d\sigma^{\gamma h}_{NN}}{dy_\gamma dy_h dp_{T_\gamma}dp_{T_h}}= \frac{2\pi\alpha_{em}\alpha_s}{S^2} \sum_{a,b,c} D_{h/c}(z_T) \frac{f_{a/N}(x_a)f_{b/N}(x_b)}{x_a x_b}\overline{|M|}^2_{ab\to \gamma c}\, , \label{light} \end{eqnarray} where $S$ is the squared center of mass energy of the hadronic collisions, and $z_T$, $x_a$, and $x_b$ are given by \begin{eqnarray} z_T=\frac{p_{T_h}}{p_{T_\gamma}}, \qquad x_a=\frac{p_{T_\gamma}}{\sqrt{S}}\left(e^{y_\gamma}+e^{y_h}\right), \qquad x_b=\frac{p_{T_\gamma}}{\sqrt{S}}\left(e^{-y_\gamma}+e^{-y_h}\right). \end{eqnarray} Here, $y_\gamma$ and $p_{T_\gamma}$ ($y_h$ and $p_{T_h}$) are the rapidity and transverse momentum of the photon (away-side hadron), respectively.
We denote by $f_{a,b/N}(x_{a,b})$ the distribution functions of partons $a,b$ in the nucleon and by $D_{h/c}(z)$ the fragmentation function of parton $c$ into hadron $h$. $\overline{|M|}^2_{ab\to \gamma c}$ are the squared matrix elements for the $ab\to \gamma c$ partonic processes, and are given by \begin{eqnarray} \overline{|M|}^2_{qg\to \gamma q}&=&e_q^2\frac{1}{N_c}\left[-\frac{s}{t}-\frac{t}{s}\right], \qquad \overline{|M|}^2_{gq\to \gamma q}=e_q^2\frac{1}{N_c}\left[-\frac{s}{u}-\frac{u}{s}\right], \\ \overline{|M|}^2_{q\bar q\to \gamma g}&=&\overline{|M|}^2_{\bar q q\to \gamma g} =e_q^2\frac{N_c^2-1}{N_c^2}\left[\frac{t}{u}+\frac{u}{t}\right], \end{eqnarray} where $s, t, u$ are the partonic Mandelstam variables, $e_q$ is the fractional electric charge of the light quark, and $N_c=3$ is the number of colors. Experimentally, one typically defines the so-called $\gamma$-triggered fragmentation function \begin{eqnarray} D_{NN}^{\gamma h}(z_T)&=&\frac{\int dy_\gamma dy_h dp_{T_\gamma} p_{T_\gamma}\frac{d\sigma^{\gamma h}_{NN}}{dy_\gamma dy_h dp_{T_\gamma}dp_{T_h}}}{\int dy_\gamma dp_{T_\gamma} \frac{d\sigma_{NN}^{\gamma}}{dy_\gamma dp_{T_\gamma}}} \label{pplight} \end{eqnarray} for nucleon-nucleon collisions and a similar $D_{AA}^{\gamma h}(z_T)$ per binary scattering for A+A collisions. The denominator defines the normalization and is given by \begin{eqnarray} \frac{d\sigma_{NN}^{\gamma}}{dy_\gamma dp_{T_\gamma}}=\sum_h \int dy_h dp_{T_h} \frac{d\sigma^{\gamma h}_{NN}}{dy_\gamma dy_h dp_{T_\gamma}dp_{T_h}} , \end{eqnarray} which is just the cross section for direct photon production. To quantify the modification of the $\gamma$-triggered fragmentation function in A+A collisions relative to that in nucleon-nucleon collisions due to jet quenching, one introduces the nuclear modification factor, \begin{eqnarray} I_{AA}^{\gamma h}(z_T)=\frac{D_{AA}^{\gamma h}(z_T)}{D_{NN}^{\gamma h}(z_T)}. \end{eqnarray} Note that, in spite of their suggestive name, $\gamma$-triggered fragmentation functions are derived from the observed away-side meson distribution and thus reflect all inputs in a pQCD calculation - the parton distributions, the hard scattering cross sections, the medium-induced radiative correction (or parton energy loss) and the parton decay probabilities - not only the fragmentation process itself. \subsection{Photon-tagged heavy meson production} Within the perturbative QCD factorization approach there have been different ways to calculate heavy quark production. One of them is called the fixed-flavor-number scheme (FFNS)~\cite{Frixione:1994dv,Frixione:1995qc,Stratmann:1994bz} and is based on the assumption that the gluon and the three light quarks ($u, d, s$) are the only active partons. The heavy quark appears only in the final state and is produced in the hard scattering process of light partons. The heavy quark mass $m$ is explicitly retained throughout the calculation. The other approach is called the variable-flavor-number scheme (VFNS)~\cite{Berger:1995qe,Bailey:1996px,Stavreva:2009vi} and describes the heavy quark as a massless parton of density $f_{Q/N}(x, \mu^2)$ in the nucleon, with the boundary condition $f_{Q/N}(x, \mu^2)=0$ for $\mu\leq m$. Thus, the heavy quark mass $m$ is set to zero in the short-distance partonic cross section\footnote{Strictly speaking, this is the so-called zero-mass variable-flavor-number scheme (ZM-VFNS). There is also the general-mass variable-flavor-number scheme (GM-VFNS), which combines the virtues of the FFNS and the ZM-VFNS, though it is more complicated.}.
In this paper, we are particularly interested in studying the jet quenching effects for both light and heavy mesons. The different amount of energy loss during quark propagation in the medium created in heavy ion collisions is due to the mass difference between light and heavy quarks, at least in perturbative QCD. With this in mind, we want to keep the heavy quark mass explicitly in our calculation. In other words, we favor the FFNS scheme for the project at hand. Within this scheme, the photon-tagged heavy mesons are produced through the following partonic processes - (a) quark-antiquark annihilation: $q\bar{q}\to Q\bar{Q}\gamma$; (b) gluon-gluon fusion: $gg\to Q\bar Q \gamma$. Sample Feynman diagrams are given in Fig.~\ref{feynfig}. It is important to realize that in the FFNS scheme photon+heavy quark events are generated at LO by $2\to 3$ processes, to be compared to the usual $2\to 2$ processes for photon+light hadron events at LO. \begin{figure}[htb]\centering \psfig{file=qqb.eps,width=1.8in} \hskip 0.3in \psfig{file=gg1.eps,width=1.8in} \hskip 0.3in \psfig{file=gg2.eps,width=1.5in} \caption{Sample Feynman diagrams at leading order for direct photon plus heavy quark production: (a) for light quark-antiquark annihilation $q\bar q\to Q\bar Q \gamma$, (b) and (c) for gluon-gluon fusion $gg\to Q\bar Q \gamma$.} \label{feynfig} \end{figure} The differential cross section for photon-tagged heavy meson production takes the following form~\cite{Stratmann:1994bz}: \begin{eqnarray} \frac{d\sigma^{\gamma H}_{NN}}{dy_\gamma dy_H dp_{T_\gamma}dp_{T_H}} &=&\frac{\alpha_{em}\alpha_s^2}{2\pi S}\sum_{ab}\int \frac{dz}{z^2}D_{H/Q}(z) \int \frac{dx_a}{x_a} f_{a/N}(x_a) \int d\phi \frac{1}{x_b} f_{b/N}(x_b) \nonumber\\ && \times \frac{p_{T_\gamma}p_{T_H}}{x_aS-\sqrt{S}(p_{T_\gamma} e^{y_\gamma}+m_{T_Q}e^{y_H})} \overline{|M|}_{ab\to Q\bar Q\gamma}, \label{heavy} \end{eqnarray} where $D_{H/Q}(z)$ is the heavy quark $Q$ to heavy meson $H$ fragmentation function and $\phi=\phi_h-\phi_\gamma \in[\pi/2, 3\pi/2]$ is the azimuthal angle between the triggered photon and the associated heavy meson on the away side. We denote by $m_{T_Q}=\sqrt{p_{T_Q}^2+m^2}$ the transverse mass and by $p_{T_Q}=p_{T_H}/z$ the transverse momentum of the heavy quark. The parton momentum fraction $x_b$ is fixed by the kinematics and has the following form \begin{eqnarray} x_b=\frac{x_a\sqrt{S}(p_{T_\gamma}e^{-y_\gamma}+m_{T_Q}e^{-y_H}) +2p_{T_\gamma}p_{T_Q}\cos\phi-p_{T_\gamma}m_{T_Q}(e^{y_\gamma-y_H}+e^{y_H-y_\gamma})} {x_a S-\sqrt{S}(p_{T_\gamma}e^{y_\gamma}+m_{T_Q}e^{y_H})}. \end{eqnarray} The partonic matrix elements $\overline{|M|}_{ab\to Q\bar Q\gamma}$ can be calculated in perturbative QCD. Note that the cross section does not produce any singularity, since the heavy quarks have been taken to be massive explicitly. Therefore no regularization is needed, despite the fact that we are actually dealing with a $2\to 3$ process. The cross section for photoproduction of heavy quarks has been published in~\cite{Ellis:1987qy}. By using crossing symmetry, we can infer the spin-averaged matrix elements in our case. We list the results here for completeness.
For the quark-antiquark annihilation channel $q(p_1)+\bar q(p_2)\to Q(p_3)+\bar Q(p_4)+\gamma(p_5)$ we have \begin{eqnarray} \overline{|M|}^2_{q\bar q\to Q\bar Q\gamma}=\frac{N_c^2-1}{N_c^2}\left[e_Q^2A_1+e_q^2 A_2 + e_Q e_q A_3\right], \end{eqnarray} where $e_Q$ is the fractional electric charge of the heavy quark, and the coefficients $A_1$, $A_2$, and $A_3$ are given by \begin{eqnarray} A_1&=&\frac{\left(p_{24}^2+p_{23}^2+p_{14}^2+p_{13}^2+m^2 s_{34}\right)p_{34}} {p_{45} p_{35} p_{12} s_{34}} +\frac{m^2 \left(p_{25}^2+p_{15}^2\right)}{2 p_{12}^2 p_{45}p_{35}} \nonumber\\ && +\frac{m^2\left(p_{24}^2+p_{14}^2+p_{23}^2+p_{13}^2-p_{12}s_{34}\right)} {p_{12}^2 s_{34}}\left[\frac{1}{p_{45}}+\frac{1}{p_{35}}\right] \nonumber\\ && -\frac{m^2}{2p_{12}^2} \left[\frac{p_{24}^2+p_{14}^2+m^2p_{12}}{p_{35}^2} +\frac{p_{23}^2+p_{13}^2+m^2p_{12}}{p_{45}^2} \right], \\ A_2&=&\frac{p_{24}^2+p_{23}^2+p_{14}^2+p_{13}^2+2 m^2p_{12}} {p_{25} p_{15} s_{34}} +\frac{2 m^2\left(p_{25}^2+p_{15}^2\right)}{p_{25} p_{15} s_{34}^2}, \\ A_3&=&\frac{p_{24}^2+p_{23}^2+p_{14}^2+p_{13}^2+m^2 (p_{12}+\frac{1}{2}s_{34})} {p_{12} s_{34}} \left[\frac{p_{24}}{p_{45}p_{25}}+\frac{p_{13}}{p_{35}p_{15}}-\frac{p_{14}}{p_{45}p_{15}} -\frac{p_{23}}{p_{35}p_{25}}\right] \nonumber\\ && +\frac{2m^2}{p_{12}s_{34}}\left[\frac{p_{13}-p_{23}}{p_{45}}+\frac{p_{24}-p_{14}}{p_{35}}\right]. \end{eqnarray} Here, we have used the notation $p_{ij}=p_i\cdot p_j$, $s_{34}=(p_3+p_4)^2$. For the gluon-gluon fusion channel $g(p_1)+g(p_2)\to Q(p_3)+\bar Q(p_4)+\gamma(p_5)$ we have \begin{eqnarray} \overline{|M|}^2_{g g\to Q\bar Q\gamma}=-\frac{1}{N_c(N_c^2-1)}e_Q^2 \left[\left(R_{QED}+\mbox{11 perm's}\right)+N_c^2\left(R_{KF}+\mbox{3 perm's}\right)\right]. \end{eqnarray} The $R_{QED}$ has to be summed over 12 permutations corresponding to the 6 permutations of the momenta $p_1$, $p_2$, $-p_5$ and two permutations of the momenta $p_1$ and $p_2$. $R_{KF}$ has to be summed over 4 permutations corresponding to the interchange of $p_1$, $p_2$ and $p_3$, $p_4$. 
One such expression for $R_{QED}$ is given by \begin{eqnarray} R_{QED}&=&\frac{s_{34} p_{24}^2}{8p_{14}p_{13}p_{45}p_{35}} \nonumber\\ && -\frac{m^2}{2p_{35}}\left[\frac{1}{p_{14}}\left(\frac{s_{34}}{2p_{13}}-\frac{p_{23}}{p_{14}}+3\right) +\frac{p_{45}-p_{23}+p_{14}}{2p_{24}p_{13}}-\frac{1}{p_{45}}-\frac{3}{2p_{13}} \right] \nonumber\\ && -\frac{m^4}{2p_{35}p_{14}}\left[ \frac{1}{2p_{13}}\left(\frac{p_{13}-3p_{14}+2p_{45}}{p_{24}}+\frac{3p_{24}}{p_{45}}-4\right) +\frac{2}{p_{35}} \right] \nonumber\\ && +\frac{m^6}{p_{35}p_{14}}\left[\frac{1}{2p_{13}}\left(\frac{1}{2p_{45}}-\frac{1}{p_{24}}\right) +\frac{1}{2p_{14}}\left(\frac{1}{2p_{35}}-\frac{1}{p_{23}}\right) \right], \end{eqnarray} and $R_{KF}$ is given by \begin{eqnarray} R_{KF}&=&-\frac{1}{2p_{12}}\left[\frac{p_{45}^2}{p_{13}p_{24}} +\frac{p_{14}^2}{p_{45}p_{35}}\left(\frac{p_{14}}{p_{24}}+\frac{p_{13}}{p_{23}}\right) \right] \nonumber\\ && +\frac{m^2}{2p_{12}^2}\left[1-\frac{2p_{14}}{p_{35}}\left(\frac{p_{14}}{p_{35}}+\frac{p_{23}}{p_{45}}\right)\right] +\frac{m^2}{4p_{14}p_{23}}\left[-\frac{2p_{45}}{p_{23}}+\frac{2(p_{45}+p_{35})}{p_{12}}-7\right] \nonumber\\ && +\frac{m^2}{2p_{35}p_{14}}\left[\frac{p_{45}-p_{12}}{p_{23}}+\frac{p_{23}-p_{12}}{p_{45}} -\frac{p_{23}}{p_{14}}+\frac{p_{12}}{p_{35}}+7\right] \nonumber\\ && +\frac{m^2}{p_{35}}\left[\frac{1}{p_{12}}\left(\frac{p_{14}-p_{45}}{p_{23}}+\frac{2p_{14}}{p_{35}} +\frac{2p_{14}}{p_{45}}-2\right)-\frac{1}{p_{45}}-\frac{3}{2p_{35}} \right] \nonumber\\ && +\frac{m^4}{p_{35}p_{23}}\left[\frac{1}{p_{12}}\left(\frac{p_{45}}{p_{14}}+\frac{p_{14}}{p_{45}}\right) +\frac{1}{2p_{45}}\left(\frac{p_{12}}{p_{14}}+2\right)\right] \nonumber\\ && +\frac{m^4}{p_{35}}\left[\frac{1}{p_{12}}\left(-\frac{1}{p_{23}}-\frac{1}{p_{35}}+\frac{1}{p_{14}}-\frac{2}{p_{45}}\right)-\frac{1}{p_{14}}\left(\frac{1}{p_{14}}-\frac{1}{p_{35}}+\frac{2}{p_{23}}\right)\right] \nonumber\\ && +\frac{m^4}{p_{14}p_{23}}\left(\frac{1}{2p_{12}}+\frac{1}{p_{14}}\right) -\frac{m^6}{p_{14}^2}\left[\left(\frac{1}{2p_{35}}+\frac{p_{14}}{2p_{23}p_{45}}\right)^2 +\frac{1}{p_{23}}\left(\frac{1}{4p_{23}}-\frac{1}{p_{35}}\right) \right]. \end{eqnarray} With the cross sections at hand, the experimentally accessible $\gamma$-triggered heavy meson fragmentation function is defined as \begin{eqnarray} D_{NN}^{\gamma H}(z_T)&=&\frac{\int dy_\gamma dy_H dp_{T_\gamma} p_{T_\gamma} \frac{d\sigma^{\gamma H}_{NN}}{dy_\gamma dy_H dp_{T_\gamma}dp_{T_H}}}{\int dy_\gamma dp_{T_\gamma} \frac{d\sigma_{NN}^{\gamma Q}}{dy_\gamma dp_{T_\gamma}}}, \end{eqnarray} where the denominator is the differential cross section of photon and heavy meson production summed over all the heavy mesons \begin{eqnarray} \frac{d\sigma_{NN}^{\gamma Q}}{dy_\gamma dp_{T_\gamma}} =\sum_H \int dy_H dp_{T_H} \frac{d\sigma^{\gamma H}_{NN}}{dy_\gamma dy_H dp_{T_\gamma}dp_{T_H}}. \end{eqnarray} It can be written as \begin{eqnarray} \frac{d\sigma_{NN}^{\gamma Q}}{dy_\gamma dp_{T_\gamma}} &=& \frac{\alpha_{em}\alpha_s^2}{2\pi S}\sum_{ab}\int dy_Q dp_{T_Q}\int \frac{dx_a}{x_a} f_{a/N}(x_a) \int d\phi \frac{1}{x_b} f_{b/N}(x_b) \nonumber\\ && \times \frac{p_{T_\gamma}p_{T_Q}}{x_aS-\sqrt{S}(p_{T_\gamma} e^{y_\gamma}+m_{T_Q}e^{y_Q})} \overline{|M|}_{ab\to Q\bar Q\gamma}, \end{eqnarray} which is just part of the inclusive photon production cross section that has $\gamma+Q$ events. 
Likewise, we can define a $\gamma$-triggered heavy meson fragmentation function $D_{AA}^{\gamma H}(z_T)$ in A+A collisions and the nuclear modification factor $I_{AA}^{\gamma H}(z_T)=D_{AA}^{\gamma H}(z_T)/D_{NN}^{\gamma H}(z_T)$. In the next section we will study how these fragmentation functions behave in both p+p and A+A collisions and, most importantly, how they are modified in A+A collisions due to jet quenching. \section{Phenomenological studies in p+p and A+A collisions} In this section we will study the $\gamma$-triggered light hadron and heavy meson production in both p+p and A+A collisions. Final-state medium-induced radiative corrections are process dependent - they depend on the parameters of the QGP medium and, consequently, on the details of the heavy ion collision. However, in QCD they factorize from the hard scattering cross section~\cite{Ovanesyan:2011xy} and enter the physical observables as a standard integral convolution. From the point of view of phenomenological applications, it is convenient to change the order of integration and include parton energy loss as an effective replacement of the fragmentation function $D_{h/c}(z)$~\cite{Vitev:2002pf}: \begin{eqnarray} D_{h/c}(z) \Rightarrow \int_0^{1-z} d\epsilon\, P(\epsilon) \, \frac{1}{1-\epsilon} D_{h/c}\left(\frac{z}{1-\epsilon}\right). \label{quenching} \end{eqnarray} Here, $P(\epsilon)$ is the probability distribution for a parton to lose a fraction of its energy $\epsilon=\sum_i\frac{\Delta E_i}{E}$ due to multiple gluon emission. In our calculation $P(\epsilon)$ is obtained using the GLV formalism for evaluating the medium-induced gluon bremsstrahlung~\cite{Gyulassy:2000fs,Vitev:2007ve} and the Poisson approximation. Note that it is not mandatory to employ Eq.~(\ref{quenching}) in jet quenching phenomenology. One can alternatively evaluate the attenuated partonic cross section prior to fragmentation and obtain identical results~\cite{Sharma:2009hn}. We take into account parton energy loss in $\gamma$-hadron correlation in high energy heavy-ion collisions as in Eq.~(\ref{quenching}) for simplicity. We do not perform a new evaluation of the energy loss since we aim at a direct and consistent comparison between different final states in the same jet quenching formalism. Instead, we employ the differential medium-induced bremsstrahlung distributions and energy loss probabilities $P(\epsilon)$ for light partons and heavy quarks obtained in~\cite{Sharma:2009hn,Neufeld:2010fj}. An illustration of these probabilities \begin{eqnarray} \int_0^1 P(\epsilon) d\epsilon =1, \qquad \int_0^1 \epsilon \, P(\epsilon) d\epsilon = \left\langle \frac{\Delta E}{E} \right\rangle , \qquad \epsilon=\sum_i\frac{\Delta E_i}{E} \label{prob} \end{eqnarray} is given in Fig.~\ref{peps}. Note that, in the probabilistic application of parton energy loss, larger $\langle \Delta E / E \rangle$ corresponds to a distribution skewed toward large $\epsilon$ and smaller $\langle \Delta E / E \rangle$ corresponds to a distribution skewed toward small $\epsilon$. All results include an average over the jet production points distributed according to the binary collision density in an optical Glauber model. These energetic jets propagate through the Bjorken expanding medium that follows the number of participants density~\cite{Sharma:2009hn,Neufeld:2010fj}. 
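As an illustration of how the convolution in Eq.~(\ref{quenching}) can be evaluated numerically, consider the following Python sketch (the fragmentation function and quenching probability below are toy placeholders of our own, not the GLV distributions used in this work): \begin{verbatim}
import numpy as np

def quenched_D(D, P, z, n=2000):
    # D_med(z) = int_0^{1-z} deps P(eps) * D(z/(1-eps)) / (1-eps),
    # evaluated with a simple midpoint rule on n bins.
    eps = (np.arange(n) + 0.5) * (1.0 - z) / n   # midpoints in (0, 1-z)
    de = (1.0 - z) / n                           # uniform bin width
    return np.sum(P(eps) * D(z / (1.0 - eps)) / (1.0 - eps)) * de

D_toy = lambda z: (1.0 - z) ** 3 / z   # steeply falling toy FF
P_toy = lambda e: 2.0 * (1.0 - e)      # toy P(eps), normalized on [0, 1]

for z in (0.3, 0.5, 0.7):
    print(z, quenched_D(D_toy, P_toy, z) / D_toy(z))
\end{verbatim} For a steeply falling $D(z)$ the printed ratios lie below unity, mimicking the suppression discussed in the next subsection; a fragmentation function rising with $z$ would instead give ratios above unity.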
The left panel of Fig.~\ref{peps} corresponds to central Au+Au collisions at RHIC and parton energy $E = 10$~GeV, and the right panel of Fig.~\ref{peps} corresponds to central Pb+Pb collisions at the LHC and parton energy $E=25$~GeV. The difference between light quarks and gluons arises from the quadratic Casimir, or average squared color charge, of the parton. Note that the naive relation $\langle \Delta E \rangle_g = C_A / C_F \langle \Delta E \rangle_q $ is approximately fulfilled only if $\langle \Delta E \rangle_{q,g} \ll E$. In the realistic case, $\langle \Delta E \rangle_g < C_A / C_F \langle \Delta E \rangle_q $ and the deviation from the naive scaling can be significant. On the other hand, the difference between light and heavy quarks is related to the quark mass~\cite{Dokshitzer:2001zm,Wicks:2005gt,Vitev:2007ve}. In the very high energy limit, for coherent medium-induced bremsstrahlung, the mass dependence of parton energy loss disappears. The calculations that we present in this paper are not in this asymptotic limit. Furthermore, for partons of effective mass $ gT \sim m_c$, where $g$ is the coupling constant and $T$ is the temperature of the quark-gluon plasma created in heavy ion collisions, there is very little difference between the light quarks and the charm quarks. This is clearly seen in both panels of Fig.~\ref{peps} and well understood. For the much heavier bottom quarks $m_b \gg m_c \sim gT $, however, the energy loss is noticeably smaller and $P(\epsilon)$ is skewed toward smaller values of $\epsilon$. \begin{figure}[htb]\centering \psfig{file=rhic_pe.eps,width=2.7in} \hskip 0.4in \psfig{file=lhc_pe.eps,width=2.7in} \caption{The probability distribution $P(\epsilon)$ for partons to lose a fraction $\epsilon$ of their energy is shown for gluons (dot-dashed), light quarks (dashed), charm quarks (solid) and bottom quarks (dotted). Left panel: central Au+Au collisions at RHIC and parton energy $E_J = 10$~GeV. Right panel: central Pb+Pb collisions at the LHC and parton energy $E_J = 25$~GeV. } \label{peps} \end{figure} \subsection{Comparison to experimental data: $\gamma$-triggered light hadron production} In this subsection we compare our calculations for $\gamma$-triggered light hadron production to the experimental data at RHIC. We use the CTEQ6 parton distribution functions~\cite{Pumplin:2002vw} and the de Florian-Sassot-Stratmann (DSS) light hadron fragmentation functions~\cite{deFlorian:2007aj}. In Fig.~\ref{starphenix} (left and top right panels) we show our results compared to both PHENIX~\cite{Adare:2009vd} and STAR data~\cite{Abelev:2009gu}. For p+p collisions we use the definition given in Eq.~(\ref{pplight}). As seen from these figures, the perturbative QCD calculation gives a very good description of the experimental data. For Au+Au collisions we use the same energy loss distributions that were used to describe single hadron suppression. As can be seen from Fig.~\ref{starphenix}, the calculated $D_{AA}^{\gamma h}(z_T)$ are in good agreement with the experimental data, with the possible exception of the large $z_T \rightarrow 1$ region in the STAR data. This part of phase space is also inherently difficult to measure accurately. Such agreement gives an independent confirmation of the jet quenching mechanism in light hadron production.
\begin{figure}[htb]\centering \psfig{file=dpp_phenix.eps, width=2.8in} \hskip 0.4in \psfig{file=star.eps, width=2.9in} \caption{$\gamma$-triggered fragmentation functions $D(z_T)$ (left and top right) are plotted as a function of $z_T$ at $\sqrt{S_{NN}}=200$ GeV. Left: top for p+p collisions, bottom for Au+Au collisions. Photon and hadron rapidities are integrated over $[-0.35, 0.35]$, while the trigger particle (the photon) momentum has been integrated over $[5, 7]$, $[7,9]$, $[9, 12]$, and $[12, 15]$ GeV from top to bottom. Data is from PHENIX~\cite{Adare:2009vd}. Right: top for $D(z_T)$ - solid for p+p collisions, and dashed for Au+Au collisions; bottom for the nuclear modification factor $I_{AA}$. The photon and hadron rapidities are integrated over $[-1,1]$, and the photon momentum is integrated over $[8, 16]$ GeV. Data is from STAR~\cite{Abelev:2009gu}.} \label{starphenix} \end{figure} \begin{figure}[htb]\centering \psfig{file=iaa_phenix.eps, width=4in} \caption{Nuclear modification factor $I_{AA}(z_T)$ is plotted as a function of $z_T$ at $\sqrt{S_{NN}}=200$ GeV. The numbers in the top right corner of each plot indicate the range of the photon momentum, while the photon and hadron rapidities are integrated over $[-0.35, 0.35]$. The data is from PHENIX~\cite{Adare:2009vd}.} \label{phenix} \end{figure} It is worthwhile to point out that the $\gamma$-triggered light hadron fragmentation function $D_{AA}^{\gamma h}(z_T)$ in Au+Au collisions is suppressed relative to that in p+p collisions. This is understandable since at LO $D^{\gamma h}(z_T)\propto D_{h/c}(z_T)$ according to Eq.~(\ref{light}), where $D_{h/c}(z)$ is a fast falling function of $z$. Since jet quenching can effectively be thought of as a mechanism that probes slightly higher $z$, as in $D_{h/c}(\frac{z}{1-\epsilon})$ according to Eq.~(\ref{quenching}), sampling higher $z$ leads to a smaller fragmentation function. Thus, the $\gamma$-triggered fragmentation function in A+A collisions will be suppressed compared to p+p collisions. We will find that this behavior can be altered for $\gamma$-triggered heavy meson production, as we demonstrate below. In Fig.~\ref{starphenix} (bottom right panel) and Fig.~\ref{phenix}, we compare our calculations of $I_{AA}^{\gamma h}$ to the experimental data. Being a ratio of the $\gamma$-triggered fragmentation functions in A+A and p+p collisions, the nuclear modification factor $I_{AA}^{\gamma h}(z_T)=D_{AA}^{\gamma h}(z_T)/D_{pp}^{\gamma h}(z_T)$ has much larger error bars. Our results are consistent with the experimental data (within the error bars). \subsection{Predictions for $\gamma$-triggered light and heavy meson production} In this subsection we make predictions for $\gamma$-triggered light and heavy meson production at both RHIC and LHC energies and compare the different features of the in-medium modification depending on the parton mass. We use the $D$ and $B$-meson fragmentation functions as calculated in~\cite{Sharma:2009hn} and choose $m=1.3~(4.5)$ GeV for the charm (bottom) quarks. For $\gamma$-triggered heavy meson production we have integrated over the relative azimuthal angle $\phi=\phi_h-\phi_\gamma$ in the region $|\phi-\pi| < \pi/5$ in Eq.~(\ref{heavy}), following the experimentally applied cuts~\cite{Adare:2009vd,Abelev:2009gu}.
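To put these quark masses into perspective, consider, as a simple numerical illustration of our own, an away-side meson with $p_{T_H}=3$~GeV produced at $z=0.6$, so that $p_{T_Q}=p_{T_H}/z=5$~GeV. The transverse masses entering Eq.~(\ref{heavy}) are then \begin{equation*} m_{T_c}=\sqrt{5^2+1.3^2}~{\rm GeV}\approx 5.2~{\rm GeV}, \qquad m_{T_b}=\sqrt{5^2+4.5^2}~{\rm GeV}\approx 6.7~{\rm GeV}, \end{equation*} showing that at such moderate transverse momenta the bottom quark mass is far from negligible in the kinematics, in line with the smaller $b$-quark energy loss discussed above.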
\begin{figure}[htb]\centering \psfig{file=star_dmeson.eps, width=2.8in} \hskip 0.4in \psfig{file=phenix_dmeson.eps, width=2.8in} \caption{Top panels: predictions for the $\gamma$-triggered fragmentation functions $D(z_T)$, where the solid lines are for p+p collisions and the dashed lines are for central Au+Au collisions. Bottom panels: predictions for the nuclear modification factor $I_{AA}(z_T)$, where the solid lines are for $B$-mesons, the dashed lines are for $D$-mesons, and the dash-dotted lines are for charged hadrons. The left plot has kinematics similar to STAR, while the right plot has kinematics similar to PHENIX.} \label{rhic} \end{figure} In the upper panels of Fig.~\ref{rhic} we plot $D(z_T)$ as a function of $z_T$ at $\sqrt{S_{NN}}=200$~GeV for both p+p and central Au+Au collisions. Left panels reflect STAR kinematics and right panels reflect PHENIX kinematics. As seen clearly in the plots, $D(z_T)$ behaves very differently for light hadrons and heavy mesons. For light hadrons, the $\gamma$-triggered fragmentation function $D^{\gamma h}(z_T)$ drops very fast as $z_T$ increases, consistent with the behavior of the light hadron fragmentation function $D_{h/c}(z)$. On the other hand, the $\gamma$-triggered heavy meson fragmentation function $D^{\gamma H}(z_T)$ is a relatively flat function of $z_T$. Even though in the FFNS scheme $D^{\gamma H}(z_T)$ does not directly correspond to the heavy quark to heavy meson decay probability $D_{H/Q}(z)$ itself, it does reflect to some extent the major feature of the heavy meson fragmentation function: it grows at lower $z$ and decreases at higher $z$. This difference in the fragmentation functions will lead to a distinctive difference in the nuclear modification factor $I_{AA}$ between light hadrons and heavy mesons. \begin{figure}[htb]\centering \psfig{file=lhc_dmeson_low.eps, width=2.8in} \hskip 0.4in \psfig{file=lhc_dmeson_high.eps, width=2.8in} \caption{Top panels: predictions for the $\gamma$-triggered fragmentation functions $D(z_T)$, where the solid lines are for p+p collisions and the dashed lines are for central Pb+Pb collisions at $\sqrt{S_{NN}}=2.76$~TeV. Bottom panels: predictions for the nuclear modification factor $I_{AA}(z_T)$, where the solid lines are for $B$-mesons, the dashed lines are for $D$-mesons, and the dash-dotted lines are for charged hadrons. We have integrated the photon and hadron rapidities over $[-2, 2]$. For the left plot, the photon momentum is integrated over $[10, 20]$~GeV, while for the right plot, it has been integrated over $[25, 50]$~GeV.} \label{lhc} \end{figure} In the bottom panels of Fig.~\ref{rhic} we plot $I_{AA}(z_T)$ as a function of $z_T$ at RHIC for $\gamma$-triggered light charged hadrons (dot-dashed), $D$-mesons (dashed) and $B$-mesons (solid), respectively. They exhibit very different behavior. For light hadrons, one expects $I_{AA}^{\gamma h}<1$ due to jet quenching, as explained above. The magnitude of this suppression arises mainly from the steepness of the light hadron fragmentation function $D_{h/c}(z)$. On the other hand, according to Eq.~(\ref{heavy}), for the $\gamma$-triggered heavy meson case $I_{AA}^{\gamma H}$ really depends on the whole $z$-integral of the heavy meson fragmentation function $D_{H/Q}(z)$. If the integral is dominated by the small-$z$ region, where $D_{H/Q}(z)$ grows with increasing $z$, jet quenching in Eq.~(\ref{quenching}) means sampling relatively larger $z$ and thus larger $D_{H/Q}(z)$. Consequently, one will then have $I_{AA}^{\gamma H}>1$.
On the other hand, if the integral is dominated by the large-$z$ region, where $D_{H/Q}(z)$ decreases with increasing $z$, sampling relatively higher $z$ means smaller $D_{H/Q}(z)$. One thus has $I_{AA}^{\gamma H}<1$. One should keep in mind that we have a three-particle final state; thus, $z_T=p_{T_H}/p_{T_\gamma}$ is not the same as the momentum fraction $z$ in the heavy meson decay probability $D_{H/Q}(z)$, according to Eq.~(\ref{heavy}). Nevertheless, we find that the average $\langle z\rangle$ in the collision does increase as $z_T$ increases. Thus, at high $z_T$, where the $z$-integral in $I_{AA}^{\gamma H}$ is dominated by the large-$z$ region, one should expect that both $B$ and $D$-mesons are suppressed due to jet quenching. The magnitude of the suppression should follow $I_{AA}^{\gamma B}>I_{AA}^{\gamma D}>I_{AA}^{\gamma h^\pm}$ if the heavy quark loses less energy than the light quark, as predicted by perturbative QCD calculations. On the other hand, in the low $z_T$ region, according to our calculation, we find that $I_{AA}^{\gamma D}>1$ for $D$-mesons, consistent with our naive expectation, that is, the low-$z$ region dominates the $z$-integral in Eq.~(\ref{heavy}). However, for $\gamma$-triggered $B$-mesons, $I_{AA}^{\gamma B}<1$ over the whole $z_T$ region for the kinematics we have chosen at RHIC. This is due to the fact that our $B$-meson fragmentation function is even harder than the $D$-meson fragmentation function: it drops very fast at high $z$, while increasing only slowly at low $z$. Thus the nuclear modification from high $z$ (suppression) wins over that from low $z$ (enhancement) for the kinematic region we have chosen. Combining the analysis for both the low and the high $z_T$ ends, one immediately finds that the nuclear modification $I_{AA}^{\gamma H}$ for the $\gamma$-triggered $B$-meson fragmentation function is flatter than that for the $D$-meson case. Another important reason for this flatter behavior and the smaller size of the nuclear modification is the smaller energy loss of the $b$-quark in comparison to the $c$-quark. Thus, the shape of the nuclear modification for the $\gamma$-triggered fragmentation functions can carry valuable information about the properties of medium-induced gluon bremsstrahlung. In Fig.~\ref{lhc} we give predictions for the $\gamma$-triggered light and heavy meson production in both p+p and central Pb+Pb collisions at $\sqrt{S_{NN}}=2.76$~TeV at the LHC. We integrate over both the photon and hadron rapidities from -2 to 2. In the left (right) panel, the trigger photon momentum is integrated over $10<p_{T_{\gamma}}<20$ GeV ($25<p_{T_{\gamma}}<50$ GeV). The uncertainty band comes from the fact that we have included a $\sim 25\%$ uncertainty in the magnitude of the energy loss in our jet quenching calculation. The behavior of $I_{AA}$ follows from similar considerations for the relevant kinematics. It is also interesting to notice that for low $z_T$ both $I_{AA}^{\gamma B}$ and $I_{AA}^{\gamma D}$ can be larger than unity in the chosen kinematic region. It will be illuminating and important to measure such correlations for both light and heavy meson production in p+p and A+A collisions at RHIC and at the LHC. \section{Conclusions} We studied photon-triggered light hadron and heavy meson production in both p+p and A+A collisions.
We found that the energy loss approach that was successful in describing single hadron production in A+A reactions at RHIC could simultaneously describe the STAR and PHENIX experimentally extracted photon-triggered light hadron fragmentation functions. Using the same theoretical framework, we generalized our formalism to study photon-triggered heavy meson production. To take into account the heavy quark mass explicitly, we followed the so-called fixed-flavor-number scheme and derived the differential cross section for photon + away-side heavy meson production. We found that the nuclear modification of photon-tagged heavy meson fragmentation functions in A+A collisions is very different from that of the photon-tagged light hadron fragmentation functions. This variance was determined to arise from the different shapes of the decay probabilities for light partons into light hadrons and for heavy quarks into heavy mesons, respectively. Comparing $D$ and $B$-mesons, we predicted that the nuclear modification factor $I_{AA}$ would be flatter for $\gamma$+$B$ production than for the $\gamma$+$D$ case. This is directly related to the different amount of energy lost by heavy quarks of different mass. Thus, the different shapes of $I_{AA}$ in $\gamma$+$h$, $\gamma$+$D$, and $\gamma$+$B$ production can be a sensitive and quantitative probe of the strength of medium-induced parton energy loss. Finally, we made detailed predictions for both photon-triggered light and heavy meson production at RHIC and at the LHC. We conclude by emphasizing once again that a comprehensive study of these new final-state channels will provide fresh insight into the details of the jet quenching mechanism. \section*{Acknowledgments} We are grateful to Marco Stratmann for useful discussions. This work was supported by the U.S. Department of Energy under Contract No.~DE-AC02-98CH10886 (Z.K.) and DE-AC52-06NA25396 (I.V.), and in the framework of the JET Collaboration. Z.K. thanks the theoretical division at Los Alamos National Laboratory for its hospitality and support, where this work was initiated. \\ Note added: we recently became aware of the fact that F. Arleo, I. Schienbein and T. Stavreva are working on a similar problem by extending earlier work relevant to p+A collisions~\cite{Stavreva:2010mw}.
\section{Superalgebras and supermatrices} Superalgebras (i.e. $\mathbb{Z}_2$-graded algebras) are a very important particular case of graded algebras and play an important role in modern mathematics and theoretical physics. In particular, they are central objects, as basic algebraic machinery, in the theory of supermanifolds: objects similar to ordinary manifolds, regarded as ringed spaces, but with the sheaf of smooth functions on the manifold replaced by a suitable sheaf of superalgebras. Their various homologies can be defined in a way quite similar to the ungraded case, but we have to take into account the grading, which results in extra signs added in suitable places. We are not going to say much about superalgebras; they are described in detail, for instance, in~\cite{manin} or in \cite{bruzzo}. Nevertheless, a couple of things have to be mentioned. First of all, in the case of superalgebras, the grading is called \emph{parity} and the homogeneous elements are called \emph{even}, if their parity is 0, and \emph{odd}, if their parity is 1. Let us assume, once and for all, that $A=A_0\oplus A_1$ is a superalgebra with unit over a commutative ring $k$. Usually, it is assumed that the ring also has a $\mathbb{Z}_2$-grading (i.e. it is a superring). In most cases of interest, however, this is not so (actually, \emph{it is}, but the grading is trivial: everything is in degree zero). If $a\in A_i$ is a homogeneous element, its parity will be denoted by $|a|$. All the tensor products in this paper are taken over the ground ring $k$ and should be thought of in the graded sense (see, for instance, \cite{bruzzo} or \cite{maclane}). Also, the general rules for the computation of the degrees in tensor products are used (of course, we always have in mind that all the computations are made in $\mathbb{Z}_2$). Defining matrices over a superalgebra is a little bit tricky. Let us denote, first, by $\Pi A$ the algebra $A$, with the parity of the elements reversed. Then we define a \emph{free} $A$-module of rank $p|q$ (see \cite{manin}) over $A$ to be a (graded) module which is isomorphic to the module $A^{p|q}\equiv A^p\oplus \left(\Pi A\right)^q$. Clearly, $A^{p|q}$ has a system of $p+q$ generators ($p$ even and $q$ odd, whence the notation). We want a matrix to represent, as in the ungraded case, a morphism between two free graded modules, when homogeneous bases are fixed in the two modules. It turns out (\cite{manin},\cite{bruzzo}) that the solution is to define a \emph{matrix in standard format} (or a \emph{supermatrix}, for short) of type $(m|n)\times(p|q)$ with entries in the superalgebra $A$ to be just an $(m+n)\times (p+q)$ matrix $M$ with entries in $A$, together with a block decomposition of the form \begin{equation}\label{supermatr} M= \begin{pmatrix} M_{11}&M_{12}\\ M_{21}&M_{22} \end{pmatrix}, \end{equation} where $M_{11}, M_{12}, M_{21}$ and $M_{22}$ are, respectively, matrices of type $m\times p$, $m\times q$, $n\times p$ and $n\times q$. Let us denote by $\mathcal{M}((m|n),(p|q),A)$ the set of all supermatrices of a given format. They have an obvious structure of an $A$-module. If we define the parity of a matrix in such a way that a matrix is even if $M_{11}$ and $M_{22}$ have even entries and the other two blocks have odd entries, and odd if the converse is true, then this $A$-module becomes a \emph{graded} module, which is isomorphic, as expected, to the graded $A$-module $\Hom_{A}\left(A^{m|n},A^{p|q}\right)$.
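For orientation, consider the simplest nontrivial case $p=q=1$ (an illustration of our own): a $(1|1)\times(1|1)$ supermatrix \begin{equation*} M=\begin{pmatrix} a&\beta\\ \gamma&d \end{pmatrix} \end{equation*} is even precisely when $a,d\in A_0$ and $\beta,\gamma\in A_1$, and odd precisely when $a,d\in A_1$ and $\beta,\gamma\in A_0$; a general supermatrix is, of course, the sum of its even and odd parts.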
In what follows, we shall be interested in the case of \emph{square supermatrices} of type $(p|q)\times (p|q)$. We shall use, for the module of these supermatrices, the simplified notation $\mathcal{M}_{p,q}(A)$. With respect to the usual matrix multiplication, this $A$-module becomes, as one can see immediately, a $k$-superalgebra, which will be called in the sequel \emph{the superalgebra of supermatrices over $A$}; it is our main concern in this paper. \section{The cyclic homology of superalgebras} The cyclic homology of superalgebras is defined in a manner completely similar to the case of ordinary, ungraded algebras (see, for that, \cite{loday} or \cite{seibt}). We define, first, the chain groups as \begin{equation*} C_n(A)=\underbrace{A\otimes \dots \otimes A}_{n+1 \;\text{copies}}, \end{equation*} where all the tensor products are taken over the ground ring $k$ and, of course, they are considered in the graded sense. Next, we define the \emph{Hochschild differential} in the following way (\cite{kassel}). Consider the \emph{face maps} $d_i:C_n(A)\to C_{n-1}(A)$, $i=0,\dots, n$ ($n\geq 1$), defined as \begin{equation*} d_i(a_0\otimes \dots \otimes a_n)= \begin{cases} a_0\otimes \dots\otimes a_ia_{i+1}\otimes\dots \otimes a_n&\text{if}\quad 0\leq i <n\\ (-1)^{|a_n|(|a_0|+\dots|a_{n-1}|)}a_na_0\otimes a_1\otimes \dots\otimes a_{n-1}&\text{if}\quad i=n \end{cases} \end{equation*} and the \emph{degeneracy maps} $s_i:C_n(A)\to C_{n+1}(A)$, $i=0,\dots, n$, defined by \begin{equation*} s_i(a_0\otimes \dots\otimes a_n)=a_0\otimes \dots\otimes a_i\otimes 1\otimes a_{i+1}\otimes\dots \otimes a_n,\qquad i=0,\dots, n. \end{equation*} Then the Hochschild differential is the map $b_n:C_n(A)\to C_{n-1}(A)$, given by \begin{equation*} b_n=\sum_{i=0}^n(-1)^i d_i. \end{equation*} We denote, also, \begin{equation*} b'_n=\sum_{i=0}^{n-1}(-1)^i d_i. \end{equation*} It turns out that $(C_*(A),b)$ is a chain complex, whose homology is what we call the \emph{Hochschild homology} of the superalgebra $A$, while the pair $(C_*(A),b')$ is also a complex, but an acyclic one. To get to the \emph{cyclic} homology of the superalgebra $A$, we need one more object, namely the \emph{cyclicity map} $t_n:C_n(A)\to C_n(A)$, given by \begin{equation}\label{cycl} t_n(a_0\otimes \dots \otimes a_n)=(-1)^{|a_n|(|a_0|+\dots|a_{n-1}|)}a_n\otimes a_0\otimes\dots \otimes a_{n-1}. \end{equation} Finally, we need a last ingredient to define the bicomplex whose total homology is \emph{the cyclic homology} of the superalgebra $A$, the so-called \emph{norm operator}, $N_n:C_n(A)\to C_n(A)$, defined by \begin{equation*} N_n=1+t_n+\left(t_n\right)^2+\dots+\left(t_n\right)^n. \end{equation*} Now, the cyclic homology of the superalgebra $A$ is the homology of the total complex associated to the following double complex (see \cite{kassel}): \begin{equation*} \xymatrix{% &&&\vdots\ar[d]^{b_3}&\vdots\ar[d]^{b'_3}&\vdots\ar[d]^{b_3}&&\\ j=2&&\cdots&A^{\otimes 3}\ar[l]_{N_2}\ar[d]^{b_2}&A^{\otimes 3}\ar[l]_{1-t_2}\ar[d]^{b'_2}&A^{\otimes 3}\ar[l]_{N_2}\ar[d]^{b_2}&\cdots\ar[l]_{1-t_2}\\ j=1&&\cdots&A^{\otimes 2}\ar[l]_{N_1}\ar[d]^{b_1}&A^{\otimes 2}\ar[l]_{1-t_1}\ar[d]^{b'_1}&A^{\otimes 2}\ar[l]_{N_1}\ar[d]^{b_1}&\cdots\ar[l]_{1-t_1}\\ j=0&&\cdots&A\ar[l]_{N_0}&A\ar[l]_{1-t_0}&A\ar[l]_{N_0}&\cdots\ar[l]_{1-t_0}\\ } \end{equation*} Here the even-numbered columns are copies of the Hochschild complex, while the odd ones are acyclic complexes.
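As a low-degree consistency check (our own illustration), note that the cyclicity map gives $t_1(a_0\otimes a_1)=(-1)^{|a_0||a_1|}a_1\otimes a_0$, hence \begin{equation*} (t_1)^2(a_0\otimes a_1)=(-1)^{|a_0||a_1|}(-1)^{|a_1||a_0|}a_0\otimes a_1=a_0\otimes a_1, \end{equation*} so $(t_1)^2=\mathrm{id}$ and $(1-t_1)\circ N_1=N_1\circ(1-t_1)=(1+t_1)(1-t_1)=1-(t_1)^2=0$, which is exactly what makes the rows of the bicomplex above into complexes.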
There is a close, functorial relation between the Hochschild and the cyclic homology of a superalgebra. This is given by \emph{Connes' exact sequence}, obtained in the following way. Let us denote by $B_n:C_n(A)\to C_{n+1}(A)$ the operator given by $B_n=(1-t_{n+1})\circ s_n\circ N_n$, by $S$ the map from the cyclic bicomplex to itself that shifts everything two columns to the left (the \emph{Connes periodicity operator}) and by $I$ the inclusion of the Hochschild complex into the cyclic bicomplex as the 0-th column. All these maps are compatible with the differentials and induce maps in homology. Let us use the same notation for these induced maps. Then we have the following long exact sequence (\cite{kassel}). \begin{equation}\label{connes} \begin{split} \cdots \xrightarrow{S}HC_{n-1}(A)\xrightarrow{B}HH_n(A)\xrightarrow{I}HC_n(A)\xrightarrow{S}HC_{n-2}(A) \xrightarrow{B}HH_{n-1}(A)\rightarrow \cdots \end{split} \end{equation} Here $I$ is always surjective and is an isomorphism for $n=0$. \section{The cyclic homology of the superalgebra of supermatrices over a superalgebra $A$} The intention of this section is to prove the following theorem: \begin{theo} Let $A$ be a superalgebra with unit, over a graded-commutative superring $k$. Then, for any pair of natural numbers $(p,q)$, the supertrace map \begin{equation*} \str:\mathcal{M}_{p,q}(A)\to A \end{equation*} can be extended to a morphism of the two cyclic bicomplexes: \begin{equation*} \Str:CC_{*,*}\left(\mathcal{M}_{p,q}(A)\right)\to CC_{*,*}(A), \end{equation*} which, in turn, induces an isomorphism in the cyclic homology. Here we denoted by $CC_{*,*}$ the chain modules in the cyclic bicomplex. \end{theo} \begin{proof} The proof is an adaptation of the proof for the ungraded case (see, for instance, \cite{rosenberg}). Let $M$ be a homogeneous supermatrix of type $p|q$ with entries in the superalgebra $A$. Then, as we saw earlier, $M$ can be written in the block form \begin{equation}\label{supermatr1} M= \begin{pmatrix} M_{11}&M_{12}\\ M_{21}&M_{22} \end{pmatrix}. \end{equation} Then the \emph{supertrace} is defined (see \cite{bruzzo},\cite{manin}) as \begin{equation}\label{supertr} \str M=\tr M_{11}+(-1)^{1+|M|}\tr M_{22}. \end{equation} In a previous paper (\cite{blaga}), we extended this map to a map $\Str$ between the Hochschild complexes of the two superalgebras and we proved that it induces an isomorphism between the Hochschild homologies. Namely, we defined $\Str$ on (tensor products of) homogeneous supermatrices as \begin{equation}\label{deniss} \begin{split} \Str_n\left(M^0\otimes M^1\otimes\dots\otimes M^n\right)=\sum_{i_0=1}^p\sum M^0_{i_0i_1}\otimes M^1_{i_1i_2}\otimes \dots \otimes M^{n-1}_{i_{n-1}i_n}\otimes M^n_{i_ni_0}+\\ +(-1)^{1+|M^0|+\dots +|M^n|}\sum_{i_0=p+1}^{p+q}\sum M^0_{i_0i_1}\otimes M^1_{i_1i_2}\otimes \dots \otimes M^{n-1}_{i_{n-1}i_n}\otimes M^n_{i_ni_0}. \end{split} \end{equation} Here the lower indices of the matrices are not related to the block decomposition; they simply identify the entries of the matrices. The second sum, in both terms, is taken over all values of the indices $i_1,\dots, i_n$ (between 1 and $p+q$, of course!). It turns out that the generalized supertrace is uniquely determined by its values on (tensor products of) all elementary matrices $E^{ij}(a)$, where $a\in A$ is a homogeneous element and \begin{equation}\label{elematr} E^{ij}_{rs}(a)= \begin{cases} a&\text{if}\quad (r,s)=(i,j)\\ 0&\text{in all other cases} \end{cases}.
\end{equation} Clearly, the parity of $E^{ij}(a)$ is equal to the parity of $a$ if the only non-vanishing element belongs to a diagonal block, and it is equal to the opposite of the parity of $a$ in the remaining cases. We proved (\cite{blaga}) that, on elementary matrices, $\Str$ is easy to compute and we have \begin{equation}\label{elemsuper} \Str_n\left(E^{i_0i_1}(a_0)\otimes\dots\otimes E^{i_ni_0}(a_n)\right)= \begin{cases} a_0\otimes\dots\otimes a_n, & i_0\leq p,\\ (-1)^{1+\sum\limits_{i=0}^n|a_i|}a_0\otimes\dots\otimes a_n,& i_0>p. \end{cases} \end{equation} Notice that we took a special combination of the indices of the matrices in the argument of $\Str$. It turns out (and it is very easily checked) that for any other combination of indices the generalized supertrace vanishes and, as such, it is enough to work with combinations of this kind. To prove that $\Str$ is a morphism of the cyclic bicomplexes, all we have to do is show that it commutes with the differentials (both horizontal and vertical). But we have already proved that it commutes with the \emph{vertical} differentials (as it commutes with the faces in the Hochschild complex (\cite{blaga})). Thus, clearly, all that remains to be seen is that the generalized supertrace commutes with the cyclicity operator. Combining the formulas~(\ref{cycl}) and~(\ref{elemsuper}), we get: \begin{equation}\label{comp1} \begin{split} &\left(t_n\circ\Str_n\right)\left(E^{i_0i_1}(a_0)\otimes\dots\otimes E^{i_ni_0}(a_n)\right)=\\ &= \begin{cases} (-1)^{|a_n|\sum\limits_{i=0}^{n-1}|a_i|}a_n\otimes a_0\otimes\dots\otimes a_{n-1} &\text{if}\quad i_0\leq p\\ (-1)^{1+\sum\limits_{i=0}^n|a_i|+|a_n|\sum\limits_{i=0}^{n-1}|a_i|}a_n\otimes a_0\otimes\dots\otimes a_{n-1} &\text{if}\quad i_0> p \end{cases} \end{split} \end{equation} and, on the other hand, \begin{equation}\label{comp2} \begin{split} &\left(\Str_n\circ t_n\right)\left(E^{i_0i_1}(a_0)\otimes\dots\otimes E^{i_ni_0}(a_n)\right)=\\ &= \begin{cases} (-1)^{\left|E^{i_ni_0}(a_n)\right|\cdot\left|E^{i_0i_n}(a_0\cdots a_{n-1})\right|}a_n\otimes\dots\otimes a_{n-1}&\text{if}\quad i_n\leq p\\ (-1)^{\left|E^{i_ni_0}(a_n)\right|\cdot\left|E^{i_0i_n}(a_0\cdots a_{n-1})\right|+1+\sum\limits_{i=0}^{n}|a_i|}a_n\otimes\dots\otimes a_{n-1}&\text{if}\quad i_n>p \end{cases}. \end{split} \end{equation} Let us denote, for simplicity, $E^{n}(a)=E^{i_0i_1}(a_0)\otimes \dots \otimes E^{i_ni_0}(a_n)$. We have four situations to consider: \begin{enumerate}[(i)] \item $i_0\leq p, i_n\leq p$. In this case, we have \begin{equation}\label{comp31} \left(t_n\circ \Str_n\right)(E^n(a))=(-1)^{|a_n|\sum\limits_{i=0}^{n-1}|a_i|}a_n\otimes a_0\otimes\dots\otimes a_{n-1}, \end{equation} while \begin{equation}\label{comp31a} \begin{split} \left(\Str_n\circ t_n\right)(E^n(a))&=(-1)^{\left|E^{i_ni_0}(a_n)\right|\cdot\left|E^{i_0i_n}(a_0\cdots a_{n-1})\right|}a_n\otimes\dots\otimes a_{n-1}=\\ &=(-1)^{|a_n|\sum\limits_{i=0}^{n-1}|a_i|}a_n\otimes a_0\otimes\dots\otimes a_{n-1}=\left(t_n\circ \Str_n\right)(E^n(a)). \end{split} \end{equation} \item $i_0\leq p, i_n>p$. In this case, $\left(t_n\circ \Str_n\right)(E^n(a))$ is still given by~(\ref{comp31}), while \begin{equation}\label{comp32a} \begin{split} \left(\Str_n\circ t_n\right)(E^n(a))&=(-1)^{(1+|a_n|)\bigg(1+\sum\limits_{i=0}^{n-1}|a_i|\bigg)+1+\sum\limits_{i=0}^{n}|a_i|}a_n\otimes a_0\otimes\dots\otimes a_{n-1}.
To prove that $\Str$ is a morphism of cyclic bicomplexes, all we have to show is that it commutes with the differentials (both horizontal and vertical). But we have already proved that it commutes with the \emph{vertical} differentials, as it commutes with the faces in the Hochschild complex (\cite{blaga}). Thus, all that remains to be checked is that the generalized supertrace commutes with the cyclicity operator. Combining the formulas~(\ref{cycl}) and~(\ref{elemsuper}), we get
\begin{equation}\label{comp1}
\begin{split}
&\left(t_n\circ\Str_n\right)\left(E^{i_0i_1}(a_0)\otimes\dots\otimes E^{i_ni_0}(a_n)\right)=\\
&=
\begin{cases}
(-1)^{|a_n|\sum\limits_{i=0}^{n-1}|a_i|}a_n\otimes a_0\otimes\dots\otimes a_{n-1} &\text{if}\quad i_0\leq p,\\
(-1)^{1+\sum\limits_{i=0}^n|a_i|+|a_n|\sum\limits_{i=0}^{n-1}|a_i|}a_n\otimes a_0\otimes\dots\otimes a_{n-1} &\text{if}\quad i_0> p
\end{cases}
\end{split}
\end{equation}
and, on the other hand,
\begin{equation}\label{comp2}
\begin{split}
&\left(\Str_n\circ t_n\right)\left(E^{i_0i_1}(a_0)\otimes\dots\otimes E^{i_ni_0}(a_n)\right)=\\
&=
\begin{cases}
(-1)^{\left|E^{i_ni_0}(a_n)\right|\cdot\left|E^{i_0i_n}(a_0\cdots a_{n-1})\right|}a_n\otimes a_0\otimes\dots\otimes a_{n-1}&\text{if}\quad i_n\leq p,\\
(-1)^{\left|E^{i_ni_0}(a_n)\right|\cdot\left|E^{i_0i_n}(a_0\cdots a_{n-1})\right|+1+\sum\limits_{i=0}^{n}|a_i|}a_n\otimes a_0\otimes\dots\otimes a_{n-1}&\text{if}\quad i_n>p.
\end{cases}
\end{split}
\end{equation}
Let us denote, for simplicity, $E^{n}(a)=E^{i_0i_1}(a_0)\otimes \dots \otimes E^{i_ni_0}(a_n)$. We have four situations to consider:
\begin{enumerate}[(i)]
\item $i_0\leq p$, $i_n\leq p$. In this case we have
\begin{equation}\label{comp31}
\left(t_n\circ \Str_n\right)(E^n(a))=(-1)^{|a_n|\sum\limits_{i=0}^{n-1}|a_i|}a_n\otimes a_0\otimes\dots\otimes a_{n-1},
\end{equation}
while
\begin{equation}\label{comp31a}
\begin{split}
\left(\Str_n\circ t_n\right)(E^n(a))&=(-1)^{\left|E^{i_ni_0}(a_n)\right|\cdot\left|E^{i_0i_n}(a_0\cdots a_{n-1})\right|}a_n\otimes a_0\otimes\dots\otimes a_{n-1}=\\
&=(-1)^{|a_n|\sum\limits_{i=0}^{n-1}|a_i|}a_n\otimes a_0\otimes\dots\otimes a_{n-1}=\left(t_n\circ \Str_n\right)(E^n(a)).
\end{split}
\end{equation}
\item $i_0\leq p$, $i_n>p$. In this case, $\left(t_n\circ \Str_n\right)(E^n(a))$ is still given by~(\ref{comp31}), while
\begin{equation}\label{comp32a}
\begin{split}
\left(\Str_n\circ t_n\right)(E^n(a))&=(-1)^{(1+|a_n|)\bigg(1+\sum\limits_{i=0}^{n-1}|a_i|\bigg)+1+\sum\limits_{i=0}^{n}|a_i|}a_n\otimes a_0\otimes\dots\otimes a_{n-1}.
\end{split}
\end{equation}
Bearing in mind that the parities are elements of $\mathbb{Z}_2$, the exponent in~(\ref{comp32a}) expands to
\begin{equation*}
1+\sum\limits_{i=0}^{n-1}|a_i|+|a_n|+|a_n|\sum\limits_{i=0}^{n-1}|a_i|+1+\sum\limits_{i=0}^{n}|a_i|\equiv|a_n|\sum\limits_{i=0}^{n-1}|a_i|\pmod 2,
\end{equation*}
so~(\ref{comp32a}) leads again to
\begin{equation*}
\left(\Str_n\circ t_n\right)(E^n(a))=\left(t_n\circ\Str_n\right)(E^n(a)).
\end{equation*}
\item $i_0>p$, $i_n\leq p$. We have, now,
\begin{equation}\label{comp33}
\left(t_n\circ \Str_n\right)(E^n(a))=(-1)^{1+\sum\limits_{i=0}^n|a_i|+|a_n|\sum\limits_{i=0}^{n-1}|a_i|}a_n\otimes a_0\otimes\dots\otimes a_{n-1}
\end{equation}
and
\begin{equation}\label{comp33a}
\begin{split}
\left(\Str_n\circ t_n\right)(E^n(a))&=
(-1)^{(1+|a_n|)\bigg(1+\sum\limits_{i=0}^{n-1}|a_i|\bigg)}a_n\otimes a_0\otimes\dots\otimes a_{n-1}=\\
&=\left(t_n\circ \Str_n\right)(E^n(a)).
\end{split}
\end{equation}
\item $i_0>p$, $i_n>p$. In this situation, $\left(t_n\circ \Str_n\right)(E^n(a))$ is given by~(\ref{comp33}) and
\begin{equation}\label{comp34a}
\begin{split}
\left(\Str_n\circ t_n\right)(E^n(a))&=(-1)^{|a_n|\sum\limits_{i=0}^{n-1}|a_i|+1+\sum\limits_{i=0}^n|a_i|}a_n\otimes a_0\otimes \dots\otimes a_{n-1}=\\
&=\left(t_n\circ \Str_n\right)(E^n(a)).
\end{split}
\end{equation}
\end{enumerate}
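In the lowest degree, this computation can be seen at work on the example above: for $p=q=1$ and $n=1$ we are in case (ii), since $i_0=1\leq p$ and $i_1=2>p$, and, using $|E^{12}(a)|=1+|a|$ and $|E^{21}(b)|=1+|b|$, we get
\begin{equation*}
\left(\Str_1\circ t_1\right)\left(E^{12}(a)\otimes E^{21}(b)\right)=(-1)^{(1+|a|)(1+|b|)+1+|a|+|b|}b\otimes a=(-1)^{|a||b|}b\otimes a,
\end{equation*}
which is precisely $\left(t_1\circ \Str_1\right)\left(E^{12}(a)\otimes E^{21}(b)\right)=t_1(a\otimes b)$.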
We are now left with the problem of showing that $\Str$ is a quasi-isomorphism between the cyclic bicomplexes. We are going to do that exactly as in the classical, ungraded case, namely by induction. Since all the cyclic homology groups of negative degree are zero for both superalgebras, we shall start with $n=0$. But, as we mentioned earlier, as a consequence of the existence of the Connes long exact sequence, at this level the cyclic and the Hochschild homologies coincide; in other words, we have $HH_0(A)=HC_0(A)$, and the same holds for the superalgebra $\mathcal{M}_{p,q}(A)$. Thus, for the zeroth order homology, it is actually enough to prove that $\Str$ is a quasi-isomorphism in \emph{Hochschild} homology, which we know already from our previous paper.

Let us assume now that $\Str$ induces isomorphisms between the cyclic homology groups up to degree $n_0-1$, where $n_0$ is a natural number, and let us prove that, in this situation, it also induces an isomorphism in the homology of degree $n_0$. Writing down the Connes long exact sequences both for $\mathcal{M}_{p,q}(A)$ and $A$, we get the following diagram:
\begin{equation*}
\xymatrix{%
HC_{n_0-1}\big(\mathcal{M}_{p,q}(A)\big)\ar[d]_B\ar[r]&HC_{n_0-1}(A)\ar[d]_B\\
HH_{n_0}\big(\mathcal{M}_{p,q}(A)\big)\ar[d]_I\ar[r]&HH_{n_0}(A)\ar[d]_I\\
HC_{n_0}\big(\mathcal{M}_{p,q}(A)\big)\ar[d]_S\ar[r]&HC_{n_0}(A)\ar[d]_S\\
HC_{n_0-2}\big(\mathcal{M}_{p,q}(A)\big)\ar[d]_B\ar[r]&HC_{n_0-2}(A)\ar[d]_B\\
HH_{n_0-1}\big(\mathcal{M}_{p,q}(A)\big)\ar[r]&HH_{n_0-1}(A)
}
\end{equation*}
The columns of this diagram are exact and the horizontal maps are those induced by the generalized supertrace in the Hochschild and the cyclic homology, respectively. Using the induction hypothesis, as well as the fact that the generalized supertrace is a quasi-isomorphism at the level of the Hochschild complexes, it follows that the first two and the last two horizontal maps are isomorphisms. By the five lemma (\cite{cartan}) it follows, therefore, that the third horizontal map (the one in the middle) is also an isomorphism. Thus, $\Str$ induces an isomorphism in the cyclic homology of degree $n_0$, which concludes our proof that $\Str$ is a quasi-isomorphism at the level of the cyclic bicomplexes, as well.
\end{proof}
\section{Final remarks}
The notion of Morita invariance can be extended to the case of graded algebras (see \cite{marcus}) and, in particular, to the case of superalgebras (\cite{blaga1}). It can easily be shown that the superalgebra of supermatrices of a given type over $A$ and the superalgebra $A$ itself are Morita equivalent. In the ungraded case, it is well known that the Hochschild and the cyclic homology are Morita invariant. In \cite{blaga1} we showed (by using a spectral sequence argument) that the same is true for superalgebras, in the case of Hochschild homology. It is probably possible to adapt the arguments of McCarthy (\cite{mccarthy}) to the ``supercase'', obtaining, in this way, a proof of the Morita invariance of the cyclic homology for superalgebras. The advantage of the approach we used here is that, in the particular case of superalgebras of supermatrices, we can exhibit the isomorphism explicitly, using only elementary constructions.
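To illustrate the last remark, let us note that (assuming $p\geq 1$) an explicit inverse is immediate from~(\ref{elemsuper}). The corner embedding
\begin{equation*}
\iota:A\to\mathcal{M}_{p,q}(A),\qquad \iota(a)=E^{11}(a),
\end{equation*}
is an even (though non-unital) morphism of superalgebras, so it induces a morphism of the cyclic bicomplexes, and formula~(\ref{elemsuper}) shows at once that
\begin{equation*}
\Str_n\left(\iota(a_0)\otimes\dots\otimes\iota(a_n)\right)=a_0\otimes\dots\otimes a_n,
\end{equation*}
i.e.\ $\Str\circ\iota$ is the identity already at the level of chains. Since $\Str$ is a quasi-isomorphism, $\iota$ induces the inverse isomorphism in cyclic homology.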